Microsoft, Apple, and the Government - Oh My!

I try to stay out of politics as much as I can, and focus on technology that can make the world better - but the problem is that sometimes politics interferes with technology, and I'd like to make the blanket statement that when that happens, politics is generally in the wrong.  Of course, politics means politicians, the general public, and the government - three groups of people who don't necessarily understand technology.  And I'm sorry, but people who don't understand something shouldn't be able to make decisions about it.

There has been a lot of news about the Apple iPhone unlocking case - but before I get into that, I'd like to talk about another case: the case of the Microsoft Ireland server.  To make a long story short, most large providers of what are called "cloud services" these days have servers spread across the globe.  The original reasons centered on speed and connectivity; these days, however, legal jurisdiction is also an issue.  For example, a user in Japan would normally be directed to a Japanese server, and a user in the US would be directed to a US server.  This allows faster response times (lower latency), as well as continued usability in case of limited or disrupted connectivity between continents.

Also, given that data privacy laws in places like the EU are much stronger than in the US, European customers have been reluctant to store data with US-based services: doing so could mean breaking EU law by placing data where foreign law enforcement with lower privacy standards could easily access it.  An easy solution is simply to locate the relevant servers in the EU.

Google also at one time moved all of the servers for their Chinese office to Hong Kong (where the laws are more reasonable) in order to avoid heavy-handed censorship.

In fact, Amazon Web Services (which offers storage and other infrastructure to service providers, including Dropbox) also allows users to choose where their servers should be located.

The point is that location matters - not only for technical reasons, but for legal reasons.  The laws of different countries are different, and there would be no way to comply with all of them while having all of the servers in one location - because the country where the servers were physically located would almost certainly demand legal access to those servers in some cases.  (Hopefully with a legally obtained warrant).

There were occasionally arguments about "Just where is cyberspace...?", etc. - but the reality is that wherever the servers are located, there are also courts and police with guns.  At the end of the day, they can physically confiscate your servers if need be, so it seemed clear that they had at least some kind of jurisdiction.  At the same time, if an American user logged into the American site of an American webmail provider from his home in America and accessed his email, also stored on a server in America - well, if Syria came knocking and said to the webmail provider "Hey, we want all the mail about Mr. John Smith in America..." - let's just say the webmail provider would say "uh, no, I don't think so" - after they were able to stop laughing.  It really wouldn't matter whether they had a warrant or not.

Unfortunately, that's basically what happened with Microsoft and US law enforcement.  Law enforcement came to Microsoft and said "We have a warrant for all of the data for user X, give us his data."  Microsoft said "Sure thing..." and started to search for the user's data.  It turned out that the actual email data for that user was located in Dublin.  Now it seems simple to me.  The government here wants to access data that it isn't entitled to.  The data is stored outside the US, so the warrant doesn't apply.  Plain and simple.  Anyone who says otherwise is either trying to push their own agenda (i.e. twist things so that they can get the outcome they want), or doesn't understand the situation.

So let's think about this for a moment.  If the US thinks it should have access to the data just because it's the government, then we should sit back and realize what that means.  The US is only one government out of many, so if companies like Microsoft say "No problem, you are the government, so we will give you anything you want, so long as you have a warrant" no matter where the data is located, then they will soon be getting requests from governments all over the world.  How would the US government like it if Microsoft handed over American data to Syria because "Well, they had a warrant..."?

It might be easy to say that my example with Syria is silly, and that "Obviously that example is so crazy that they could just ignore the request" - but the point is really that Microsoft and other companies shouldn't be put in the position of deciding which requests to comply with on a case-by-case basis.  It's much better to simply have a rule that makes sense.  Saying that you will only provide data from a server if the government in question would have the power to physically impound that server makes as much sense as any.  On the other hand, saying that you will honor any request from any government for any data clearly does not make sense.

Think about the slippery slope this case would lead to.  First the US gets access to data in Dublin.  Next, England wants the same rights - but England is an upstanding democratic country, right?  Let's comply with their requests for data in Spain or somewhere.  Next, Canada wants in on the action, and then France and Germany, etc.  Once there is a huge precedent for handing over data to lots of governments, it will be difficult to tell Syria, China, or North Korea "No, sorry, we just don't like you."  On the other hand, saying yes means that the dictator of North Korea can read the emails of Obama's teenage kids, or something along those lines.  Hey, he had a warrant, right?

Honestly, there are countries (like China) where I don't think any company should comply with the local law and hand over the data, even if they have a warrant, and even if it is for their own citizens' data stored in their own country.  When the country involved lacks due process and people "disappear" (as with the Yahoo China case), then I personally believe the moral duty to protect people trumps any legal obligation - but that's another issue for another day.

The government in this case clearly wants what it wants, and like an unruly child, it is trying to throw a tantrum to get its way.  To be specific, the government in this case has tried to use very twisted logic to say that the "search" of the search warrant wouldn't happen in Dublin, but would happen in the US, since Microsoft would hand the data over to law enforcement officials in the US and they would examine it in the US.  That's akin to saying that if US law enforcement wants to examine the contents of a safe held by Citibank in Japan, they should be able to get a search warrant for it, serve the warrant on Citibank US, and then somehow compel Citibank Japan to unlock the safe and ship its contents to the US.  Thinking that something like that can be compelled is wishful thinking at best, and law enforcement's wet dream at worst.  If it could be compelled, then:
1. Nobody's stuff would be safe, anywhere in the world.
2. In order to get around the law, companies would probably break up into smaller, officially unrelated companies in order to make warrant serving more difficult.

Luckily, as of now, such a request can't be enforced and wouldn't even be made - so there is no worry there.  Yet law enforcement feels compelled to try its luck with data, as if the Internet is magically "different".  I thought we learned that wasn't true back when "big ideas" like Toothpaste.com all failed with a bang during the dot-com bust.

What really gets me about the Dublin case is that one has to wonder whether the government has actually thought it through.  I mean, how would they propose to prevent random countries from getting warrants and subpoenas for the data of random American politicians, etc.?  It would seem they just haven't thought that far ahead at all.

Now, onto the Apple case.  Apple figured out that the best way around the problem above (and the best way to assure people that you aren't participating in illegal data collection programs, spying on their chats, or helping the government more than legally required) is easy - make sure you don't have that data.

If you don't have the data to begin with, then it's not your problem when a warrant arrives.  You simply reply "I don't have it" and then forget about it.  Completely legal, and you can expect the government to leave you alone.

The problem with not having the data at all is that Apple needs to store some of it in order to offer many of their services.  For example, in order to sync Safari passwords between computers, and have a copy for back-up, Apple needs to not only facilitate transfer of the data between different machines, but also hold a copy of the data.

There are some ways to have the machines send data directly to each other without involving Apple, but this is difficult to do 100% of the time due to firewalls, NAT constraints, etc.

The way to hold the data physically but not have access to it is actually simple: encrypt it.

Now, the details of what the service provider can and cannot see depend very much on the security architecture.  For example, with a typical secure web server, the data is not encrypted on disk, but only in transit (with SSL).  This ensures that customer credit cards, etc. are secure while crossing the Internet, but means that the owner of the server can store the unencrypted data in a database.  The fact that e-commerce operators like Amazon need access to the data you send them is obvious, and with banks and such, the situation is similar.  Assuming the companies in question can be trusted, that is fine - but since they hold the unencrypted data, they are certainly subject to warrants, and the data is potentially exposed to hackers.
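To make the distinction concrete, here is a tiny Python sketch of the in-transit-only model (the shop URL is a placeholder, and the request is only constructed, never actually sent):

```python
# In-transit-only encryption: TLS protects the bytes on the wire,
# but the server decrypts them on arrival and holds the plaintext.
import urllib.parse
import urllib.request

card = urllib.parse.urlencode({"card_number": "4111 1111 1111 1111"}).encode()

# TLS (the https:// part) encrypts this POST body between the client
# and the server...  (placeholder URL; the request is built, not sent)
req = urllib.request.Request("https://shop.example.com/checkout", data=card)

# ...but once it arrives, the operator has the plaintext and can write
# it straight into a database - where a warrant, or a hacker, can reach it.
```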

But let's consider a type of business which does not need access to your data: Dropbox.com and similar services.  Dropbox not only encrypts the data in transit, but also encrypts the data at rest (in storage).  This is particularly important since they store the data in places like Amazon's cloud, where it might be more vulnerable to access.  However, in the case of Dropbox, they hold the encryption keys, so they can access any of your data at any time.  This also means that if you forget your password, Dropbox can reset it, and you don't lose access to your data.
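In sketch form, the provider-held-key model looks something like this - a minimal Python example using the third-party cryptography package, with illustrative names rather than anything from Dropbox's actual implementation:

```python
# Provider-held-key model: the service encrypts data at rest,
# but keeps the key itself.  (pip install cryptography)
from cryptography.fernet import Fernet

# The *provider* generates this key and stores it in its own key store.
provider_key = Fernet.generate_key()
vault = Fernet(provider_key)

# A customer uploads a file; the provider encrypts it before storage.
stored_blob = vault.encrypt(b"customer file contents")

# Because the provider holds the key, it can always decrypt - to serve
# the file back, to recover from a forgotten password, or to comply
# with a warrant.
print(vault.decrypt(stored_blob))  # b'customer file contents'
```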

Alternatively, they could use encryption handled entirely by the end users' devices (so-called "end-to-end" encryption), which would mean two things:
1. Dropbox wouldn't be able to access any of your data, ever, for any reason.
2. If you forget your password, etc., Dropbox wouldn't be able to reset your password.

It would also make sharing, etc. more complex, since you would have to share a password with anyone you shared data with.  This means more work for the end user, or a more complex system behind the scenes.
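Here is a minimal sketch of the end-to-end alternative, again assuming the cryptography package; the salt and iteration count are illustrative, and real systems add per-file keys and key-sharing machinery on top:

```python
# End-to-end model: the key is derived on the user's device and
# never leaves it.  (pip install cryptography)
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_password(password: str, salt: bytes) -> bytes:
    """Stretch a user password into a 32-byte Fernet key."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=200_000)
    return base64.urlsafe_b64encode(kdf.derive(password.encode()))

salt = os.urandom(16)               # stored alongside the ciphertext
device = Fernet(key_from_password("hunter2", salt))

ciphertext = device.encrypt(b"my secrets")  # all the provider ever sees

# The provider cannot decrypt this, and neither can anyone it hands it
# to.  But if the user forgets "hunter2", the data is gone - there is
# nothing for the provider to reset.
```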

In light of the NSA spying scandals, fears of government overreach, and the resistance to adoption of cloud services caused by fears of hacking and employee misconduct, the best way to achieve customer confidence is to ensure that only the customers themselves have the technical means to access their data.  Then, suddenly, Apple employees misusing their access isn't a concern - they don't have anything useful of yours anyway.  Hackers gaining access to Apple's servers means that, at worst, they delete your data - at least they won't have access to it.  And warrants - well, Apple can't give the government anything other than your encrypted data, which they won't find very useful.

In fact, Apple has very cleverly engineered the latest iOS versions to be very secure in many ways.  Essentially, all data on the phone is encrypted by default, with multiple classes of encryption depending on what the data is.  Even on a phone without a password set, the encryption enables the instant wipe feature, where users can wipe their phone remotely online: the implementation only has to delete the encryption key, instead of wiping the entire device.
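The "delete the key" trick is easier to see in code than in prose.  A toy sketch, not Apple's actual implementation:

```python
# Crypto-erase: encrypt everything under a random key, so that
# "wiping" the device just means destroying that one key.
# (pip install cryptography)
from cryptography.fernet import Fernet

device_key = Fernet.generate_key()   # created when the device is first set up
flash_storage = Fernet(device_key).encrypt(b"photos, messages, mail...")

device_key = None                    # "remote wipe": discard only the key

# The ciphertext in flash_storage is still physically present, but
# without the key it is unrecoverable - an effectively instant wipe.
```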

When a device is protected with a password or PIN, that PIN is used to encrypt the encryption key, which in turn protects the contents of the phone.  (There are multiple keys protecting different content, but we will simplify the system for the purposes of this explanation.)
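In sketch form, that key wrapping looks something like this (simplified, with illustrative parameters):

```python
# Key wrapping: a PIN-derived key encrypts the real content key.
# (pip install cryptography; salt and iteration count are illustrative)
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def pin_key(pin: str, salt: bytes) -> bytes:
    """Stretch a short PIN into a 32-byte Fernet key."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=200_000)
    return base64.urlsafe_b64encode(kdf.derive(pin.encode()))

content_key = Fernet.generate_key()  # the key that actually protects the data
salt = os.urandom(16)

# The phone stores only the *wrapped* key; the plain content key is
# unreachable without the PIN.
wrapped_key = Fernet(pin_key("482916", salt)).encrypt(content_key)

# Entering the correct PIN unwraps the content key again.
assert Fernet(pin_key("482916", salt)).decrypt(wrapped_key) == content_key
```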

Now, normally there is a trade-off with passwords:  If you make them too short, then you have very little protection.  If you make them too long, then they are inconvenient, and users tend not to use them.

Apple has tackled this in a few ways:
1. With fingerprint recognition - since the user can unlock their device with only their fingerprint most of the time, the PIN code becomes a back-up, so the user doesn't mind using a longer PIN, since it is only needed occasionally.
2. Apple has changed the standard PIN length from 4 digits to 6 digits, in keeping with #1 above.
3. Finally, even a 6-digit code can be brute-forced in short order on a reasonably fast machine - especially when it is known to be entirely numeric.  To prevent this, the latest versions of iOS do one of two things: (a) they slow down more and more as the number of failed attempts increases, such that it would take years to brute-force the password, or (b) there is an option to wipe the device after several incorrect attempts.  Both of these measures are designed to make brute-forcing unfeasible, as the sketch below shows.
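A quick back-of-the-envelope calculation shows why the escalating delays alone are enough; the delay schedule here is invented for illustration, not Apple's actual numbers:

```python
# Why escalating delays make a 6-digit PIN impractical to brute force.
# (The delay schedule below is made up for illustration.)

def delay_for_attempt(n: int) -> int:
    """Forced wait, in seconds, before PIN attempt number n."""
    if n <= 4:
        return 0
    if n == 5:
        return 60            # 1 minute
    if n == 6:
        return 5 * 60
    if n <= 8:
        return 15 * 60
    return 60 * 60           # an hour per try from attempt 9 onward

# On average an attacker needs ~500,000 tries to hit a random
# 6-digit PIN (half of the 10**6 possibilities).
total = sum(delay_for_attempt(n) for n in range(1, 500_000))
print(total / (365 * 24 * 3600))   # ~57 years of forced waiting

# And with the wipe-after-10 option enabled, the attacker never even
# gets past the tenth attempt.
```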

In the court case at hand, the US government wants the data, and they have the phone.  That means they also have the data - but it's encrypted, and thus useless.  They understand that Apple does not have the password either, and thus cannot decrypt the data for the government, or anyone else.

So, the government wants to brute-force the password on the phone - but the catch is that the phone will self-destruct after 10 tries.  In other words, the security is working perfectly as designed - the US government simply doesn't like that.  It's pretty clear that the law as written is designed so that companies only have to hand over whatever they have.  What the government is asking for now is different - they want Apple to develop a new version of iOS that will ignore the self-destruct setting (and not wipe the phone after 10 tries), and also facilitate brute-forcing the password (via the USB connector or similar?).  Not only would this mean betraying all the customers who thought they bought a system designed to be secure against everyone (including governments), but at that point - why not just build the brute-forcing option into the phone itself?  There could be a button at the bottom of the screen: "Forgot my PIN code - crack now!"

Why doesn't the government write the crack itself?  Mainly because recent iOS devices will only accept firmware updates which are signed by Apple.  The government can't fake that signature, so they need Apple's cooperation.
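The check itself is conceptually simple.  Here is a miniature version using Ed25519 signatures - purely illustrative, since Apple's real signing scheme is more involved:

```python
# Signed firmware in miniature: the device holds only the vendor's
# *public* key and rejects any image not signed by the matching
# private key.  (pip install cryptography)
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

vendor_private = Ed25519PrivateKey.generate()  # stays locked up at the vendor
device_trusted = vendor_private.public_key()   # baked into every device

firmware = b"official OS image bytes..."
signature = vendor_private.sign(firmware)      # done once, by the vendor

def device_will_install(image: bytes, sig: bytes) -> bool:
    try:
        device_trusted.verify(sig, image)      # raises if the pair doesn't match
        return True
    except InvalidSignature:
        return False

print(device_will_install(firmware, signature))              # True
print(device_will_install(b"third-party crack", signature))  # False
```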

What if Apple did comply?  Well, they would build a new version of the OS with the security features disabled, and then release it to the government, which could install it on the phone and crack away.

However, once this version of the software existed, it would almost certainly be leaked.  The FBI would have it, so they could use it on other phones as they wanted.  Even if Apple installed it themselves and there was no way for the FBI to extract it, they would go to Apple next time and say "Well, you did it last time..."  What's more, other governments would start requiring access to this cracked OS as well.  For example, China would ask for it, and there would be nothing to stop them from using it on the phones of US embassy employees.  Worse yet, hackers would eventually get hold of it.

Most of the politicians commenting on this issue have simply done a lot of hand-waving, saying "Well, I am sure they could do something about that..." - no, they can't.  This is all or nothing from a technical perspective.

And that is why politicians shouldn't make decisions about technical things.









