A few weeks ago, Apple published a message about its commitment to your privacy. In the section on Government Information Requests, Apple made the following somewhat startling statement:

On devices running iOS 8, your personal data such as photos, messages (including attachments), email, contacts, call history, iTunes content, notes, and reminders is placed under the protection of your passcode. Unlike our competitors, Apple cannot bypass your passcode and therefore cannot access this data. So it’s not technically feasible for us to respond to government warrants for the extraction of this data from devices in their possession running iOS 8.

What exactly does this mean? And what was Apple doing before to support law enforcement? Well, to really understand that, we have to go kind of deep into how iOS encryption works. It’s complicated, and I’m not always the best at explaining things, but I’ll try my best.

To really understand it all, I highly recommend Apple’s iOS Security whitepaper. First released in May of 2012, with updates in October 2012, February 2014, and September 2014, the newest version (dated October 2014) includes changes for iOS 8. Another great reference is “iPhone data protection in depth” by Jean-Baptiste Bédrune and Jean Sigwald of Sogeti, presented at HITB Amsterdam in 2011. Keep both open in other windows as I struggle to explain things; those two references are your best bet for truly understanding what’s going on.

In fact, I’ll probably find it convenient to refer back to these from time to time. I’ll call the iOS paper “ISG” and the HITB presentation “Sogeti”. I’m sure I’m totally butchering the proper way to cite sources, but I’ve been out of school for over 20 years, so give me a break, okay?

Another good reference is this talk from Black Hat Abu Dhabi 2011, by Andrey Belenko and Dmitry Sklyarov. The diagram on page 46 may be particularly helpful.

Too Long, Didn’t Read

Jump to the bottom.

Or go read Matthew Green’s much simpler explanation, which was posted after I’d finished writing my first draft of this post…

Where to begin?

Let me start by saying that I’m going to gloss over a lot of stuff here. This isn’t a formal presentation, it’s not a whitepaper, it’s not even meant to be a serious reference. It grew out of my frustration at trying to discuss all this on Twitter in 140-character bites.

So this is my “Tweet Longer” response. Think of it as the conversation (well, more like endless monologue you’re too polite to extricate yourself from) that I’d have with you if I ran into you at a con and you asked me how all this works.

Full Disk Encryption

Data on iPhones is encrypted.

Okay, glad that’s cleared up.

Well, it wasn’t at first. But starting with iOS 3.0 and the iPhone 3GS, the full filesystem was encrypted. The key for this encryption is not user-selectable, but depends on a UID which is “burned” into the phone’s chips at the factory (Sogeti, pp 4 and 5, also ISG, p 9). The UID is a 256-bit key “fused into the application processor during manufacturing.” Apple further states that “no software or firmware can read them directly” but can only see the results of encryption using those keys.

The UID key is used to create a key called “key0x89b.” Key0x89b is used in encrypting the device’s flash disk. Because this key is unique to the device, and cannot be extracted from the device, it is impossible to remove the flash memory from one iPhone and transfer it to another, or to read it offline. (And when I say “Impossible,” what I really mean is “Really damned hard because you’d have to brute force a 256-bit AES key.")

The exact mechanisms used to encrypt the storage are pretty complicated (see Sogeti, pp 31-39). The over-simplified answer is something like this:

  • A random key is generated and used as the basis for encrypting the entire disk
  • This key is itself encrypted using key0x89b, and stored in a special form of memory called “effaceable storage”
  • Because key0x89b is tied to the device (being derived from UID), this disk key can’t be decrypted on any other device – the memory chips must stay in the device they were formatted on – unless you extract key0x89b
    • (which may be possible on jailbroken devices – but if you’ve jailbroken the device, you already have access to the decrypted filesystem anyway)
  • To wipe the device, simply wipe the effaceable storage, and the key goes away.
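
If it helps to see the shape of that key hierarchy, here’s a toy sketch in Swift using CryptoKit. Everything in it is a stand-in of my own – the real UID and key0x89b never leave the hardware, and Apple’s actual wrapping and disk-encryption modes are different – but the relationships between the keys are the same:

```swift
import Foundation
import CryptoKit

// Stand-ins only: the real UID/key0x89b are fused into silicon and are not
// readable by any software. This just models the relationships described above.
let deviceKey = SymmetricKey(size: .bits256)   // plays the role of key0x89b
let volumeKey = SymmetricKey(size: .bits256)   // the random disk encryption key

// "Effaceable storage" holds only a wrapped (encrypted) copy of the volume key.
// (AES-GCM here is my illustration; Apple's actual wrapping scheme differs.)
let volumeKeyData = volumeKey.withUnsafeBytes { Data($0) }
let wrappedVolumeKey = try! AES.GCM.seal(volumeKeyData, using: deviceKey).combined!

// Normal boot: unwrap the volume key with the device key, then mount the disk.
let recovered = try! AES.GCM.open(AES.GCM.SealedBox(combined: wrappedVolumeKey),
                                  using: deviceKey)
assert(recovered == volumeKeyData)

// Wipe: erase the wrapped key from effaceable storage and the disk contents
// are unrecoverable, no matter how carefully you image the flash chips.
```

The point of the sketch is that last comment: nothing about the disk itself needs to be touched to wipe it; destroying the one wrapped key is enough.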

Of course, all of this is fully automatic. If you can get access to the file system, you can read the data – the decryption “just works.” The primary protections this adds are:

  • Can’t move the disk from one phone to another (not that this was a huge concern)
  • Can completely wipe the disk pretty much instantly

But the data itself isn’t terribly well protected. If the device is unlocked, or if you can boot off an external drive and read the filesystem, then you can read everything.

Data Protection API

In iOS 4, Apple introduced the Data Protection API (DPAPI). Under DPAPI, several classes of protection were introduced for both files and keychain entries. I’ll focus on just files, but the concepts map relatively cleanly to keychain data as well.

Each file is individually encrypted with its own random per-file key, and that per-file key is in turn wrapped by a “Class Key.” The class key is simply another random key, shared by any and all files which use the same DPAPI level. For example, all files marked as “FileProtectionComplete” use class 1 (Sogeti, p 15). There are currently four file protection classes, and four (nearly) analogous keychain protection classes (ISG, pp 10-13). The three classes most important to this discussion (illustrated in the short sketch just after this list) are:

  • Complete (locked when the device is locked)
  • Complete until First Authentication (locked, until the user’s unlocked the device once after the most recent reboot)
  • None (no additional protections)
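
For the curious, here’s roughly what those classes look like from an app developer’s point of view. This is a hedged sketch of iOS app code – the file path and data are made up – but the protection options are the real Foundation API:

```swift
import Foundation

let url = URL(fileURLWithPath: NSTemporaryDirectory() + "secrets.dat") // example path
let secret = Data("very private".utf8)

// "Complete" – unreadable whenever the device is locked.
try! secret.write(to: url, options: .completeFileProtection)

// "Complete until first authentication" – unreadable only between a reboot and
// the first unlock (the post-iOS 7 default for newly created files).
try! secret.write(to: url, options: .completeFileProtectionUntilFirstUserAuthentication)

// "None" – only the base full-disk encryption applies.
try! secret.write(to: url, options: .noFileProtection)

// The class can also be set or inspected after the fact via file attributes.
try! FileManager.default.setAttributes([.protectionKey: FileProtectionType.complete],
                                       ofItemAtPath: url.path)
let attrs = try! FileManager.default.attributesOfItem(atPath: url.path)
print(attrs[.protectionKey] ?? "no protection attribute")
```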

The keys for all these classes are stored in a “keybag.” There are several keybags – one on the device (the “system keybag”), another stored on trusted computers (for syncing and backing up devices), and another stored on MDM servers (to remotely unlock a device, in the event you’ve forgotten your passcode). (ISG, pp 14-15).

When a file is created under (for example) “Complete” protection, the system extracts the appropriate class key from the keybag and uses it to wrap the file’s per-file key, which is stored in the file’s metadata. To decrypt the file, the class key is again read from the keybag, the per-file key unwrapped, and the file decrypted.

When you set (or change) a passcode, a key is derived using the system UID, and this key is then used to encrypt individual class keys within the keybag. The key derivation process is complicated, but essentially expands the passcode and a salt, using multiple rounds designed to take about 80 milliseconds, no matter the device. On newer devices (A7 or later processors, which is to say, iPhone 5S, 6, and 6+, the iPad Air, and the Retina iPad Mini) this is augmented with a 5-second delay between failed requests. This delay is added at the hardware level, while the escalating delays seen by the user at the lock screen are all part of the operating system. (ISG, p 11).
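
You can play with the same “tune the work factor to a time budget” idea yourself, minus the UID entanglement (which only the device can do), using PBKDF2 from CommonCrypto. This is a hedged sketch, not Apple’s actual derivation, but CCCalibratePBKDF picking an iteration count that burns roughly 80 ms on whatever hardware it runs on is the same basic trick:

```swift
import Foundation
import CommonCrypto

// Hedged sketch: Apple's real derivation runs through the hardware AES engine
// and is tangled with the UID, so it can only happen on the device itself.
// PBKDF2 here just demonstrates calibrating the work factor to a time budget.
let passcode = "1234"
let salt = Data((0..<16).map { _ in UInt8.random(in: .min ... .max) })
let keyLength = 32

// Ask CommonCrypto how many rounds it takes to burn roughly 80 ms on this machine.
let rounds = CCCalibratePBKDF(CCPBKDFAlgorithm(kCCPBKDF2),
                              passcode.utf8.count, salt.count,
                              CCPseudoRandomAlgorithm(kCCPRFHmacAlgSHA256),
                              keyLength, 80)

var derived = [UInt8](repeating: 0, count: keyLength)
let status = salt.withUnsafeBytes { saltBytes in
    CCKeyDerivationPBKDF(CCPBKDFAlgorithm(kCCPBKDF2),
                         passcode, passcode.utf8.count,
                         saltBytes.bindMemory(to: UInt8.self).baseAddress, salt.count,
                         CCPseudoRandomAlgorithm(kCCPRFHmacAlgSHA256),
                         rounds,
                         &derived, keyLength)
}
print(status == 0 ? "derived a \(keyLength)-byte key after \(rounds) rounds"
                  : "derivation failed: \(status)")
```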

Because the passcode key is “entangled” with the UID, it’s not possible to simply extract the encrypted keybag and brute force the passcode on a fast password cracking machine. The key must be decrypted on the device itself, which requires either a jailbroken device or a trusted external boot image (more on those later).

When you lock the device, the decrypted “complete protection” class key is wiped from memory. So the device can no longer read any file encrypted with that protection level, until it’s been unlocked again.
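
An app can actually watch this happen: UIKit posts notifications as the “Complete” class key is about to be discarded and again when it comes back after an unlock. A small sketch (iOS app code; the prints are just to make the point):

```swift
import UIKit

// When the device locks, the "Complete" class key is discarded and files written
// with .completeFileProtection stop being readable; when it unlocks, they come back.
final class ProtectedDataObserver {
    private var tokens: [NSObjectProtocol] = []

    init() {
        let center = NotificationCenter.default
        tokens.append(center.addObserver(
            forName: UIApplication.protectedDataWillBecomeUnavailableNotification,
            object: nil,
            queue: .main,
            using: { _ in
                // About to lock: flush and close any Complete-protected files now.
                print("Complete-class files are about to become unreadable")
            }))
        tokens.append(center.addObserver(
            forName: UIApplication.protectedDataDidBecomeAvailableNotification,
            object: nil,
            queue: .main,
            using: { _ in
                print("Unlocked again; protected data available:",
                      UIApplication.shared.isProtectedDataAvailable)
            }))
    }
}
```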

Default Data Protection

When Apple debuted the DPAPI, it was entirely an optional feature, and virtually no applications took advantage of it. For some time, the only Apple application which used any data protection was the Mail app. Under iOS 7, Apple changed the default to “Complete until first authentication”, and any new applications should use this protection level automatically.

Unfortunately, though 3rd party apps would inherit somewhat better protections, it wasn’t the best possible mode. Perhaps Apple left that as an option for developers to avoid making background use too difficult, or perhaps there were other reasons. And Apple opted to exclude most of their own applications from this new default.

Support for Law Enforcement

So what exactly was Apple doing to support police who need access to data on a seized iPhone or iPad? This has never been terribly clear. Every few months, a story seemed to hit the press about Apple unlocking phones for the police, but details were scarce, and speculation rampant.

As far as I have been able to guess, Apple had three avenues for extracting data from a seized iPhone:

  • Extract unprotected data over USB using forensic tools
  • Boot the device using a signed external disk image and extract data directly from the filesystem
  • Boot the device from the signed external image and brute force the passcode

Forensics

As for the first item, I don’t work in forensics and so can’t really speak to what these tools can do. But several open-source tools exist (in particular, the iphone-dataprotection kit by our friends at Sogeti) which can illustrate some of what’s possible.

However, much of the data these tools can extract is limited when the device is locked, and no forensic tool can directly bypass the encryption DPAPI provides on a locked device. (This changes if the forensic examiner has access to a desktop used to sync the device, but that’s a whole different blog post.)

Booting a Trusted Image

The second is a bit more complicated. Essentially, you’re booting the device using an external drive as the operating system. But since you’re still “on” the device, the locally-stored keys and UID are still available, and so the entire filesystem can be mounted and read. To prevent just anyone from doing this, iOS devices require the external image to be signed by Apple (so we can’t simply create our own drive and boot off that).

Fortunately (or unfortunately, depending on your point of view) there was a bug in the bootrom on several early iOS devices that allowed an attacker to bypass this signature requirement. So up until (and including) iPhone 4 and iPad 1, it was possible for anyone to perform this attack and extract any non-encrypted data (DPAPI protection level “None”) from the phone. Because the phone has to be rebooted in order to perform this attack, however, in-memory keys for the “complete” and “complete until first authentication” are lost, and so any data protected in those modes cannot be read, even using this approach.

Even Apple, booting from a trusted external image, can’t unlock those protected files. The class keys needed for decryption are stored in the system keybag, which is encrypted using the user’s passcode.

Brute Forcing a Passcode

Well, what about brute forcing a passcode? As I said above, older devices could be booted from an external drive, allowing full access to unencrypted files on the device filesystem. Also available are the library routines needed to decrypt the system keybag. And these routines (as far as we know) don’t have any rate limiting, escalating delays, or lockouts, so a program with access to the filesystem and this API can brute force as long as it needs to.

But the key derivation still needs to happen on the device itself, and each device has its work factor tailored so this takes about 80 ms per guess. So while a weak four-digit passcode could be cracked in 20 minutes or less, a strong alphanumeric passcode could still take months, years, or centuries to break.
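
The arithmetic behind those estimates is simple enough to put in a few lines. The keyspace sizes below assume uniformly random passcodes, which real users rarely choose:

```swift
import Foundation

let perGuess = 0.080   // seconds per on-device key derivation (the ~80 ms work factor)
let perGuessA7 = 5.0   // seconds per attempt with the hardware delay on newer devices

let keyspaces: [(name: String, size: Double)] = [
    ("4-digit PIN",                pow(10.0, 4.0)),
    ("6-digit PIN",                pow(10.0, 6.0)),
    ("6 random lowercase letters", pow(26.0, 6.0)),
    ("8 random alphanumerics",     pow(62.0, 8.0)),
]

for entry in keyspaces {
    let hoursAt80ms = entry.size * perGuess / 3600
    let yearsAt5s = entry.size * perGuessA7 / (3600 * 24 * 365)
    print(entry.name + ":",
          String(format: "~%.1f hours at 80 ms/guess,", hoursAt80ms),
          String(format: "~%.1f years at 5 s/guess", yearsAt5s))
}
```

A four-digit PIN falls in well under an hour even at 80 ms per guess; the eight-character alphanumeric case runs to geological timescales, which is the whole argument for long passcodes.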

And once Apple patched the bootrom hole (beginning with the iPhone 4S and iPad 2), this became impossible for anyone outside of Apple to do anyway.

However, because the possibility remained that Apple could crack the passcode, most iOS security experts still recommend that users choose a strong passcode. Just in case.

There’s no evidence that Apple ever actually offered this as a service to law enforcement. I could see where they might, under the right circumstances, but I can also understand where they might be reluctant to ever offer such a service, for fear of it being abused (or just over-requested).

But if the passcode can be brute forced, all the data protection is rendered useless.

What changed with iOS 8?

Two things changed with iOS 8:

  • A 5-second delay was added, at the hardware level, for passcode attempts (newer hardware only)
  • The default data protection mode was extended to more of Apple’s apps

The first change only affects anyone trying to brute force a passcode directly on a device, which means this only affects Apple (unless law enforcement, forensics teams, or wily hackers have access to a signed image).

The second change is somewhat more significant, under certain circumstances. A device which has been unlocked once already (since the last reboot) will behave exactly the same as iOS 7. That is, anything which is under the “Complete until first authentication” protection level will be (essentially) unencrypted, since the keys will remain in memory after the first time the user unlocks the device.

Much of the built-in application data was moved under the stricter controls: photos, messages, contacts, call history, etc. – all items which were described in the privacy message quoted way back when I started this. This data, once the phone’s been unlocked once, may still be available using 3rd party forensic tools. However, and here’s what I think is probably key: Once the phone is rebooted, that class key is lost, and the data is unreadable until the user enters their passcode again.

So even Apple, with a trusted external boot image, can’t access the data unless they crack the user’s passcode.

I think this is what Apple referred to when they said that it is “not technically feasible” to respond to warrants for this data.

Could Apple still attempt to brute force the passcode? Yes, possibly – assuming they ever had that capability in the first place.

It’s also possible that Apple could (or already has) added brute-force protections to newer iOS hardware, which would prevent even Apple from breaking a user’s passcode. They’ve already added the 5-second delay (for A7-based devices), but whether the hardware enforces escalating delays for consecutive bad attempts has not been disclosed. I suspect that if it were the case, we’d’ve heard by now, but I’ve not personally tried brute forcing passcodes on anything newer than an iPad 1. Maybe this was added in iPhone 6 – but we’ll probably have to wait until they’re jailbroken to be sure.

A Quick Demo

If you’d like to see the difference between iOS 7 and iOS 8 for yourself, here’s a simple test.

  1. Get an iPhone running iOS 7, and another running iOS 8.
  2. Get a landline (or a 3rd cell phone) and make sure each iPhone has that number in its Contacts database (add a name, picture, etc.)
  3. Reboot both phones. Do not unlock either phone.
  4. Call the iOS 7 phone from the 3rd phone. You should see not only the calling phone’s number, but also the name and picture you put into the Contacts database.
  5. Call the iOS 8 phone. You should see the phone number, and nothing else (since most cellular providers don’t provide name over caller ID).
  6. Unlock the iOS 8 phone, then lock it again.
  7. Call the iOS 8 phone again, and this time, you should see the Contacts entry appear on the locked screen.

This shows how the Contacts database is locked with “complete until first authentication.” After rebooting, the phone simply does not have access to the Contacts database, because the class key is still safely encrypted in the system keybag. Once you unlock it, however, the key is extracted, decrypted, and retained in memory, so the next time you call, the phone can read Contacts and display the information.

(This is also why you can’t connect to your home Wi-Fi after rebooting the phone, but after you’ve unlocked it once, it remains connected even when locked. The keychain entries for Wi-Fi are stored with “complete until first authentication.”) (ISG, p 13).
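
If you’re writing an app, the keychain side of this is just an attribute on the item. Here’s a hedged sketch (the service and account names are invented) that stores a secret the same way the Wi-Fi credentials are stored – readable any time after the first unlock, even while the device is locked:

```swift
import Foundation
import Security

// "Complete until first authentication," keychain edition: the item is unreadable
// between a reboot and the first unlock, then readable even while locked.
let query: [String: Any] = [
    kSecClass as String:          kSecClassGenericPassword,
    kSecAttrService as String:    "com.example.wifi-demo",   // invented for the example
    kSecAttrAccount as String:    "home-network",
    kSecValueData as String:      Data("hunter2".utf8),
    kSecAttrAccessible as String: kSecAttrAccessibleAfterFirstUnlock,
]
let status = SecItemAdd(query as CFDictionary, nil)
print(status == errSecSuccess ? "stored" : "keychain error \(status)")
```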

So what about law enforcement?

Well, if the assumptions I am making here are correct (and it’s not just me – I believe many in the iOS security community have come to the same conclusion), then Apple simply cannot provide much beyond very basic forensic-level data on phones, especially once they’ve been powered off.

But to my mind, any such service from Apple was always just a matter of convenience anyway. If a warrant can be issued to seize the data on a phone, then one can be issued to compel the owner of the phone to unlock it. (Yes, I’m aware that this is a legally murky area. Some recent decisions have upheld such orders, while at least one other has struck such an order down. One might be able to fight a court-imposed unlock demand, but it’d almost certainly take a lot of time, and be quite expensive in the long run). (And it should go without saying that there are very many good reasons that I am not a lawyer).

So if the police absolutely need access to the data on a phone, they can (probably) compel the owner to unlock it, and so Apple’s inability to help becomes irrelevant.

They can also (presumably) serve warrants to collect any computers belonging to that user, extract the pairing records from them, and then collect just about anything they want from the phone over USB.

Finally, much of the data on an iOS device will also exist “in the cloud” somewhere. So police can certainly go after those providers as well.

So the bottom line here is that, even without Apple’s help, law enforcement still has many ways to get at the data on a locked iOS device.

Unanswered Questions

Even with the very good documentation from Apple, several questions remain.

Where exactly does the low-level encryption happen? The iOS Security guide says that the UID is “fused” into the application processor, and that “no software or firmware can read them directly” (p 9), but on page 6 it says it’s stored in the “Secure Enclave” (which is not to be confused with the “Secure Element” used by NFC and Apple Pay), and that the Secure Enclave has its own software update process.

So is the UID available to the software within the Secure Enclave (SE)? Or are operations utilizing the UID handled by a “black box” within the SE, and only the results of these operations visible to SE firmware (and, as moved from SE to the main processor, to the OS in general)?

Or is it possible that the UID in the context of the Secure Enclave is not the UID used to generate the file protection keys? (If so, I sincerely hope Apple updates their naming system to clarify this, because it’s obviously confusing the heck out of a lot of us).

Has Apple ever provided brute-force passcode breaking services to law enforcement? Has iOS been modified to restrict or eliminate this attack? If not, can Apple theoretically perform such an attack today? If asked, would they refuse? Could they?

What does it all mean?

So, what’s the bottom line? Is there a “TL;DR” summary?

  1. Since the iPhone 3GS, all iOS devices have used a hardware-based AES full-disk encryption that prevents storage from being moved from one device to another, and facilitates a fast wipe of the disk.
  2. Since iOS 4 (iPhone 4), additional protections have been available using the Data Protection API (DPAPI).
  3. The DPAPI allows files to be flagged such that they are always encrypted when the device is locked, or encrypted only from reboot until the user first enters their passcode.
  4. On older devices (up to and including iPhone 4 and iPad 1), a bootrom bug could be exploited to boot off an external image and brute force weak passcodes, as well as read non-encrypted data from the filesystem.
  5. It remains possible (but unproven) that Apple retains the capability to do this on modern devices using a trusted external boot image.
  6. Once a user locks a device with a passcode, the class keys for “complete” encryption are wiped from memory, and so that data is unreadable, even when booting from a trusted external image.
  7. Once a user reboots a device, the “complete until first authentication” keys are lost from memory, and any files under that DPAPI protection level will be unreadable even when booted from an external image.
  8. Under iOS 8, many built-in applications received the “complete until first authentication” protection.
  9. This new protection level means that even when booting from a trusted external image, Apple cannot read data encrypted using that protection, unless they have the user’s passcode.
  10. It may still be possible to use forensic tools to extract data from a locked device. In many ways, iOS 8 behaves exactly like iOS 7 for such tools, if the user has unlocked it at least once after a reboot.
  11. The entire hierarchy of encryption keys, class keys, and keybags, is entangled with a device-specific UID that cannot be extracted from the device nor accessed by on-device software.
  12. Many of the keys are further protected by a key derived from the passcode (and the internal UID).
  13. It is not entirely clear whether the Secure Enclave can be manipulated by Apple or an attacker to bypass any or all of the encryption key hierarchy by gaining direct or indirect access to the UID or derived keys.
  14. It is also unknown whether current devices are vulnerable to a passcode brute force attack by Apple (or anyone with access to a trusted external boot image).
  15. Many of these protections are rendered (somewhat) irrelevant if law enforcement (or a determined adversary) have access to a trusted computer used to sync the device, or potentially to copies of the data existing in cloud-based services.

The bottom line, the real “too long, didn’t read”:

  • Apple does seem to have made it much more difficult for anyone to get at data on a locked phone
  • Some of these protections are reduced once you’ve unlocked the phone once
  • Many of these protections are aimed at attacks requiring a reboot of the device
    • Apple may be able to access the filesystem on a device, but this would require a reboot
    • So unless they crack your passcode, they won’t be able to read the protected files.
  • Apple may be able to crack a passcode
    • But each attempt takes at least 80 ms, and as much as 5 seconds on newer devices
    • A strong passcode (6 or more letters or 8 or more numbers) can take years to break

To sum it up in one sentence:

  • Use a strong passcode, and power off your device whenever it’s out of your control.