Earlier today, it was reported that a hacker/researcher called “xerub” had released the encryption key, and tools to use it, for the firmware that runs the Secure Enclave Processor (SEP) on iPhone 5S. Reporting was…breathless. Stories suggested that this move was “destroying key piece of iOS mobile security,” and that we should “be on the lookout for Touch ID hacks” and “password harvesting scams.”

Is it really that bad? No, not really.

Some of the articles have been improved (slightly) with additional information from xerub and anonymous sources at Apple, but I thought it’d be a good idea to review this at a more technical level. I’ve written about the SEP before, especially here and here, so I won’t get too deep into the weeds. Another excellent reference is the regularly-updated iOS Security document produced by Apple. As I write this, I hope to keep it short, but make no promises.

Firmware Basics

First, what exactly is it we’re talking about? The iPhone has many different elements that together make up the firmware for the device. The overall firmware package includes the root filesystem (the actual Unix system that the phone runs), update and restore ramdisks (used, predictably, when updating the phone or restoring from backup), multiple levels of boot code, recovery firmware, and finally, the firmware for the Secure Enclave.

Each release of the OS is tailored to individual devices – the firmware for iPhone 5S is different from that for iPhone 6 and iPhone 6S. Also, each element of the firmware release may be individually encrypted, with a cryptographic IV and 256-bit AES key. So, even though you can download any of these firmware images from Apple, you can’t actually look inside them unless you have the appropriate key, and these keys vary from release to release and device to device.

It used to be that we could extract the decryption keys directly from devices due to a long-standing bug in the permanent bootrom (the part that’s burned into the device and can’t be updated). But that bug was finally fixed in the iPhone 4S, so we couldn’t decrypt firmware for many later devices.

The iPhone 5S was the first to include the Secure Enclave, but because the boot bug had been fixed, we didn’t have any (easy) way to extract the encryption keys for that device. Though I see now that keys exist for just that device, not for the 4S or for the 6 and later, and I confess I haven’t been paying close enough attention lately to know just what the current situation is.

What was released today, then, is the encryption key specific to iOS version 10.3.3 on product “iPhone6,1” (the GSM-only version of iPhone 5S).

Why encrypted?

There are two reasons why iOS firmware is encrypted. One is to protect the software from hackers who may try to jailbreak it. Obviously that hasn’t really stopped such practices, and some recent releases of the boot disk haven’t even been encrypted at all. The other reason is to make it more difficult for an attacker to modify and replace the firmware on the phone, but that’s also accomplished using strong software signature validation. So in the grand scheme of things, encrypting the firmware doesn’t add too much to the security of the device.

What’s the Secure Enclave?

To improve the security of existing elements on the iPhone, and to secure new features such as Touch ID, Apple introduced the Secure Enclave with iPhone 5S. Essentially, the SE is a special area of storage on the device that’s encrypted with its own key, and that key is not available to the main application processor. This means that applications on the phone, even running at the most privileged level, cannot access any of the data within the enclave.

To actually process data within the SE, a separate computer, the Secure Enclave Processor, has been added to the phone’s System on a Chip (SoC). This processor has its own operating system and disk partition, and runs the firmware we’re talking about today. The encryption keys to read and write to the SE portion of the device disk are accessible only to the SEP, and so only the firmware running on the SE can access that data.

The data stored in the SE includes Touch ID information and certain keys that can be accessed, and processed, only within the SE. The Tidas authentication system, for example, makes use of the SE for storing private keys. The SE then includes functions to access or work with these hidden bits of security information. In Tidas’ case, an application, running on the AP (application processor – the main CPU that runs iOS) can send data to the SE, and say “encrypt it with key number 6.” The SE will dutifully carry out this action and return the result, but will never actually disclose the private key used in the process.

The SE is also responsible for part of the overall keychain locking and unlocking, and is the only part of the SoC with access to the device’s “Unique ID” (UID) that’s entangled in the keychain encryption system. However, that particular key is not stored in the “normal” SE dataspace, but elsewhere on the SoC, and it’s (as far as I understand) directly wired into dedicated encryption hardware just for this purpose.
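“Entangled” here just means the derived key depends on both the user’s passcode-derived key and a device-unique secret, so it can only be recomputed on that one device. The sketch below illustrates the idea with an HMAC; this is purely illustrative, since the real hardware feeds the UID into a dedicated AES engine and software never sees the UID at all.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// tangle derives a key that depends on both a device-unique secret (uid)
// and a passcode-derived key. (Illustrative only: Apple's hardware uses
// a dedicated AES engine keyed by the UID, not an HMAC.)
func tangle(uid, passcodeKey []byte) []byte {
	mac := hmac.New(sha256.New, uid)
	mac.Write(passcodeKey)
	return mac.Sum(nil)
}

func main() {
	// The same passcode key on two different devices yields two
	// different derived keys, so offline guessing on other hardware fails.
	k1 := tangle([]byte("uid-device-A"), []byte("passcode-key"))
	k2 := tangle([]byte("uid-device-B"), []byte("passcode-key"))
	fmt.Println(hex.EncodeToString(k1) == hex.EncodeToString(k2))
}
```

This is why passcode brute-forcing has to happen on the device itself: without the UID, the attacker can’t even test a guess.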

So, yes, lots of incredibly important functionality is wrapped up in the SE. It’s understandable that gaining access to the software which runs this part of the phone would be newsworthy.

But what does it mean?!?

As mentioned before, the iPhone accesses the SE, and manipulates data within it, by sending commands to the SE and awaiting responses. This protocol has not been publicly documented, but a great talk at Black Hat 2016 (Demystifying the Secure Enclave Processor, video here) went into considerable detail describing how it all works.

Basically, the operating system can put bits of data to be processed in a special region of memory, add a short command flag to the start of that region, then send a signal to the SE. The SE reads the memory, gets the command, performs the requisite actions, and then returns the response via a similar memory-based exchange. Everything that happens in between is run on the SE, using the SE-specific firmware.
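A toy model of that mailbox exchange looks something like this. The command numbers, buffer layout, and signaling mechanism here are all invented for illustration; the real protocol is undocumented and hardware-based.

```go
package main

import (
	"fmt"
)

// Invented command numbers for the toy protocol.
const (
	cmdPing  = 0x01
	cmdUpper = 0x02
)

// mailbox models the AP<->SEP exchange: a shared memory region, a
// "doorbell" signal from the AP, and a completion signal from the SEP.
type mailbox struct {
	shared   []byte
	doorbell chan int // signals how many bytes the AP wrote
	done     chan int // signals how many bytes the SEP wrote back
}

// sepLoop plays the enclave: read command + payload, reply in place.
func (m *mailbox) sepLoop() {
	for n := range m.doorbell {
		cmd, payload := m.shared[0], m.shared[1:n]
		switch cmd {
		case cmdPing:
			copy(m.shared, []byte("pong"))
			m.done <- 4
		case cmdUpper:
			for i, b := range payload {
				if 'a' <= b && b <= 'z' {
					b -= 'a' - 'A'
				}
				m.shared[i] = b
			}
			m.done <- len(payload)
		}
	}
}

// call plays the AP side: write command + payload, ring the doorbell,
// then wait for the response to appear in the shared region.
func (m *mailbox) call(cmd byte, payload []byte) []byte {
	m.shared[0] = cmd
	copy(m.shared[1:], payload)
	m.doorbell <- 1 + len(payload)
	n := <-m.done
	return append([]byte(nil), m.shared[:n]...)
}

func main() {
	m := &mailbox{shared: make([]byte, 256), doorbell: make(chan int), done: make(chan int)}
	go m.sepLoop()
	fmt.Println(string(m.call(cmdPing, nil)))
	fmt.Println(string(m.call(cmdUpper, []byte("sep"))))
}
```

The key property the model captures: the AP only ever sees the request and the response in the shared region; everything in between runs on the other processor, out of its reach.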

What was released today was the key to decrypt that firmware, but not a key to decrypt the region of disk used by the SE to store data. So now we can actually reverse-engineer the SE system, and hopefully gain a much better understanding of how it works. But we can’t decrypt the data it processes.

Could an attacker now modify the SE firmware to, for example, return the actual secrets stored there? The short answer is no, because the firmware is also signed by Apple, and as far as I know, nobody’s found a bug in that signature verification algorithm (or compromised the private signing keys).

But even if we could modify the firmware, we also don’t know how the data is even accessed. It’s entirely possible that even the SE processor won’t have direct access to the secrets, but instead that specific operations may be handled by dedicated hardware which can directly access, and decrypt, the SE memory. I do not believe that the SE has been described to quite this level of detail by Apple. This is one thing that we may be able to ascertain by disassembling the firmware.

Is this a good thing, or a bad thing?

I think it’s a good thing. For one, if the security of the Secure Enclave is in any way directly reduced by the disclosure of the firmware, then it wasn’t truly secure in the first place.

Is it possible that there are bugs in the firmware that can lead to the leakage of secret data stored within the SE? Certainly. There could be direct flaws that simply read and return information, or poorly-built algorithms that leak data through timing- or power-based side-channel attacks. We won’t know until the firmware has been carefully reviewed, and even then, there’s no guarantee that bugs will be discovered or fully understood (since we still don’t have a detailed understanding of the hardware within the SE).
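As a concrete example of the timing side-channel class, compare a naive secret comparison with a constant-time one. This is a generic illustration, not something known to exist in the SEP firmware.

```go
package main

import (
	"crypto/subtle"
	"fmt"
)

// naiveEqual returns at the first mismatching byte, so its running time
// depends on how much of the secret matches the guess -- the classic
// shape of a timing side channel.
func naiveEqual(a, b []byte) bool {
	if len(a) != len(b) {
		return false
	}
	for i := range a {
		if a[i] != b[i] {
			return false // timing leak: exits earlier on an early mismatch
		}
	}
	return true
}

// ctEqual examines every byte regardless of where mismatches occur,
// so its running time reveals nothing about the secret.
func ctEqual(a, b []byte) bool {
	return subtle.ConstantTimeCompare(a, b) == 1
}

func main() {
	secret := []byte("hunter2!")
	fmt.Println(naiveEqual(secret, []byte("hunter2!")), ctEqual(secret, []byte("hunter2!")))
	fmt.Println(naiveEqual(secret, []byte("xunter2!")), ctEqual(secret, []byte("xunter2!")))
}
```

Both functions return the same answers; the bug is in *when* they return it. Reviewing the decrypted firmware for exactly this kind of pattern is one of the first things researchers are likely to do.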

Could there be significant undocumented functionality within the SE? Yes, possibly. There might even be a “smoking gun backdoor,” but I personally find that highly unlikely.

Also, the SE hardware has very likely evolved in both design and capability since the iPhone 5S (which will be, in just a few weeks, 4 generations behind the newest iPhones). And we haven’t found keys for the firmware for these newer versions. So any bugs or surprising capabilities found in this firmware may simply not apply at all to current devices.

Conclusion

So, to wrap up what (predictably) has turned out to be not such a short post after all:

Bottom line: I think this is a good thing, in the long run. This should have very little practical effect on the security of individual iOS devices, unless a very significant flaw is uncovered. Even then, the potential scope of the finding may be limited to only older devices.