I’m lucky to live near a really good information security group, NoVA Hackers. We meet once a month, and usually have 6-10 speakers of all levels, speaking on just about anything they’d like.
I thought this might be an audience who’d be interested in learning how the recent iOS security changes actually worked, so I threw together a quick talk based mostly on my blog post from the week before.
It was well received, and I had a lot of really good questions during and after the talk. One question I didn’t have the answer for at the time was:
- What AES mode is used for file data protection?
A quick review of the iOS Security guide from Apple reveals:
- Per-file encryption: AES-256-CBC; the IV “is calculated with the block offset into the file, encrypted with the SHA-1 hash of the per-file key” (see the sketch after this list)
- Filesystem: not specified in Apple’s guide (the Sogeti slides say AES-256-CBC)
- Keychain items: AES-128-GCM
- Wrapped encryption keys: RFC 3394 (AES wrapping)
- Encrypted encryption keys: AES-256 (again, the Sogeti slides say AES-256-CBC)
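The per-file IV derivation is the most unusual item on that list, so here’s a minimal sketch of how I read Apple’s description, using Python’s `cryptography` package. The exact byte layout (how the block offset is packed, and how the 20-byte SHA-1 digest becomes an AES key) isn’t documented, so treat those details as my assumptions:

```python
import hashlib
import struct

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes


def per_block_iv(file_key: bytes, block_offset: int) -> bytes:
    # Assumption: the "SHA-1 hash of the per-file key" is truncated to
    # 16 bytes and used as an AES-128 key.
    iv_key = hashlib.sha1(file_key).digest()[:16]
    # Assumption: the block offset is packed little-endian and padded
    # out to one 16-byte AES block.
    offset_block = struct.pack("<Q", block_offset).ljust(16, b"\x00")
    enc = Cipher(algorithms.AES(iv_key), modes.ECB()).encryptor()
    return enc.update(offset_block) + enc.finalize()


def decrypt_file_block(file_key: bytes, block_offset: int, ct: bytes) -> bytes:
    # AES-256-CBC with the 32-byte per-file key and the per-block IV above
    iv = per_block_iv(file_key, block_offset)
    dec = Cipher(algorithms.AES(file_key), modes.CBC(iv)).decryptor()
    return dec.update(ct) + dec.finalize()
```

The RFC 3394 wrapping, by contrast, is completely standard; the same package exposes it directly as `aes_key_wrap` and `aes_key_unwrap` in `cryptography.hazmat.primitives.keywrap`.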
Another question concerned the new “5-second” delay for repeated failed passcode attempts. Quick testing during the talk showed that we weren’t seeing the delay (which I’d previously presumed would be “invisible” to the user, since entering a new passcode takes time anyway). Apple’s documentation states:
On a device with an A7 or later A-series processor, the key operations are performed by the Secure Enclave, which also enforces a 5-second delay between repeated failed unlocking requests. This provides a governor against brute-force attacks in addition to safeguards enforced by iOS.
It may be that the word “repeated” is key here: perhaps the delay doesn’t kick in until after a few failed attempts (perhaps not until after the iOS UI would have locked the user out anyway). It would be good for Apple to clarify this.
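Whatever the trigger turns out to be, the governor matters: even a flat 5-second delay dominates the cost of a brute-force attack. A quick back-of-envelope calculation:

```python
# What a flat 5-second governor buys against a simple 4-digit passcode
attempts = 10 ** 4                       # every possible 4-digit passcode
worst_case_hours = attempts * 5 / 3600   # one attempt every 5 seconds
print(f"{worst_case_hours:.1f} hours")   # ~13.9 hours to try them all
```

(That ignores the escalating lockouts the iOS UI imposes on its own, which in practice would kick in first.)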
Finally, there was some discussion about the UID “fused” into the System on a Chip (SoC), and how we know that Apple doesn’t retain a copy of it. Well, so far the only answer (which people hate me for giving) is “Because Apple said they don’t.” I think that, in general, Apple has been very above-board with regard to security and privacy, especially with the iOS Security documentation, and I’m (personally) inclined to take them at their word.
Nonetheless, there remains a possibility that a UID database is maintained somewhere. Perhaps an undocumented CPU instruction reveals a chip ID that can be used to look up the UID, for example. Producing that database as chips roll out of the factory seems logistically quite difficult, though.
But one intriguing possibility occurred to me after I’d left the meeting. The Apple docs say:
The device’s unique ID (UID) and a device group ID (GID) are AES 256-bit keys fused (UID) or compiled (GID) into the application processor during manufacturing.
What exactly does “fused” mean with regard to chips? For FPGAs and other field-programmable devices, I believe “fusing” refers to burning a configuration or data directly into the chip, as a one-time, irreversible action. What if “during manufacturing” doesn’t mean “as we actually fabricate the chip,” but instead “after it rolls off the line and goes through quality checking”?
The chip could be built with an all-blank UID area; the first time it powers up, it could generate a random number and fuse that into the UID. That way, neither Apple nor the chip manufacturer would ever have a way to determine the UID (short of destroying the chip to visually identify the fuses directly).
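To make the idea concrete, here’s a toy model of that self-provisioning flow. It’s entirely hypothetical (it just restates my speculation in code, not anything Apple has documented), with the write-once fuse bank modeled as a `bytearray`:

```python
# Toy model of the speculative first-boot UID provisioning above.
# Entirely hypothetical -- not Apple's documented design.
import secrets


def provision_uid(fuse_bank: bytearray) -> None:
    # Real fuses are write-once: bits can be burned but never cleared,
    # so a non-blank bank means the UID was already set on a prior boot.
    if any(fuse_bank):
        return
    # First power-up: burn a random 256-bit UID that no one, including
    # the manufacturer, ever saw or recorded.
    fuse_bank[:] = secrets.token_bytes(len(fuse_bank))


uid_fuses = bytearray(32)   # chip leaves the fab with an all-blank bank
provision_uid(uid_fuses)    # first boot: a random UID is fused in
provision_uid(uid_fuses)    # later boots: no-op, the UID is immutable
```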
Again, I’d love it if Apple could clarify this, since the security of the UID is the linchpin of the entire ecosystem’s security.
Thanks to the NoVA Hackers crowd for the great discussion afterwards, which led directly to new ideas and questions that are well worth exploring.
If you’d like to see the slides, you can download them here (or click on the title link for this blog entry).