The assertion recently made by Apple that “it’s not technically feasible” to decrypt phones for law enforcement has really stirred up several pots.

Many in law enforcement are upset that Apple is “unilaterally” removing a key tool in their investigations (whether that tool has ever been truly “key” is another debate). Some privacy experts hail it as a great step forward. Others say “it’s about time.” And still others debate whether it’s quite as absolute a change as Apple’s making it sound.

I wrote extensively about this earlier this week, trying to pull together technical details from Apple’s “iOS Security” whitepaper and some key conference presentations. What’s amusing, now that I look through my archives, is that I said a lot of the same things 18 months ago.

As I was finishing this weekend’s post, Matthew Green posted a very good explanation as well, a bit higher on the readability scale without losing too much of the technical detail. He later referred to my own post (thanks!) with an accurate note that we don’t know for certain whether the “5 second delay” between consecutive attempts can be overridden by Apple with new Secure Enclave firmware.

Also later on Monday, Julian Sanchez published a less technical, much more analytical piece that’s worth reading for some of the bigger-picture issues. His Cato Institute post is also a good read, helping to explain why backdoors in general are a bad idea, and how this may turn out to be a rerun of the 1990s Crypto Wars.

And just this morning, Joseph Bonneau posted a great practical analysis of the implications of self-chosen passcodes on the Freedom to Tinker blog. His post shows that even though, at a technical level, some strong passcodes could take years to break, in practical terms users simply don’t pick passcodes that are “random enough”. It even has a pretty graph.
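To put rough numbers behind that “years to break” point: Apple’s whitepaper describes the passcode key derivation as being calibrated to take roughly 80 milliseconds per attempt, so a quick back-of-the-envelope calculation (sketched below; the figures are my own illustration, not Bonneau’s) shows just how much the size of the keyspace dominates everything else:

```python
# Back-of-the-envelope brute-force times, assuming the roughly 80 ms per
# passcode-derivation attempt described in Apple's iOS Security whitepaper.
# These figures are my own illustration, not Bonneau's numbers.
SECONDS_PER_GUESS = 0.080

def worst_case(keyspace):
    seconds = keyspace * SECONDS_PER_GUESS
    if seconds < 3600:
        return "%.1f minutes" % (seconds / 60)
    if seconds < 86400:
        return "%.1f hours" % (seconds / 3600)
    return "%.1f years" % (seconds / (365.25 * 86400))

print(worst_case(10 ** 4))   # 4-digit numeric PIN: about 13 minutes
print(worst_case(36 ** 6))   # 6-char lowercase+digit passcode: about 5.5 years
```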

One final suggestion made in Mr. Bonneau’s post (and also voiced by many others, including myself, in posts or on Twitter) is that a hardware-level “wrong passcode count” seems like a great idea. I’d been concerned about how to integrate that count with the user interface, but his estimate that “A hard limit of 100 guesses would leave about 3% of users vulnerable” (based on the statistics he presents) suggests the hardware limit can sit well above anything the user interface would ever allow.

This almost throwaway comment made me wonder: if the user interface is (typically) configured to completely lock, or even wipe, a phone after 10 guesses, then why not let OS-level brute-force attempts (initiated through the mythical Apple-signed external boot image) continue only up to 20 attempts? At that point the hardware can simply refuse to attempt any further passcode key derivations, and not even worry about what to do with the phone (lock, wipe, or whatever). If the user has already hit 10 attempts through the UI, the hardware count will never be reached anyway.
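For what it’s worth, here’s a minimal sketch of the counter logic I’m picturing. Apple doesn’t publish the Secure Enclave’s internals, so every name and detail below is my own invention; it’s only meant to show how a hardware limit could sit quietly behind the existing UI limit:

```python
# Purely hypothetical sketch of the counter logic I'm picturing inside the
# secure element; the real Secure Enclave firmware isn't public, and every
# name here is my own.
HARDWARE_ATTEMPT_LIMIT = 20   # double the UI's 10-try lock/wipe threshold

class PasscodeGate:
    def __init__(self):
        self.failed_attempts = 0          # would live in tamper-resistant storage

    def attempt_unlock(self, passcode, derive_and_verify):
        # Once the hardware limit is hit, refuse to derive any more keys.
        # No need to decide about locking or wiping; we simply stop.
        if self.failed_attempts >= HARDWARE_ATTEMPT_LIMIT:
            return None

        # Count the attempt *before* trying it, so that cutting power
        # mid-attempt can't dodge the counter.
        self.failed_attempts += 1
        key = derive_and_verify(passcode)  # returns None if the passcode is wrong
        if key is not None:
            self.failed_attempts = 0       # reset only on a verified success
        return key
```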

The only hard part about this idea would be finding a secure way for the secure element to know that the passcode was properly entered. If we rely on the operating system to verify the passcode and then notify the secure element, that notification may be subject to spoofing by an attacker. This might look like an intractable problem, but I’m confident it isn’t, and that a workable (or even elegant) solution can be found.
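One possible way around the spoofing problem, sketched below, is to never let the operating system verify anything: the secure element derives the key itself, and a successful check of a stored verifier tag is the only proof that the passcode was right. Again, this is purely hypothetical; the primitives, salt, and iteration count are stand-ins, not anything Apple has documented:

```python
import hashlib
import hmac

# Hypothetical sketch of how the secure element could check the passcode on
# its own, so the OS never gets to say "trust me, it was right". The passcode
# is tangled with a device-unique secret and stretched into a key, and a
# verifier tag stored when the passcode was set is what proves correctness.
# The names, salt, and iteration count are all mine, purely for illustration.
class SecureElement:
    def __init__(self, device_uid, stored_verifier):
        self.device_uid = device_uid            # fused into hardware, never exported
        self.stored_verifier = stored_verifier  # HMAC tag written at passcode setup

    def derive_and_verify(self, passcode):
        # Slow, UID-tangled key derivation; the iteration count stands in for
        # the deliberately slow per-attempt work the hardware would do.
        derived = hashlib.pbkdf2_hmac(
            "sha256", passcode.encode(), self.device_uid, 100000)

        # Recompute the verifier tag and compare in constant time. A wrong
        # passcode simply fails here, with no notification from the OS needed.
        tag = hmac.new(derived, b"passcode-verifier", hashlib.sha256).digest()
        if hmac.compare_digest(tag, self.stored_verifier):
            return derived   # in reality this would unwrap the actual class keys
        return None
```

With something along these lines, the gate from the earlier sketch would simply call derive_and_verify itself, and the entire pass/fail decision (plus the counter it feeds) stays inside the hardware.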

If Apple could add that level of protection, then even a 4-digit numeric passcode could be “strong enough” (provided users stay away from the top 50 or so bad passcodes). And at that point, it would absolutely be “technically infeasible” for Apple to do anything with a locked phone, other than retrieve totally unencrypted data.