Darth Null’s Ramblings
 


Hello! I'm David Schuetz.
This is where I ramble about...stuff.




Lenovo, CA Certs, and Trust

It’s been a fun week for information security: @yawnbox - A Bad Week

Arguably one of the more interesting developments (aside from the SIM thing, which I’m not even going to touch) was the decision by Lenovo to pwn all of their customers with a TLS Man-In-The-Middle attack. The problem here was two-fold: that Lenovo was deliberately snooping on their customers’ traffic (even “benignly,” as I’m sure they’re claiming), and that the method used was trivial to put to malicious use.

Which has me thinking again about the nature of the Certificate Authority infrastructure. In this particular case, Lenovo laptops are explicitly trusting sites signed with a private key that’s now floating around in the wild, ready to be abused by just about anyone. But it’s more than just that — our browsers are already incredibly trusting.

On my Mac OS X Yosemite box, I count (well, the Keychain app counts, but whatever) 214 different trusted root certificate authorities. That means that any website signed by any of those 214 authorities…or anyone that those authorities have delegated as trustworthy…or anyone those delegates have trusted in turn…will be trusted by my system.
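
If you want to re-run that count from a terminal yourself, here’s a quick sketch using the macOS security command-line tool. It assumes your system roots live in the usual SystemRootCertificates keychain, which may vary by OS version.

    # Rough count of the trusted roots in the macOS system roots keychain.
    # Assumes the standard keychain location; adjust the path if yours differs.
    import subprocess

    KEYCHAIN = "/System/Library/Keychains/SystemRootCertificates.keychain"

    # "security find-certificate -a -p" dumps every certificate in the keychain as PEM.
    pem = subprocess.check_output(
        ["security", "find-certificate", "-a", "-p", KEYCHAIN],
        text=True,
    )
    print(pem.count("-----BEGIN CERTIFICATE-----"), "root certificates")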

That’s great, if you trust the CAs. But we’ve seen many times that we probably shouldn’t. And even if you do trust the root CAs on your system, there are other issues, like if a corporation or wifi provider prompts the user to install a custom MITM CA cert. (Or just MITMs without even bothering with a real cert).

I’ve been trying to bang the drum on certificate pinning for a while, and I still think that’s the best approach to security in the long run. But there’s just no easy way for end users to handle it at the browser level. Some kind of “Trust on First Use” model would seem to make sense, where the browser tracks the certificate (or certificates) seen when you first visit a site, and warns if they change. Of course, you have to be certain your connection wasn’t intercepted in the first place, but that’s another problem entirely.
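
For what it’s worth, the core of a Trust-on-First-Use check is tiny. Here’s a rough sketch of the idea (my own toy, not anything a real browser ships): remember the SHA-256 fingerprint of a site’s certificate the first time you connect, and complain if it ever changes.

    # Toy Trust-on-First-Use (TOFU) certificate check. A sketch of the idea only;
    # a real implementation would need to handle legitimate certificate rotation.
    import hashlib, json, os, socket, ssl

    PIN_FILE = os.path.expanduser("~/.cert_pins.json")

    def cert_fingerprint(host, port=443):
        # Fetch the server's certificate in DER form and hash it.
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        return hashlib.sha256(der).hexdigest()

    def check(host):
        pins = {}
        if os.path.exists(PIN_FILE):
            with open(PIN_FILE) as f:
                pins = json.load(f)
        fp = cert_fingerprint(host)
        if host not in pins:
            pins[host] = fp                     # first visit: trust and remember
            with open(PIN_FILE, "w") as f:
                json.dump(pins, f)
            return "pinned on first use"
        return "OK" if pins[host] == fp else "WARNING: certificate changed!"

    print(check("www.example.com"))

The obvious wart is the one mentioned above: a pinned leaf certificate will also scream when a site legitimately rotates its cert, which is a big part of why this is so hard to do well at the browser level.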

Some will inevitably argue that ubiquitous certificate pinning will break applications in a corporate environment, and yes, that’s true. If an organization feels they have the right to snoop on all their users’ TLS-secured traffic, then pinned certificates on mobile apps or browsers will be broken by those proxies. Oh, well. Either they’ll stop their snooping, or people will stop using those apps at work. (I’m hoping that the snooping goes away, but I’m probably being naïve).

When a bunch of CA-related hacks and breaches happened in 2011, we saw a flurry of work on “replacements,” or at least enhancements, of the current CA system. A good example is Convergence, a distributed notary system to endorse or disavow certificates. There’s also Certificate Transparency, which is more of an open audited log. I think I’ve even seen something akin to SPF proposed, where a specific pinned certificate fingerprint could be put into a site’s DNS record (DANE, with its TLSA records, does essentially this). (Of course, this re-opens the whole question of trusting DNS, but that’s yet another problem).

But as far as I know, none of these ideas have reached mainstream browsers yet. And they’re certainly not something that non-security-geeks are going to be able to set up and use.

So in the meantime, I thought back to my post from 2011, where I have a script that dumps out all the root CAs used by the TLS sites you’ve recently visited. Amazingly enough, the script still works for me, and interestingly, the results were about the same. In 2011, I found that all the sites I’d visited eventually traced back to 20 different root certificate authorities. Today, it’s 22 (and in both cases, some of those are internal CAs that don’t really “count”). It’s also worth noting that in that blog post, I reported having 175 roots on my OS X Lion system, so nearly 40 new roots have been added to my certificate store in just 3 years.

So of the 214 roots on my system, I could “safely” remove 192. Or probably somewhat fewer, since the history file I pulled from probably isn’t that comprehensive (and my script didn’t pull from Safari too). But still, it helps to demonstrate that a significantly large percentage (like on the order of 90%) of the trust my computer has in the rest of the Internet is unnecessary in my usual daily use.

Now, if I remove those 190-ish superfluous roots, what happens? I won’t be quite as vulnerable to malware or MITM attacks using certs signed by, say, an attacker using China’s CA. Or maybe the next time I visit Alibaba I’ll get a warning. But I’d bet that most of the time, I’ll be just fine. Of course, if I do hit a site that uses a CA I’ve removed, I’d like the option to put it back, which simply brings me back to the “Trust on First Use” certificate option I mentioned earlier. If we’re going to go that route, we might as well set it up to allow for site-level cert pinning, rather than re-adding the site’s cert provider’s CA, to “limit the damage” as it were. (Otherwise, over time, you’d just be back to trusting every CA on the planet again).

And of course, even if I wanted to do this, there’s no (easy) way to do it on my iOS devices. And the next time I get a system update, I’d bet the root store on my system would be restored to its original state anyway (well, original plus some annual delta of new root certs).

So here we are, nearly four years on from the Comodo and DigiNotar hacks (to say nothing of private companies selling wildcard signing certificates), and we still haven’t “reshaped browser security”.

What’s it going to take, already?







Bypassing the lockout delay on iOS devices

Apple released iOS 8.1.1 yesterday, and with it, a small flurry of bugs were patched (including, predictably, most (all?) of the bugs used in the Pangu jailbreak). One bug fix in particular caught my eye:

Lock Screen
Available for:  iPhone 4s and later, iPod touch (5th generation) and later, iPad 2 and later
Impact:  An attacker in possession of a device may exceed the maximum number of failed passcode attempts
Description:  In some circumstances, the failed passcode attempt limit was not enforced. This issue was addressed through additional enforcement of this limit.
CVE-ID
CVE-2014-4451 : Stuart Ryan of University of Technology, Sydney

We’ve seen lock screen “bypasses” before (that somehow kill some of the screen locking application and allow access to some data, even while the phone is locked). But this is the first time I’ve seen anything that could claim to bypass the passcode entry timeout or avoid incrementing the failed attempt count. What exactly was this doing? I reached out to the bug reporter on Twitter (@StuartCRyan), and he assured me that a video would come out shortly.

Well, the video was just released on YouTube, and it’s pretty interesting. Briefly: right after a failed passcode attempt, you force the phone to restart (the hold-power-and-home hard reset), and the new lockout value apparently never gets recorded before the phone goes down.

This doesn’t appear to reset the attempt count to zero, but it keeps you from waiting between attempts (which can be up to a 60-minute lockout). It also doesn’t appear to increment the failure count, which means that if you’re currently at a 15-minute delay, the device will never go beyond that, and never trigger an automatic memory wipe.

Combining this with something like iSEC Partners’ R2B2 Button Basher could easily yield a rig that just carefully hammers away at PINs 24x7 until a hit is found (though it’d be SLOW, at 1-2 minutes per attempt…).
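
Back-of-the-envelope, assuming the robot has to ride through a full reboot for every guess (my numbers, not measurements):

    # Rough math on a reboot-per-guess PIN brute force. The timing is a guess.
    seconds_per_attempt = 90                 # call it a minute and a half per cycle
    total_pins = 10_000                      # every possible 4-digit PIN

    worst_days = total_pins * seconds_per_attempt / 86_400
    print(f"worst case: {worst_days:.1f} days, average: {worst_days / 2:.1f} days")
    # worst case: 10.4 days, average: 5.2 days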

Why this even works, I’m not sure. I had presumed that a flag is set somewhere, indicating how long a timeout is required before the next unlock attempt is permitted, which even persists through reboots (under normal conditions). One would think that this flag would be set immediately after the last failed attempt, but apparently there’s enough of a delay that, working at human timescales, you can reboot the phone and prevent the timeout from being written.

Presumably, the timeout and incorrect attempt count are now being updated as close to the passcode rejection as possible, blocking this demonstrated bug.

I may try some other devices in the house later, to see how far back I can repeat the bug. So far, I’ve personally verified it on an iPhone 5S running 8.1.0, and an iPad 2 on 7.0.3. Update: I was not able to make this work on an iPod Touch 4th generation, with iOS 6.1.6, but it’s possible this was just an issue with hitting the buttons just right (many times it seemed to take a screenshot rather than starting up the reboot). On the other hand, the same iOS version (6.1.6) did work on an iPhone 3GS, though again, it took a few tries to make it work.







Why I hate voting.

I just voted, even though pundits and statisticians have proven fairly definitively that my particular vote won’t matter. My district has had a Republican congressman for 30 years and his hand-picked heir is likely to win, and I don’t live in one of the 6 states all the news organizations tell me will decide control of the Senate. I voted because it’s the right thing to do, and because if I don’t vote, I lose the moral right to complain about the idiots in power (and anyone who knows me knows I love to complain.)

But why I hate voting isn’t the issues, or the parties, or the polarized electorate, or the aforementioned futility of my particular involvement. It’s the process. The process makes my blood boil.

For months, we are subjected to constant attack ads, literally he-said-she-said finger pointing about which candidate is the bigger idiot for siding with whichever other idiots are in power.

For weeks, the candidates clutter the countryside with illegally placed campaign signs that aren’t just an eyesore, but can seriously impede traffic safety simply by blocking drivers’ view of oncoming traffic. (Though to be fair, this has gotten much better in Fairfax County over the last few years…I don’t know how they got the candidates to stop, but I’m glad they did it).

I work at home, in my basement. When the doorbell rings, I answer it. Which means I have to interrupt my work, walk upstairs, and attend to whoever is at the door. And then get annoyed when it’s just someone stumping for a politician I don’t care about (or even one I do like). And then they get annoyed when I’m annoyed at them — as if they weren’t the ones being rude by disturbing me in the first place.

Go Away Humans

Then, finally, election day. That’s the worst.

Rather than experiencing relief that it’s all about to be over, my annoyance level spikes to new highs. First, I drop the kids off at their school (for school-provided daycare while the school is closed for election day). There’s no way to get through the front door without running a gauntlet of partisan party representatives handing you their “Sample Ballots” (which conveniently exclude all other parties — not actually a sample at all, but I suppose we’re used to the lies). Sure, there’s a “50 foot exclusion zone” around the entrance, but it’s not possible to park within that zone. So all they have to do is hover around the perimeters and they get you.

But at this point I’m not even there to vote — I’m just there to drop off my kids. (In fact, two Republican candidates even had people camped out in front of the school on Back to School night this year, so even then we weren’t able to escape their harassment). Why the school system doesn’t kick these people off their property is beyond me. (And don’t tell me it’s because of First Amendment rights — politicians can still express their views…they just shouldn’t be allowed to interrupt voters on their way to the polls).

It’s even worse today, because I’ll have to sneak past the same people for parent/teacher conferences this afternoon.

Then when I actually do go to vote, I have to navigate a different set of politicians’ antagonists (because my polling place is in a different school). And I have to present an ID to vote, because there’s an astronomically small chance that someone could be trying to vote illegally (which Never Ever Happens. Seriously.) And after I present my ID, the poll workers ask me to tell them my address — as if it weren’t already printed on my ID. Somehow, going to vote where the poll workers can’t even read the address on my ID doesn’t fill me with confidence.

(No, I know it’s because they want to be sure that I really know my address and am not simply taking someone else’s identity. It’s still bullshit. Next year, I’m reading the address from my ID before I even hand it to them. See what happens then.)

So by the time I’m done, I’ve been harassed by politicians on the radio, on the TV, in my mail, at my front door, on the way to drop off the kids, on my way to conferences with my kids’ teachers, on the way to actually vote, and then while voting, I’m told pretty clearly that the state doesn’t think I’m actually me and am trying to fraudulently cast a ballot. All this after being told again and again by, well, Science, that my vote really doesn’t matter.

It’s amazing that anyone votes at all.




What’s the deal with keyless entry car thefts?

In June of 2013, a few videos started circulating showing people unlocking cars without authorization. Basically, people walking directly up to a car and just opening it, or walking by cars on the street. One of the more interesting videos (watch at about 30 seconds in) showed a thief walking along the street, grabbing a handle in passing, and stopping short when the car unlocked. (interestingly, all the videos I found this morning showed attackers reaching for the passenger side door, which may just be a coincidence…)

Predictably, this was picked up by news organizations all over the world, who talked about the “big problem” this is in the US. Then I didn’t hear much again for a while.

It’s not even a particularly new thing. This story about BMW thefts in 2012 mentions key fob reprogramming, and also work presented by Don Bailey at Black Hat 2011 (in which he discussed starting cars using a text message).

But just recently, it’s been making the news again, with some insurers even reportedly refusing insurance for some vehicles.

None of these reports really sheds any light on what’s actually happening, though I suspect there are a couple of different problems at play. The more recent articles included some clues:

In a statement, Jaguar Land Rover said vehicle theft through the re-programming of remote-entry keys was an on-going problem which affected the whole industry.

[...]

“The challenge remains that the equipment being used to steal a vehicle in this way is legitimately used by workshops to carry out routine maintenance … We need better safeguards within the regulatory framework to make sure this equipment does not fall into unlawful hands and, if it does, that the law provides severe penalties to act as an effective deterrent.”

This sounds a lot like the current spate of articles is referring to key fob reprogramming via the OBD-II port. Basically, if you get physical access to the car, you can connect something to the diagnostic port and program a new key to work with the car. Bingo, instant key, stolen car.

Then they seem to say that “this attack can be easily mitigated by simply ensuring that thieves don’t get the tightly controlled equipment to reprogram the car.” Heh. Right.

This attack relies on a manufacturer-installed backdoor designed for trusted third parties to do authorized work on the vehicle, and instead is being exploited by thieves. Sound familiar?

I’m actually surprised it’s this simple. I haven’t given it a lot of thought, but I’d bet there are ways this could be improved. Maybe a unique code given to the purchaser of the vehicle that they would keep at home (NOT in the glovebox!) and can be used to program new keys. If they lose that, some kind of trusted process between a dealer and the automaker could retrieve the code from some central store. Of course, that opens up social engineering attacks (a bit harder) and also attacks against the database itself (which only need to succeed once).

Again, this seems like a good real-world example of why backdoors are hard (perhaps nearly impossible) to do safely.

But what about the videos from last year? Those thieves certainly weren’t breaking a window and reprogramming keys…they just touched the car and it opened. For those attacks, something much more insidious seems to be happening, and frankly, I’m amazed that we haven’t figured it out yet.

The thieves might be hitting a button on some device in their pockets (or it’s just automatically spitting out codes in a constant stream) and occasionally they get one right. That seems possible, but improbable. The kinds of rolling codes some remotes use aren’t perfect (especially if the master seed is compromised) but I don’t think they can work that quickly, and certainly not that reliably. (But I could certainly be wrong — it’s been a while since I looked into this).

Also, in these videos, the car didn’t respond until the thief actually touched the door handle. In a couple of cases, they held the handle and then appeared to pause while they (perhaps) activated something in their other hand. I’ve wondered whether this isn’t exploiting some of the newer “passive” keyless entry systems, where the fob stays in your pocket and is only woken up when the car (triggered by a hand on the handle) pings it remotely.

It’s possible there’s a backdoor or some unintended vulnerability in this key fob exchange, and that’s what’s being exploited. Or even just a hardware-level glitch, like a “white noise attack” that simply overwhelms the receiver (as suggested to me this morning by @munin). I’ve also wondered how feasible a “proxy” attack against a not-quite-nearby fob might be. For example, if the attacker touches the door handle, and the car asks “are you there, trusted fob?”, the fob, currently sitting on the kitchen counter, isn’t within range of the car and so won’t respond. But if the attacker has a stronger radio in their backpack, could they intercept the signal and replay it at a much stronger level, then use a sensitive receiver to collect the response from inside the house and relay it back to the car?

This seems kind of far fetched, and there are probably a great many reasons (not least, Physics) why this might not work. Then again, we’ve demonstrated “near proximity” RFID over fairly large distances, too. And many people probably hang their keys next to the door to the garage, pretty close (within tens of feet) to the car.

It would also be reasonably easy to demonstrate. Too bad we had to sell our Prius to buy a minivan.

The bottom line is this: We’ve seen pretty solid evidence of thefts and break-ins against cars using keyless entry technology. The press love these stories as they drum up eyeballs every 6 months or so. But the public at large really doesn’t get any useful information other than “keyless is bad, mmkay?”

It’d be nice if we could figure out what’s going on and actually fix things.







iPhone SMS forwarding — cool, but may be risky

The recent release of iOS 8 brought with it several cool new features, especially some which more tightly integrate the iOS world with the OS X desktop world. Some of these are limited by physical proximity (like handing off email drafts among devices), while others require being on the same local subnet (forwarding phone calls to the desktop).

However, one feature apparently Just Works all the time, and that’s SMS message forwarding. If you have an iPhone, running iOS 8, then you can send and receive normal text messages (to your “Green bubble friends”) from your iPad or Yosemite desktop. Even if the phone is the next town over.

This is actually pretty cool — I use text messaging a lot, and while most of the people I communicate with use iPhones, a fair number (especially customers) don’t. If I need to send them something securely, like a password to a document I just emailed them, I have to manually type the password into my iPhone and hope I don’t mess it up. With SMS messages bridged between the systems, now I can just copy out of my password safe and paste right into iMessage.

However, this does raise one possible security issue. Many services which offer Two-Factor Authentication (2FA, or as many prefer to call this particular brand of 2FA, “two-step authentication”) send the 2FA confirmation codes over SMS. The theory being that only the authorized user will have access to that user’s cell phone, and so the SMS will only be seen by the intended person.

But if your SMS messages are also copied to your iPad (which you left on your desk at work) or your laptop or desktop (which, likewise, may be left in the office, out of your control) then password reset messages sent over SMS will appear on those devices too.

Which means that your [fr]enemies at work may be able to easily gain control over some of your accounts, simply by requesting a password reset while you’re at lunch. And, since you’re really enjoying your three-bourbon lunch, you don’t even notice the messages appearing on your phone until it’s too late (at which point you’re alerted, not by the Twitter account reset, but by dozens of replies to the “I’m an idiot!” tweet your co-workers posted on your behalf.)

Fortunately, there’s an easy way to correct this.

In OS X Yosemite, go into the System Preferences application and select “Notifications.” Then go down to “Messages,” and where it says “Show message preview” make sure the pop-up is “when unlocked,” not “always.” If this is set to “when unlocked,” then the contents of SMS messages won’t be displayed when the desktop is locked, only a “you got a message” sort of notification. You might also consider disabling the “Show notifications on lock screen” button just above it, which will even disable the notification of the notification.

Yosemite SMS Notification Settings

In iOS, a similar setting can be found in Settings, also under Notifications:

iOS SMS Notification Settings

However, the control here isn’t quite as fine-grained — you can either show notifications on the lock screen, or not, and if they’re shown at all, then the contents will be displayed as well.

You might consider even preventing SMS notifications from displaying on your primary phone when locked, but if it’s almost never out of your control, then perhaps that’s not a big risk to worry about.

Note that both of these settings apply to iMessages as well as SMS messages.

If you never use SMS messages for account validation (whether you call them 2FA or 2SV or just “validation messages”), then you might not need to worry about this at all. Though it’s probably a good idea to at least consider disabling these notifications anyway…










Even more posts about iOS encryption

The assertion recently made by Apple that “it’s not technically feasible” to decrypt phones for law enforcement has really stirred up several pots.

Many in law enforcement are upset that Apple is “unilaterally” removing a key tool in their investigations (whether that tool has ever been truly “key” is another debate). Some privacy experts hail it as a great step forward. Others say “it’s about time.” And still others debate whether it’s quite as absolute a change as Apple’s making it sound.

I wrote extensively about this earlier this week, trying to pull together technical details from Apple’s “iOS Security” whitepaper and some key conference presentations. What’s amusing, now that I look through my archives, is that I said a lot of the same things 18 months ago.

As I was finishing this weekend’s post, Matthew Green posted a very good explanation as well, a bit higher on the readability scale without losing too many of the technical details. He later referred to my own post (thanks!) with an accurate note that we don’t know for certain whether the “5 second delay” between consecutive attempts can be overridden by Apple with a new Secure Enclave firmware.

Also later on Monday, Julian Sanchez published a less technical, much more analytic piece that’s worth reading for some of the bigger picture issues. His Cato Institute post is also a good read, to help understand why backdoors in general are a bad idea, and how this may turn out to be a rerun of the 1990’s Crypto Wars.

And just this morning, Joseph Bonneau posted a great practical analysis of the implications of self-chosen passcodes on the Freedom to Tinker blog. This latest story shows how even though, at a technical level, some strong passcodes may take years to break, in practical terms users don’t pick passcodes that are “random enough”. It even has a pretty graph.

One final suggestion made in Mr. Bonneau’s post (and also voiced by many others in posts or on twitter, including myself) is that a hardware-level “wrong passcode count” seems like a great idea. I’d been concerned about how to integrate that count with the user interface, but then he estimates that “A hard limit of 100 guesses would leave about 3% of users vulnerable” (based on the statistics he presents).

This almost throwaway comment made me wonder — if the user interface is (typically) configured to completely lock, or even wipe, a phone after 10 guesses, then why not let OS-level brute force attempts (initiated through the mythical Apple-signed external boot image) continue until 20 attempts? Then the hardware can simply refuse to attempt any further passcode key derivations, and not even worry about what to do with the phone (lock, wipe, or whatever). If the user has already hit 10 attempts through the UI, this count will never be reached in hardware anyway.
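
To make that concrete, here’s roughly the logic I’m picturing on the secure hardware side. This is purely hypothetical (and in Python, no less), and note that it quietly assumes the secure element can check a derived key against the keybag on its own, which is exactly the hard part the next paragraph gets into.

    # Hypothetical hard guess limit enforced by the secure hardware itself.
    # A sketch of the idea only; nothing here reflects Apple's actual firmware.
    import hashlib, hmac, os

    HARD_LIMIT = 20            # the UI would normally lock or wipe at 10, well before this

    class ToySecureElement:
        def __init__(self, passcode):
            self.uid = os.urandom(32)               # stand-in for the fused device UID
            self.failed = 0                         # counter persisted inside the hardware
            self.check = self._derive(passcode)     # stand-in for "does this key unwrap the keybag?"

        def _derive(self, passcode):
            # Stand-in for the slow, UID-entangled derivation (~80 ms on real hardware).
            return hashlib.pbkdf2_hmac("sha256", passcode.encode(), self.uid, 100_000)

        def try_passcode(self, passcode):
            if self.failed >= HARD_LIMIT:
                raise RuntimeError("hard limit reached; refusing any more key derivations")
            key = self._derive(passcode)
            if hmac.compare_digest(key, self.check):
                self.failed = 0                     # correct passcode: reset and return the key
                return key
            self.failed += 1                        # wrong passcode: burn one of the 20 tries
            return None

    se = ToySecureElement("1234")
    print(se.try_passcode("0000"))                  # None; one attempt burned
    print(se.try_passcode("1234") is not None)      # True; counter reset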

The only hard part about this idea would be finding a secure way for the secure element to know that the passcode was properly entered. If we rely on the operating system to actually verify the passcode, and then notify the secure element, then that notification may be subject to spoofing by an attacker. This may look like an intractable problem, but I’m fairly confident it isn’t, and that a workable (or even elegant) solution could be found.

If Apple could add that level of protection, then even a 4-digit numeric passcode could be “strong enough” (provided they stay away from the top-50 or so bad passcodes). And at that point, it would absolutely be “technically infeasible” for Apple to do anything with a locked phone, other than retrieve totally unencrypted data.







A (not so) quick primer on iOS encryption

A few weeks ago, Apple published a message about Apple’s commitment to your privacy. In the section on Government Information Requests, Apple made the following somewhat startling statement:

On devices running iOS 8, your personal data such as photos, messages (including attachments), email, contacts, call history, iTunes content, notes, and reminders is placed under the protection of your passcode. Unlike our competitors, Apple cannot bypass your passcode and therefore cannot access this data. So it's not technically feasible for us to respond to government warrants for the extraction of this data from devices in their possession running iOS 8.

What exactly does this mean? And what was Apple doing before to support law enforcement? Well, to really understand that, we have to go kind of deep into how iOS encryption works. It’s complicated, and I’m not always the best at explaining things, but I’ll try my best.

To really understand it all, I highly recommend Apple’s iOS Security whitepaper. First released in May of 2012, with updates in October 2012, February 2014, and September 2014, the newest version (dated October 2014) includes changes to iOS 8. Also a great reference is “iPhone data protection in depth” by Jean-Baptiste Bédrune and Jean Sigwald of Sogeti, presented at HITB Amsterdam in 2011. Keep these open in other windows as I struggle to explain things. To truly understand what’s going on, those two references are your best bet.

In fact, I’ll probably find it convenient to refer back to these from time to time. I’ll call the iOS paper “ISG” and the HITB presentation “Sogeti”. I’m sure I’m totally butchering the proper way to cite sources, but I’ve been out of school for over 20 years, so give me a break, okay?

Another good reference is this talk from Black Hat Abu Dhabi 2011, by Andrey Belenko and Dmitry Sklyarov. The diagram on page 46 may be particularly helpful.

Too Long, Didn’t Read

Jump to the bottom.

Or go read Matthew Green’s much simpler explanation, which was posted after I’d finished writing my first draft of this post…

Where to begin?

Let me start by saying that I’m going to gloss over a lot of stuff here. This isn’t a formal presentation, it’s not a whitepaper, it’s not even meant to be a serious reference. This is in response to frustration trying to discuss this on twitter in 140-character bites.

So this is my “Tweet Longer” response. Think of it as the conversation (well, more like endless monologue you’re too polite to extricate yourself from) that I’d have with you if I ran into you at a con and you asked me how all this works.

Full Disk Encryption

Data on iPhones is encrypted.

Okay, glad that’s cleared up.

Well, it wasn’t at first. But starting with iOS 3.0 and the iPhone 3GS, the full filesystem was encrypted. The key for this encryption is not user-selectable, but depends on a UID which is “burned” into the phone’s chips at the factory (Sogeti, pp 4 and 5; also ISG, p 9). The UID is a 256-bit key “fused into the application processor during manufacturing.” Apple further states that “no software or firmware can read them directly” but can only see the results of encryption using those keys.

The UID key is used to create a key called “key0x89b.” Key0x89b is used in encrypting the device’s flash disk. Because this key is unique to the device, and cannot be extracted from the device, it is impossible to remove the flash memory from one iPhone and transfer it to another, or to read it offline. (And when I say “Impossible,” what I really mean is “Really damned hard because you’d have to brute force a 256-bit AES key.”)

The exact mechanisms used to encrypt the storage are pretty complicated (see Sogeti, pp 31-39). The over-simplified answer: the contents of the flash are encrypted with keys that are themselves protected by key0x89b, so everything ends up tied to this one device.

Of course, all of this is fully automatic. If you can get access to the file system, you can read the data — the decryption “just works.” The primary protections this adds are that the flash storage can’t be pulled out and read on (or transplanted into) another device, and that the entire disk can be wiped almost instantly, just by erasing the keys.

But the data itself isn’t terribly well protected. If the device is unlocked, or if you can boot off an external drive and read the filesystem, then you can read everything.
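
If the “tied to this one device” part seems abstract, here’s a toy model of it. AESGCM from the third-party cryptography package stands in for the hardware AES engine, and none of the names or structure come from Apple: data encrypted under a key only this device holds can’t be read anywhere else, and a “wipe” just means erasing that key.

    # Toy model of device-bound full-disk encryption and fast wipe.
    # AESGCM (pip install cryptography) stands in for the hardware AES engine.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    class ToyDevice:
        def __init__(self):
            self.uid_key = AESGCM.generate_key(bit_length=256)   # "fused" at the factory
            self.flash = {}                              # pretend NAND: name -> (nonce, ciphertext)

        def write(self, name, data):
            nonce = os.urandom(12)
            self.flash[name] = (nonce, AESGCM(self.uid_key).encrypt(nonce, data, None))

        def read(self, name):
            nonce, blob = self.flash[name]
            return AESGCM(self.uid_key).decrypt(nonce, blob, None)

        def fast_wipe(self):
            self.uid_key = None     # lose the key and every block on the "disk" is garbage

    phone_a, phone_b = ToyDevice(), ToyDevice()
    phone_a.write("notes.db", b"secret stuff")
    print(phone_a.read("notes.db"))                 # readable on the original device

    phone_b.flash = phone_a.flash                   # "move the flash chips" to another phone
    try:
        phone_b.read("notes.db")
    except Exception as e:
        print("other device can't read it:", type(e).__name__)   # InvalidTag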

Data Protection API

In iOS 4, Apple introduced the Data Protection API (DPAPI). Under DPAPI, several classes of protection were introduced for both files and keychain entries. I’ll focus on just files, but the concepts map relatively cleanly to keychain data as well.

Each file is individually encrypted with a “Class Key.” The class key is simply another random key, but one which is applied to any and all files sharing the same DPAPI level. (Strictly speaking, each file gets its own random per-file key, and it’s that per-file key which is wrapped with the class key, but the effect is the same.) For example, all files marked as “FileProtectionComplete” use class 1 (Sogeti, p 15). There are currently four file protection classes, and four (nearly) analogous keychain protection classes (ISG, pp 10-13). The three classes most important to this discussion are:

  - “Complete”: the class key is only available while the device is unlocked.
  - “Complete until first authentication”: the class key becomes available the first time the user unlocks the device after a boot, and stays available (even while locked) until the next reboot.
  - “None”: the class key is always available, so the data is readable any time the filesystem itself can be read.

The keys for all these classes are stored in a “keybag.” There are several keybags — one on the device (the “system keybag”), another stored on trusted computers (for syncing and backing up devices), and another stored on MDM servers (to remotely unlock a device, in the event you’ve forgotten your passcode) (ISG, pp 14-15).

When a file is encrypted under (for example) “Complete” protection, the system extracts the appropriate class key from the keybag, and encrypts the file using that key. To decrypt the file, the key is again read from the keybag, and the file decrypted.

When you set (or change) a passcode, a key is derived using the system UID, and this key is then used to encrypt individual class keys within the keybag. The key derivation process is complicated, but essentially expands the passcode and a salt, using multiple rounds designed to take about 80 milliseconds, no matter the device. On newer devices (A7 or later processors, which is to say, iPhone 5S, 6, and 6+, the iPad Air, and the Retina iPad Mini) this is augmented with a 5-second delay between failed requests. This delay is added at the hardware level, while the escalating delays seen by the user at the lock screen are all part of the operating system. (ISG, p 11).
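
The shape of that derivation is easy to sketch, with PBKDF2 standing in for Apple’s actual (undocumented) construction and a random value standing in for the UID: pick an iteration count that costs about 80 ms on this particular hardware, then mix the device secret into the result.

    # Toy passcode key derivation: calibrate the work factor to ~80 ms per guess,
    # then entangle the result with a device-unique secret. PBKDF2 is only a stand-in.
    import hashlib, os, time

    DEVICE_UID = os.urandom(32)          # stand-in for the UID fused into the hardware

    def calibrate(target_seconds=0.080):
        # Find an iteration count that makes one derivation take about 80 ms here.
        iterations = 10_000
        while True:
            start = time.perf_counter()
            hashlib.pbkdf2_hmac("sha256", b"0000", b"salt", iterations)
            if time.perf_counter() - start >= target_seconds:
                return iterations
            iterations *= 2

    def derive_passcode_key(passcode, salt, iterations):
        stretched = hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, iterations)
        # Entangled with the UID: without this exact device, the key can't be recomputed.
        return hashlib.pbkdf2_hmac("sha256", stretched, DEVICE_UID, 1)

    iters = calibrate()
    key = derive_passcode_key("1234", os.urandom(16), iters)
    print(iters, "iterations,", key.hex()[:16], "...")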

Because the passcode key is “entangled” with the UID, it’s not possible to simply extract the encrypted keybag and brute force the passcode on a fast password cracking machine. The key must be decrypted on the device itself, which requires either a jailbroken device or a trusted external boot image (more on those later).

When you lock the device, the decrypted “complete protection” class key is wiped from memory. So the device can no longer read any file encrypted with that protection level, until it’s been unlocked again.
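
Here’s the keybag idea in miniature, again using AESGCM from the cryptography package (my own toy structure, not Apple’s format): class keys are wrapped with the passcode-derived key, unwrapped at unlock, and the “complete” key is simply forgotten again when the device locks.

    # Toy keybag: class keys wrapped under the passcode-derived key. Unlock unwraps them;
    # lock forgets the "complete" key. Not Apple's format, just the concept.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    class ToyKeybag:
        def __init__(self, passcode_key):
            self.wrapped = {}                       # class name -> (nonce, wrapped class key)
            self.in_memory = {}                     # class keys currently usable
            wrapper = AESGCM(passcode_key)
            for cls in ("complete", "after-first-unlock"):
                class_key = AESGCM.generate_key(bit_length=256)
                nonce = os.urandom(12)
                self.wrapped[cls] = (nonce, wrapper.encrypt(nonce, class_key, None))

        def unlock(self, passcode_key):             # user enters the passcode
            wrapper = AESGCM(passcode_key)
            for cls, (nonce, blob) in self.wrapped.items():
                self.in_memory[cls] = wrapper.decrypt(nonce, blob, None)

        def lock(self):                             # screen locks: "complete" is forgotten;
            self.in_memory.pop("complete", None)    # "after-first-unlock" survives until reboot

    passcode_key = AESGCM.generate_key(bit_length=256)    # pretend this came from the KDF above
    bag = ToyKeybag(passcode_key)
    bag.unlock(passcode_key)
    bag.lock()
    print(sorted(bag.in_memory))                    # ['after-first-unlock']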

Default Data Protection

When Apple debuted the DPAPI, it was entirely an optional feature, and virtually no applications took advantage of it. For some time, the only Apple application which used any data protection was the Mail app. Under iOS 7, Apple changed the default to “Complete until first authentication”, and any new applications should use this protection level automatically.

Unfortunately, though 3rd party apps would inherit somewhat better protections, it wasn’t the best possible mode. Perhaps Apple left that as an option for developers to avoid making background use too difficult, or perhaps there were other reasons. And Apple opted to exclude most of their own applications from this new default.

Support to Law Enforcement

So what exactly was Apple doing to support police who need access to data on a seized iPhone or iPad? This has never been terribly clear. Every few months, a story seemed to hit the press about Apple unlocking phones for the police, but details were scarce, and speculation rampant.

As far as I have been able to guess, Apple had three avenues for extracting data from a seized iPhone:

  1. Forensic tools (commercial or open source) that pull whatever they can from the device.
  2. Booting the device from a trusted, Apple-signed external image and reading the filesystem directly.
  3. Brute forcing the passcode on the device itself.

Forensics

As for the first item, I don’t work in forensics and so can’t really speak to what these tools can do. But several open-source tools exist (in particular, the iphone-dataprotection kit by our friends at Sogeti) which can illustrate some of what’s possible.

However, much of the data extracted by these tools may be limited when the device is locked, and no forensic tool can directly bypass encryption provided on a locked device by DPAPI. (this changes if the forensic examiner has access to a desktop used to sync the device, but that’s a whole different blog post.)

Booting a Trusted Image

The second is a bit more complicated. Essentially, you’re booting the device using an external drive as the operating system. But since you’re still “on” the device, the locally-stored keys and UID are still available, and so the entire filesystem can be mounted and read. To prevent just anyone from doing this, iOS devices require the external image to be signed by Apple (so we can’t simply create our own drive and boot off that).

Fortunately (or unfortunately, depending on your point of view) there was a bug in the bootrom on several early iOS devices that allowed an attacker to bypass this signature requirement. So up until (and including) iPhone 4 and iPad 1, it was possible for anyone to perform this attack and extract any non-encrypted data (DPAPI protection level “None”) from the phone. Because the phone has to be rebooted in order to perform this attack, however, in-memory keys for the “complete” and “complete until first authentication” are lost, and so any data protected in those modes cannot be read, even using this approach.

Even Apple, booting from a trusted external image, can’t unlock those protected files. The class keys needed for decryption are stored in the system keybag, which is encrypted using the user’s passcode.

Brute Forcing a Passcode

Well, what about brute forcing a passcode? As I said above, older devices could be booted from an external drive, allowing full access to unencrypted files on the device filesystem. Also available are the library routines needed to decrypt the system keybag. And these routines (as far as we know) don’t have any rate limiting, escalating delays, or lockouts, so a program with access to the filesystem and this API can brute force as long as it needs to.

But the key derivation still needs to happen on the device itself, and each device has its work factor tailored so this takes about 80 ms per guess. So though a weak four-digit number could be cracked in 20 minutes or less, a strong alphanumeric passcode could still take months, years, or centuries to break.
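
The arithmetic, for the curious (roughly 80 ms per on-device guess, worst case):

    # Worst-case on-device brute force times at ~80 ms per guess.
    MS_PER_GUESS = 80

    def worst_case_seconds(keyspace):
        return keyspace * MS_PER_GUESS / 1000

    print(f"4-digit PIN        : {worst_case_seconds(10**4) / 60:.1f} minutes")   # ~13 minutes
    print(f"6-digit PIN        : {worst_case_seconds(10**6) / 3600:.1f} hours")   # ~22 hours
    print(f"8 chars, a-z + 0-9 : {worst_case_seconds(36**8) / 86400 / 365:,.0f} years")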

And once Apple patched the bootrom hole (beginning with the iPhone 4S and iPad 2), this became impossible for anyone outside of Apple to do anyway.

However, because the possibility remained that Apple could crack the passcode, most iOS security experts still recommend that users choose a strong passcode. Just in case.

There’s no evidence that Apple ever actually offered this as a service to law enforcement. I could see where they might, under the right circumstances, but I can also understand where they might be reluctant to ever offer such a service, for fear of it being abused (or just over-requested).

But if the passcode can be brute forced, all the data protection is rendered useless.

What changed with iOS 8?

Two things changed with iOS 8:

The first change only affects anyone trying to brute force a passcode directly on a device, which means this only affects Apple (unless law enforcement, forensics teams, or wily hackers have access to a signed image).

The second change is somewhat more significant, under certain circumstances. A device which has been unlocked once already (since the last reboot) will behave exactly the same as iOS 7. That is, anything which is under the “Complete until first authentication” protection level will be (essentially) unencrypted, since the keys will remain in memory after the first time the user unlocks the device.

Much of the built-in application data was moved under the stricter controls: photos, messages, contacts, call history, etc. — all items which were described in the privacy message quoted way back when I started this. This data, once the phone’s been unlocked once, may still be available using 3rd party forensic tools. However, and here’s what I think is probably key: Once the phone is rebooted, that class key is lost, and the data is unreadable until the user enters their passcode again.

So even Apple, with a trusted external boot image, can’t access the data unless they crack the user’s passcode.

I think this is what Apple referred to when they said that it is “not technically feasible” to respond to warrants for this data.

Could Apple still attempt to brute force the passcode? Yes, possibly, assuming they ever actually built that capability in the first place.

It’s also possible that Apple could (or already has) added brute-force protections to newer iOS hardware, which would prevent even Apple from breaking a user’s passcode. They’ve already added the 5-second delay (for A7-based devices), but whether the hardware enforces escalating delays for consecutive bad attempts has not been disclosed. I suspect that if it were the case, we’d’ve heard by now, but I’ve not personally tried brute forcing passcodes on anything newer than an iPad 1. Maybe this was added in iPhone 6 — but we’ll probably have to wait until they’re jailbroken to be sure.

A Quick Demo

If you’d like to see the difference between iOS 7 and iOS 8 for yourself, here’s a simple test.

  1. Get an iPhone running iOS 7, and another running iOS 8.
  2. Get a landline (or a 3rd cell phone) and make sure each iPhone has that number in its Contacts database (add a name, picture, etc.)
  3. Reboot both phones. Do not unlock either phone.
  4. Call the iOS 7 phone from the 3rd phone. You should see not only the phone’s number, but also the name and picture you put into the Contacts database.
  5. Call the iOS 8 phone. You should see the phone number, and nothing else (since most cellular providers don’t provide name over caller ID).
  6. Unlock the iOS 8 phone, then lock it again.
  7. Call the iOS 8 phone again, and this time, you should see the Contacts entry appear on the locked screen.

This shows how the Contacts database is locked with “complete until first authentication.” After rebooting, the phone simply does not have access to the Contacts database, because the class key is still safely encrypted in the system keybag. Once you unlock it, however, the key is extracted, decrypted, and retained in memory, so the next time you call, the phone can read Contacts and display the information.

(This is also why you can’t connect to your home Wi-Fi after rebooting the phone, but after you’ve unlocked it once, it remains connected even when locked. The keychain entries for Wi-Fi are stored with “complete until first authentication.”) (ISG, p13).

So what about law enforcement?

Well, if the assumptions I am making here are correct (and it’s not just me — I believe many in the iOS security community have come to the same conclusion), then Apple simply cannot provide much beyond very basic forensic-level data on phones, especially once they’ve been powered off.

But to my mind, any such service from Apple was always just a matter of convenience anyway. If a warrant can be issued to seize the data on a phone, then one can be issued to compel the owner of the phone to unlock it. (Yes, I’m aware that this is a legally murky area. Some recent decisions have upheld such orders, while at least one other has struck such an order down. One might be able to fight a court-imposed unlock demand, but it’d almost certainly take a lot of time, and be quite expensive in the long run). (And it should go without saying that there are very many good reasons that I am not a lawyer).

So if the police absolutely need access to the data on a phone, they can (probably) compel the owner to unlock it, and so Apple’s inability to help becomes irrelevant.

They can also (presumably) serve warrants to collect any computers belonging to that user, extract the pairing records from them, and then collect just about anything they want from the phone over USB.

Finally, much of the data on an iOS device will also exist “in the cloud” somewhere. So police can certainly go after those providers as well.

So the bottom line here is that, even without Apple’s help, law enforcement still has many ways to get at the data on a locked iOS device.

Unanswered Questions

Even with the very good documentation from Apple, several questions remain.

Where exactly does the low-level encryption happen? The iOS Security guide says that the UID is “fused” into the application co-processor, and that “no software or firmware can read them directly” (p 9), but on page 6 it says it’s stored in the “Secure Enclave” (which is not to be confused with the “Secure Element” used by NFC and Apple Pay), and that the Secure Enclave has its own software update process.

So is the UID available to the software within the Secure Enclave (SE)? Or are operations utilizing the UID handled by a “black box” within the SE, and only the results of these operations visible to SE firmware (and, as moved from SE to the main processor, to the OS in general)?

Or is it possible that the UID in the context of the Secure Enclave is not the UID used to generate the file protection keys? (If so, I sincerely hope Apple updates their naming system to clarify this, because it’s obviously confusing the heck out of a lot of us).

Has Apple ever provided brute-force passcode breaking services to law enforcement? Has iOS been modified to restrict or eliminate this attack? If not, can Apple theoretically perform such an attack today? If asked, would they refuse? Could they?

What does it all mean?

So, what’s the bottom line? Is there a “TL;DR” summary?

  1. Since the iPhone 3GS, all iOS devices have used a hardware-based AES full-disk encryption that prevents storage from being moved from one device to another, and facilitates a fast wipe of the disk.
  2. Since iOS 4 (iPhone 4), additional protections have been available using the Data Protection API (DPAPI).
  3. The DPAPI allows files to be flagged such that they are always encrypted when the device is locked, or encrypted after reboot (but not encrypted after the user enters their passcode once).
  4. On older devices (up to and including iPhone 4 and iPad 1), a bootrom bug could be exploited to boot off an external image and brute force weak passcodes, as well as read non-encrypted data from the filesystem.
  5. It remains possible (but unproven) that Apple retains the capability to do this on modern devices using a trusted external boot image.
  6. Once a user locks a device with a passcode, the class keys for “complete” encryption are wiped from memory, and so that data is unreadable, even when booting from a trusted external image.
  7. Once a user reboots a device, the “complete until first authentication” keys are lost from memory, and any files under that DPAPI protection level will be unreadable even when booted from an external image.
  8. Under iOS 8, many built-in applications received the “complete until first authentication” protection.
  9. This new protection level means that even when booting from a trusted external image, Apple cannot read data encrypted using that protection, unless they have the user’s passcode.
  10. It may still be possible to use forensic tools to extract data from a locked device. In many ways, iOS 8 behaves exactly like iOS 7 for such tools, if the user has unlocked it at least once after a reboot.
  11. The entire hierarchy of encryption keys, class keys, and keybags, is entangled with a device-specific UID that cannot be extracted from the device nor accessed by on-device software.
  12. Many of the keys are further protected by a key derived from the passcode (and the internal UID).
  13. It is not entirely clear whether the Secure Enclave can be manipulated by Apple or an attacker to bypass any or all of the encryption key hierarchy by gaining direct or indirect access to the UID or derived keys.
  14. It is also unknown whether current devices are vulnerable to a passcode brute force attack by Apple (or anyone with access to a trusted external boot image).
  15. Many of these protections are rendered (somewhat) irrelevant if law enforcement (or a determined adversary) have access to a trusted computer used to sync the device, or potentially to copies of the data existing in cloud-based services.

The bottom line, the real “too long, didn’t read,” in a single sentence: with iOS 8, nearly all of the personal data on your device is protected by your passcode, and Apple can no longer bypass that protection for anyone, so pick a strong passcode.




Internet of SCADA, or, why does my HVAC blow?

We live in a house that was newly built, so it’s got all the modern trimmings. It’s also got all the modern cut corners, including an air conditioning system (two, actually) that even 12 years later we’re still struggling with. It seems that every year or two something else goes wrong, especially with the combined cooling / heat pump unit that handles the upstairs.

I’ve been thinking for a while that I should be able to build a temperature monitor to track how the system is running, to detect problems (loss of freon, etc.) early, and maybe even forestall costly repairs. Maybe. So I asked for some Arduino gear for Christmas, and earlier this summer, I finally started playing around with it.

Then…right on schedule, in the height of the summer heat, our upstairs system stopped cooling again. Our HVAC company came out, pumped two pounds of freon into the system (I really gotta start doing that myself — far cheaper), and scheduled a comprehensive leak search for mid-September (just in case we have to disable the system for a long stretch, we wanted it to be in a season where we might not miss it).

Then just before I went to DEF CON, I noticed (using my 20-year-old Radio Shack thermometer) that the AC unit didn’t seem to be cooling as much as before. After returning, it seemed…okay…but still not ideal, so I rushed a (greatly simplified) monitoring circuit into play. I just got it working this week, and already I’m finding some interesting results.

I’m still trying to figure out the best way to sense thermostat calls for compressor, heat, and fans — do I use clip-on current sensors, inline current sensors, voltage drop sensors, opto-isolators — and how do I integrate those sensors into the 1-Wire bus… So for now, I only have a few temperature sensors.

First, some eye-candy:

Two-day Stripchart

Here, the orange line is one of two sensors on a table (in the next graph they’re individually shown as red and blue). The green line is an outside temperature taken as an average of a few web-accessible weather stations in the area (a few in nearby neighborhoods, plus Dulles airport), so it’s a reasonable approximation of the temperature near my home. Blue is the air temperature at the cold air return directly above the desk (and thermostat), and red is the supply register (output vent) directly above a window, maybe 8 feet from the other three sensors.

One important measurement is the cooling drop produced by the A/C system. Because it’s currently malfunctioning I don’t have the compressor running. But I ran it for three brief periods, about a half hour each, just to see what it looks like on the graph. This is, in fact, the primary reason I wanted to start this project. One typically expects a 10-15 degree temperature drop across an A/C unit’s cooling coil, though the actual drop from cold air return back to the room might be a little less. After we had coolant added in July, my old thermometer measured that drop at just about 10 degrees.

When the compressor ran from about 1:45-2:30 on Tuesday, the supply and return lines were at the same temperature. That is, it showed ZERO cooling effect. When run twice that evening (about 8:00 and again about 11:00) the graph shows 2, maybe 3 degrees of cooling. So, obviously, it’s broken. My long-term plan includes emailed alerts and even a beeping alarm unit for when this drop consistently falls below some threshold…so I was glad to see what “broken” looks like so early in the system’s development.
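
That alert, at least, should be straightforward once the compressor-relay sensing is in place. A sketch of the idea (the column names, log format, and thresholds are all placeholders for whatever the logger ends up writing):

    # Sketch of the planned alert: watch the return-vs-supply temperature drop while the
    # compressor is running, and complain when it stays below a threshold for a while.
    import csv

    MIN_DROP_F = 8         # a healthy coil should pull the air down 10-15 degrees F
    WINDOW = 6             # consecutive bad readings before we yell

    def check(logfile="hvac_log.csv"):
        bad_streak = 0
        with open(logfile) as f:
            for row in csv.DictReader(f):        # columns: timestamp,return_f,supply_f,compressor
                if row["compressor"] != "1":     # only judge the coil while it's actually running
                    bad_streak = 0
                    continue
                drop = float(row["return_f"]) - float(row["supply_f"])
                bad_streak = bad_streak + 1 if drop < MIN_DROP_F else 0
                if bad_streak >= WINDOW:
                    return f"ALERT: only {drop:.1f}F of cooling across {WINDOW} readings"
        return "cooling drop looks OK"

    print(check())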

What gets really fun is playing with the furnace fan. For about 90 minutes (after I first turned off the compressor) I left the fan set to “on,” that is, continuously running. The air coming out of the register by the window was consistently 5 or more degrees warmer than what went into the system at the cold air return in the same room. So either I’m getting an ambient heating effect from the vent’s location (in the ceiling, near a large window), or the duct work in the attic is heating things up significantly.

Then I turned off the furnace fan, and the register temperature continued to rise, until I switched to “Circulate,” in which the furnace fan cycles on and off. I’d had no idea how that mode actually worked (I vaguely presumed it was somewhat tied to the thermostat, and might be if the room temperature was actually close to “reasonable”) but here it seems to just be about 15 minutes on, 15 minutes off.

When the fan first kicked in, the register temperature shot up (probably expelling warm air that’s been sitting in the attic ductwork), then it drops a bit, and sort of settles for a bit. Then it drops again (I guess when the fan turns off — again, I really need a sensor on that relay), and then shoots back up again when the fan restarts. You can really see the pattern on Wednesday afternoon, where the low temperature (fan off) seems to be about equal to the room temperature, while the high temperature climbs in a fairly obvious curve.

Finally, about 2:30 on Wednesday I switched the fan back to “constantly on” and saw the temperature rise again, but then it stabilized somewhat lower than the curve I discerned before. Perhaps the constant flow kept the air in the ductwork from warming up exponentially (like in a greenhouse) but heat was still being transferred even to the moving air.

I ended the experimenting about 4:00, when I switched the fan off completely, and the register temperature dropped back to match that of the other sensors in the room (which was pretty close to the outside temperature as well).

In fact, there’s a pretty strong correlation (well, visually, anyway…I’m not enough of a data geek to quantify that correlation) between the outside temperature and that of the air coming through the register. So again, there’s something happening here, either heating in the attic, or some halo effect near the window / ceiling location of the sensor, or maybe a little of both.

Then yesterday I tried something different.

Fan Details

Here, the red and blue lines are the sensors on the table (actually in adjacent holes on a breadboard, so it’s interesting to see the blue sensor lagging the red one), the orange is the output (register) temperature, and the green is the cold air return (about 5 feet above the table). What’s really important is the relationship between the vent and the other three (which kind of give a general ambient room temperature). (these are the default colors my RRD setup uses, not the custom setup I used when I hand-crafted the first graph from logged data).

We know that our A/C will be down for a while, so we elected to just wait until the scheduled leak test in a couple weeks…partially as an experiment in A/C-free living (which our kids don’t appreciate quite as much, BTW). So we put a window fan in the bedroom, right below the A/C register I keep referring to. Overnight, it’s set to pull cooler air in from outside. During the day, it blows air out, on the theory that it’ll pull cooler air from the basement and 1st floor, which has an HVAC system that’s still working. I don’t remember when I switched direction on the fan, but it was probably between 7:30 and 8:00.

Shortly afterwards, the register temperature climbs steadily, which isn’t surprising given the past data and the fact that this window gets full sun in the morning. Then, just to verify the previous days’ data, I turned the furnace fan to continuous on at about 1:30. The temperature at the register dropped over 5 degrees, but still remained significantly higher than the temperature in the room. I turned it back off, and the line climbed back up to resume the earlier slope. Then I had a crazy idea: What if the window fan was sucking air out of the register? I turned it off, and the temperature plummeted, back to an unsteady 2-3 degrees above the room temperature. Turning the furnace fan back on again resumed the high temperature readings from that register, higher than before, but still consistent with the rising temperatures outside (not shown on this graph). When the furnace fan was finally turned off, with the window fan still off, the temperature fell to match the rest of the sensors in the room.

With the window fan and furnace fans both turned off, today’s graph has been four very similar lines, all within about 3 degrees of yesterday’s values at the same time. Certainly, the weather today may be different from that of yesterday or the day before (it got quite cool Tuesday night due to some rains in the area), but I’m hoping that the system will show that the room temperature is a little more stable (and hopefully lower) now that I’m not sucking hot air out of the attic ductwork.

I’m also more than a little concerned about my preliminary conclusion, that the attic adds 5 or more degrees to the air as it passes through the system. If the coil is really expected to drop air temperature by 10-15 degrees, then I’m losing a full 33% efficiency just by exposure to the attic air (and these systems are so efficient to begin with). There’s a roof-mounted ventilation fan, which should be pulling some hot air out of the attic, and monitoring that (and the attic temperature in general) is on my list for this project.

But I feel like the ductwork shouldn’t be absorbing that much heat to begin with. I don’t know if it’s a function of the air return, or the air distribution, or the furnace unit itself, but it really does seem like I may need to do some work up there. Right now, it’s a rat’s nest of flexible ductwork, leading from the furnace to smaller distribution boxes to further flexible ducts, etc. All of them are running at 4-6’ above the attic floor, with long swoops and droops. I seriously wonder whether ripping that all out and installing rigid ducts, at the floor joist level and covered with heaps of blown-in insulation, might make a significant difference here.

It’s also possible that the heat increase isn’t coming from the attic at all, but from the much larger cold air return in the hallway by the kids’ rooms. I’ll need to get another sensor over there to see if that’s the case, but generally, the master bedroom (where all these other sensors are located) feels a lot warmer than the hall, so I’m still leaning towards the attic ductwork being a problem.

Either way, this is an amazing amount of information, and may already be helping me better understand and diagnose our long-running HVAC problems, all from only a couple days’ worth of logging and an Arduino-based sensor that took less than a day to cobble together (ignoring delays from a failed WiFi breakout board). I can’t wait until I have both my HVAC systems fully instrumented, with real local outdoor and attic temperatures as well.

Yay, data!