Darth Null’s Ramblings


Hello! I'm David Schuetz.
This is where I ramble about...stuff.

DLP Considered Harmful - A Rant about Reliable Certificate Pinning


[Note: Yes, I understand the point of DLP. Yes, I’m being unrealistically idealistic. I still think this is wrong, and that we do ourselves a disservice to pretend otherwise.]

The Latest Craziness

It is happening again. A major computer manufacturer (this time Dell, instead of Lenovo) shipped machines with a trusted root TLS CA certificate pre-installed on the operating system. Again, the private key was included with the certificate. So now, anyone who wants to perform a man-in-the-middle attack against users of those devices can easily do so.

Any domain, any site (Image by Kenn White (@kennwhite))

But as shocking as that may have been, what comes next may surprise you!

Browsers let local certs override HPKP

Data Loss Prevention and Certificate Pinning

It’s (reasonably) well known that many large enterprises utilize man-in-the-middle proxies to intercept and inspect data, even TLS-encrypted data, leaving their networks. This is justified as part of a “Data Loss Prevention” (DLP) strategy, and excused by “Well, you signed a piece of paper saying you have no privacy on this network, blah blah blah.”

However, I had no idea that browser makers have conspired to allow such systems to break certificate pinning. (and apparently I wasn’t the only one surprised by this).

HPKP Wrecked

Certificate pinning can go a long way toward restoring trust in the (demonstrably broken) TLS public key infrastructure, ensuring that data between an end user and internet-based servers is, in fact, properly protected.

It’s reasonably easy to implement cert pinning in mobile applications (since the app developer owns both ends of the system — the server and the mobile app), but it’s more difficult to manage in browsers. RFC 7469 defines “HPKP”, or “HTTP Public Key Pinning,” which allows a server to indicate which certificates are to be trusted for future visits to a website.
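
To make the mechanics concrete, here’s a rough sketch of how a pin gets computed and delivered. This is my own illustration using the pyca/cryptography package, not code from the RFC: the server hashes the SubjectPublicKeyInfo of a key in its chain, base64-encodes it, and ships it in a Public-Key-Pins response header.

import base64
import hashlib

from cryptography import x509
from cryptography.hazmat.primitives import serialization

def spki_pin_sha256(pem_cert: bytes) -> str:
    """Compute base64(SHA-256(SubjectPublicKeyInfo)), the value HPKP pins on."""
    cert = x509.load_pem_x509_certificate(pem_cert)
    spki = cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    return base64.b64encode(hashlib.sha256(spki).digest()).decode()

# The server then sends something like (pin values are placeholders):
#   Public-Key-Pins: pin-sha256="<current-key-pin>"; pin-sha256="<backup-key-pin>"; max-age=5184000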

Because the browser won’t know anything about the remote site before it’s visited at least once, the protocol specifies “Trust on First Use” (TOFU). (Unless such information is bundled with the browser, which Chrome currently does for some sites). This means that if, for example, the first time you visit Facebook on a laptop is from home, the browser would “learn” the appropriate TLS certificate from that first visit, and should complain if it’s ever presented with a different cert when visiting the site in the future, like if a hacker’s attacking your connection at Starbucks.

But some browsers, by design, ignore all that when presented with a trusted root certificate, installed locally:

Chrome does not perform pin validation when the certificate chain chains up to a private trust anchor. A key result of this policy is that private trust anchors can be used to proxy (or MITM) connections, even to pinned sites. "Data loss prevention" appliances, firewalls, content filters, and malware can use this feature to defeat the protections of key pinning.

We deem this acceptable because the proxy or MITM can only be effective if the client machine has already been configured to trust the proxy’s issuing certificate — that is, the client is already under the control of the person who controls the proxy (e.g. the enterprise’s IT administrator). If the client does not trust the private trust anchor, the proxy’s attempt to mediate the connection will fail as it should.

What this means is that, even when a remote site specifies that a browser should only connect when it sees the correct, site-issued certificate, the browser will ignore those instructions when a corporate DLP proxy is in the mix. This allows the employer’s security team to inspect outbound traffic and (they hope) prevent proprietary information from leaving the company’s network. It also means they can see sensitive, personal, non-corporate information that should have been protected by encryption.

This Is Broken

I, personally, think that’s overstepping the line, and here’s why:

[ranty opinion section begins]

The employer’s DLP MITM inspecting proxy may be an untrusted third party to the connection. Sure, it’s trusted by the browser, that’s the point. But is it trusted by the user, and by the service to which the user is connecting?

Say, for example, a user is checking their bank account from work (never mind why, or whether that’s even a good idea). Does the user really want to allow their employer to see their bank password? Because they just did. Does the bank really want their customer to do that? Who bears the liability if the proxy is hacked and banking passwords are extracted? The end-user who shouldn’t have been banking at work? The bank? The corporation which sniffed the traffic?

A corporation has some right to inspect their own traffic, to know what’s going on. But unrelated third parties also have a right to expect their customers’ data to be secure, end-to-end, without exception. If this means that some sites become unavailable within some corporate environments, so be it. But the users need to be able to know that their data is secure, and as it stands, that kind of assurance seems to be impossible to provide.

Users aren’t even given a warning that this is happening. They’re told it could happen, when they sign an Acceptable Use Policy, but they aren’t given a real-time warning when it happens. They deserve to be told “Hey, someone is able to access your bank password and account information, RIGHT NOW. It’s probably just your employer, but if you don’t trust them with this information, don’t enter your password, close the browser, and wait until you get to a computer and network that you personally trust before you try this again.”

SSL Added And Removed Here

[end ranty section]

It’s Bigger Than Just The Enterprise

Unfortunately, it’s not just large corporations which are doing this kind of snooping. Just a few days ago, I was at an all-night Cub Scout “lock-in” event for my eldest son, at a local volunteer fire department. They had free Wi-Fi. Great! I’m gonna be here all night, might as well get some work done in the corner. Imagine my surprise when I got certificate trust warnings from host “”. The volunteer fire department was trying to MITM my web traffic.

Fortunately, they didn’t include any “click here to install a certificate and accept our Terms of Use” kind of captive portal, so the interception failed. If it had, I certainly wouldn’t have used the connection (and as it was, I immediately dropped it and tethered to my phone instead). But how many people would blindly accept such a certificate? How many “normal people” are putting their banking, healthcare, email, and social media identities and information at risk through such a system, every day? This sort of interception has been seen at schools, on airplanes, and many other places where “free” Wi-Fi is offered.

In my job, I frequently recommend certificate pinning as a vital mechanism to ensure that traffic is kept secure against any eavesdropper. Now, suddenly, I’m faced with the very real possibility that there’s no point, because we’re undermining our own progress in the name of DLP. Pinning can make TLS at least moderately trustworthy again, but if browsers can so easily subvert it, then we’re right back where we started.

Finally, though I’m not usually one to encourage tin foil hat conspiracy theories…with all the talk about companies taking the maximum possible steps to protect their users’ data, with iPhone and Android encryption and the government complaining about “going dark”… a DLP pinning bypass provides an easy way for the government to get at data that users might otherwise think is protected. Could the FBI, or NSA, or <insert foreign intelligence or police force> already be requesting logs from corporate MITM DLP proxies? How well is that data being protected? Who else is getting caught up in the dragnet?

Cognitive Dissonance FTW

On the one hand, we as an industry are fighting government-mandated encryption back doors and pushing to put strong, end-to-end encryption in front of every user.

But at the same time, we recommend, build, and operate TLS interception proxies for our enterprise customers, and carve out exceptions (like the HPKP bypass above) to keep them working.

I think this is a lousy situation to be in. Who do we fight for? What matters? And how do we justify ourselves when we issue such contradictory guidance? How can we claim any moral high ground while fighting against government encryption back doors, when we recommend and build them for our own customers? How can our advice be trusted if we can’t even figure this out?

I hope and believe that in the long run, users and services will push back against this. (And, as I said at the beginning, I know that I’m probably wrong.) I suspect it will begin with the services — with banks, healthcare providers, and other online services wanting HPKP they can trust, corporate DLP policies be damned. Who knows, maybe this will be the next pressure point Apple applies.

When that happens, I just hope we can offer a solution to the data loss problem that doesn’t expect a corporation to become the NSA in order to survive.

Thoughts on CyberUL and Infosec Research

For the past year or so, I’ve been thinking about the information security research space. Certainly, with the mega-proliferation of security conferences, research is Getting Done. But is it the right kind of research? And is it of the right quality?

This has recently become a hot topic, since Mudge tweeted on June 29:

Goodbye Google ATAP, it was a blast.

The White House asked if I would kindly create a #CyberUL, so here goes!

We’ve also seen increased attention on the Internet of Things, and on infosec in general, from the “I Am The Cavalry” effort and, more recently, the expansion of research at Duo Labs and elsewhere.

So this seems like a good time to jot down some of my thoughts.

CyberUL and traditional research

CyberUL itself

First, the idea of an “Underwriters Laboratories” for infosec, or “CyberUL”: I think most people agree that it’s a good idea, at its core. John Tan outlined such a service back in 1999, and it’s been revisited many times since. However, many issues remain. I’m certainly not the first to bring these points up, but for the sake of discussion, here are some high-level problems.

For one thing, certifying (or in UL parlance, “listing”) products is difficult enough in the physical space, but even harder in CyberSpace. Software products are a quickly moving target, and it’s just not possible to keep up with all the revisions to product firmware, both during design and through after-sale updates.

Would a CyberUL focus on end-user products, such as the “things” we keep hooking up to the Internet, or would it also review software and services in general? What about operating systems? Cloud services?

Multiple certifications of one form or another already exist in this space. The Common Criteria, for example, is very thorough and formalized. It’s also complicated, slow, and very expensive to get. The PCI and OWASP standards set bars for testers to assess against, but the actual mechanisms of testing may not be consistent across (or even within) organizations.

Finally, there’s the question of how deep testing can go. Even with support from vendors, fully understanding some systems is a daunting undertaking, and comprehensive product evaluations may require significant resources.

Ultimately, I’m afraid that a CyberUL may suffer from many of the same problems that “traditional” information security testing faces.

So, what about traditional testing?

Much (if not most) testing is paid for by the product’s creator, or by some 3rd party company considering a purchase. The time and scope of such testing is frequently limited, which drastically curtails the depth to which testers can evaluate a product, and can lead to superficial, “checkbox” security reviews. This could be especially true if vendors wind up, to be honest, frantically checking the “CyberUL” box in the last month prior to product release.

Sometimes, testing can go much deeper, but ultimately it’s limited by whoever’s paying for it. If they’ll only pay for a 2-week test, then a 2-week test is all that will happen.

Maybe independent research is the answer?

There’s obviously plenty of independent research, not directly paid for by customers. However, because it’s not paid for…it generally doesn’t pay the testers’ bills in the long term.

Usually, this work comes out of the mythical “20%” time that people may have to work on other projects (or 10%, or 5%, or just “free time at night”). If research is a tester’s primary function, then that dedicated work is often kept private: its goal is to benefit the company, sell vulnerabilities, improve detection products, etc.

Firms which pay for truly independent and published research are vanishingly rare. Today’s infosec environment steers testers towards searching for “big impact” vulnerabilities, while also encouraging frequent repeats of well-trodden topics. I see very little research into “boring” stuff: process and policy, leading-edge technologies, general analysis of commodity products, etc.

What would I like to see done?

In an ideal world, with unlimited resources, what could a company focused on independent information security research accomplish?

Manage research

They could perform a research-tracking function across the community as a whole: Manage a list of problems in need of work, new and under-researched issues, longer-term goals, even half-baked pie-in-the-sky ideas.

The execution of this list of topics could be left open for others to take on, or worked on in-house (or even both — some problems will benefit from multiple, independent efforts, confirming or refuting one another’s results).

The company could even possibly provide funding for external research efforts: Cyber Fast Track reborn!

Perform original research

At its core, though, the company would be tasked with performing new research. They’d look at current products, software, and technology. The focus wouldn’t be simply finding bugs, but also understanding how these systems work. Too many products are simply “black boxes,” and it’s important to look under the hood, since even systems which are functioning properly can present a risk. How many of today’s software and cloud offerings are truly understood by those who sign off on the risks they may introduce?

We occasionally see product space surveys (for example, EFF’s Secure Messaging Scorecard). We need more efforts like that, with sufficient depth of testing and detailed publication of methods and results, as well as regular and consistent updates. Too often such surveys are completed and briefly publicized, generating a few sales for the company which performed it, and then totally forgotten.

I’d also like to see generalized risk research across product categories — for example, what kinds of problems do Smart TVs or phone-connected door locks create? I don’t mean a regular survey of Bluetooth locks (which might be useful in itself) but a higher-level analysis of the product space, and potential issues which purchasers need to be aware of.

Specific product testing could also be an offered service, provided that the testing permits very deep reviews without significant time limitations, and that the results, regardless of outcome, be published shortly after the conclusion of the effort (naturally, giving the vendor reasonable time to address any problems).

Information sharing

An important but currently underutilized function is “research about research.” The Infosec Echo Chamber (mostly Twitter, blogs, and a few podcasts) is great at talking about other research and findings, but not very good at critically reviewing and building upon that work.

We need more methodical reviews of existing work, confirming and promoting findings when appropriate, and correcting and improving the research where problems are discovered. Currently, those best able to provide such analysis are frequently busy with paying work, and so valuable insights are delayed or lost altogether.

Related to this is doing a better job of promoting and explaining research, findings, and problems, both within the community and also to the media in general. Another related function would be managing a repository, or at least a trusted index, of security papers, conference slides, and other such information.

Tracking broader industry trends

The Verizon Data Breach Investigation Report (DBIR) provides an in-depth annual analysis of data breaches. Could the same approach be used for, say, an annual cross-industry “Bug Report,” identifying and analyzing common problems and trends? [or really, any other single topic…I don’t know whether a report focused on bugs would be worthwhile.]

The DBIR takes a team of experts months to collect, analyze, and prepare — expanding that kind of report into other arenas is something that can’t be undertaken without a significant commitment. An organization dedicated to infosec research may be among the few able to identify the need for, and ultimately deliver, such tightly-focused reporting.

Shaping research in general

Finally, I (and many others, I believe) think that the industry needs a more structured and methodical approach to security research. An organization dedicated to research can help to develop and refine such methodologies, encouraging publication of negative findings as well as cool bugs, emphasizing the repeatability of results, and guaranteeing availability of past research. The academic world has been wrestling with this for decades, but the infosec community has only begun to transition from “quick and dirty” to “rigorous and reliable” research.

How can we do this?

These goals are difficult to accomplish under our current research model: lack of dedicated time and the ad-hoc, as-available nature of the work are just two of the biggest problems. Breadth, depth, and consistency of testing, and long-term availability of results, are among the other details we haven’t yet worked out.

A virtual team of volunteers might work, but they’d still be relying on stolen downtime (or after-hours work). Of course, they’d also have to worry about conflicts of interest (“Will this compete with our own sales?” and “Don’t piss off our favorite customer.” being two of my favorites.) Plus, maintaining consistency would be an issue, as team members drift in and out.

A bug-bounty kind of model might be possible, like the virtual team but even more ad-hoc (“Here’s a list of things we need to do. Sign up for something that interests you!”), and with predictably more logistical and practical problems.

Plus, for either virtual approach, you’d still need some core group to manage everything.

Ultimately, I think a non-profit company remains the only way to make this happen. This would allow the formation of a core, dedicated team of researchers and administrators. They could charge vendors for specific product tests, and possibly even receive funding from industry or government sources, though keeping such funding reliable year after year will probably be a challenge.

John Tan, author of the 1999 CyberUL paper, updated his thoughts earlier this month. A key quote, which I think drives to the heart of the problem:

"If your shareholder value is maximized by providing accurate inputs for decision making around risk management, then you're beholden only to the truth." 

Any company which can keep “Provide risk managers the best data, always” as a core mission statement, and live up to it, will, I think, be on the right track.

So, can this work?

I honestly don’t know.

There are many things our community does well with research, but a lot which we do poorly, or not at all. An independent company that can focus on issues like those I’ve described could have a significant positive impact on the industry, and on security in general. But it won’t happen easily.

According to John Tan’s initial paper, it took 30 years of insurance company subsidies before Underwriters Laboratories could reach a level of vendor-funded self-sufficiency. We don’t have that kind of time today. And the talent required to pull this off wouldn’t come cheaply (and, let’s face it, this is probably the kind of dream job that half the speakers at Black Hat would love to have, so competition would be fierce).

If anyone can run with this, my money would definitely be on Mudge. He’s got the knowledge, and especially the experience of running Cyber Fast Track, not to mention the decades of general information security experience behind him. But he’s definitely got his work cut out for him.

Hopefully he’ll come out of stealth mode soon. I’d love to see what we can do to help.

Salt as a Service: Interesting approach to hashing passwords

A new service was just announced at the RSA conference that takes an interesting approach to hashing passwords. Called “Blind Hashing,” from TapLink, the technology is fully buzzword-compliant, promising to “completely secure your passwords against offline attack.” Pretty grandiose claims, but from what I’ve been able to see in their patent so far, it seems like it has some promise. With a few caveats.

Traditionally, passwords are hashed and stored in place. First we had the Unix crypt() function, which, though it was specifically designed to be “slow” on systems at the time, is now hopelessly outdated and should be killed with fire at every opportunity. That gave way to unsalted MD5-based hashes (also a candidate for immediate incendiary measures), salted SHA hashes, and today’s state-of-the-art functions: bcrypt, scrypt, and PBKDF2. The common goal throughout this progression of algorithms has been to make the hashing function expensive, in either CPU time or memory requirements (or both), thus making a brute force attack to guess a user’s password prohibitive.
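
For contrast, here’s a minimal sketch of what “doing it right locally” looks like today, using Python’s standard-library scrypt binding. The function names and parameters are mine, chosen for illustration rather than as tuning advice.

import hashlib
import hmac
import os

def hash_password(password: str, salt=None):
    """Hash a password with a deliberately expensive KDF; returns (salt, digest)."""
    salt = salt or os.urandom(16)                         # random per-user salt
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1, dklen=32)  # CPU- and memory-hard
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    return hmac.compare_digest(hash_password(password, salt)[1], expected)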

So far, we seem to have accomplished that goal, but a downside is that a slow hash is still, well, slow. Which can potentially add up, when you’ve got a site that processes huge numbers of logins every day.

The “Blind Hashing” system takes a different approach. Rather than handling the entire hash locally, the user’s password is, essentially, hashed a second time using data from a cloud-based service. Here’s an excerpt from the patent summary:

A blind hashing system and method are provided in which blind hashing is used for data encryption and secure data storage such as in password authentication, symmetric key encryption, revocable encryption keys, etc. The system and method include using a hash function output (digest) as an index or pointer into a huge block of random data, extracting a value from the indexed location within the random data block, using that value to salt the original password or message, and then hashing it to produce a second digest that is used to verify the password or message, encrypt or decrypt a document, and so on. A different hash function can be used at each stage in the process. The blind hashing algorithm typical runs on a dedicated server and only sees the digest and never sees the password, message, key, or the salt used to generate the digest.

Thinking through the process, here’s one way this might work, put in a rough functional notation:

Salt1 = salt_lookup(Userid)              # per-user salt, stored with the account record
Hash1 = Hash(Salt1, Password)            # first hash; this digest is all the remote service ever sees
Salt2 = remote_blind_hash_lookup(Hash1)  # service uses Hash1 to index its huge random data block
Hash2 = Hash(Salt2, Hash1)               # second hash; Hash2 is what the site stores and verifies
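
And here’s a toy, single-process sketch of how those pieces could fit together. To be clear, this is my own guess at the shape of the thing based on the patent summary, not TapLink’s implementation, and the “pool” below is laughably small compared to the huge random block they describe.

import hashlib
import os

DATA_POOL = os.urandom(1 << 20)   # stand-in for the service's enormous random data block
SALT_WIDTH = 32

def blind_hash_lookup(digest: bytes) -> bytes:
    """The 'service': use the digest as an index into the pool and return a salt."""
    index = int.from_bytes(digest, "big") % (len(DATA_POOL) - SALT_WIDTH)
    return DATA_POOL[index:index + SALT_WIDTH]

def hash_for_storage(password: str, salt1: bytes) -> bytes:
    """The web site: two hashes, with one round trip to the service in between."""
    hash1 = hashlib.pbkdf2_hmac("sha256", password.encode(), salt1, 100_000)
    salt2 = blind_hash_lookup(hash1)    # in reality a network call; Salt2 is never stored locally
    return hashlib.pbkdf2_hmac("sha256", hash1, salt2, 1)

The property that matters is that Salt2 is a pure function of Hash1, but only something holding the entire pool can compute it.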

In the event of a compromise on the server, the attacker may recover all the Salt1 and Hash2 values. However, they will not be able to retrieve Salt2 without the involvement of the remote blind hash service. So a brute force attack will require cycling through all possible passwords and, for each password tested, requesting Salt2 from the remote service. This should, in theory, be significantly slower than a local hash / salt computation, and can also be rate-limited at the service to further protect against attacks.

On its surface, this seems a pretty solid idea. The second salt is deterministically derived from the first hash, but not in a way an attacker can reproduce offline, so there isn’t a short-circuit that allows for immediate recovery of the salt. The random data block that Salt2 values are drawn from is too large to be copied by an attacker. And the round trip process is (presumably) too slow to be practical for a brute force attack. Finally, the user’s password isn’t actually sent to the blind hash lookup service, only a hash of the password (salted with a value that is not sent to the service).

An attacker who compromises the (website) server gains only a collection of password hashes that are uncrackable without the correct password and the cooperation of the blind hash service. If they are able to collect all blind hash responses, they could build a dictionary of secondary salts to use in brute force attacks, but that would still be very slow (for a large site), as each password tested would be multiplied by the length of this secondary salt list. (Of course, if they can intercept the blind hash response data, then the attacker can probably also intercept the initial login process and just grab the passwords in plaintext.) Finally, an attacker who compromises the blind hash service gains access to a database too large to exfiltrate, and to an inbound stream of passwords hashed with unknown salts.

So in theory, at least, I can’t see anything seriously wrong with the idea.

But is it worth it? The only argument I’ve heard against “slow” hash algorithms like bcrypt or scrypt is that it may present too big a load to busy sites. But wouldn’t the constant communication with the blind hash service also present a fairly large load, both for CPU and especially for network traffic? What happens if the remote service goes down, for example, because of a DDOS attack, or network problems? This service protects against future breakthroughs that make modern hash algorithms easy to brute force, but I think we already know how to deal with that eventuality.

I think the biggest problem we have today, with regards to securely hashing passwords, isn’t the technology available, but the fact that sites still use the older, less secure approaches. If a site cares enough to move to a blind hash service, they’d certainly be able to move to bcrypt. If they haven’t already moved away from MD5 or SHA hashes, then I really don’t see them paying for a blind hashing service, either.

In the end, though I think it’s a very interesting and intriguing idea, I’m just not sure I see anything to recommend this over modern bcrypt, scrypt, or PBKDF-based password hashes.

Lenovo, CA Certs, and Trust

It’s been a fun week for information security: @yawnbox - A Bad Week

Arguably one of the more interesting developments (aside from the SIM thing, which I’m not even going to touch) was the decision by Lenovo to pwn all of their customers with a TLS Man-In-The-Middle attack. The problem here was two-fold: That Lenovo was deliberately snooping on their customers’ traffic (even “benignly,” as I’m sure they’re claiming), and that the method used was trivial to put to malicious use.

Which has me thinking again about the nature of the Certificate Authority infrastructure. In this particular case, Lenovo laptops are explicitly trusting sites signed with a private key that’s now floating around in the wild, ready to be abused by just about anyone. But it’s more than just that — our browsers are already incredibly trusting.

On my Mac OS X Yosemite box, I count (well, the Keychain app counts, but whatever) 214 different trusted root certificate authorities. That means that any website signed by any of those 214 authorities…or by anyone those authorities have delegated as trustworthy…or by anyone those delegates have trusted…will be trusted by my system.

That’s great, if you trust the CAs. But we’ve seen many times that we probably shouldn’t. And even if you do trust the root CAs on your system, there are other issues, like if a corporation or wifi provider prompts the user to install a custom MITM CA cert. (Or just MITMs without even bothering with a real cert).

I’ve been trying to bang the drum on certificate pinning for a while, and I still think that’s the best approach to security in the long run. But there’s just no easy way for end users to handle it at the browser level. Some kind of “Trust on First Use” model would seem to make sense, where the browser tracks the certificate (or certificates) seen when you first visit a site, and warns if they change. Of course, you have to be certain your connection wasn’t intercepted in the first place, but that’s another problem entirely.
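
The TOFU bookkeeping itself isn’t the hard part. A browser-side sketch could be as simple as the following (my own toy illustration; a real implementation would need UI, expiry, backup pins, and a far better store than a JSON file):

import hashlib
import json
from pathlib import Path

PIN_STORE = Path("pins.json")    # hypothetical local pin database

def check_pin(hostname: str, spki_der: bytes) -> bool:
    """Trust on first use: remember the key we saw first, complain if it ever changes."""
    pins = json.loads(PIN_STORE.read_text()) if PIN_STORE.exists() else {}
    fingerprint = hashlib.sha256(spki_der).hexdigest()
    if hostname not in pins:                   # first visit: learn the pin
        pins[hostname] = fingerprint
        PIN_STORE.write_text(json.dumps(pins))
        return True
    return pins[hostname] == fingerprint       # later visits: must match, or warn the user

The hard parts are everything around it: knowing the first connection was clean, and deciding what to tell the user when a pin changes for a legitimate reason.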

Some will inevitably argue that ubiquitous certificate pinning will break applications in a corporate environment, and yes, that’s true. If an organization feels they have the right to snoop on all their users’ TLS-secured traffic, then pinned certificates on mobile apps or browsers will be broken by those proxies. Oh, well. Either they’ll stop their snooping, or people will stop using those apps at work. (I’m hoping that the snooping goes away, but I’m probably being naïve).

When a bunch of CA-related hacks and breaches happened in 2011, we saw a flurry of work on “replacements,” or at least enhancements, of the current CA system. A good example is Convergence, a distributed notary system to endorse or disavow certificates. There’s also Certificate Transparency, which is more of an open audited log. I think I’ve even seen something akin to SPF proposed, where a specific pinned certificate fingerprint could be put into a site’s DNS record. (Of course, this re-opens the whole question of trusting DNS, but that’s yet another problem).

But as far as I know, none of these ideas have reached mainstream browsers yet. And they’re certainly not something that non-security-geeks are going to be able to set up and use.

So in the meantime, I thought back to my post from 2011, where I have a script that dumps out all the root CAs used by the TLS sites you’ve recently visited. Amazingly enough, the script still works for me, and also interestingly, the results were about the same. In 2011, I found that all the sites I’d visited eventually traced back to 20 different root certificate authorities. Today, it’s 22. (And in both cases, some of those are internal CAs that don’t really “count.”) (It’s also worth noting — in that blog post, I reported that I had 175 roots on my OS X Lion system. So nearly 40 new roots have been added to my certificate store in just three years.)
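
If you just want the raw count of roots your Mac trusts, something along these lines will do it (this assumes the stock macOS security command-line tool and the usual system roots keychain path, which may differ across OS versions):

import subprocess

# Dump the system root certificates as PEM and count them; roughly what
# Keychain Access shows under "System Roots."
ROOTS_KEYCHAIN = "/System/Library/Keychains/SystemRootCertificates.keychain"
pem = subprocess.run(
    ["security", "find-certificate", "-a", "-p", ROOTS_KEYCHAIN],
    capture_output=True, text=True, check=True,
).stdout
print(pem.count("BEGIN CERTIFICATE"), "trusted root CAs")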

So of the 214 roots on my system, I could “safely” remove 192. Or probably somewhat fewer, since the history file I pulled from probably isn’t that comprehensive (and my script didn’t pull from Safari too). But still, it helps to demonstrate that a significantly large percentage (like on the order of 90%) of the trust my computer has in the rest of the Internet is unnecessary in my usual daily use.

Now, if I remove those 190ish superfluous roots, what happens? I won’t be quite as vulnerable to malware or MITM attacks using certs signed by, say, an attacker using China’s CA. Or maybe the next time I visit Alibaba I’ll get a warning. But I’d bet that most of the time, I’ll be just fine. Of course, if I do hit a site that uses a CA I’ve removed, I’d like the option to put it back, which simply brings me back to the “Trust on First Use” certificate option I mentioned earlier. If we’re to go that route, might just as well set it up to allow for site-level cert pinning, rather than adding their cert provider’s CA, to “limit the damage” as it were. (Otherwise, over time, you’d just be back to trusting every CA on the planet again).

And of course, even if I wanted to do this, there’s no (easy) way to do this on my iOS devices. And the next time I got a system update, I’d bet the root store on my system would be restored to its original state anyway (well, original plus some annual delta of new root certs).

So, nearly four years on from the Comodo and DigiNotar hacks (to say nothing of private companies selling wildcard signing certificates), we still haven’t “reshaped browser security”.

What’s it going to take, already?

Bypassing the lockout delay on iOS devices

Apple released iOS 8.1.1 yesterday, and with it patched a small flurry of bugs (including, predictably, most (all?) of the bugs used in the Pangu jailbreak). One bug fix in particular caught my eye:

Lock Screen
Available for:  iPhone 4s and later, iPod touch (5th generation) and later, iPad 2 and later
Impact:  An attacker in possession of a device may exceed the maximum number of failed passcode attempts
Description:  In some circumstances, the failed passcode attempt limit was not enforced. This issue was addressed through additional enforcement of this limit.
CVE-2014-4451 : Stuart Ryan of University of Technology, Sydney

We’ve seen lock screen “bypasses” before (tricks that somehow kill part of the screen-locking application and allow access to some data, even while the phone is locked). But this is the first time I’ve seen anything that could claim to bypass the passcode entry timeout or avoid incrementing the failed attempt count. What exactly was this doing? I reached out to the bug reporter on Twitter (@StuartCRyan), and he assured me that a video would come out shortly.

Well, the video was just released on YouTube, and it’s pretty interesting. Briefly: as soon as a failed attempt triggers the lockout delay, you force the device to reboot (by holding the power and home buttons); when it comes back up, the new delay and the extra failed attempt simply haven’t been recorded, so you can try again right away.

This doesn’t appear to reset the attempt count to zero, but it keeps you from waiting between attempts (which can otherwise be up to a 60-minute lockout). It also doesn’t appear to increment the failure count, which means that if you’re currently at a 15-minute delay, the device will never go beyond that, and never trigger an automatic memory wipe.

Combining this with something like iSEC Partners’ R2B2 Button Basher could easily yield something that just carefully hammers away at PINs 24x7 until a hit is found (though it’d be SLOW — at 1-2 minutes per attempt, sweeping all 10,000 four-digit PINs would take a week or two).

Why this even works, I’m not sure. I had presumed that a flag is set somewhere, indicating how long a timeout is required before the next unlock attempt is permitted, which even persists through reboots (under normal conditions). One would think that this flag would be set immediately after the last failed attempt, but apparently there’s enough of a delay that, working at human timescales, you can reboot the phone and prevent the timeout from being written.

Presumably, the timeout and incorrect attempt count are now being updated as close to the passcode rejection as possible, blocking this demonstrated bug.

I may try some other devices in the house later, to see how far back I can repeat the bug. So far, I’ve personally verified it on an iPhone 5S running 8.1.0, and an iPad 2 on 7.0.3. Update: I was not able to make this work on an iPod Touch 4th generation, with iOS 6.1.6, but it’s possible this was just an issue with hitting the buttons just right (many times it seemed to take a screenshot rather than starting up the reboot). On the other hand, the same iOS version (6.1.6) did work on an iPhone 3GS, though again, it took a few tries to make it work.

Why I hate voting.

I just voted, even though pundits and statisticians have proven fairly definitively that my particular vote won’t matter. My district has had a Republican congressman for 30 years and his hand-picked heir is likely to win, and I don’t live in one of the 6 states all the news organizations tell me will decide control of the Senate. I voted because it’s the right thing to do, and because if I don’t vote, I lose the moral right to complain about the idiots in power (and anyone who knows me knows I love to complain.)

But why I hate voting isn’t the issues, or the parties, or the polarized electorate, or the aforementioned futility of my particular involvement. It’s the process. The process makes my blood boil.

For months, we are subjected to constant attack ads, literally he-said-she-said finger pointing about which candidate is the bigger idiot for siding with whichever other idiots are in power.

For weeks, the candidates clutter the countryside with illegally placed campaign signs that aren’t just an eyesore, but can seriously impede traffic safety simply by blocking drivers’ view of oncoming traffic. (Though to be fair, this has gotten much better in Fairfax County over the last few years…I don’t know how they got the candidates to stop, but I’m glad they did it).

I work at home, in my basement. When the doorbell rings, I answer it. Which means I have to interrupt my work, walk upstairs, and attend to whoever is at the door. And then get annoyed when it’s just someone stumping for a politician I don’t care about (or even one I do like). And then they get annoyed when I’m annoyed at them — as if they weren’t the ones being rude by disturbing me in the first place.

Go Away Humans

Then, finally, election day. That’s the worst.

Rather than experiencing relief that it’s all about to be over, my annoyance level spikes to new highs. First, I drop the kids off at their school (for school-provided daycare while the school is closed for election day). There’s no way to get through the front door without running a gauntlet of partisan party representatives handing you their “Sample Ballots” (which conveniently exclude all other parties — not actually a sample at all, but I suppose we’re used to the lies). Sure, there’s a “50 foot exclusion zone” around the entrance, but it’s not possible to park within that zone. So all they have to do is hover around the perimeters and they get you.

But at this point I’m not even there to vote — I’m just there to drop off my kids. (In fact, two Republican candidates even had people camped out in front of the school on Back to School night this year, so even then we weren’t able to escape their harassment). Why the school system doesn’t kick these people off their property is beyond me. (And don’t tell me it’s because of First Amendment rights — politicians can still express their views…they just shouldn’t be allowed to interrupt voters on their way to the polls).

It’s even worse today, because I’ll have to sneak past the same people for parent/teacher conferences this afternoon.

Then when I actually do go to vote, I have to navigate a different set of politicians’ antagonists (because my polling place is in a different school). And I have to present an ID to vote, because there’s an astronomically small chance that someone could be trying to vote illegally (which Never Ever Happens. Seriously.) And after I present my ID, the poll workers ask me to tell them my address — as if it weren’t already printed on my ID. Somehow, going to vote where the poll workers can’t even read the address on my ID doesn’t fill me with confidence.

(No, I know it’s because they want to be sure that I really know my address and am not simply taking someone else’s identity. It’s still bullshit. Next year, I’m reading the address from my ID before I even hand it to them. See what happens then.)

So by the time I’m done, I’ve been harassed by politicians on the radio, on the TV, in my mail, at my front door, on the way to drop off the kids, on my way to conferences with my kids’ teachers, on the way to actually vote, and then while voting, I’m told pretty clearly that the state doesn’t think I’m actually me and am trying to fraudulently cast a ballot. All this after being told again and again by, well, Science, that my vote really doesn’t matter.

It’s amazing that anyone votes at all.

What’s the deal with keyless entry car thefts?

In June of 2013, a few videos started circulating showing people unlocking cars without authorization. Basically, people walking directly up to a car and just opening it, or walking by cars on the street. One of the more interesting videos (watch at about 30 seconds in) showed a thief walking along the street, grabbing a handle in passing, and stopping short when the car unlocked. (interestingly, all the videos I found this morning showed attackers reaching for the passenger side door, which may just be a coincidence…)

Predictably, this was picked up by news organizations all over the world, who talked about the “big problem” this is in the US. Then I didn’t hear much again for a while.

It’s not even a particularly new thing. This story about BMW thefts in 2012 mentions key fob reprogramming, and also work presented by Don Bailey at Black Hat 2011 (in which he discussed starting cars using a text message).

But just recently, it’s been making the news again, with some insurers even reportedly refusing insurance for some vehicles.

But none of these reports really shed any light on what’s actually happening, though I suspect there are a couple of different problems at play. The more recent articles included some clues:

In a statement, Jaguar Land Rover said vehicle theft through the re-programming of remote-entry keys was an on-going problem which affected the whole industry.


“The challenge remains that the equipment being used to steal a vehicle in this way is legitimately used by workshops to carry out routine maintenance … We need better safeguards within the regulatory framework to make sure this equipment does not fall into unlawful hands and, if it does, that the law provides severe penalties to act as an effective deterrent.”

This sounds a lot like the current spate of articles are referring to key fob reprogramming via the OBDII port. Basically, if you get physical access to the car, you can connect something to the diagnostic port and program a new key to work with the car. Bingo, instant key, stolen car.

Then they seem to say that “this attack can be easily mitigated by simply ensuring that thieves don’t get the tightly controlled equipment to reprogram the car.” Heh. Right.

This attack relies on a manufacturer-installed backdoor designed for trusted third parties to do authorized work on the vehicle, and instead is being exploited by thieves. Sound familiar?

I’m actually surprised it’s this simple. I haven’t given it a lot of thought, but I’d bet there are ways this could be improved. Maybe a unique code given to the purchaser of the vehicle that they would keep at home (NOT in the glovebox!) and can be used to program new keys. If they lose that, some kind of trusted process between a dealer and the automaker could retrieve the code from some central store. Of course, that opens up social engineering attacks (a bit harder) and also attacks against the database itself (which only need to succeed once).

Again, this seems like a good real-world example of why backdoors are hard (perhaps nearly impossible) to do safely.

But what about the videos from last year? Those thieves certainly weren’t breaking a window and reprogramming keys…they just touched the car and it opened. For those attacks, something much more insidious seems to be happening, and frankly, I’m amazed that we haven’t figured it out yet.

The thieves might be hitting a button on some device in their pockets (or it’s just automatically spitting out codes in a constant stream) and occasionally they get one right. That seems possible, but improbable. The kinds of rolling codes some remotes use aren’t perfect (especially if the master seed is compromised) but I don’t think they can work that quickly, and certainly not that reliably. (But I could certainly be wrong — it’s been a while since I looked into this).

Also, in these videos, the car didn’t respond until the thief actually touched the door handle. In a couple cases, they held the handle and then appeared to pause while they (perhaps) activated something in their other hand. I’ve wondered if this isn’t exploiting some of the newer “passive” keyless entry systems, where the fob stays in your pocket and is only woken up when the car (triggered by a hand on the handle) queries it remotely.

It’s possible there’s a backdoor or some unintended vulnerability in this keyfob exchange, and that’s what’s being exploited. Or even just a hardware-level glitch, like a “whitenoise attack” that simply overwhelms the receiver (as suggested to me this morning by @munin). I’ve also wondered how feasible a “proxy” attack against a fob that’s just out of range might be. For example, if the attacker touches the door handle, and the car asks “are you there, trusted fob?”, the fob, currently sitting on the kitchen counter, isn’t within range of the car and so won’t respond. But if the attacker has a stronger radio in their backpack, could they intercept the signal and replay it at a much stronger level, then use a sensitive receiver to collect the response from inside the house and relay it back to the car?

This seems kind of far fetched, and there are probably a great many reasons (not least, Physics) why this might not work. Then again, we’ve demonstrated “near proximity” RFID over fairly large distances, too. And many people probably hang their keys next to the door to the garage, pretty close (within tens of feet) to the car.

It would also be reasonably easy to demonstrate. Too bad we had to sell our Prius to buy a minivan.

The bottom line is this: We’ve seen pretty solid evidence of thefts and break-ins against cars using keyless entry technology. The press love these stories as they drum up eyeballs every 6 months or so. But the public at large really doesn’t get any useful information other than “keyless is bad, mmkay?”

It’d be nice if we could figure out what’s going on and actually fix things.

iPhone SMS forwarding — cool, but may be risky

The recent release of iOS 8 brought with it several cool new features, especially some which more tightly integrate the iOS world with the OS X desktop world. Some of these are limited by physical proximity (like handing off email drafts among devices), while others require being on the same local subnet (forwarding phone calls to the desktop).

However, one feature apparently Just Works all the time, and that’s SMS message forwarding. If you have an iPhone, running iOS 8, then you can send and receive normal text messages (to your “Green bubble friends”) from your iPad or Yosemite desktop. Even if the phone is the next town over.

This is actually pretty cool — I use text messaging a lot, and while most of the people I communicate with use iPhones, a fair number (especially customers) don’t. If I need to send them something securely, like a password to a document I just emailed them, I have to manually type the password into my iPhone and hope I don’t mess it up. With SMS messages bridged between the systems, now I can just copy out of my password safe and paste right into iMessage.

However, this does raise one possible security issue. Many services which offer Two-Factor Authentication (2FA, or as many prefer to call this particular brand of 2FA, “two-step authentication”) send the 2FA confirmation codes over SMS. The theory being that only the authorized user will have access to that user’s cell phone, and so the SMS will only be seen by the intended person.

But if your SMS messages are also copied to your iPad (which you left on your desk at work) or your laptop or desktop (which, likewise, may be left in the office, out of your control) then password reset messages sent over SMS will appear on those devices too.

Which means that your [fr]enemies at work may be able to easily gain control over some of your accounts, simply by requesting a password reset while you’re at lunch. And, since you’re really enjoying your three-bourbon lunch, you don’t even notice the messages appearing on your phone until it’s too late (at which point you’re alerted, not by the Twitter account reset, but by dozens of replies to the “I’m an idiot!” tweet your co-workers posted on your behalf.)

Fortunately, there’s an easy way to correct this.

In OS X Yosemite, go into the System Preferences application and select “Notifications.” Then go down to “Messages,” and where it says “Show message preview” make sure the pop-up is “when unlocked,” not “always.” If this is set to “when unlocked,” then the contents of SMS messages won’t be displayed when the desktop is locked, only a “you got a message” sort of notification. You might also consider disabling the “Show notifications on lock screen” button just above it, which will even disable the notification of the notification.

Yosemite SMS Notification Settings

In iOS, a similar setting can be found in Settings, also under Notifications:

iOS SMS Notification Settings

However, the control here isn’t quite as fine-grained — you can either show notifications on the lock screen or not, and if they’re shown at all, then the contents will be displayed as well.

You might even consider preventing SMS notifications from displaying on your primary phone when locked, but if it’s almost never out of your control, then perhaps that’s not a big risk to worry about.

Note that both of these settings apply to iMessages as well as SMS messages.

If you never use SMS messages for account validation (whether you call them 2FA or 2SV or just “validation messages”), then you might not need to worry about this at all. Though it’s probably a good idea to at least consider disabling these notifications anyway…

Even more posts about iOS encryption

The assertion recently made by Apple that “it’s not technically feasible” to decrypt phones for law enforcement has really stirred up several pots.

Many in law enforcement are upset that Apple is “unilaterally” removing a key tool in their investigations (whether that tool has ever been truly “key” is another debate). Some privacy experts hail it as a great step forward. Others say “it’s about time.” And still others debate whether it’s quite as absolute a change as Apple’s making it sound.

I wrote extensively about this earlier this week, trying to pull together technical details from Apple’s “iOS Security” whitepaper and some key conference presentations. What’s amusing, now that I look through my archives, is that I said a lot of the same things 18 months ago.

As I was finishing this weekend’s post, Matthew Green posted a very good explanation as well, a bit higher on the readability scale without losing too many of the technical details. He later referred to my own post (thanks!) with an accurate note that we don’t know for certain whether the “5 second delay” between consecutive attempts can be overridden by Apple with a new Secure Enclave firmware.

Also later on Monday, Julian Sanchez published a less technical, much more analytic piece that’s worth reading for some of the bigger picture issues. His Cato Institute post is also a good read, to help understand why backdoors in general are a bad idea, and how this may turn out to be a rerun of the 1990’s Crypto Wars.

And just this morning, Joseph Bonneau posted a great practical analysis of the implications of self-chosen passcodes on the Freedom to Tinker blog. This latest story shows how even though, at a technical level, some strong passcodes may take years to break, in practical terms users don’t pick passcodes that are “random enough”. It even has a pretty graph.

One final suggestion made in Mr. Bonneau’s post (and also voiced by many others in posts or on twitter, including myself) is that a hardware-level “wrong passcode count” seems like a great idea. I’d been concerned about how to integrate that count with the user interface, but then he estimates that “A hard limit of 100 guesses would leave about 3% of users vulnerable” (based on the statistics he presents).

This almost throwaway comment made me wonder — if the user interface is (typically) configured to completely lock, or even wipe, a phone after 10 guesses, then why not let OS-level brute force attempts (initiated through the mythical Apple-signed external boot image) continue only up to 20 attempts? At that point, the hardware can simply refuse to attempt any further passcode key derivations, and not even worry about what to do with the phone (lock, wipe, or whatever). If the user has already hit 10 attempts through the UI, this count will never be reached in hardware anyway.
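
To make that concrete, here’s a toy model of the counter logic I have in mind — purely an illustration of the proposal, not a claim about how the Secure Enclave actually works:

class ToySecureElement:
    """Toy model of a hardware-enforced guess limit (illustration only)."""
    HARD_LIMIT = 20   # safely above the UI's 10-attempt lock/wipe threshold

    def __init__(self, correct_passcode: str):
        self._passcode = correct_passcode
        self._failed = 0

    def try_unlock(self, guess: str) -> bool:
        if self._failed >= self.HARD_LIMIT:
            raise RuntimeError("hardware refuses any further key derivations")
        if guess == self._passcode:
            self._failed = 0    # the hard part: proving to the hardware that this reset is legitimate
            return True
        self._failed += 1
        return False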

The only hard part about this idea would be finding a secure way for the secure element to know that the passcode was properly entered. If we rely on the operating system to actually verify the passcode, and then notify the secure element, then that notification may be subject to spoofing by an attacker. This may look like an intractable problem, but I’m confident that it isn’t, and that a workable (or even elegant) solution can be found.

If Apple could add that level of protection, then even a 4-digit numeric passcode could be “strong enough” (provided they stay away from the top-50 or so bad passcodes). And at that point, it would absolutely be “technically infeasible” for Apple to do anything with a locked phone, other than retrieve totally unencrypted data.