Yesterday, the information security company Trail of Bits announced a new service, called Tidas. The service is intended to make it easy for developers to include a password-free authentication experience in mobile apps on the iOS platform. They’ve provided some sample code and a developer guide / FAQ, and I’ve spent some time looking at them to try to understand how it works. Here are my first impressions.
NOTE: I haven’t actually looked at the full protocol running “in the wild” yet, so it’s quite possible I haven’t fully grokked the system. Take this with a grain of salt. I’ll try to update any egregious misunderstandings as I become aware of them.
The heart of the Tidas system is a new feature, introduced in iOS 9, which allows a public / private keypair to be created on an iOS device with the private key hidden, inaccessibly, in the Secure Enclave. This feature was described in Session 706: Security and Your Apps at the 2015 WWDC (the relevant content begins about 46 minutes into the presentation, at slide 195). In this usage, the private key is never visible to the application, and can in fact never leave the Secure Enclave, even for device backups. The application can send data to the Secure Enclave, with a request to have it signed by the designated private key. The device prompts the user to authenticate with their fingerprint, and if the fingerprint matches, the private key signs the data and the result is returned to the application.
To enroll, the user must first authenticate somehow with the remote service. If their account already exists, they’ll need to log into the service, using a password, 2-factor login, or whatever other mechanisms the application provides. If this is their first time using the service, then no passwords are necessary and their enrollment is simply part of the onboarding process. The device then creates the public / private key pair (using elliptic curve P-256), and sends the public key to the server, which associates it with the user’s account.
Later, when the user wants to log into the account again, the application creates a new request. The documentation I read didn’t seem to indicate that it uses a challenge / response format; instead, the application creates its own message to sign. The application appends the current timestamp to the message, and sends a hash of the (message + timestamp) to the Secure Enclave. The phone then prompts the user for their fingerprint, signs the hash, and returns the signature to the application. The inclusion of the timestamp helps protect against replay attacks using old authentication requests.
The final message sent to the Tidas server, then, includes basic header information, the new message being signed, a timestamp, the SHA-1 hash of (message + timestamp), the signature of that hash, and finally the user’s public key.
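As a sketch, the client side of that flow might look something like this in Python (the field names and encodings here are my guesses, not Tidas’s actual wire format, and the sign callback stands in for the Secure Enclave, which would prompt for a fingerprint before signing):

```python
import base64
import hashlib
import time

def build_auth_payload(message: bytes, public_key_der: bytes, sign):
    """Assemble the authentication payload: message, timestamp,
    SHA-1 of (message + timestamp), a signature over that hash, and
    the user's public key."""
    timestamp = str(int(time.time())).encode()
    digest = hashlib.sha1(message + timestamp).digest()
    return {
        "message": message,
        "timestamp": timestamp,
        "digest": digest,
        # In the real system, signing happens inside the Secure Enclave,
        # only after the user passes the fingerprint check.
        "signature": sign(digest),
        "public_key": base64.b64encode(public_key_der),
    }
```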
The server uses the public key to look up the valid user, and validates the signature of the message. Then, the server returns a session token to the application, which allows the user to continue using the app without needing to re-authenticate for every action. The duration of this session token is also left up to the developer.
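On the server side, validation presumably amounts to something like the following sketch (the verify callback, the replay window, and the token generation are my assumptions; a real deployment would perform an actual ECDSA P-256 verification against the stored public key):

```python
import hashlib
import secrets
import time

MAX_SKEW = 300  # seconds; an assumed replay window, not a Tidas constant

def validate_login(payload, users, verify, now=None):
    """Look up the user by public key, check the timestamp for freshness,
    recompute the hash, and check the signature. Returns a session token
    on success, None on failure."""
    user = users.get(payload["public_key"])
    if user is None:
        return None
    now = time.time() if now is None else now
    if abs(now - int(payload["timestamp"].decode())) > MAX_SKEW:
        return None  # stale request: likely a replayed login
    digest = hashlib.sha1(payload["message"] + payload["timestamp"]).digest()
    if digest != payload["digest"]:
        return None
    if not verify(user["public_key_der"], digest, payload["signature"]):
        return None
    # Token format and lifetime are left to the developer.
    return secrets.token_urlsafe(32)
```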
All in all, I think it’s an interesting system. I very much like the fact that it’s using a full-on public / private key system, and especially that the private key is completely inaccessible to users and attackers alike. This neatly avoids one of the primary problems with other authentication systems: compromise of user credentials when servers are hacked. There’s no password on the server to crack, and no “password equivalent” (like a hash or long-lived secret) that can just be extracted and used by an attacker (no “pass the hash” attack).
I’m a little concerned that the message is self-created, though this does eliminate a client-server round trip. I think it may be wise to set some basic standards, or at least very strong recommendations, for the content and format of that validation message. (It’s also possible that such recommendations exist and I just missed them on my first read-through). The use of the timestamp inside the signature should also help to mitigate this concern. Also, it would be nice if the session token was used more like an OAuth access token, signing each request individually, though I suppose there’s no reason that can’t be implemented at the application level.
There are still other problems that Tidas won’t directly improve: adding the service to existing accounts, enrolling additional devices to the same account, and dealing with a lost device or password, all of which have proven to be weak points in most authentication systems. Finally, this is only available on newer iOS devices with TouchID, though I would expect that it could be supported on other platforms with similar capabilities.
In some ways, Tidas feels similar to the FIDO U2F system, which also utilizes public / private key signatures, but relies on dongles, doesn’t utilize fingerprint verification, and has a more strictly-defined protocol.
I’m excited to see this new service, and hope to see it (and similar systems) move forward.
For the last several years, we’ve tried to keep a big “snow stick” out on our deck to capture images of big snowfalls. In particular, the winter of 2009-2010 was exceptional for this, with no fewer than 3 very large storms in our area (including the crazy storm which happened at ShmooCon 2010). That storm dumped nearly 30” over two days at Dulles Airport, just a few miles away from our house.
Today, we’re getting a storm that promises to rival or exceed that storm, with the Capital Weather Gang calling for as much as 40 inches in the “best case” scenario (or worst case, depending).
So I had to update the snow stick, which previously topped out at 32” or so. It now reaches a full 50”. No way we can break that (if we do, the deck will probably collapse anyway and it won’t matter). I tweeted a picture of the snow stick yesterday, and almost immediately was challenged to post a time-lapse video. Which I thought “no way,” then “maybe,” then “Oh, wait, if I do this….” After a couple hours of playing, I had an old Canon Digital Rebel running off an external power supply, with a Raspberry Pi triggering photos every minute and downloading them to the local SD card. But it crashed after about 5 minutes. I spent some hours last night trying to figure out what was up, but couldn’t make it work — the link between the rPi and the camera just stops working after a few photos, whether I use the really cool script I found or just manually capture images.
After conceding defeat on the SLR front, I thought, maybe I could find an iOS app to do this. There must be one. And, sure enough, a few minutes of searching led me to TimeLapse, by xyster.net. I grabbed an iPad 3 from my drawer of crazy old iOS devices, installed it, and figured out how to get it established in the window. At first, I planned to simply tape it to the window, but then the image was framed all wrong (it had to be located above the expected snow line if I was to get anything). But I realized that it’d just barely fit on the frame of the lower sash, and so it was off to the scrap pile to make a little shelf.
It’s actually screwed into the sash (I’m sure we’ll never notice the holes once it’s gone), though I got a little nervous while doing so, taking care not to drive the screws all the way into the windowpane. Another small strip provides a ridge to keep the iPad from falling off. Just below, you can see the edge of an LED strip I had lying around… I cut it in half, linked the two halves together, and taped them to the window, facing outwards. When we tried this light last night, it was strong enough that the snow stick cast a shadow, so hopefully that’ll be enough to keep taking pictures overnight.
The lights and the iPad are both plugged into a power strip resting on the window sill. (I should probably tape the power strip to the wall, or get a USB extender cable, so that it won’t pull the iPad down when it inevitably gets knocked off the sill). I’ll eventually move the strip onto a UPS, which should hopefully let me keep going even during a power failure. (We’re almost certainly going to lose power at some point…I just hope it doesn’t go for too long. There’s only so far I want to take this, you know…and we have an electric snowblower, so no power means sore back.)
Not long after tweeting the picture of the whole rig, someone joked about streaming the images, which was amusing, since I was in the middle of getting live images posted to this site anyway. I have a small Linux box (running on an old 1st generation Apple TV), which I’ve used as a local “photo dump” to sync pictures off my camera. I set up a cron job to rsync the TimeLapse app’s photos off the iPad (it’s a jailbroken device) and onto the Linux server; in between synchronization runs, it copies the most recent image here.
(updates every 10 minutes or so)
So far, it seems to be (mostly) working, but the app has stopped running twice already — once after only 5 minutes, and again after an hour or so. I don’t know if the device is getting an alert and popping out of the app, or if it’s because it’s jailbroken, or if it’s something else altogether. If I can figure out how to send a text message from my Linux box, I can always have it alert me if the most recent sync doesn’t seem to have grabbed any new images. If I get that working, I’ll be sure to update here.
Hopefully I can work out these kinks and get a nice video…if it runs every 2 minutes, then that’s about 1 second per hour at 30 fps, so this’ll be a nice minute or two video once it’s all done.
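The back-of-the-envelope math behind that estimate (my numbers, just to illustrate):

```python
def timelapse_seconds(storm_hours, interval_minutes=2, fps=30):
    """One frame every interval_minutes, played back at fps."""
    frames = storm_hours * 60 / interval_minutes
    return frames / fps

# At a frame every 2 minutes, 30 frames accumulate per hour, so each
# hour of storm becomes one second of video at 30 fps.
```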
Update Okay, I still don’t know why the app is crashing. It actually died while Andrea was looking right at it — took a picture, went black, returned to the iPad springboard. Dunno. I wrote a simple python script that looks for the newest picture that’s been synced from the iPad, and if it’s more than 11 minutes old, it calls another script (oysttyer, a command-line Perl Twitter app), which sends me a DM on Twitter. Now I just have to make my phone make a really loud noise for DMs from that account so it’ll wake me up overnight, if the app needs restarting.
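For the curious, the core of that watchdog logic is only a few lines (the directory path and the oysttyer arguments below are placeholders standing in for my real setup, not the tool’s actual flags):

```python
import os
import subprocess
import time

MAX_AGE = 11 * 60  # seconds before I consider the sync stalled

def newest_mtime(directory):
    """Modification time of the most recently synced photo."""
    paths = [os.path.join(directory, name) for name in os.listdir(directory)]
    return max(os.path.getmtime(path) for path in paths)

def sync_is_stale(directory, now=None):
    now = time.time() if now is None else now
    return now - newest_mtime(directory) > MAX_AGE

def alert_if_stale(directory):
    if sync_is_stale(directory):
        # oysttyer is the command-line Perl Twitter client; these exact
        # arguments are illustrative, not its real invocation.
        subprocess.call(["oysttyer", "send-dm", "timelapse sync stalled"])
```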
Update Update Looks like there’s some kind of memory problem that’s causing the app to reliably crash after an hour of use. However, since I’m able to detect the crash (well, the lack of updates) pretty easily, I’ve now added a remote restart. So whenever it crashes, the pictures get old, my script notices nothing new’s coming through, and it re-opens the app. Yay.
I’ve also stitched together the first 8 hours of video and put it up on YouTube. It gets a little dark towards the end — the LEDs help, but it’s still pretty dim out there, even with all the skyglow reflected in the snow. When it’s all done I’ll see what I can do to make the light levels more consistent across the whole video. Oh, and my brother created a Twitter account which simply scrapes the current image from the blog and tweets it.
I just finished presenting this at ShmooCon, and wanted to get the slides out quickly before it got shoved aside by the next crisis. :) I’ll replace this with a blog entry that’s actually useful later.
The short version is this:
- I do a lot of application testing, for web and iOS / mobile apps
- Many (most?) of those apps rely on some kind of authentication to a back-end server
- How that authentication is handled seems to be generally restricted to a handful of systems
- It seemed to me that understanding how those systems work is important to being able to fully test such applications
- So this talk explains how the systems work, what’s good and bad, and why
There’s also a whitepaper (in final draft) that goes into even more detail and has extensive references. I’ll post that here as well when it’s released.
Here’s the abstract from the conference, which says all I just said but in fancier words:
The great thing about standards is there are so many to choose from. That’s especially true in the realm of web and mobile application authentication. From Base-64 to OAuth, there are nearly as many ways to send your password to a server as there are ways to store that password.
But how do these work? Is any one system better than another, and if so, why?
Application testers need to understand how an app authenticates, in order to properly assess risk. Developers need to be able to make good design decisions. And end users may wonder just how safe their password really is online.
This talk explains, with simple examples, how some of the most frequently-seen authentication systems work. It identifies the characteristics of an “ideal” authentication system, compares the common methods against that ideal, and demonstrates how to verify that they’ve been implemented correctly.
Finally, the talk will demonstrate a tool which can help make it easier to identify, test, and verify these systems.
I hope for this presentation, and the white paper (and eventually, a simple tool as well) to be a good introduction and even reference to how these systems work.
[Note: Yes, I understand the point of DLP. Yes, I’m being unrealistically idealistic. I still think this is wrong, and that we do ourselves a disservice to pretend otherwise.]
The Latest Craziness
It is happening again. A major computer manufacturer (this time Dell, instead of Lenovo) shipped computers with a trusted root TLS CA certificate installed on the operating system. Again, the private key was included with the certificate. So now, anyone who wants to perform a man-in-the-middle attack against users of those devices can easily do so.
(Image by Kenn White (@kennwhite))
But as shocking as that may have been, what comes next may surprise you!
Data Loss Prevention and Certificate Pinning
It’s (reasonably) well known that many large enterprises utilize man-in-the-middle proxies to intercept and inspect data, even TLS-encrypted data, leaving their networks. This is justified as part of a “Data Loss Prevention” (DLP) strategy, and excused by “Well, you signed a piece of paper saying you have no privacy on this network, blah blah blah.”
However, I had no idea that browser makers have conspired to allow such systems to break certificate pinning. (and apparently I wasn’t the only one surprised by this).
Certificate pinning can go a long way toward restoring trust in the (demonstrably broken) TLS public key infrastructure, ensuring that data between an end user and internet-based servers is, in fact, properly protected.
It’s reasonably easy to implement cert pinning in mobile applications (since the app developer owns both ends of the system — the server and the mobile app), but it’s more difficult to manage in browsers. RFC 7469 defines “HPKP”, or “HTTP Public Key Pinning,” which allows a server to indicate which certificates are to be trusted for future visits to a website.
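As a concrete example, a server opting in to HPKP sends a response header along these lines (the pin values here are the placeholder hashes from the RFC’s examples, not real ones):

```
Public-Key-Pins: pin-sha256="d6qzRu9zOECb90Uez27xWltNsj0e1Md7GkYYkVoZWmM=";
                 pin-sha256="E9CZ9INDbd+2eRQozYqqbQ2yXLVKB9+xcprMF+44U1g=";
                 max-age=5184000; includeSubDomains
```

Each pin-sha256 value is a base64-encoded SHA-256 hash of a certificate’s public key info; the second pin is a backup, in case the primary key has to be replaced.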
Because the browser won’t know anything about the remote site before it’s visited at least once, the protocol specifies “Trust on First Use” (TOFU). (Unless such information is bundled with the browser, which Chrome currently does for some sites). This means that if, for example, the first time you visit Facebook on a laptop is from home, the browser would “learn” the appropriate TLS certificate from that first visit, and should complain if it’s ever presented with a different cert when visiting the site in the future, like if a hacker’s attacking your connection at Starbucks.
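A toy sketch of that trust-on-first-use behavior (a simplification: real HPKP also carries max-age, backup pins, and only learns pins over an already-validated chain):

```python
import hashlib

class PinStore:
    """Minimal TOFU pin store: learn a site's key on the first visit,
    require a match on every visit after that."""
    def __init__(self):
        self.pins = {}  # hostname -> set of SPKI SHA-256 hashes

    def check(self, hostname, spki_der):
        pin = hashlib.sha256(spki_der).digest()
        if hostname not in self.pins:
            self.pins[hostname] = {pin}  # first use: trust and remember
            return True
        return pin in self.pins[hostname]  # later: must match what we learned
```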
But some browsers, by design, ignore all that when presented with a trusted root certificate, installed locally:
Chrome does not perform pin validation when the certificate chain chains up to a private trust anchor. A key result of this policy is that private trust anchors can be used to proxy (or MITM) connections, even to pinned sites. "Data loss prevention" appliances, firewalls, content filters, and malware can use this feature to defeat the protections of key pinning.
We deem this acceptable because the proxy or MITM can only be effective if the client machine has already been configured to trust the proxy’s issuing certificate — that is, the client is already under the control of the person who controls the proxy (e.g. the enterprise’s IT administrator). If the client does not trust the private trust anchor, the proxy’s attempt to mediate the connection will fail as it should.
What this means is that, even when a remote site specifies that a browser should only connect when it sees the correct, site-issued certificate, the browser will ignore those instructions when a corporate DLP proxy is in the mix. This allows the employer’s security team to inspect outbound traffic and (they hope) prevent proprietary information from leaving the company’s network. It also means they can see sensitive, personal, non-corporate information that should have been protected by encryption.
This Is Broken
I, personally, think that’s overstepping the line, and here’s why:
[ranty opinion section begins]
The employer’s DLP MITM inspecting proxy may be an untrusted third party to the connection. Sure, it’s trusted by the browser, that’s the point. But is it trusted by the user, and by the service to which the user is connecting?
Say, for example, a user is checking their bank account from work (never mind why, or whether that’s even a good idea). Does the user really want to allow their employer to see their bank password? Because they just did. Does the bank really want their customer to do that? Who bears the liability if the proxy is hacked and banking passwords extracted? The end user who shouldn’t have been banking at work? The bank? The corporation which sniffed the traffic?
A corporation has some right to inspect their own traffic, to know what’s going on. But unrelated third parties also have a right to expect their customers’ data to be secure, end-to-end, without exception. If this means that some sites become unavailable within some corporate environments, so be it. But the users need to be able to know that their data is secure, and as it stands, that kind of assurance seems to be impossible to provide.
Users aren’t even given a warning that this is happening. They’re told it could happen, when they sign an Acceptable Use Policy, but they aren’t given a real-time warning when it happens. They deserve to be told “Hey, someone is able to access your bank password and account information, RIGHT NOW. It’s probably just your employer, but if you don’t trust them with this information, don’t enter your password, close the browser, and wait until you get to a computer and network that you personally trust before you try this again.”
[end ranty section]
It’s Bigger Than Just The Enterprise
Unfortunately, it’s not just large corporations which are doing this kind of snooping. Just a few days ago, I was at an all-night Cub Scout “lock-in” event for my eldest son, at a local volunteer fire department. They had free Wi-Fi. Great! I’m gonna be here all night, might as well get some work done in the corner. Imagine my surprise when I got certificate trust warnings from host “18.104.22.168”. The volunteer fire department was trying to MITM my web traffic.
In my job, I frequently recommend certificate pinning as a vital mechanism to ensure that traffic is kept secure against any eavesdropper. Now, suddenly, I’m faced with the very real possibility that there’s no point, because we’re undermining our own progress in the name of DLP. Pinning can make TLS at least moderately trustworthy again, but if browsers can so easily subvert it, then we’re right back where we started.
Finally, though I’m not usually one to encourage tin foil hat conspiracy theories…with all the talk about companies taking the maximum possible steps to protect their users’ data, with iPhone and Android encryption and the government complaining about “going dark”… a DLP pinning bypass provides an easy way for the government to get at data that users might otherwise think is protected. Could the FBI, or NSA, or <insert foreign intelligence or police force> already be requesting logs from corporate MITM DLP proxies? How well is that data being protected? Who else is getting caught up in the dragnet?
Cognitive Dissonance FTW
On the one hand, we as an industry are:
- Advocating strongly for the maximum possible privacy and security protections for users’ data
- Developing and promulgating solutions such as certificate pinning and HPKP to ensure connections are secure and trusted
- Loudly complaining when government entities push for a “back door” into such data, either at rest or in transit
But at the same time, we:
- Tell enterprises that “hackers are stealing your data”
- Insist that they need to inspect everything leaving their networks so they can catch these hackers
- Suggest that, to do this, they simply install this “back door” on their network
- Assure them it’s okay to ignore everything we said about privacy and encryption…just add a disclaimer when people get on the network
I think this is a lousy situation to be in. Who do we fight for? What matters? And how do we justify ourselves when we issue such contradictory guidance? How can we claim any moral high ground while fighting against government encryption back doors, when we recommend and build them for our own customers? How can our advice be trusted if we can’t even figure this out?
I hope and believe that in the long run, users and services will push back against this. (And, as I said at the beginning, I know that I’m probably wrong.) I suspect it will begin with the services — with banks, healthcare providers, and other online services wanting HPKP they can trust, corporate DLP policies be damned. Who knows, maybe this will be the next pressure point Apple applies.
When that happens, I just hope we can offer a solution to the data loss problem that doesn’t expect a corporation to become the NSA in order to survive.
For the past year or so, I’ve been thinking about the information security research space. Certainly, with the mega-proliferation of security conferences, research is Getting Done. But is it the right kind of research? And is it of the right quality?
This has recently become a hot topic, since .mudge tweeted on June 29:
Goodbye Google ATAP, it was a blast.
The White House asked if I would kindly create a #CyberUL, so here goes!
We’ve also seen increased attention on Internet of Things, and infosec in general, from the “I Am The Cavalry” effort, and more recently, the expansion of research at Duo Labs and elsewhere.
So this seems like a good time to jot down some of my thoughts.
CyberUL and traditional research
First, the idea of an “Underwriters Laboratories” for infosec, or “CyberUL”: I think most people agree that it’s a good idea, at its core. John Tan outlined such a service back in 1999, and it’s been revisited many times since. However, many issues remain. I’m certainly not the first to bring these points up, but for the sake of discussion, here are some high-level problems.
For one thing, certifying (or in UL parlance, “listing”) products is difficult enough in the physical space, but even harder in CyberSpace. Software products are a quickly moving target, and it’s just not possible to keep up with all the revisions to product firmware, both during design and through after-sale updates.
Would a CyberUL focus on end-user products, such as the “things” we keep hooking up to the Internet, or would it also review software and services in general? What about operating systems? Cloud services?
Multiple certifications of one form or another already exist in this space. The Common Criteria, for example, is very thorough and formalized. It’s also complicated, slow, and very expensive to get. The PCI and OWASP standards set bars for testers to assess against, but the actual mechanisms of testing may not be consistent across (or even within) organizations.
Finally, there’s the question of how deep testing can go. Even with support from vendors, fully understanding some systems is a daunting undertaking, and comprehensive product evaluations may require significant resources.
Ultimately, I’m afraid that a CyberUL may suffer from many of the same problems that “traditional” information security testing faces.
So, what about traditional testing?
Much (if not most) testing is paid for by the product’s creator, or by some 3rd party company considering a purchase. The time and scope of such testing is frequently limited, which drastically curtails the depth to which testers can evaluate a product, and can lead to superficial, “checkbox” security reviews. This could be especially true if vendors wind up frantically checking the “CyberUL” box in the last month prior to product release.
Sometimes, testing can go much deeper, but ultimately it’s limited by whoever’s paying for it. If they’ll only pay for a 2-week test, then a 2-week test is all that will happen.
Maybe independent research is the answer?
There’s obviously plenty of independent research, not directly paid for by customers. However, because it’s not paid for…it generally doesn’t pay the testers’ bills in the long term.
Usually, this work comes out of the mythical “20%” time that people may have to work on other projects (or 10%, or 5%, or just “free time at night”). If research is a tester’s primary function, then that dedicated work is often kept private: its goal is to benefit the company, sell vulnerabilities, improve detection products, etc.
Firms which pay for truly independent and published research are vanishingly rare. Today’s infosec environment steers testers towards searching for “big impact” vulnerabilities, while also encouraging frequent repeats of well-trodden topics. I see very little research into “boring” stuff: process and policy, leading-edge technologies, general analysis of commodity products, etc.
What would I like to see done?
In an ideal world, with unlimited resources, what could a company focused on independent information security research accomplish?
They could perform a research-tracking function across the community as a whole: Manage a list of problems in need of work, new and under-researched issues, longer-term goals, even half-baked pie-in-the-sky ideas.
The execution of this list of topics could be left open for others to take on, or worked on in-house (or even both — some problems will benefit from multiple, independent efforts, confirming or refuting one another’s results).
The company could even possibly provide funding for external research efforts: Cyber Fast Track reborn!
Perform original research
At its core, though, the company would be tasked with performing new research. They’d look at current products, software, and technology. The focus wouldn’t be simply finding bugs, but also understanding how these systems work. Too many products are simply “black boxes,” and it’s important to look under the hood, since even systems which are functioning properly can present a risk. How many of today’s software and cloud offerings are truly understood by those who sign off on the risks they may introduce?
We occasionally see product space surveys (for example, EFF’s Secure Messaging Scorecard). We need more efforts like that, with sufficient depth of testing and detailed publication of methods and results, as well as regular and consistent updates. Too often such surveys are completed and briefly publicized, generating a few sales for the company which performed it, and then totally forgotten.
I’d also like to see generalized risk research across product categories — for example, what kinds of problems do Smart TVs or phone-connected door locks create? I don’t mean a regular survey of Bluetooth locks (which might be useful in itself) but a higher-level analysis of the product space, and potential issues which purchasers need to be aware of.
Specific product testing could also be an offered service, provided that the testing permits very deep reviews without significant time limitations, and that the results, regardless of outcome, be published shortly after the conclusion of the effort (naturally, giving the vendor reasonable time to address any problems).
An important but currently underutilized function is “research about research.” The Infosec Echo Chamber (mostly Twitter, blogs, and a few podcasts) is great about talking about other research and findings, but not very good at critically reviewing and building upon that work.
We need more methodical reviews of existing work, confirming and promoting findings when appropriate, and correcting and improving the research where problems are discovered. Currently, those best able to provide such analysis are frequently busy with paying work, and so valuable insights are delayed or lost altogether.
Related to this is doing a better job of promoting and explaining research, findings, and problems, both within the community and also to the media in general. Another related function would be managing a repository, or at least a trusted index, of security papers, conference slides, and other such information.
Tracking broader industry trends
The Verizon Data Breach Investigation Report (DBIR) provides an in-depth annual analysis of data breaches. Could the same approach be used for, say, an annual cross-industry “Bug Report,” identifying and analyzing common problems and trends? [or really, any other single topic…I don’t know whether a report focused on bugs would be worthwhile.]
The DBIR takes a team of experts months to collect, analyze, and prepare — expanding that kind of report into other arenas is something that can’t be undertaken without a significant commitment. An organization dedicated to infosec research may be among the few able to identify the need for, and ultimately deliver, such tightly-focused reporting.
Shaping research in general
Finally, I (and many others, I believe) think that the industry needs a more structured and methodical approach to security research. An organization dedicated to research can help to develop and refine such methodologies, encouraging publication of negative findings as well as cool bugs, emphasizing the repeatability of results, and guaranteeing availability of past research. The academic world has been wrestling with this for decades, but the infosec community has only begun to transition from “quick and dirty” to “rigorous and reliable” research.
How can we do this?
These goals are difficult to accomplish under our current research model: Lack of dedicated time and availability for ad-hoc work are just two of the biggest problems. Breadth, depth, and consistency of testing, and long-term availability of results, are among the other details we haven’t yet worked out.
A virtual team of volunteers might work, but they’d still be relying on stolen downtime (or after-hours work). Of course, they’d also have to worry about conflicts of interest (“Will this compete with our own sales?” and “Don’t piss off our favorite customer.” being two of my favorites.) Plus, maintaining consistency would be an issue, as team members drift in and out.
A bug-bounty kind of model might be possible, like the virtual team but even more ad-hoc (“Here’s a list of things we need to do. Sign up for something that interests you!”), and with predictably more logistical and practical problems.
Plus, for either virtual approach, you’d still need some core group to manage everything.
Ultimately, I think a non-profit company remains the only way to make this happen. This would allow the formation of a core, dedicated team of researchers and administrators. They could charge vendors for specific product tests, and possibly even receive funding from industry or government sources, though keeping such funding reliable year after year will probably be a challenge.
John Tan, author of the 1999 CyberUL paper, updated his thoughts earlier this month. A key quote, which I think drives to the heart of the problem:
"If your shareholder value is maximized by providing accurate inputs for decision making around risk management, then you're beholden only to the truth."
Any company which can keep “Provide risk managers the best data, always” as a core mission statement, and live up to it, will, I think, be on the right track.
So, can this work?
I honestly don’t know.
There are many things our community does well with research, but a lot which we do poorly, or not at all. An independent company that can focus on issues like those I’ve described could have a significant positive impact on the industry, and on security in general. But it won’t happen easily.
According to John Tan’s initial paper, it took 30 years of insurance company subsidies before Underwriters Laboratories could reach a level of vendor-funded self-sufficiency. We don’t have that kind of time today. And the talent required to pull this off wouldn’t come cheaply (and, let’s face it, this is probably the kind of dream job that half the speakers at Black Hat would love to have, so competition would be fierce).
If anyone can run with this, my money would definitely be on Mudge. He’s got the knowledge, and especially the experience of running Cyber Fast Track, not to mention the decades of general information security experience behind him. But he’s definitely got his work cut out for him.
Hopefully he’ll come out of stealth mode soon. I’d love to see what we can do to help.
A new service was just announced at the RSA conference that takes an interesting approach to hashing passwords. Called “Blind Hashing,” from TapLink, the technology is fully buzzword-compliant, promising to “completely secure your passwords against offline attack.” Pretty grandiose claims, but from what I’ve been able to see in their patent so far, it seems like it has some promise. With a few caveats.
Traditionally, passwords are hashed and stored in place. First we had the Unix crypt() function, which, though it was specifically designed to be “slow” on systems at the time, is now hopelessly outdated and should be killed with fire at every opportunity. That gave way to unsalted MD5-based hashes (also a candidate for immediate incendiary measures), salted SHA hashes, and today’s state-of-the-art functions bcrypt, scrypt, and PBKDF2. The common goal throughout this progression of algorithms has been to make the hashing function expensive, in either CPU time or memory requirements (or both), thus making a brute-force attack to guess a user’s password prohibitive.
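For illustration, the salted slow-hash approach boils down to something like this minimal sketch, using only Python’s standard library (the iteration count and salt size here are my own choices, not a recommendation from any particular site):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=600_000):
    """Derive a slow, salted password hash with PBKDF2-HMAC-SHA256."""
    if salt is None:
        salt = os.urandom(16)  # unique, per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    return salt, digest

def verify_password(password, salt, expected, iterations=600_000):
    """Recompute with the stored salt; compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    return hmac.compare_digest(candidate, expected)
```

bcrypt and scrypt would look much the same from the caller’s perspective, just with different (memory-hard, in scrypt’s case) work functions under the hood.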
So far, we seem to have accomplished that goal, but a downside is that a slow hash is still, well, slow. Which can potentially add up, when you’ve got a site that processes huge numbers of logins every day.
The “Blind Hashing” system takes a different approach. Rather than handling the entire hash locally, the user’s password is, essentially, hashed a second time using data from a cloud-based service. Here’s an excerpt from the patent summary:
A blind hashing system and method are provided in which blind hashing is used for data encryption and secure data storage such as in password authentication, symmetric key encryption, revocable encryption keys, etc. The system and method include using a hash function output (digest) as an index or pointer into a huge block of random data, extracting a value from the indexed location within the random data block, using that value to salt the original password or message, and then hashing it to produce a second digest that is used to verify the password or message, encrypt or decrypt a document, and so on. A different hash function can be used at each stage in the process. The blind hashing algorithm typical runs on a dedicated server and only sees the digest and never sees the password, message, key, or the salt used to generate the digest.
Thinking through the process, here’s one way this might work:
1. The user provides their userid and password to the system.
2. The password is hashed (optionally using a locally-stored salt, unique to the user).
3. Traditionally, this hash would then be stored locally, and that’s what’s used to compare against the identically-generated hash at the next login.
4. In Blind Hashing, the hash is instead sent to a remote service.
5. This service uses the hash as an index into a massive (petabyte-sized) database, to retrieve a random number. Each hash thus points to some unique random number.
6. The number is returned to the server, and used as a salt to hash the password a second time.
7. This second hash is stored locally and used for future logins.
Put in a more functional notation, this might look like:
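Something like this sketch, perhaps (all function and variable names here are my own invention, not TapLink’s, and the small in-memory data block stands in for the service’s petabyte database):

```python
import hashlib

def blind_hash_lookup(hash1, data_block):
    """Stand-in for the remote service: use the digest as an index into
    a block of random data, and extract the salt stored at that spot.
    (The real service's block is petabytes; this one is a toy.)"""
    index = int.from_bytes(hash1, "big") % (len(data_block) - 32)
    return data_block[index:index + 32]  # Salt2

def enroll(password, salt1, data_block):
    hash1 = hashlib.sha256(salt1 + password).digest()  # local first hash
    salt2 = blind_hash_lookup(hash1, data_block)       # remote lookup
    hash2 = hashlib.sha256(salt2 + password).digest()  # stored locally
    return hash2
```

Login verification is simply re-running enroll() with the stored Salt1 and comparing the result to the stored Hash2; only Salt1 and Hash2 ever live on the server.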
In the event of a compromise on the server, the attacker may recover all the Salt1 and Hash2 values. However, they will not be able to retrieve Salt2 without the involvement of the remote blind hash service. So a brute force attack will require cycling through all possible passwords and, for each password tested, requesting Salt2 from the remote service. This should, in theory, be significantly slower than a local hash / salt computation, and can also be rate-limited at the service to further protect against attacks.
On its surface, this seems a pretty solid idea. The second salt is deterministically derived from the first hash, but not in an algorithmic manner, so there isn’t a short-circuit that allows for immediate recovery of the salt. The database used to store Salt2 values is too large to be copied by an attacker. And the round trip process is (presumably) too slow to be practical for a brute force attack. Finally, the user’s password isn’t actually sent to the blind hash lookup service, only a hash of the password (salted with a value that is not sent to the service).
An attacker who compromises the (website) server gains only a collection of password hashes that are uncrackable without the correct password and the cooperation of the blind hash service. If they are able to collect all blind hash responses, they could build a dictionary of secondary salts to use in brute force attacks, but that would still be very slow (for a large site), as each password tested would be multiplied by the length of this secondary salt list. (Of course, if they can intercept the blind hash response data, then the attacker can probably also intercept the initial login process and just grab the passwords in plaintext.) Finally, an attacker who compromises the blind hash service gains access to a database too large to exfiltrate, and to an inbound stream of passwords hashed with unknown salts.
So in theory, at least, I can’t see anything seriously wrong with the idea.
But is it worth it? The only argument I’ve heard against “slow” hash algorithms like bcrypt or scrypt is that they may present too big a load to busy sites. But wouldn’t the constant communication with the blind hash service also present a fairly large load, both for CPU and especially for network traffic? What happens if the remote service goes down, for example, because of a DDoS attack, or network problems? This service protects against future breakthroughs that make modern hash algorithms easy to brute force, but I think we already know how to deal with that eventuality.
I think the biggest problem we have today, with regards to securely hashing passwords, isn’t the technology available, but the fact that sites still use the older, less secure approaches. If a site cares enough to move to a blind hash service, they’d certainly be able to move to bcrypt. If they haven’t already moved away from MD5 or SHA hashes, then I really don’t see them paying for a blind hashing service, either.
In the end, though I think it’s a very interesting and intriguing idea, I’m just not sure I see anything to recommend this over modern bcrypt, scrypt, or PBKDF2-based password hashes.
Arguably one of the more interesting developments (aside from the SIM thing, which I’m not even going to touch) was the decision by Lenovo to pwn all of their customers with a TLS Man-In-The-Middle attack. The problem here was two-fold: That Lenovo was deliberately snooping on their customers’ traffic (even “benignly,” as I’m sure they’re claiming), and that the method used was trivial to put to malicious use.
Which has me thinking again about the nature of the Certificate Authority infrastructure. In this particular case, Lenovo laptops are explicitly trusting sites signed with a private key that’s now floating around in the wild, ready to be abused by just about anyone. But it’s more than just that — our browsers are already incredibly trusting.
On my Mac OS X Yosemite box, I count (well, the Keychain app counts, but whatever) 214 different trusted root certificate authorities. That means that any website signed by any of those 214 authorities… or anyone those authorities have delegated as trustworthy… or anyone those delegates have trusted… will be trusted by my system.
That’s great, if you trust the CAs. But we’ve seen many times that we probably shouldn’t. And even if you do trust the root CAs on your system, there are other issues, like if a corporation or wifi provider prompts the user to install a custom MITM CA cert. (Or just MITMs without even bothering with a real cert).
I’ve been trying to bang the drum on certificate pinning for a while, and I still think that’s the best approach to security in the long run. But there’s just no easy way for end users to handle it at the browser level. Some kind of “Trust on First Use” model would seem to make sense, where the browser tracks the certificate (or certificates) seen when you first visit a site, and warns if they change. Of course, you have to be certain your connection wasn’t intercepted in the first place, but that’s another problem entirely.
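A Trust-on-First-Use scheme could be as simple as remembering a certificate fingerprint per host, something like this sketch (the store structure and return values are invented for illustration):

```python
import hashlib

def fingerprint(der_cert):
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def check_tofu(host, der_cert, pin_store):
    """Trust-on-first-use: remember the first certificate seen for a
    host, and warn if a later visit presents a different one."""
    fp = fingerprint(der_cert)
    if host not in pin_store:
        pin_store[host] = fp  # first visit: pin it
        return "pinned"
    if pin_store[host] == fp:
        return "ok"
    return "warn: certificate changed"  # possible MITM (or a legit re-key)
```

In a real browser the store would need persistent storage, and the “warn” case would need careful UX, since a legitimate certificate rotation looks exactly like an attack, which is the hard part of the whole idea.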
Some will inevitably argue that ubiquitous certificate pinning will break applications in a corporate environment, and yes, that’s true. If an organization feels they have the right to snoop on all their users’ TLS-secured traffic, then pinned certificates on mobile apps or browsers will be broken by those proxies. Oh, well. Either they’ll stop their snooping, or people will stop using those apps at work. (I’m hoping that the snooping goes away, but I’m probably being naïve).
When a bunch of CA-related hacks and breaches happened in 2011, we saw a flurry of work on “replacements,” or at least enhancements, of the current CA system. A good example is Convergence, a distributed notary system to endorse or disavow certificates. There’s also Certificate Transparency, which is more of an open audited log. I think I’ve even seen something akin to SPF proposed, where a specific pinned certificate fingerprint could be put into a site’s DNS record. (Of course, this re-opens the whole question of trusting DNS, but that’s yet another problem).
But as far as I know, none of these ideas have reached mainstream browsers yet. And they’re certainly not something that non-security-geeks are going to be able to set up and use.
So in the meantime, I thought back to my post from 2011, where I have a script that dumps out all the root CAs used by the TLS sites you’ve recently visited. Amazingly enough, the script still works for me, and also interestingly, the results were about the same. In 2011, I found that all the sites I’d visited eventually traced back to 20 different root certificate authorities. Today, it’s 22 (and in both cases, some of those are internal CAs that don’t really “count”). (It’s also worth noting — in that blog post, I reported that I had 175 roots on my OS X Lion system. So nearly 40 new roots have been added to my certificate store in just 3 years.)
So of the 214 roots on my system, I could “safely” remove 192. Or probably somewhat fewer, since the history file I pulled from probably isn’t that comprehensive (and my script didn’t pull from Safari too). But still, it helps to demonstrate that a significantly large percentage (like on the order of 90%) of the trust my computer has in the rest of the Internet is unnecessary in my usual daily use.
Now, if I remove those 190ish superfluous roots, what happens? I won’t be quite as vulnerable to malware or MITM attacks using certs signed by, say, an attacker using China’s CA. Or maybe the next time I visit Alibaba I’ll get a warning. But I’d bet that most of the time, I’ll be just fine. Of course, if I do hit a site that uses a CA I’ve removed, I’d like the option to put it back, which simply brings me back to the “Trust on First Use” certificate option I mentioned earlier. If we’re to go that route, might just as well set it up to allow for site-level cert pinning, rather than adding their cert provider’s CA, to “limit the damage” as it were. (Otherwise, over time, you’d just be back to trusting every CA on the planet again).
And of course, even if I wanted to do this, there’s no (easy) way to do this on my iOS devices. And the next time I got a system update, I’d bet the root store on my system would be restored to its original state anyway (well, original plus some annual delta of new root certs).
Last Saturday, I gave a talk at ShmooCon detailing the results of a short survey of iOS applications, and the way they handled (and secured) network-based authentication. For a quick summary of my talk, read on. If you’d like to follow along with the slides, they can be downloaded here. If you’d like a very detailed white paper explaining everything I said in the talk and more, well, you’ll have to wait a little longer. But I’m working on it.
As part of my “day job,” I frequently review the security of iOS applications. In most cases, these applications do not exist only within the confines of any given device, but connect to dedicated back-end services, authenticated with a username and password (or something very similar). We as consumers place a fair amount of trust in that relationship, between iPhone app and the server, and it occurred to me that it might be interesting to see how well-founded that trust is. This is an especially interesting question for applications which handle sensitive data, like a banking or healthcare app.
So I looked at the apps on my phone. Out of over 200 applications, I dropped apps which either didn’t use internet-based servers, or for which I didn’t actually have an active account. This left me with about 50 apps, of which about 40 were actually reviewed. The applications came from (what I feel to be) a fairly representative cross-section of applications, including: Banking, healthcare, travel, cloud storage, and social networking.
The review was fairly simple, focused exclusively on authentication, and didn’t allow me the time to perform deep reviews of any single application. A few apps appeared much more complex than the rest, and I could easily have spent days examining each of them to fully understand how they worked. But most of the time, I spent between 30 minutes and a couple of hours (per application) to gather the information I needed.
I wanted to focus on four specific areas of interest:
Secure Network - Are network communications properly protected?
Secure Login - Is the network-based login performed in a way that isn’t open to attack?
Secure Session - Are login credentials properly handled for ongoing application use, or after the app has been quit and restarted?
Secure Storage - Are login credentials stored securely on the device (if at all)?
To complete the survey, I used a jailbroken iPhone running iOS 8.1.2, and Burp Suite Pro, a man-in-the-middle proxy tool. The proxy allowed me to collect and observe traffic between the applications and their servers, while the jailbroken phone allowed me to easily access and review data stored on the device. I made four complete passes across the list of applications, focusing on a different behavior or device configuration for each pass.
For the first pass at all the applications, I did not have Burp’s MITM proxy certificate installed on the device. This allowed me to measure whether the applications noticed that their communications were being intercepted, and also to get a feel for how well errors were reported to the user. I’m happy to report that most (all but 2) applications did in fact detect the intercepted communication, and refused to proceed. However, of those 38 applications, only one had a decent error message, while all the rest were cryptic, unhelpful, or flat-out misleading.
Of the two applications which didn’t detect the MITM interception, one continued to communicate over TLS as if nothing was wrong, while the other never noticed because it wasn’t even using TLS in the first place: all communications with this one app happened over HTTP.
I then installed the proxy’s CA certificate onto my test device, which caused all the TLS communications to suddenly become “trusted,” and allowed for interception, and inspection, of nearly all the application traffic.
I say nearly all because four applications appeared to use certificate pinning. For these apps, it was not enough that the TLS connection be certified with a trusted certificate; the connection required a known certificate. Since my MITM cert was trusted by the OS, it made it past the first check, but because it was my cert, it wasn’t known by the application to be the right certificate, and so these four apps refused to continue communicating.
I was able to bypass this certificate pinning on two of the applications, while the other two I had to set aside, and due to time constraints, I was unable to review them any further.
(It bears mentioning that one of the cert-pinned apps, which was the only application to provide a useful certificate error message to the user, was not a bank, health care, or even social media app, but simply a podcast player. Kudos to the developer for taking security so seriously, even for low-impact data like podcast list synchronization.)
With a trusted and operational MITM proxy functioning, I was able to review the actual passing of credentials and security tokens between the applications and their servers. Most applications sent credentials (username, password) as parameters in an HTTP POST request. A few passed the credentials via HTTP headers (for example, as an Authorization: Basic header), while two sent credentials in the URL (one of which didn’t even bother to obscure the password — it was sent in human-readable form).
With the initial login observed and understood, I then used each application for a little while, to create plenty of traffic with which to observe session authentication. In most cases, the continuing session authentication was carried via some form of security token. Most of these tokens were static in nature (unchanging from request to request), while a few were dynamic, either changing with each request, or actually being cryptographically tied to the request (for example, signing each request). These tokens were generally sent in HTTP headers, but a few were sent as URL parameters or in POST data.
Two applications didn’t use tokens, but instead simply re-sent the userid and password with every single request.
Finally, I reviewed the application sandbox for each app to see whether credential information (userid, password, tokens) were being deliberately, or accidentally, saved to the device. For this stage I looked in the system keychain, at the app’s preferences file, the HTTP cache and Cookie files, and any other developer-created files in the Documents or Library folders (I found nothing in any app’s /tmp folder).
I found userids stored just about everywhere (preferences, cookies, application-specific files in /Documents, and the keychain), passwords in a few locations (5 in the keychain and 4 stored elsewhere in the app’s filesystem), and tokens all over the place (14 in the keychain, 27 in the applications’ filesystem storage).
About a week after completing the initial collection and review of data, I relaunched every single app (after having force-quit each during pass two), to determine how they reacted after being out of use for a few days. Most apps simply sent a stored token, while a few re-sent the userid and password, and 6 asked for either the user’s password or both their userid and password. [It should be noted that this likely wasn’t enough time to measure the expiration rate of static tokens, which was not a target of the review.] This also allowed me a chance to re-observe the traffic and generally look for things I may have missed, and to verify my data.
Finally, I force-quit all the apps again, and removed the Burp CA certificate from the device, then relaunched everything. This was mostly to see if the TLS errors caused by the untrusted MITM connection returned (I could imagine some applications only checking for the trusted certificate during login, and ignoring the error from then on). All applications behaved as they did during the first pass, though a few appeared to function normally, but were instead simply displaying locally cached data. Upon forcing a network refresh, TLS errors were reported in these applications as well (and, again, most of the error messages were unhelpful).
Summary of Findings
In the end, my general conclusion was that (for the 40 apps I reviewed) security was “Not bad, but could be better.” Of the 38 applications which completed review (remember, two were pinned and I couldn’t bypass):
12 had only minor issues (insecurely stored userid, certificate pinning not in use)
6 had at least one major issue (password stored insecurely, application ignores TLS errors or doesn’t use TLS)
0 (ZERO) had no issues at all
In most cases, a few simple fixes are all that’s needed to improve the security stance of these applications.
Note that I’m calling lack of certificate pinning “minor”, because as much as I feel it’s necessary, I haven’t seen its use become anywhere near commonplace, which was certainly reflected in my findings here. Applications rely on the strength of the TLS connection (OAuth 2.0 makes this reliance explicit), but with bugs and certification authority issues, that reliance may be misplaced. Certificate pinning remains a very easy way to increase the reliability of that connection.
Top 5 Suggestions
I concluded my talk with five suggestions that I feel would greatly improve the security of any iOS application using network-based servers to host personalized data (whether sensitive or not):
1. Use TLS certificate pinning.
2. Store credential components (password, tokens, and if possible, the userid) only in the keychain.
3. Always use strong “hash” constructs (PBKDF2, HMAC, etc., as appropriate).
4. Take steps to avoid leaking credentials into cache and cookie files.
5. If possible, use one-time (nonce- or timestamp-based) tokens. Even better, tie these tokens to the request contents via a signature.
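That last suggestion, tying a token to the request contents, might look something like this sketch (the header names and the exact string-to-sign are invented for illustration, not any particular API’s scheme):

```python
import hashlib
import hmac
import time

def sign_request(secret, method, path, body):
    """Bind the token to the request: a timestamp plus an HMAC over the
    request contents. Altering or replaying the request breaks it."""
    ts = str(int(time.time()))
    msg = "\n".join([method, path, ts]).encode() + b"\n" + body
    sig = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return {"X-Timestamp": ts, "X-Signature": sig}

def verify_request(secret, method, path, body, headers, max_age=300):
    """Server side: recompute the signature and check freshness."""
    msg = "\n".join([method, path, headers["X-Timestamp"]]).encode() + b"\n" + body
    expected = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    fresh = abs(time.time() - int(headers["X-Timestamp"])) <= max_age
    return fresh and hmac.compare_digest(expected, headers["X-Signature"])
```

An intercepted request is then useless for anything but an immediate, byte-for-byte replay within the freshness window, which is a big step up from a static bearer token.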
More details, especially describing attack vectors and rationale for these suggestions, as well as detailed summary tables of all the findings, are in the slides, available here.
Apple released iOS 8.1.1 yesterday, and with it, a small flurry of bugs were patched (including, predictably, most (all?) of the bugs used in the Pangu jailbreak). One bug fix in particular caught my eye:
Available for: iPhone 4s and later, iPod touch (5th generation) and later, iPad 2 and later
Impact: An attacker in possession of a device may exceed the maximum number of failed passcode attempts
Description: In some circumstances, the failed passcode attempt limit was not enforced. This issue was addressed through additional enforcement of this limit.
CVE-2014-4451 : Stuart Ryan of University of Technology, Sydney
We’ve seen lock screen “bypasses” before (that somehow kill some of the screen locking application and allow access to some data, even while the phone is locked). But this is the first time I’ve seen anything that could claim to bypass the passcode entry timeout or avoid incrementing the failed attempt count. What exactly was this doing? I reached out to the bug reporter on Twitter (@StuartCRyan), and he assured me that a video would come out shortly.
1. Enter a bad passcode several times, until you have a “disabled for 1 minute” warning.
2. Wait a minute, and enter one more bad passcode. Now you should have to wait 5 minutes to try again.
3. As soon as the “iPhone is Disabled” message appears, hold down the power and home buttons until the phone reboots.
4. Once you see the Apple logo, release the power button, but keep holding Home.
5. After four seconds, release Home as well, and the phone should continue rebooting.
6. Once it’s rebooted, go back to the passcode screen and you’ll see that it’s enabled and there’s no entry lockout delay.
This doesn’t appear to reset the attempt count to zero, but it keeps you from waiting between attempts (which can be up to a 60-minute lockout). It doesn’t appear to increment the failure count, either, which means that if you’re currently at a 15-minute delay, the device will never go beyond that, and never trigger an automatic device wipe.
Combining this with something like iSEC Partners’ R2B2 Button Basher could easily yield a rig that just hammers away at PINs 24x7 until a hit is found (though it’d be SLOW, like 1-2 minutes per attempt…).
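Rough arithmetic, assuming a 4-digit PIN and roughly 90 seconds per attempt (the midpoint of that 1-2 minute guess):

```python
# Back-of-envelope: exhausting a 4-digit PIN space at ~90 s/attempt.
ATTEMPTS = 10_000         # PINs 0000-9999
SECONDS_PER_TRY = 90      # assumed; actual rig speed unknown

worst_case_days = ATTEMPTS * SECONDS_PER_TRY / 86_400
average_days = worst_case_days / 2  # expected hit halfway through
print(f"worst case: {worst_case_days:.1f} days, average: {average_days:.1f} days")
```

Call it ten days worst case, five on average: slow, as noted, but entirely unattended, and with no wipe to worry about.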
Why this even works, I’m not sure. I had presumed that a flag is set somewhere, indicating how long a timeout is required before the next unlock attempt is permitted, which even persists through reboots (under normal conditions). One would think that this flag would be set immediately after the last failed attempt, but apparently there’s enough of a delay that, working at human timescales, you can reboot the phone and prevent the timeout from being written.
Presumably, the timeout and incorrect attempt count is now being updated as close to the passcode rejection as possible, blocking this demonstrated bug.
I may try some other devices in the house later, to see how far back I can repeat the bug. So far, I’ve personally verified it on an iPhone 5S running 8.1.0, and an iPad 2 on 7.0.3. Update: I was not able to make this work on an iPod Touch 4th generation, with iOS 6.1.6, but it’s possible this was just an issue with hitting the buttons just right (many times it seemed to take a screenshot rather than starting up the reboot). On the other hand, the same iOS version (6.1.6) did work on an iPhone 3GS, though again, it took a few tries to make it work.
I just voted, even though pundits and statisticians have proven fairly definitively that my particular vote won’t matter. My district has had a Republican congressman for 30 years and his hand-picked heir is likely to win, and I don’t live in one of the 6 states all the news organizations tell me will decide control of the Senate. I voted because it’s the right thing to do, and because if I don’t vote, I lose the moral right to complain about the idiots in power (and anyone who knows me knows I love to complain.)
But why I hate voting isn’t the issues, or the parties, or the polarized electorate, or the aforementioned futility of my particular involvement. It’s the process. The process makes my blood boil.
For months, we are subjected to constant attack ads, literally he-said-she-said finger pointing about which candidate is the bigger idiot for siding with whichever other idiots are in power.
For weeks, the candidates clutter the countryside with illegally placed campaign signs that aren’t just an eyesore, but can seriously impede traffic safety simply by blocking drivers’ view of oncoming traffic. (Though to be fair, this has gotten much better in Fairfax County over the last few years…I don’t know how they got the candidates to stop, but I’m glad they did it).
I work at home, in my basement. When the doorbell rings, I answer it. Which means I have to interrupt my work, walk upstairs, and attend to whoever is at the door. And then get annoyed when it’s just someone stumping for a politician I don’t care about (or even one I do like). And then they get annoyed when I’m annoyed at them — as if they weren’t the ones being rude by disturbing me in the first place.
Then, finally, election day. That’s the worst.
Rather than experiencing relief that it’s all about to be over, my annoyance level spikes to new highs. First, I drop the kids off at their school (for school-provided daycare while the school is closed for election day). There’s no way to get through the front door without running a gauntlet of partisan party representatives handing you their “Sample Ballots” (which conveniently exclude all other parties — not actually a sample at all, but I suppose we’re used to the lies). Sure, there’s a “50 foot exclusion zone” around the entrance, but it’s not possible to park within that zone. So all they have to do is hover around the perimeters and they get you.
But at this point I’m not even there to vote — I’m just there to drop off my kids. (In fact, two Republican candidates even had people camped out in front of the school on Back to School night this year, so even then we weren’t able to escape their harassment). Why the school system doesn’t kick these people off their property is beyond me. (And don’t tell me it’s because of First Amendment rights — politicians can still express their views…they just shouldn’t be allowed to interrupt voters on their way to the polls).
It’s even worse today, because I’ll have to sneak past the same people for parent/teacher conferences this afternoon.
Then when I actually do go to vote, I have to navigate a different set of politicians’ antagonists (because my polling place is in a different school). And I have to present an ID to vote, because there’s an astronomically small chance that someone could be trying to vote illegally (which Never Ever Happens. Seriously.) And after I present my ID, the poll workers ask me to tell them my address — as if it weren’t already printed on my ID. Somehow, going to vote where the poll workers can’t even read the address on my ID doesn’t fill me with confidence.
(No, I know it’s because they want to be sure that I really know my address and am not simply taking someone else’s identity. It’s still bullshit. Next year, I’m reading the address from my ID before I even hand it to them. See what happens then.)
So by the time I’m done, I’ve been harassed by politicians on the radio, on the TV, in my mail, at my front door, on the way to drop off the kids, on my way to conferences with my kids’ teachers, on the way to actually vote, and then while voting, I’m told pretty clearly that the state doesn’t think I’m actually me and am trying to fraudulently cast a ballot. All this after being told again and again by, well, Science, that my vote really doesn’t matter.
In June of 2013, a few videos started circulating showing people unlocking cars without authorization. Basically, people walking directly up to a car and just opening it, or walking by cars on the street. One of the more interesting videos (watch at about 30 seconds in) showed a thief walking along the street, grabbing a handle in passing, and stopping short when the car unlocked. (Interestingly, all the videos I found this morning showed attackers reaching for the passenger side door, which may just be a coincidence.)
Predictably, this was picked up by news organizations all over the world, who talked about the “big problem” this is in the US. Then I didn’t hear much again for a while.
It’s not even a particularly new thing. This story about BMW thefts in 2012 mentions key fob reprogramming, and also work presented by Don Bailey at Black Hat 2011 (in which he discussed starting cars using a text message).
But none of these reports really shed any light on what’s actually happening, though I suspect there are a couple of different problems at play. The more recent articles included some clues:
In a statement, Jaguar Land Rover said vehicle theft through the re-programming of remote-entry keys was an on-going problem which affected the whole industry.
“The challenge remains that the equipment being used to steal a vehicle in this way is legitimately used by workshops to carry out routine maintenance … We need better safeguards within the regulatory framework to make sure this equipment does not fall into unlawful hands and, if it does, that the law provides severe penalties to act as an effective deterrent.”
This sounds a lot like the current spate of articles are referring to key fob reprogramming via the OBDII port. Basically, if you get physical access to the car, you can connect something to the diagnostic port and program a new key to work with the car. Bingo, instant key, stolen car.
Then they seem to say that “this attack can be easily mitigated by simply ensuring that thieves don’t get the tightly controlled equipment to reprogram the car.” Heh. Right.
This attack relies on a manufacturer-installed backdoor designed for trusted third parties to do authorized work on the vehicle, and instead is being exploited by thieves. Sound familiar?
I’m actually surprised it’s this simple. I haven’t given it a lot of thought, but I’d bet there are ways this could be improved. Maybe a unique code given to the purchaser of the vehicle that they would keep at home (NOT in the glovebox!) and can be used to program new keys. If they lose that, some kind of trusted process between a dealer and the automaker could retrieve the code from some central store. Of course, that opens up social engineering attacks (a bit harder) and also attacks against the database itself (which only need to succeed once).
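As a toy sketch of that owner-code idea (everything here — the code format, the KDF and its parameters — is invented for illustration, not how any automaker actually does it), the car would store only a salted hash of the code set at purchase, and programming a new key would require presenting the code itself:

```python
import hashlib, hmac, os

def enroll_owner_code(code: str):
    # At purchase: the car keeps a salted hash, never the code itself
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", code.encode(), salt, 100_000)
    return salt, digest

def may_program_key(code: str, salt: bytes, digest: bytes) -> bool:
    # At the diagnostic port: no code, no new key
    attempt = hashlib.pbkdf2_hmac("sha256", code.encode(), salt, 100_000)
    return hmac.compare_digest(attempt, digest)

salt, digest = enroll_owner_code("7431-9920")   # made-up owner code
assert may_program_key("7431-9920", salt, digest)
assert not may_program_key("0000-0000", salt, digest)
```

Note that this just moves the problem to protecting the code (and the central store that can recover it), which is exactly the social-engineering and database-attack surface mentioned above.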
Again, this seems like a good real-world example of why backdoors are hard (perhaps nearly impossible) to do safely.
But what about the videos from last year? Those thieves certainly weren’t breaking a window and reprogramming keys…they just touched the car and it opened. For those attacks, something much more insidious seems to be happening, and frankly, I’m amazed that we haven’t figured it out yet.
The thieves might be hitting a button on some device in their pockets (or it’s just automatically spitting out codes in a constant stream) and occasionally they get one right. That seems possible, but improbable. The kinds of rolling codes some remotes use aren’t perfect (especially if the master seed is compromised) but I don’t think they can work that quickly, and certainly not that reliably. (But I could certainly be wrong — it’s been a while since I looked into this).
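For a sense of why blindly spraying codes is improbable, here's a minimal sketch of a rolling-code scheme (the key, the window size, and the code derivation are all made up for illustration; real fobs use dedicated ciphers like KeeLoq, but the sync-window idea is the same):

```python
import hmac, hashlib

SECRET = b"shared-fob-secret"   # hypothetical key paired at manufacture
WINDOW = 16                     # receiver accepts counters up to 16 ahead

def code_for(counter: int) -> str:
    # The fob derives each transmitted code from the secret and a counter
    mac = hmac.new(SECRET, counter.to_bytes(4, "big"), hashlib.sha256)
    return mac.hexdigest()[:8]

def try_unlock(received: str, last_counter: int):
    # The car checks the next WINDOW counter values; on a match it
    # resyncs to that counter, so replayed (old) codes are rejected.
    for c in range(last_counter + 1, last_counter + 1 + WINDOW):
        if hmac.compare_digest(received, code_for(c)):
            return True, c
    return False, last_counter
```

With an 8-hex-digit code and a window of 16, a random guess hits with probability around 16/2³², which is why "walk down the street spraying codes" shouldn't work this reliably — unless the master seed is compromised.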
Also, in these videos, the car didn’t respond until the thief actually touched the door handle. In a couple of cases, they held the handle and then appeared to pause while they (perhaps) activated something in their other hand. I’ve wondered if this isn’t exploiting some of the newer “passive” keyless entry systems, where the fob stays in your pocket and is only activated when the car, prompted by a hand on the handle, queries the fob remotely.
It’s possible there’s a backdoor or some unintended vulnerability in this keyfob exchange, and that’s what’s being exploited. Or even just a hardware-level glitch, like a “whitenoise attack” that simply overwhelms the receiver (as suggested to me this morning by @munin). I’ve also wondered how feasible a “proxy” attack against a fob that’s just out of range might be. For example, if the attacker touches the door handle, and the car asks “are you there, trusted fob?” the fob, currently sitting on the kitchen counter, isn’t within range of the car and so won’t respond. But if the attacker has a stronger radio in their backpack, could they intercept the signal and replay it at a much stronger level, then use a sensitive receiver to collect the response from inside the house and relay it back to the car?
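The proxy idea boils down to this: a plain challenge-response proves the fob knows the key, not that the fob is near the car. A toy simulation (the key and message formats are invented; real passive-entry systems differ, but the structural point holds):

```python
import hmac, hashlib, os

KEY = b"paired-fob-key"   # hypothetical key shared by car and fob

class Fob:
    def respond(self, challenge: bytes) -> bytes:
        # The fob signs whatever challenge it hears -- nothing in the
        # protocol tells it how far away the car really is.
        return hmac.new(KEY, challenge, hashlib.sha256).digest()

class Car:
    def unlock_attempt(self, link) -> bool:
        challenge = os.urandom(16)
        response = link(challenge)
        expected = hmac.new(KEY, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(response, expected)

fob, car = Fob(), Car()

# Normal use: the fob is in radio range, the link is direct.
assert car.unlock_attempt(fob.respond)

# Relay: the attacker just forwards the challenge to the distant fob
# and carries the answer back. The crypto checks out, so the car opens.
relay = lambda challenge: fob.respond(challenge)   # stands in for the radio relay
assert car.unlock_attempt(relay)
```

The usual countermeasure proposals involve distance bounding — measuring the round-trip time tightly enough that a relay's added latency is detectable — which is much harder to retrofit than it sounds.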
This seems kind of far fetched, and there are probably a great many reasons (not least, Physics) why this might not work. Then again, we’ve demonstrated “near proximity” RFID over fairly large distances, too. And many people probably hang their keys next to the door to the garage, pretty close (within tens of feet) to the car.
It would also be reasonably easy to demonstrate. Too bad we had to sell our Prius to buy a minivan.
The bottom line is this: We’ve seen pretty solid evidence of thefts and break-ins against cars using keyless entry technology. The press love these stories as they drum up eyeballs every 6 months or so. But the public at large really doesn’t get any useful information other than “keyless is bad, mmkay?”
It’d be nice if we could figure out what’s going on and actually fix things.
Lots of discussion the last few days about Rite Aid and CVS (and possibly other merchants) actually disabling existing NFC point-of-sale functionality simply because it was suddenly being used (by Apple Pay).
NFC payments are nothing new — Android has supported them for a couple years now (on select phones, though not without some complicated political shenanigans between manufacturers and carriers). Not a lot of places support such contactless payments, though I’ve certainly been seeing more and more POS terminals with NFC-looking logos lately. And I’ve seen a LOT of new POS terminals going in recently (at Panera and Target, in particular) which definitely support the upcoming EMV (“Chip and PIN”) cards, and I believe also support NFC.
So it was a bit of a surprise when suddenly Rite Aid and CVS stopped processing NFC payments. The rumor is that they have some agreement with the Merchant Customer Exchange to specifically prohibit Apple Pay, which is silly. Apple Pay isn’t anything new (as far as I can tell), it’s just implemented on a very popular phone and so is actually getting traction today.
In a way, disabling all NFC payments is almost like saying “okay, yes, that’s a valid payment method, but we don’t take magnetic stripe cards anymore, sorry.” It would seem to me that the card networks (Visa, MasterCard, etc.) should be able to have a say in this, but I’m not sure they’ve spoken up yet.
This is all made much worse by the apparent reasoning behind the new policy: They want to push their own solution. Which itself has several problems:
It’s much clunkier (you have to launch an app, scan a code, accept the charge, and then display a new code on the phone for the cashier to scan back)
It’s not as secure (on iPhones, the Apple Pay data is stored in a separate, secure chip. For these apps, it’s stored in the phone’s filesystem)
It links directly to your checking account (not to bank credit cards)
It’s not as private (one big goal of the system is to “encourage loyalty” by providing customers with targeted offers and coupons)
It’s not even available yet
Obviously, the “friction” that such a cumbersome interface presents is a big reason that I think this will eventually fail. But I’m far more worried about the direct links to bank accounts. And much more annoyed about the “loyalty” and privacy aspects.
If a merchant wants my loyalty, they can build it, strongly, in a very simple manner: Have the products I want, at prices I find reasonable, and offered in an environment with a pleasant shopping experience. Fail on any of those three criteria and I’ll only shop at your store grudgingly. Succeed on all three, and your locations will always be at the top of my list.
In addition to the great Tech Crunch article linked by this post, there’s also some good commentary from Gruber (which links to some other articles), and I think the simplest description of the problem is in this great image from Dan Frommer: “Can’t wait for the mobile payments app from the company that designed this receipt.”
For once, I’m glad that my iPhone is a year out-of-cycle. By the time I’ve upgraded to the iPhone 6S (or whatever it’ll be called), hopefully the MCX thing will have died a very swift and public death, and Apple Pay (along with Android based NFC payments as well) will simply work.
As @BenedictEvans said:
Few things are more predictable than the failure of a tech product made by an industry consortium of non-tech companies.
The recent release of iOS 8 brought with it several cool new features, especially some which more tightly integrate the iOS world with the OS X desktop world. Some of these are limited by physical proximity (like handing off email drafts among devices), while others require being on the same local subnet (forwarding phone calls to the desktop).
However, one feature apparently Just Works all the time, and that’s SMS message forwarding. If you have an iPhone, running iOS 8, then you can send and receive normal text messages (to your “Green bubble friends”) from your iPad or Yosemite desktop. Even if the phone is the next town over.
This is actually pretty cool — I use text messaging a lot, and while most of the people I communicate with use iPhones, a fair number (especially customers) don’t. If I need to send them something securely, like a password to a document I just emailed them, I have to manually type the password into my iPhone and hope I don’t mess it up. With SMS messages bridged between the systems, now I can just copy out of my password safe and paste right into iMessage.
However, this does raise one possible security issue. Many services which offer Two-Factor Authentication (2FA, or as many are preferring to call this particular brand of 2FA, “two-step authentication”), send the 2FA confirmation codes over SMS. The theory being that only the authorized user will have access to that user’s cell phone, and so the SMS will only be seen by the intended person.
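Server-side, this style of SMS verification is usually little more than the following sketch (the code length, TTL, and in-memory storage are assumptions for illustration; real services vary). The key point is that the whole secret travels over SMS — which is exactly what message forwarding undermines:

```python
import secrets, time

CODE_TTL = 300   # seconds a code stays valid (assumed policy)
pending = {}     # user -> (code, expiry); a real service would persist this

def send_code(user: str) -> str:
    # Generate a short one-time numeric code and remember it briefly
    code = f"{secrets.randbelow(1_000_000):06d}"
    pending[user] = (code, time.time() + CODE_TTL)
    return code   # in reality this is handed off to an SMS gateway

def verify_code(user: str, attempt: str) -> bool:
    # Single-use: pop the code so it can't be verified twice
    code, expiry = pending.pop(user, (None, 0.0))
    return (code is not None
            and time.time() < expiry
            and secrets.compare_digest(code, attempt))
```

Whoever reads the SMS — on the phone, or on the iPad mirroring it back at the office — has everything needed to pass `verify_code`.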
But if your SMS messages are also copied to your iPad (which you left on your desk at work) or your laptop or desktop (which, likewise, may be left in the office, out of your control) then password reset messages sent over SMS will appear on those devices too.
Which means that your [fr]enemies at work may be able to easily gain control over some of your accounts, simply by requesting a password reset while you’re at lunch. And, since you’re really enjoying your three-bourbon lunch, you don’t even notice the messages appearing on your phone until it’s too late (at which point you’re alerted, not by the Twitter account reset, but by dozens of replies to the “I’m an idiot!” tweet your co-workers posted on your behalf.)
Fortunately, there’s an easy way to correct this.
In OS X Yosemite, go into the System Preferences application and select “Notifications.” Then go down to “Messages,” and where it says “Show message preview” make sure the pop-up is “when unlocked,” not “always.” If this is set to “when unlocked,” then the contents of SMS messages won’t be displayed when the desktop is locked, only a “you got a message” sort of notification. You might also consider disabling the “Show notifications on lock screen” button just above it, which will even disable the notification of the notification.
In iOS, a similar setting can be found in Settings, also under Notifications:
However, the control here isn’t quite as fine-grained — you can either show notifications on the lock screen, or not, and if they’re shown at all, then the contents will be displayed as well.
You might even consider preventing SMS notifications from displaying on your primary phone when locked, but if it’s almost never out of your control, then perhaps that’s not a big risk to worry about.
Note that both of these settings apply to iMessages as well as SMS messages.
If you never use SMS messages for account validation (whether you call them 2FA or 2SV or just “validation messages”), then you might not need to worry about this at all. Though it’s probably a good idea to at least consider disabling these notifications anyway…