In June of 2013, a few videos started circulating showing people unlocking cars without authorization: walking directly up to a parked car and just opening it, or strolling past cars on the street. One of the more interesting videos (watch at about 30 seconds in) showed a thief walking along the street, grabbing a handle in passing, and stopping short when the car unlocked. (Interestingly, all the videos I found this morning showed attackers reaching for the passenger side door, which may just be a coincidence…)
Predictably, this was picked up by news organizations all over the world, who talked about what a “big problem” this is in the US. Then I didn’t hear much again for a while.
It’s not even a particularly new thing. This story about BMW thefts in 2012 mentions key fob reprogramming, and also work presented by Don Bailey at Black Hat 2011 (in which he discussed starting cars using a text message).
But just recently, it’s been making the news again, with some insurers even reportedly refusing insurance for some vehicles.
None of these reports really sheds any light on what’s actually happening, though I suspect there are a couple of different problems at play. The more recent articles included some clues:
In a statement, Jaguar Land Rover said vehicle theft through the re-programming of remote-entry keys was an on-going problem which affected the whole industry. [...] “The challenge remains that the equipment being used to steal a vehicle in this way is legitimately used by workshops to carry out routine maintenance … We need better safeguards within the regulatory framework to make sure this equipment does not fall into unlawful hands and, if it does, that the law provides severe penalties to act as an effective deterrent.”
This sounds a lot like the current spate of articles is referring to key fob reprogramming via the OBD-II port. Basically, if you get physical access to the car, you can connect something to the diagnostic port and program a new key to work with the car. Bingo: instant key, stolen car.
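To make the weakness concrete, here’s a toy Python sketch of that programming flow (all names hypothetical, and vastly simplified from any real immobilizer). The point is what’s missing: the only thing gating key enrollment is physical access to the port.

```python
# Toy model (hypothetical names) of the no-auth programming flow described
# above: anything plugged into the diagnostic port can enroll a new key.
class DiagnosticPort:
    def __init__(self):
        self.authorized_keys = {"owner-fob"}

    def program_key(self, new_key_id: str) -> None:
        # No owner secret, no challenge: physical access is the only gate.
        self.authorized_keys.add(new_key_id)

port = DiagnosticPort()
port.program_key("thief-fob")  # break a window, plug in, program, drive away
assert "thief-fob" in port.authorized_keys
```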
Then they seem to say that “this attack can be easily mitigated by simply ensuring that thieves don’t get the tightly controlled equipment to reprogram the car.” Heh. Right.
This attack relies on a manufacturer-installed backdoor designed for trusted third parties to do authorized work on the vehicle, and instead is being exploited by thieves. Sound familiar?
I’m actually surprised it’s this simple. I haven’t given it a lot of thought, but I’d bet there are ways this could be improved. Maybe a unique code given to the purchaser of the vehicle that they would keep at home (NOT in the glovebox!) and can be used to program new keys. If they lose that, some kind of trusted process between a dealer and the automaker could retrieve the code from some central store. Of course, that opens up social engineering attacks (a bit harder) and also attacks against the database itself (which only need to succeed once).
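As a toy sketch of that idea (purely hypothetical, nothing like a real immobilizer implementation), the car could store only a hash of the owner’s programming code and refuse to enroll a new key without it:

```python
import hashlib
import hmac

# Hypothetical sketch: the car stores only a hash of the owner's programming
# code (set at purchase, kept at home), and refuses to enroll a key without it.
class ImmobilizerECU:
    def __init__(self, owner_code: str):
        self.code_hash = hashlib.sha256(owner_code.encode()).digest()
        self.authorized_keys = set()

    def program_key(self, presented_code: str, new_key_id: str) -> bool:
        presented = hashlib.sha256(presented_code.encode()).digest()
        if hmac.compare_digest(presented, self.code_hash):
            self.authorized_keys.add(new_key_id)
            return True
        return False

ecu = ImmobilizerECU(owner_code="7391-2284")          # made-up code, kept at home
assert not ecu.program_key("0000-0000", "thief-fob")  # port access alone isn't enough
assert ecu.program_key("7391-2284", "replacement-fob")
```

Storing only the hash means even a thief who dumps the ECU’s memory doesn’t directly recover the code, though that’s exactly where the central-store recovery process (and its social engineering surface) would come back in.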
Again, this seems like a good real-world example of why backdoors are hard (perhaps nearly impossible) to do safely.
But what about the videos from last year? Those thieves certainly weren’t breaking a window and reprogramming keys…they just touched the car and it opened. For those attacks, something much more insidious seems to be happening, and frankly, I’m amazed that we haven’t figured it out yet.
The thieves might be hitting a button on some device in their pockets (or it’s just automatically spitting out codes in a constant stream) and occasionally they get one right. That seems possible, but improbable. The kinds of rolling codes some remotes use aren’t perfect (especially if the master seed is compromised) but I don’t think they can work that quickly, and certainly not that reliably. (But I could certainly be wrong — it’s been a while since I looked into this).
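For a sense of why blind guessing is such a long shot, here’s a toy rolling-code sketch in Python. It’s an HMAC-based stand-in for real schemes like KeeLoq (which differ considerably in the details), with a hypothetical shared key and a small look-ahead window on the receiver:

```python
import hashlib
import hmac

SHARED_KEY = b"example-shared-secret"  # hypothetical per-fob key
WINDOW = 16                            # receiver accepts the next N counter values

def rolling_code(counter: int) -> str:
    """Derive a code from the shared key and counter (toy stand-in for KeeLoq-style schemes)."""
    msg = counter.to_bytes(4, "big")
    return hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()[:8]

class Receiver:
    def __init__(self):
        self.counter = 0  # last accepted counter value

    def try_unlock(self, code: str) -> bool:
        # Accept any code within the look-ahead window, then resync the counter.
        for c in range(self.counter + 1, self.counter + 1 + WINDOW):
            if hmac.compare_digest(code, rolling_code(c)):
                self.counter = c
                return True
        return False

r = Receiver()
assert r.try_unlock(rolling_code(3))      # legitimate press (fob a bit ahead of the car)
assert not r.try_unlock(rolling_code(3))  # replaying the same code fails
assert not r.try_unlock("00000000")       # a random guess almost never lands in the window
```

Even with a 16-code window, a random 8-hex-digit guess has roughly a 16-in-4-billion chance per attempt, which is why “spraying codes while walking down the sidewalk” seems like an implausibly lucky strategy unless the scheme itself is broken.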
Also, in these videos, the car didn’t respond until the thief actually touched the door handle. In a couple of cases, they held the handle and then appeared to pause while they (perhaps) activated something in their other hand. I’ve wondered if this isn’t exploiting some of the newer “passive” keyless entry systems, where the fob stays in your pocket and only wakes up when the car (triggered by a hand on the handle) queries it remotely.
It’s possible there’s a backdoor or some unintended vulnerability in this keyfob exchange, and that’s what’s being exploited. Or even just a hardware-level glitch, like a “whitenoise attack” that simply overwhelms the receiver (as suggested to me this morning by @munin). I’ve also wondered how feasible a “proxy” attack against a not-quite-nearby fob might be. For example, when the attacker touches the door handle, the car asks “are you there, trusted fob?” The fob, currently sitting on the kitchen counter, isn’t within range of the car and so won’t respond. But if the attacker has a stronger radio in their backpack, could they intercept the challenge and rebroadcast it at much higher power, then use a sensitive receiver to collect the response from inside the house and relay it back to the car?
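That relay idea can be sketched as a toy challenge-response exchange (hypothetical names; real protocols are more involved). The point is that all the cryptography verifies correctly, because nothing in the exchange proves the fob is actually near the car:

```python
import hashlib
import hmac
import os

FOB_KEY = b"hypothetical-fob-secret"  # shared between car and fob

def car_challenge() -> bytes:
    return os.urandom(8)

def fob_respond(challenge: bytes) -> bytes:
    # The fob signs whatever challenge it hears; it has no way to tell
    # how far away the car that "asked" actually is.
    return hmac.new(FOB_KEY, challenge, hashlib.sha256).digest()

def car_verify(challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(FOB_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)

# Legitimate unlock: fob in pocket, car nearby.
ch = car_challenge()
assert car_verify(ch, fob_respond(ch))

def relay(data: bytes) -> bytes:
    # Attacker's radio just forwards the bytes at higher power, both ways.
    return data

# Relay attack: the crypto checks out, because nothing proves proximity.
ch = car_challenge()
boosted = relay(ch)                      # blasted toward the kitchen counter
response = fob_respond(boosted)          # fob dutifully answers
assert car_verify(ch, relay(response))   # car unlocks for a fob it can't reach
```

One commonly proposed countermeasure is distance bounding: the car times the challenge-response round trip tightly enough that the extra delay a relay introduces becomes detectable.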
This seems kind of far-fetched, and there are probably a great many reasons (not least, physics) why this might not work. Then again, we’ve demonstrated “near proximity” RFID over fairly large distances, too. And many people probably hang their keys next to the door to the garage, pretty close (within tens of feet) to the car.
It would also be reasonably easy to demonstrate. Too bad we had to sell our Prius to buy a minivan.
The bottom line is this: We’ve seen pretty solid evidence of thefts and break-ins against cars using keyless entry technology. The press love these stories as they drum up eyeballs every 6 months or so. But the public at large really doesn’t get any useful information other than “keyless is bad, mmkay?”
It’d be nice if we could figure out what’s going on and actually fix things.