While lying on the couch last Friday, trying to decompress after a busy day and expecting an even more hectic weekend, I had a crazy idea for how Apple might implement multiple user accounts on iOS devices like the iPad.
File System Overlays.
Applications in iOS are all restricted to their own sandbox — that is, they can only access files and data within their own application bundle, and nothing else. So right off the bat, data’s pretty well segregated.
Now, imagine that there’s an easy way for the operating system to distinguish between the application itself and its data. Like if all apps stored data in, say, Documents and Private Documents and Caches and other similarly-named folders. Anything that’s user-specific would be pretty easy to identify and peel away from the rest of the app.
Here’s where the harebrained idea comes in: across the entire filesystem, take all such folders, move them off the main disk, and put them into a second filesystem that’s mounted as an overlay on top of the actual disk.
This is sort of weird. It probably needs a picture.
The base iOS filesystem has system files (the operating system itself plus built-in apps and such), and has separate applications installed by the user. Let’s assume that each app stores user-specific data in a standardized place, like “Documents.”
The device simply puts all the Documents folders into a separate filesystem; then, depending on which user has been activated, it takes that filesystem and merges it with the base filesystem, overlaying the folders back into their proper locations. So to the device, and to the apps, it’s as if nothing has changed. Data’s where you expect it to be.
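The merge itself is simple to picture. Here’s a minimal sketch that models the base filesystem and the per-user overlays as plain dictionaries (a real implementation would be a union/overlay mount in the kernel; the paths and user names here are invented for illustration):

```python
# Sketch of per-user overlay resolution: each user's Documents folders
# live in a separate "overlay" filesystem, and lookups check the active
# user's overlay first, then fall back to the shared base filesystem.

BASE_FS = {
    "/Applications/Notes.app/Notes": "app binary",
    "/Applications/Notes.app/Documents": None,  # user-specific, lives in overlays
}

OVERLAYS = {
    "alice": {"/Applications/Notes.app/Documents": "alice's notes"},
    "bob":   {"/Applications/Notes.app/Documents": "bob's notes"},
}

def mount(user):
    """Merge the given user's overlay on top of the base filesystem."""
    merged = dict(BASE_FS)
    merged.update(OVERLAYS[user])
    return merged

fs = mount("alice")
print(fs["/Applications/Notes.app/Documents"])  # alice's notes
print(fs["/Applications/Notes.app/Notes"])      # app binary
```

Switching users is then just unmounting one overlay and mounting another; the app binaries never move.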
You could merge preferences in a similar way. iOS already supports multiple configuration profiles, and dynamically merges them into a single active settings profile. So you could have perhaps one “master” account, that can make unalterable settings for the entire device, then create different users, each of which could add their own preferences to what’s already been defined.
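The settings merge could work the same way. This is a sketch of that layering, with the master profile always winning on conflicts (the setting names here are made up for illustration, not actual iOS configuration keys):

```python
# Sketch of layered preferences: a "master" profile defines unalterable
# settings for the whole device, and each user's own preferences fill in
# everything the master doesn't lock down.

MASTER = {"passcode_required": True, "allow_app_installs": False}

def active_settings(user_prefs):
    """User preferences apply first; master settings override on conflict."""
    merged = dict(user_prefs)
    merged.update(MASTER)   # master always wins
    return merged

prefs = active_settings({"wallpaper": "beach.png", "allow_app_installs": True})
print(prefs["allow_app_installs"])  # False: locked by the master profile
print(prefs["wallpaper"])           # beach.png
```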
Imagine going back to the main home screen, and doing a five-finger pinch to “zoom out” of the iPad and into a new screen with different users listed. Tap on a different user (and enter a passcode, if that user has one set), and the OS removes your overlay and installs the other user’s overlay. Then it’s a whole new iPad!
And the best part about this is it’s all handled at the operating system level. No changes to the applications are necessary (obviously, they need to be following at least some kind of predictable approach for storing data, though there might be some sneaky ways for the OS to figure that out on the fly as well). Of course, if users wanted to share data with other users on the same device (think music or videos), then applications would need to add support for that.
iOS already supports some pretty fancy magic at the filesystem level, with the built-in data protections present in iOS 4. (In fact, it was while musing on those protections that this idea occurred to me.) So I don’t see this as being too far off in terms of difficulty to implement, provided they can get the right filesystem support into the kernel, which I’m sure wouldn’t be too difficult.
Any comments? Is this totally whacked out, or is there some potential here? Also, think about taking this to the desktop: it could definitely add a lot more security to data at rest where multiple users (or the same user, with multiple roles) are sharing a system.
Some time ago, I started wondering why I couldn’t find any Rainbow Tables for old-school Unix crypt(3) passwords. After some research, I learned that the salt was the culprit — that virtually anyone who’d asked about such tables went away chastised, told that the salt made it impossible to generate Rainbow Tables, unless you went through the trouble to create 4096 different tables (one for each salt). And who’s going to do that?
Somehow, that just didn’t sit right with me, and it wasn’t long before I decided that the conventional wisdom was wrong, and that there would be an easier way to build crypt(3) tables. But I didn’t really do anything with it for a long time, until I finally decided to try, once and for all, to see if I was right. And it turns out — I was right. Changing the standard rainbowcrack programs to support crypt(3) password hashes was trivial. In only one evening, I had something (more or less) working, and a couple of nights later, it was able to actually read, write, and process crypt(3) hashes in their native form (as opposed to a flat hexadecimal dump of the hash).
“Wow! This is cool,” I thought. “I should totally submit this for a security con.” Which I did. But I didn’t get accepted.
So what do I do now? Do I sit on my findings and resubmit, again and again, until a conference accepts it? Or should I just admit that maybe it’s not quite as cool as I think, and maybe it won’t get accepted ever? (As cool as I think it is, it’s certainly possible that it’s not that cool, or that perhaps someone else has already done this and I’ve just not found the code yet — and I’m okay with that.)
It seems silly to just keep this in my back pocket for the sole purpose of getting up in front of a room full of people to talk about it. So rather than hiding it away, I decided to turn it into a more detailed paper, and post it.
So I’ve now posted it to my company’s website. All the crazy details are there, including 50-some-odd lines of proof-of-concept code that need to be inserted into the Linux rainbowcrack source. It’s not entirely turnkey (you’ll have to do some work to get it compiled yourself), but then again the tables aren’t built, either, so it’s not like you could just make the changes and start cracking passwords. It’s also very likely far from optimal.
I’m hoping that Rainbow Table experts can see what I’ve written and roll it back into some canonical, actively maintained source tree, and that people can start building and using tables for crypt(3).
Before you go running to read the paper (if you haven’t already noticed, I’m a little long-winded, and the paper is 12+ pages long), here’s a quick preview:
- Instead of generating 4096 tables of 1-8 character passwords, just create one table of 3-10 character passwords, and use the first two characters of each plaintext password as the salt. (That part will make more sense if you read the paper.)
- It’s still kind of slow: 9x slower than LM hashes, for example. But CPUs are much faster than they were in 2003, when people first started building tables for LM hashes.
- It also takes a lot of storage. But storage, likewise, is much cheaper than it was seven years ago.
- So, in the end, I think it may be worth the effort, finally.
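The salt-folding trick above can be sketched in a few lines. Note that this uses SHA-256 as a stand-in for the real crypt(3) DES-based hash (Python’s `crypt` module is Unix-only and has been removed from recent versions), since only the idea matters here, not the cipher; the function names are mine, not from the actual rainbowcrack patch:

```python
import hashlib

# Stand-in for crypt(3): output starts with the 2-character salt, followed
# by the hash, just like the real 13-character crypt(3) format. (The real
# scheme perturbs DES with the salt; SHA-256 is only a placeholder.)
def fake_crypt(password, salt):
    return salt + hashlib.sha256((salt + password).encode()).hexdigest()[:11]

# Table entries are 3-10 character "plaintexts": the first two characters
# are treated as the salt and the remaining 1-8 as the actual password,
# so a single table covers all 4096 salts at once.
def table_hash(plaintext):
    salt, password = plaintext[:2], plaintext[2:]
    return fake_crypt(password, salt)

h = table_hash("ab…secret"[:2] + "secret")  # salt "ab", password "secret"
print(h[:2])                     # ab: a captured hash reveals its own salt,
print(h == fake_crypt("secret", "ab"))      # so lookups know where to search
```

Because a crypt(3) hash carries its salt in the clear as its first two characters, a captured hash tells you exactly which slice of the single table’s keyspace to search.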
Why would anyone care? Well, even though crypt(3) hash technology is something like 35 years old, it still shows up from time to time. It’s a simple, well-understood, and almost universally-supported format. So it’s tempting when building systems to just use crypt(3) and forget about it.
That’s apparently what happened with Gawker Media, who had over 1 million emails and password hashes leaked last week, most of which were crypt(3) based.
So anyway, it’s a fun little hack, and I’m hoping people can run with it.
You can read my corporate blog-post, with the paper linked at the end, right here.
UPDATE – I presented my original slides (with appropriate updates) at the Northern Virginia Hackers Association (NoVAHA) in April. You can download those slides here.
So I was reading yesterday about the Cross-Site Scripting attack against apache.org. And it struck me that there might be an easy way to reduce or eliminate a lot of these attacks, using better isolation within the browser.
Essentially, my thought boiled down to this: Why, when I load a page in the browser, should that page have access to cookies from another server?
“But it doesn’t,” you might say. “The same-origin policy on cookies prevents one page from accessing another server’s cookies!” True. But if the malicious page manages to convince your browser to load a page from the target server, with its own cookie-stealing XSS code injected, then that malicious page, indirectly, has access to those cookies.
So let me rephrase the thought: Why should a web page be permitted to load a page (and worse, execute code) in an unrelated session?
The reason this is possible, to my not-very-XSS-savvy mind, is that the browser really has only a single Security Context (set of cookies, session information, login credentials, etc.). (It’s possibly more complex than this, but the description should hold at a basic level for this discussion.) If you log into a web site, then that session information is stored at the browser level, and any window, tab, or frame within that browser instance has access to it. The same-origin policy can help restrict what information a page loaded within those views can access, but the information is still there.
Which leads to my half-baked idea: Within the browser, isolate session information (the session’s Security Context) to only those pages directly related to that session.
This is best described by way of example [note that I’m using Gmail and Twitter for simplicity, and don’t mean to suggest that any XSS bugs currently exist in either service]:
- I log in, fire up the browser, and open Gmail.
- Then, I open a second tab and log into Twitter.
- Later in the day, I click a link in a tweet for a “really awesome photo that you MUST see!”
- This opens in a 3rd tab (or possibly stays in the Twitter-focused 2nd tab), and contains what really is an awesome photo, but also includes a hidden iframe.
- That iframe fetches a page from Gmail that includes an XSS-injected script.
Now, here’s where my crazy idea and reality diverge. In the current situation:
- The browser fetches that page, and so it gains access to the browser’s global Security Context.
- The XSS script executes within that context, and since the page came from mail.google.com, also passes the same-origin test and therefore has access to my Gmail cookies.
- The script then sends my Gmail session information to the attacker.
- Now the attacker has my Gmail account. (boo!)
In my suggestion to isolate security contexts, here’s what happens:
- As before, the image page is loaded in either a 3rd tab or the tab containing the Twitter feed (depending on how I opened it).
- However, this tab is not the tab with Gmail, and thus does not have access to that Security Context.
- Therefore, when the invisible iframe is loaded, the script will not be able to steal my Gmail cookies.
- XSS attack thwarted. (yay!)
Now, you may be thinking, “bye-bye to multi-window web applications.” But that doesn’t need to be the case. There’s certainly nothing preventing the browser from letting a new tab or window inherit a tab’s security context when a user requests a new window from within that session. It’s only when the browser loads a new page, outside of a session’s same-origin boundaries, that the security context is forfeit.
Now, you may be thinking, “bye-bye to single-window web browsing.” But that doesn’t need to be the case. If you’re in (for example) Gmail, and click a link for an external site, that link can still load in the current tab. For the time being, that tab loses access to the Gmail context. But if the user, having read the linked page and chuckled appropriately at the joke within, then clicks “back” in the browser, the context should switch back to Gmail.
Would this have helped in the recent Apache attack? I think so. From what I understand, the attack went like this:
- User is logged into issue tracking application.
- User reads a bug report entered into that application by a malicious user.
- Report includes a link to an external site (in this case, a URL shortening service).
- That link redirects to another page from the bug tracking application, with XSS-injected code to steal the user’s credentials.
If the browser had the sort of context-limiting controls I’m envisioning, then the privileged session’s credentials would have been “lost” as soon as that external URL was requested, and not regained, even though it eventually redirected back to the originating site.
I will note, however, that if the URL had never “left” the site (was not obfuscated through a redirector service, but was clearly within the current context’s originating domain), then this approach wouldn’t have helped. But hopefully, then, the URL would have been ugly enough (containing embedded XSS code) that the user might’ve noticed the problem before clicking on it. Or better yet, it would have been rejected outright and never made it into the malicious user’s bug report in the first place.
To try to simplify, I’d propose starting with the following rules:
- Credentials (temporary cookies, current login sessions, etc. — the “Security Context”) shouldn’t be shared across browser views (windows, tabs, frames, invisible code-executing sandboxes, etc.).
- Within a view, the Security Context should not be shared with subsequently-loaded data with a different origin. This includes both full replacements of the current view (navigating to a new page) and subordinate views displayed or processed within the requesting page (iframes, etc.).
- A Security Context can be shared with a new tab, window, etc., when the user executes something within the session that requests that new view, provided it’s still within the same origin as the current site.
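The three rules above collapse into a single decision. Here’s a sketch of that decision as one function, with origins simplified to hostnames (a real browser would compare scheme, host, and port, and the function name is mine, purely for illustration):

```python
# Sketch of the three proposed rules as one decision function.
def inherits_context(session_origin, target_origin, new_view, user_initiated):
    """Decide whether a newly loaded page keeps the session's
    Security Context (cookies, credentials, etc.)."""
    if target_origin != session_origin:
        return False    # rules 1 & 2: a different origin never inherits
    if new_view and not user_initiated:
        return False    # rule 3: new views inherit only on user action
    return True

# Same-origin navigation within a Gmail tab keeps the session:
print(inherits_context("mail.google.com", "mail.google.com", False, False))  # True
# A user-opened new Gmail window inherits it too:
print(inherits_context("mail.google.com", "mail.google.com", True, True))    # True
# A hidden iframe in a page from another site gets nothing, even though
# the iframe itself points at Gmail:
print(inherits_context("evil.example.com", "mail.google.com", False, False)) # False
```

In the photo-page attack above, the iframe’s view belongs to the malicious page’s context, not Gmail’s, so the last case is exactly what thwarts the cookie theft.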
At the moment, I can only think of one real drawback (though I’m sure there are more). If, for example, you’re clicking on a bunch of different links, all of which lead to a site at which you have an account, then each of those clicks will pull up that content as a general, unauthenticated, first-time-visiting user. So if you have a current YouTube session open, and click on a YouTube link in a tweet, you’ll have to re-authenticate within the window that click opens in order to post any comments to the video. Perhaps the user could be permitted to drag such a link into a previously-authenticated session window, but unfortunately we then start to diminish the level of protection.
I’ve already been pointed at two projects that start to address this. The first, Caja, appears to be focused more on the secure development of a page in the first place, not as much on XSS vulnerabilities. The second, CookiePie, has some of the features I suggest but is implemented in a manual fashion, with the goal being simultaneous use of a website by multiple different accounts. Neither solution really gets to the deep-in-the-browser XSS prevention that I think isolated Security Contexts could automatically provide.
So how crazy is this? Am I missing something obvious, or is it just chance that we haven’t tried to do this already? Or are there security features already present in browsers that do this, just not in a way I’ve noticed? It seemed, once I thought of it, absurdly simple, but then again, I thought of it, so that’s sort of a given.
I welcome any thoughts and criticisms (but still hope that there might be some value here).