Introduction
Last Saturday, I gave a talk at ShmooCon detailing the results of a short survey of iOS applications, and the way they handled (and secured) network-based authentication. For a quick summary of my talk, read on. If you’d like to follow along with the slides, they can be downloaded here. If you’d like a very detailed white paper explaining everything I said in the talk and more, well, you’ll have to wait a little longer. But I’m working on it.
UPDATE: The video for the talk has been posted on Archive.org.
As part of my “day job,” I frequently review the security of iOS applications. In most cases, these applications do not exist only within the confines of any given device, but connect to dedicated back-end services, authenticating with a username and password (or something very similar). We as consumers place a fair amount of trust in that relationship between the iPhone app and the server, and it occurred to me that it might be interesting to see how well-founded that trust is. This is an especially interesting question for applications which handle sensitive data, like a banking or healthcare app.
So I looked at the apps on my phone. Out of over 200 applications, I dropped those which either didn’t use internet-based servers or for which I didn’t have an active account. This left me with about 50 apps, of which about 40 were actually reviewed. The applications came from (what I feel to be) a fairly representative cross-section, including banking, healthcare, travel, cloud storage, and social networking.
The review was fairly simple, focused exclusively on authentication, and didn’t allow me the time to perform deep reviews of any single application. A few apps appeared much more complex than the rest, and I could easily have spent days examining each of them to fully understand how they worked. For most, though, I spent between 30 minutes and a couple of hours per application gathering the information I needed.
I wanted to focus on four specific areas of interest:
- Secure Network - Are network communications properly protected?
- Secure Login - Is the network-based login performed in a way that resists attack?
- Secure Session - Are login credentials properly handled for ongoing application use, or after the app has been quit and restarted?
- Secure Storage - Are login credentials stored securely on the device (if at all)?
To complete the survey, I used a jailbroken iPhone running iOS 8.1.2, and Burp Suite Pro, a man-in-the-middle proxy tool. The proxy allowed me to collect and observe traffic between the applications and their servers, while the jailbroken phone allowed me to easily access and review data stored on the device. I made four complete passes across the list of applications, focusing on a different behavior or device configuration for each pass.
First Pass
For the first pass at all the applications, I did not have Burp’s MITM proxy certificate installed on the device. This allowed me to measure whether the applications noticed that their communications were being intercepted, and also to get a feel for how well errors were reported to the user. I’m happy to report that most (all but 2) applications did in fact detect the intercepted communication and refused to proceed. However, of those 38 applications, only one had a decent error message, while all the rest were cryptic, unhelpful, or flat-out misleading.
Of the two applications which didn’t detect the MITM interception, one continued to communicate over TLS as if nothing was wrong, while the other never noticed because it wasn’t even using TLS in the first place: all communications with this one app happened over HTTP.
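In code review, the usual culprit behind that kind of silent failure is a delegate that accepts whatever certificate the server presents. Here is a minimal Swift sketch of that anti-pattern; this is my own illustration, not code from any of the surveyed apps:

```swift
import Foundation

// Anti-pattern: a URLSession delegate that trusts ANY server certificate.
// An app built this way never notices a MITM proxy, because it wraps
// whatever certificate the attacker presents in a credential and proceeds.
class TrustEverythingDelegate: NSObject, URLSessionDelegate {
    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        if let trust = challenge.protectionSpace.serverTrust {
            // Blindly accept the presented certificate chain: this is the bug.
            completionHandler(.useCredential, URLCredential(trust: trust))
        } else {
            completionHandler(.performDefaultHandling, nil)
        }
    }
}
```

Deleting a delegate like this, and letting the OS perform its default trust evaluation, is usually the entire fix.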
Second Pass
I then installed the proxy’s CA certificate onto my test device, which caused all the TLS communications to suddenly become “trusted,” and allowed for interception and inspection of nearly all the application traffic.
I say nearly all because four applications appeared to use certificate pinning. For these apps, it was not enough that the TLS connection be certified with a trusted certificate; the connection required a known certificate. Since my MITM cert was trusted by the OS, it made it past the first check, but because it was my cert, it wasn’t the certificate the application knew to be right, and so these four apps refused to continue communicating.
I was able to bypass the certificate pinning on two of the applications; the other two I had to set aside, and due to time constraints, I was unable to review them any further.
(It bears mentioning that one of the cert-pinned apps, which was the only application to provide a useful certificate error message to the user, was not a bank, health care, or even social media app, but simply a podcast player. Kudos to the developer for taking security so seriously, even for low-impact data like podcast list synchronization.)
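For anyone who hasn’t implemented pinning, here is a minimal sketch in Swift, assuming a copy of the expected certificate is bundled with the app as a hypothetical “pinned.der” resource. Production apps more often pin the public key or an SPKI hash, but the shape is the same:

```swift
import Foundation
import Security

// Minimal certificate-pinning sketch: compare the server's leaf certificate
// against a copy bundled with the app. "pinned.der" is an illustrative name.
class PinnedSessionDelegate: NSObject, URLSessionDelegate {
    func urlSession(_ session: URLSession,
                    didReceive challenge: URLAuthenticationChallenge,
                    completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        guard challenge.protectionSpace.authenticationMethod == NSURLAuthenticationMethodServerTrust,
              let trust = challenge.protectionSpace.serverTrust,
              let serverCert = SecTrustGetCertificateAtIndex(trust, 0),
              let pinnedURL = Bundle.main.url(forResource: "pinned", withExtension: "der"),
              let pinnedData = try? Data(contentsOf: pinnedURL)
        else {
            completionHandler(.cancelAuthenticationChallenge, nil)
            return
        }
        // A real implementation should also let the OS evaluate the chain
        // (SecTrustEvaluateWithError) before comparing certificates.
        if SecCertificateCopyData(serverCert) as Data == pinnedData {
            completionHandler(.useCredential, URLCredential(trust: trust))
        } else {
            // Trusted by the OS, but not the certificate we expect: refuse.
            completionHandler(.cancelAuthenticationChallenge, nil)
        }
    }
}
```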
With a trusted and operational MITM proxy functioning, I was able to review the actual passing of credentials and security tokens between the applications and their servers. Most applications sent credentials (username, password) as parameters in an HTTP POST request. A few passed the credentials via HTTP headers (for example, as an Authorization: Basic header), while two sent credentials in the URL (one of which didn’t even bother to obscure the password: it was sent in human-readable form).
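As a reminder of why header-based credentials are only as private as the TLS tunnel around them: HTTP Basic authentication is just a reversible base64 encoding of “user:password”. A quick sketch, with made-up credential values and endpoint:

```swift
import Foundation

// HTTP Basic auth is base64("user:password"): an encoding, not encryption.
// Anyone who can read the connection can read the credentials.
let user = "alice"        // illustrative values only
let password = "s3cret"
let token = Data("\(user):\(password)".utf8).base64EncodedString()

var request = URLRequest(url: URL(string: "https://api.example.com/login")!)
request.setValue("Basic \(token)", forHTTPHeaderField: "Authorization")
```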
With the initial login observed and understood, I then used each application for a little while, to create plenty of traffic with which to observe session authentication. In most cases, the continuing session authentication was carried via some form of security token. Most of these tokens were static in nature (unchanging from request to request), while a few were dynamic, either changing with each request, or actually being cryptographically tied to the request (for example, signing each request). These tokens were generally sent in HTTP headers, but a few were sent as URL parameters or in POST data.
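A request-bound token is the strongest of these patterns, because a captured value can’t be replayed against different request contents. Here is a sketch of the idea using CryptoKit (iOS 13+); the header name, endpoint, and key handling are all illustrative:

```swift
import Foundation
import CryptoKit

// Dynamic, request-bound token: an HMAC over the request body, keyed with a
// per-session secret, so the token is useless for any other request body.
func signature(for body: Data, with key: SymmetricKey) -> String {
    let mac = HMAC<SHA256>.authenticationCode(for: body, using: key)
    return Data(mac).base64EncodedString()
}

let sessionKey = SymmetricKey(size: .bits256)   // negotiated at login in practice
let body = Data(#"{"action":"refresh"}"#.utf8)
var request = URLRequest(url: URL(string: "https://api.example.com/feed")!)
request.httpMethod = "POST"
request.httpBody = body
request.setValue(signature(for: body, with: sessionKey),
                 forHTTPHeaderField: "X-Request-Signature")
```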
Two applications didn’t use tokens, but instead simply re-sent the userid and password with every single request.
Finally, I reviewed the application sandbox for each app to see whether credential information (userid, password, tokens) was being deliberately, or accidentally, saved to the device. For this stage I looked in the system keychain, at the app’s preferences file, the HTTP cache and cookie files, and any other developer-created files in the Documents or Library folders (I found nothing in any app’s /tmp folder).
I found userids stored just about everywhere (preferences, cookies, application-specific files in /Documents, and the keychain), passwords in a few locations (5 in the keychain and 4 stored elsewhere in the app’s filesystem), and tokens all over the place (14 in the keychain, 27 in the applications’ filesystem storage).
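For comparison, storing a token where it belongs takes only a few lines. A minimal keychain sketch, with illustrative service and account names:

```swift
import Foundation
import Security

// Store a session token in the iOS keychain instead of preferences or a
// plain file. Service and account strings are illustrative.
func storeToken(_ token: String) -> OSStatus {
    let base: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: "com.example.app.session",
        kSecAttrAccount as String: "authToken"
    ]
    SecItemDelete(base as CFDictionary)   // replace any previous value

    var attributes = base
    attributes[kSecValueData as String] = Data(token.utf8)
    // Readable only while the device is unlocked; never migrates in backups.
    attributes[kSecAttrAccessible as String] = kSecAttrAccessibleWhenUnlockedThisDeviceOnly
    return SecItemAdd(attributes as CFDictionary, nil)
}
```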
Third Pass
About a week after completing the initial collection and review of data, I relaunched every single app (after having force-quit each during pass two), to determine how they reacted after being out of use for a few days. Most apps simply sent a stored token, while a few re-sent the userid and password, and 6 asked for either the user’s password or both their userid and password. [It should be noted that this likely wasn’t enough time to measure the expiration rate of static tokens, which was not a target of the review.] This pass also gave me a chance to re-observe the traffic, look for things I may have missed, and verify my data.
Fourth Pass
Finally, I force-quit all the apps again, removed the Burp CA certificate from the device, and relaunched everything. This was mostly to see if the TLS errors caused by the untrusted MITM connection returned (I could imagine some applications only checking for the trusted certificate during login, and ignoring the error from then on). All applications behaved as they did during the first pass, though a few appeared at first to function normally because they were simply displaying locally cached data. Upon forcing a network refresh, TLS errors appeared in these applications as well (and, again, most of the error messages were unhelpful).
Summary of Findings
In the end, my general conclusion was that (for the 40 apps I reviewed) security was “Not bad, but could be better.” Of the 38 applications which completed review (remember, two were pinned and I couldn’t bypass the pinning):
- 12 had only minor issues (insecurely stored userid, certificate pinning not in use)
- 6 had at least one major issue (password stored insecurely, application ignores TLS errors or doesn’t use TLS)
- 0 (ZERO) had no issues at all
In most cases, a few simple fixes are all that’s needed to improve the security stance of these applications.
Note that I’m calling the lack of certificate pinning “minor” because, as much as I feel it’s necessary, I haven’t seen its use become anywhere near commonplace, which was certainly reflected in my findings here. Applications rely on the strength of the TLS connection (OAuth 2.0 makes this reliance explicit), but with implementation bugs and certificate authority failures, that reliance may be misplaced. Certificate pinning remains a very easy way to increase the reliability of that connection.
Top 5 Suggestions
I concluded my talk with five suggestions that I feel would greatly improve the security of any iOS application using network-based servers to host personalized data (whether sensitive or not):
- Use TLS certificate pinning.
- Store credential components (password, tokens, and if possible, the userid) only in the keychain.
- Always use strong “hash” constructs (PBKDF2, HMAC, etc., as appropriate; see the sketch after this list).
- Take steps to avoid leaking credentials into cache and cookie files.
- If possible, use one-time (nonce / timestamp based) tokens. Even better, tie these tokens to the request contents via a signature.
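On the third suggestion, here is a hedged sketch of deriving a verifier from a password with PBKDF2 via CommonCrypto, so the raw password never needs to be stored or transmitted. The output size, iteration count, and function name are illustrative; real parameters should be tuned to the device:

```swift
import Foundation
import CommonCrypto

// Derive a 256-bit value from a password with PBKDF2-HMAC-SHA256.
// Parameters here are illustrative, not a recommendation.
func pbkdf2(password: String, salt: Data, rounds: UInt32 = 100_000) -> Data? {
    var derived = Data(count: 32)
    let status: Int32 = derived.withUnsafeMutableBytes { out in
        salt.withUnsafeBytes { saltBuf in
            CCKeyDerivationPBKDF(
                CCPBKDFAlgorithm(kCCPBKDF2),
                password, password.utf8.count,
                saltBuf.bindMemory(to: UInt8.self).baseAddress, salt.count,
                CCPseudoRandomAlgorithm(kCCPRFHmacAlgSHA256),
                rounds,
                out.bindMemory(to: UInt8.self).baseAddress, 32)
        }
    }
    return status == Int32(kCCSuccess) ? derived : nil
}
```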
More details, especially describing attack vectors and rationale for these suggestions, as well as detailed summary tables of all the findings, are in the slides, available here.