For the past year or so, I’ve been thinking about the information security research space. Certainly, with the mega-proliferation of security conferences, research is Getting Done. But is it the right kind of research? And is it of the right quality?
This has recently become a hot topic, since .mudge tweeted on June 29:
Goodbye Google ATAP, it was a blast.
The White House asked if I would kindly create a #CyberUL, so here goes!
We’ve also seen increased attention on Internet of Things, and infosec in general, from the “I Am The Cavalry” effort, and more recently, the expansion of research at Duo Labs and elsewhere.
So this seems like a good time to jot down some of my thoughts.
CyberUL and traditional research
First, the idea of an “Underwriters Laboratories” for infosec, or “CyberUL”: I think most people agree that it’s a good idea, at its core. John Tan outlined such a service back in 1999, and it’s been revisited many times since. However, many issues remain. I’m certainly not the first to bring these points up, but for the sake of discussion, here are some high-level problems.
For one thing, certifying (or in UL parlance, “listing”) products is difficult enough in the physical space, but even harder in CyberSpace. Software products are a quickly moving target, and it’s just not possible to keep up with all the revisions to product firmware, both during design and through after-sale updates.
Would a CyberUL focus on end-user products, such as the “things” we keep hooking up to the Internet, or would it also review software and services in general? What about operating systems? Cloud services?
Multiple certifications of one form or another already exist in this space. The Common Criteria, for example, is very thorough and formalized. It’s also complicated, slow, and very expensive to get. The PCI and OWASP standards set bars for testers to assess against, but the actual mechanisms of testing may not be consistent across (or even within) organizations.
Finally, there’s the question of how deep testing can go. Even with support from vendors, fully understanding some systems is a daunting undertaking, and comprehensive product evaluations may require significant resources.
Ultimately, I’m afraid that a CyberUL may suffer from many of the same problems that “traditional” information security testing faces.
So, what about traditional testing?
Much (if not most) testing is paid for by the product’s creator, or by some third-party company considering a purchase. The time and scope of such testing is frequently limited, which drastically curtails the depth to which testers can evaluate a product, and can lead to superficial, “checkbox” security reviews. This could be especially true if vendors wind up frantically checking the “CyberUL” box in the last month before product release.
Sometimes, testing can go much deeper, but ultimately it’s limited by whoever’s paying for it. If they’ll only pay for a two-week test, then a two-week test is all that will happen.
Maybe independent research is the answer?
There’s obviously plenty of independent research, not directly paid for by customers. However, because it’s not paid for…it generally doesn’t pay the testers’ bills in the long term.
Usually, this work comes out of the mythical “20%” time that people may have to work on other projects (or 10%, or 5%, or just “free time at night”). If research is a tester’s primary function, then that dedicated work is often kept private: its goal is to benefit the company, sell vulnerabilities, improve detection products, etc.
Firms which pay for truly independent and published research are vanishingly rare. Today’s infosec environment steers testers towards searching for “big impact” vulnerabilities, while also encouraging frequent repeats of well-trodden topics. I see very little research into “boring” stuff: process and policy, leading-edge technologies, general analysis of commodity products, etc.
What would I like to see done?
In an ideal world, with unlimited resources, what could a company focused on independent information security research accomplish?
They could perform a research-tracking function across the community as a whole: Manage a list of problems in need of work, new and under-researched issues, longer-term goals, even half-baked pie-in-the-sky ideas.
The execution of this list of topics could be left open for others to take on, or worked on in-house (or even both – some problems will benefit from multiple, independent efforts, confirming or refuting one another’s results).
The company could even possibly provide funding for external research efforts: Cyber Fast Track reborn!
Perform original research
At its core, though, the company would be tasked with performing new research. They’d look at current products, software, and technology. The focus wouldn’t be simply finding bugs, but also understanding how these systems work. Too many products are simply “black boxes,” and it’s important to look under the hood, since even systems which are functioning properly can present a risk. How many of today’s software and cloud offerings are truly understood by those who sign off on the risks they may introduce?
We occasionally see product space surveys (for example, EFF’s Secure Messaging Scorecard). We need more efforts like that, with sufficient depth of testing and detailed publication of methods and results, as well as regular and consistent updates. Too often such surveys are completed and briefly publicized, generate a few sales for the company that performed them, and are then totally forgotten.
I’d also like to see generalized risk research across product categories – for example, what kinds of problems do Smart TVs or phone-connected door locks create? I don’t mean a regular survey of Bluetooth locks (which might be useful in itself) but a higher-level analysis of the product space, and potential issues which purchasers need to be aware of.
Specific product testing could also be an offered service, provided that the testing permits very deep reviews without significant time limitations, and that the results, regardless of outcome, be published shortly after the conclusion of the effort (naturally, giving the vendor reasonable time to address any problems).
An important but currently underutilized function is “research about research.” The Infosec Echo Chamber (mostly Twitter, blogs, and a few podcasts) is great at talking about other research and findings, but not very good at critically reviewing and building upon that work.
We need more methodical reviews of existing work, confirming and promoting findings when appropriate, and correcting and improving the research where problems are discovered. Currently, those best able to provide such analysis are frequently busy with paying work, and so valuable insights are delayed or lost altogether.
Related to this is doing a better job of promoting and explaining research, findings, and problems, both within the community and also to the media in general. Another related function would be managing a repository, or at least a trusted index, of security papers, conference slides, and other such information.
Tracking broader industry trends
The Verizon Data Breach Investigation Report (DBIR) provides an in-depth annual analysis of data breaches. Could the same approach be used for, say, an annual cross-industry “Bug Report,” identifying and analyzing common problems and trends? [or really, any other single topic…I don’t know whether a report focused on bugs would be worthwhile.]
The DBIR takes a team of experts months to collect, analyze, and prepare – expanding that kind of report into other arenas is something that can’t be undertaken without a significant commitment. An organization dedicated to infosec research may be among the few able to identify the need for, and ultimately deliver, such tightly-focused reporting.
Shaping research in general
Finally, I (and many others, I believe) think that the industry needs a more structured and methodical approach to security research. An organization dedicated to research can help to develop and refine such methodologies, encouraging publication of negative findings as well as cool bugs, emphasizing the repeatability of results, and guaranteeing availability of past research. The academic world has been wrestling with this for decades, but the infosec community has only begun to transition from “quick and dirty” to “rigorous and reliable” research.
How can we do this?
These goals are difficult to accomplish under our current research model: a lack of dedicated time and a reliance on ad-hoc availability are just two of the biggest problems. Breadth, depth, and consistency of testing, and long-term availability of results, are among the other details we haven’t yet worked out.
A virtual team of volunteers might work, but they’d still be relying on stolen downtime (or after-hours work). Of course, they’d also have to worry about conflicts of interest (“Will this compete with our own sales?” and “Don’t piss off our favorite customer” being two of my favorites). Plus, maintaining consistency would be an issue, as team members drift in and out.
A bug-bounty kind of model might be possible, like the virtual team but even more ad-hoc (“Here’s a list of things we need to do. Sign up for something that interests you!”), and with predictably more logistical and practical problems.
Plus, for either virtual approach, you’d still need some core group to manage everything.
Ultimately, I think a non-profit company remains the only way to make this happen. This would allow the formation of a core, dedicated team of researchers and administrators. They could charge vendors for specific product tests, and possibly even receive funding from industry or government sources, though keeping such funding reliable year after year will probably be a challenge.
John Tan, author of the 1999 CyberUL paper, updated his thoughts earlier this month. A key quote, which I think drives to the heart of the problem:
“If your shareholder value is maximized by providing accurate inputs for decision making around risk management, then you’re beholden only to the truth.”
Any company which can keep “Provide risk managers the best data, always” as a core mission statement, and live up to it, will, I think, be on the right track.
So, can this work?
I honestly don’t know.
There are many things our community does well with research, but a lot which we do poorly, or not at all. An independent company that can focus on issues like those I’ve described could have a significant positive impact on the industry, and on security in general. But it won’t happen easily.
According to John Tan’s initial paper, it took 30 years of insurance company subsidies before Underwriters Laboratories could reach a level of vendor-funded self-sufficiency. We don’t have that kind of time today. And the talent required to pull this off wouldn’t come cheaply (and, let’s face it, this is probably the kind of dream job that half the speakers at Black Hat would love to have, so competition would be fierce).
If anyone can run with this, my money would definitely be on Mudge. He’s got the knowledge, and especially the experience of running Cyber Fast Track, not to mention the decades of general information security experience behind him. But he’s definitely got his work cut out for him.
Hopefully he’ll come out of stealth mode soon. I’d love to see what we can do to help.