Monday, February 13, 2012

Should We Be Focusing On Vulnerabilities Or Exploits?

Or Maybe Both?

This post was inspired by a recent ZDNET article "Offensive security research community helping bad guys" and this ThreatPost interview after the Kaspersky Security Analyst Summit, in which Adobe security chief Brad Arkin explains his (Adobe's) philosophy on addressing software vulnerabilities. The crux of this philosophy can be summarized with Brad's words: "My goal isn't to find and fix every security bug, I'd like to drive up the cost of writing exploits." He went on to say that offensive security researchers are "driving that cost down when they research a new technique to hack into software, write a paper and publish it to the world."

Although the average sentiment of the comments under the "offensive security" article was, well..., offensive, one thing is true: if the only alternative to driving up the cost of writing exploits were to find and fix every security bug, and one had to choose between the two, the former would be the logical choice - after all, it is general consensus (or, as some prefer, an excuse) that you can never find all security bugs, while demonstrable progress can be made in driving up the cost of exploiting many vulnerabilities. (And Adobe, having introduced sandboxing to the Reader, has undoubtedly made real progress in this area.)


Reality vs. Perception

If you're in charge of product security, your official job description is probably something like "make our products secure". But in all likelihood, your effective job description, as your employer sees it, is closer to "make our products perceived as secure". Don't misunderstand this: your employer won't mind if your product actually is secure, but he will mind if it isn't perceived as such and that perception hurts sales. I'm sure most people do their best - and actually bend over backwards - to make their products genuinely as secure as possible, but what affects a company's bottom line is customers' perception, not reality. And the market's invisible hand (through superiors' and owners' not-so-invisible hands) will make it very clear that perception has priority over reality. This, incidentally, is not unique to infosec; it's how things work wherever reality is elusive.

Let's think about that for a while. Where does the difference between perception and reality come from? As already noted, reality is elusive in information security, full of known unknowns (have we missed any buffer overflows or XSSs? is our product being silently exploited?) as well as unknown unknowns (who knows what new attack methods those pesky researchers will come up with tomorrow?). And while you do know that the security of your product improves with each identified and fixed vulnerability, you don't know where you are on the scale - there is, alas, no scale.

Perception, on the other hand, is more measurable and more manageable: you can listen to your customers and prospects to see what they think of your security - and this will, in the absence of your marketing material, largely depend on their knowledge of (1) your product's vulnerabilities and (2) publicized incidents involving your product. The former frequently find their way onto public vulnerability lists - and to your customers - but the latter are trickier: I'm confident that an overwhelming majority of break-ins (typically data theft) are never even detected, much less publicized. And for those that are detected, is the exploited vulnerability ever determined at all? As a result, most publicized incidents that are actually linked to vulnerable products involve self-replicating exploits (e.g., worms) that ended up in malware researchers' labs. The point being that we generally only know about incidents involving specific remotely exploitable vulnerabilities suitable for worm-like malware; others remain unknown.


The Hidden Danger

Developing methods for limiting exploitability is of great value. Sandboxes, ASLR, DEP and other exploit mitigation techniques do drive the cost of exploitation up, and do so for a wide range of different vulnerability types. This is good.
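To make this concrete, here is a minimal sketch in C - my own hypothetical illustration, not code from any product discussed here - of why mitigations raise the cost of exploitation without removing the underlying bug:

    /* vuln.c - a hypothetical classic stack buffer overflow */
    #include <string.h>

    void parse_field(const char *src)
    {
        char buf[64];
        /* No bounds check: input longer than 64 bytes overwrites the
           stack, including the saved return address. This is the
           vulnerability, and it exists with or without mitigations. */
        strcpy(buf, src);
    }

    int main(int argc, char **argv)
    {
        if (argc > 1)
            parse_field(argv[1]);
        return 0;
    }

With DEP the stack is non-executable, so injected shellcode won't run directly and the attacker has to resort to techniques like return-oriented programming; with ASLR the addresses an exploit must target move on every run. Neither removes the strcpy bug - they only make turning it into a working exploit more expensive.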

There is, however, a hidden danger in focusing on limiting exploitability instead of exterminating vulnerabilities. Let me illustrate with a (maybe not so) hypothetical dialog:

You: "There is a vulnerability in your product."
Vendor: "Yes, but it's not exploitable."
You: "How do you know it's not exploitable?"
Vendor: "Well, it hasn't been exploited yet."
You: "How do you know it hasn't been exploited yet?"
Vendor: "We're not aware of any related incidents. Are you?"
You: "Uhm..., no, but..."
Vendor: "Case closed."

The danger here is that replacing a determinable quantity (the existence of a known vulnerability) with a non-determinable one (the absence of exploits or incidents) when deciding whether to fix a security flaw may result in a better perception of security ("We don't know of any incidents, therefore there aren't any") but a worse reality. Why? Because it opens the door to reasoning that it doesn't make sense to fix vulnerabilities as long as a second layer of defense blocks their exploitability. And then, once someone finds a hole in that second layer, there will be an array of vulnerabilities to choose from for mounting a successful attack.

So let's hope that software vendors don't have to choose between limiting exploitability and exterminating vulnerabilities, but can actually do both. (Google's Chris Evans replied to Brad on Twitter: "Unfortunately, modern security best practice is BOTH 1) sandbox and 2) find/fix bugs aggressively".) I know from personal experience that Adobe is actively finding and fixing bugs in its products in addition to making exploitation harder, so I think Brad is being misunderstood there. But as far as hacking exploit-mitigation mechanisms goes, a flaw in such a mechanism is a vulnerability like any other: it allows an adversary to do something that should have been impossible. As such, it is unreasonable to expect that these vulnerabilities would not be researched, discussed, privately reported, published on mailing lists, sold and bought, and silently or publicly exploited just like all others - depending on who finds them.


P.S.: On a somewhat related note, I will present an out-of-sandbox remote exploitation of a binary planting vulnerability in Adobe Reader X at RSA Conference US on March 1st. There will be no remote shares, no WebDAV and no double-clicking on files, just pure browser-aided code execution. We notified Adobe about this bug in early January, so it won't be alive for long.
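For readers unfamiliar with the bug class (the specifics of the Reader X issue are, of course, withheld until it is fixed), binary planting in general boils down to a pattern like the following hypothetical C fragment - emphatically not the actual Reader X flaw:

    /* planter.c - a generic, hypothetical illustration of binary
       planting; NOT the Adobe Reader X bug mentioned above */
    #include <windows.h>

    int main(void)
    {
        /* Loading a library by relative name makes Windows walk its
           DLL search order, which historically included the current
           working directory. If the attacker controls that directory
           - say, the remote folder a document was opened from - his
           DLL gets loaded and its DllMain runs with the user's
           privileges. ("helper.dll" is a made-up name.) */
        HMODULE h = LoadLibrary(TEXT("helper.dll"));
        if (h != NULL)
            FreeLibrary(h);
        return 0;
    }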

@mkolsek