Tuesday, January 11, 2011

How To Secure a Security Product

And Whose Bug Is It, Anyway?

Our company issued a security advisory today about a binary planting vulnerability in multiple F-Secure products, including F-Secure Internet Security 2011. F-Secure issued automatically deployed fixes for this vulnerability last month, so all affected users can at this moment safely be presumed safe, so to speak. Before going any further, it has to be said that F-Secure Corporation was extremely responsive and cooperative throughout the process of resolving this issue, and demonstrated a high level of commitment to the security of its users.

Now, two facts are of interest in this case. First, the remotely exploitable code-execution bug is in a security product. Security products are specifically designed to protect computer systems, so when one such product makes it possible to attack a system that might otherwise not be vulnerable, it seriously compromises that product's main purpose - namely, protecting the system. Think about it: if there's a code-execution vulnerability in a web browser or in document-editing software, these products may still perform their mission and let you browse the web or write documents, even though they also allow attackers to own your computer. In other words, their main "purpose in life" has not been compromised, and their value to the user may remain unaffected. But when a security product is vulnerable, it begins to provide the exact opposite of what it was purchased for - insecurity instead of security.

Second, the remotely exploitable code-execution bug was not "developed" by the vendor's developers: it resided in Nokia's Qt, a cross-platform application and UI framework, which F-Secure's developers trusted and integrated into their products. Such trust is often extended, and can be highly economical - the whole idea behind 3rd-party programming libraries is the division of labor, the concept that has allowed mankind to prosper so enormously. Why develop some special and complex functionality yourself if you can license it from someone with expertise who has already developed it, quite possibly better than you would have? It saves time and money, and makes your product more competitive. There's just this small pesky issue of security. What if such 3rd-party code includes vulnerabilities that "infect" your product when you integrate it? How can you even know? And who will be to blame for these bugs in your product?
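For readers unfamiliar with the mechanics, binary planting comes down to library search order: when an application asks the loader for a library by name alone, the loader walks an ordered list of directories - and on affected Windows configurations that list can include a directory the attacker controls, such as the current working directory of a document opened from a remote share. The Python sketch below merely simulates such an ordered search to show why a planted copy wins; the `resolve_library` helper and the directory layout are illustrative, not any real loader API.

```python
import os
import tempfile

def resolve_library(name, search_dirs):
    """Return the first path in search_dirs that contains a file
    named `name`, mimicking an ordered library search."""
    for d in search_dirs:
        candidate = os.path.join(d, name)
        if os.path.isfile(candidate):
            return candidate
    return None

# Toy layout: the application directory holds the legitimate
# library; a "document" directory (standing in for, say, a remote
# share the victim opened a file from) holds a planted copy.
app_dir = tempfile.mkdtemp(prefix="app_")
doc_dir = tempfile.mkdtemp(prefix="docs_")
open(os.path.join(app_dir, "helper.dll"), "w").close()  # legitimate
open(os.path.join(doc_dir, "helper.dll"), "w").close()  # planted

# Vulnerable order: the attacker-controlled directory is searched
# before the application directory, so the planted copy is found first.
hit = resolve_library("helper.dll", [doc_dir, app_dir])
print(hit.startswith(doc_dir))  # True - the planted copy wins

# Mitigated approach: refer to the library by fully qualified path,
# taking the search order out of the equation entirely.
safe = os.path.join(app_dir, "helper.dll")
print(os.path.isfile(safe))  # True
```

The mitigation is equally visible in the sketch: loading by fully qualified path, or removing attacker-controllable directories from the search list, takes the planted copy out of play.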

Mind you, we've been discovering vulnerabilities in security products for more than a decade now and have helped their vendors fix them before these bugs could put users at risk. Vendors of security products are well aware that any vulnerability in their products has the potential to directly affect their revenue, and not in a good way (a sentiment not shared by many non-security product vendors). If you sell security products, it helps if your prospects believe that you're actually going to increase, not decrease, their security. However, this is a difficult goal to achieve: security software is like any other software - increasingly complex, full of 3rd-party (often closed) code, developed rapidly to meet deadlines set by marketing, and built on a limited budget. All these factors are vulnerability-friendly.

So what should security product vendors do to keep vulnerabilities out of their products?


  1. They need to obtain more assurance of the security of all 3rd-party code they integrate into their products. This can be done by having the source code reviewed by skilled experts (if the source code is available), or by having the built product reviewed in a black-box manner;
  2. They need to have their own code reviewed by either internal or external vulnerability hunters before the products are deployed to users. Developers are people, people make mistakes, and mistakes often evolve into functional or security problems; functional problems can be caught by QA, but security problems generally can't;
  3. They need to keep an eye on newly discovered vulnerability types that may affect their products. Binary planting is one such case; others include SSL certificate null-prefix attacks, remote file inclusion, session fixation and many more;
  4. They need to keep an eye on discovered vulnerabilities in the 3rd-party code they integrate. Once such vulnerabilities are publicly known, attackers will quickly find vulnerable products and try to exploit them.
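The last point lends itself to at least partial automation: a vendor that maintains an inventory of its bundled 3rd-party components can mechanically compare shipped versions against published advisories. The minimal sketch below shows the idea; all component names, version numbers and advisory data in it are invented for illustration, not taken from any real advisory feed.

```python
# Hypothetical inventory of 3rd-party components bundled with a
# product, mapping component name to the shipped version.
bundled = {"qt": "4.6.2", "zlib": "1.2.3"}

# Hand-maintained (in practice, advisory-feed-driven) map of
# versions with publicly known vulnerabilities. Invented data.
known_vulnerable = {
    "qt": {"4.6.2"},
    "zlib": set(),
}

def audit(components, advisories):
    """Return the names of components whose shipped version appears
    in a published advisory and therefore needs an update."""
    return sorted(
        name for name, version in components.items()
        if version in advisories.get(name, set())
    )

print(audit(bundled, known_vulnerable))  # ['qt']
```

A check like this obviously only catches what the advisory list already knows about - which is precisely the point of item 4: once a vulnerability in integrated code is public, the race against attackers has started.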

How does this arguably incomplete list differ from what other (non-security) software vendors should do? From a pure security perspective, it really doesn't, as any product running on a computer - regardless of its declared function - can provide an entry point for attackers, although products running with higher privileges (as security products often do) are riskier. But from a business perspective, security software vendors would be smart to go the extra mile. Security is their sole functionality and their only purpose. It's hard to convince a customer that you will secure their system if you don't seem able, or willing, to secure your own product.

Security software vendors are, by the nature of their products, expected to provide not only premium security software but also premium software security. In a world where many software vendors as well as users seem to have conceded that security is a reactive game where attackers always win, security vendors may be our best hope for driving progress in code security and vulnerability prevention, and for showing that secure software is not, in fact, a myth.