Friday, February 17, 2012

Downloads Folder: A Binary Planting Minefield

Browser-Aided Remote Binary Planting, Part Deux

This article reveals a bit of our research and provides an advance notification of a largely unknown remote exploit technique on Windows. More importantly, it provides instructions for protecting your computers from this technique while waiting for the affected software to correct its behavior.

Two weeks from now I'll be holding a presentation at RSA Conference US called "Advanced (Persistent) Binary Planting" (Thursday, March 1, 9:30 AM, Room 104). The presentation will include demonstrations of "two-step" binary planting exploits, where in the first step the attacker silently deploys a malicious executable to the user's computer, and in the second step gets this executable launched. For those familiar with our past research on binary planting, this removes the need for remote shared folders as well as the need to get the user to double-click on a document in Windows Explorer.

Obviously, the idea is not new: If the attacker manages to somehow get her executable onto the user's computer, getting it executed may be just a step away. But in order to deploy the file without heavy-duty social engineering (which invariably works in practice but is frowned upon among security folks) or physical access (which may include an overseas round trip), what is she left with? One ally she may find is the web browser - which lets the user download all sorts of files from all sorts of web sites. Directly to the Downloads folder.


What's In Your Downloads Folder, Anyway?

If you have ever downloaded anything from the Internet, you know that you can always find it in the browser's "Downloads" or "Downloaded files" window. This window also provides a way to delete any downloaded file, or all of them, with just a few clicks. Or so one would think.

Actually, browsers don't delete files from the Downloads folder: they only delete them from the browser's list so that they're no longer visible to the user. In fact, among the latest versions of the top web browsers (Chrome, Firefox, Internet Explorer, Safari and Opera), only Internet Explorer 9 (not 8) and Opera provide a way to actually delete a downloaded file from the Downloads folder through their user interface, and even then you have to do it through a right-click menu - in Opera, even a sub-menu. Only Opera allows you to delete all files at once.

As a result, your average Downloads folder is a growing repository of files, new, old and borderline ancient. If anything malicious sneaks past your browser's warnings or your mental safeguards, it is bound to stay there for a long time. Waiting for someone or something to launch it.


Do You Really Want To Download This?

But, you may say, all major web browsers will warn the user if he tries to download an executable file, and the user will have to confirm the download. Right?

Not entirely. One major web browser will, under certain conditions (to be explained at the presentation), download an executable to the Downloads folder without asking or notifying the user. Granted, it will not then execute this file, but the file will remain in the Downloads folder - possibly until the user reinstalls Windows. Furthermore, the same web browser allows a malicious web page to trick the user into confirming a download attempt using clickjacking (an old trick), which is another way to get the executable into the Downloads folder.

And finally - and this applies to all web browsers - if some extremely (perhaps even obscenely) interesting web site persistently tries to initiate a download of an executable, how many attempts will it take before an average web user tells it to shut up already and accepts the download, knowing that it will not be automatically executed?


Downloaded But Not Executed? Give It Time.

So the Downloads folder tends to host various not-so-friendly executables. Big deal; it's not like the user is going to double-click those EXEs and have them executed. No, not the user directly, but other executables that he downloads and executes - for instance, installers.

We found that a significant percentage of the installers we looked at (especially those created by one leading installer framework) make a call to CreateProcess("msiexec.exe") [simplified for illustration] without specifying the full path to msiexec.exe. This results in the installer first trying to find msiexec.exe in the directory where the installer itself resides - i.e., in the Downloads folder (unless it was saved elsewhere) - and launching it if it finds it there.
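To make the search-order problem concrete, here is a minimal Python sketch - a model of the behavior, not the Windows implementation - of how CreateProcess resolves a bare "msiexec.exe": the calling executable's own directory is consulted before the system directories, so a planted copy in the Downloads folder shadows the real one. The directory names and the resolve_like_createprocess helper are made up for illustration.

```python
import os
import tempfile

def resolve_like_createprocess(exe_name, app_dir, system_dirs):
    """Simplified model of the CreateProcess search order when no
    full path is given: the directory of the calling executable is
    searched before the system directories."""
    for d in [app_dir] + system_dirs:
        candidate = os.path.join(d, exe_name)
        if os.path.isfile(candidate):
            return candidate
    return None

# Demonstration: "downloads" stands in for the installer's own
# directory, "system32" for %SystemRoot%\System32.
downloads = tempfile.mkdtemp()
system32 = tempfile.mkdtemp()

# Only the legitimate copy exists - it gets resolved.
open(os.path.join(system32, "msiexec.exe"), "w").close()
assert resolve_like_createprocess(
    "msiexec.exe", downloads, [system32]
) == os.path.join(system32, "msiexec.exe")

# An attacker plants a copy next to the installer - it now wins.
open(os.path.join(downloads, "msiexec.exe"), "w").close()
resolved = resolve_like_createprocess("msiexec.exe", downloads, [system32])
assert resolved == os.path.join(downloads, "msiexec.exe")
```

Passing a full path (or the lpApplicationName argument) to CreateProcess sidesteps this search entirely, which is the fix we recommend to installer developers.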

And this is just one single executable. If you launch Process Monitor and observe activity in the Downloads folder when any installer is launched, you will find a long series of attempts to load various DLLs. Not surprising: this is how library loading works (the loader first tries to find DLLs in the same folder as the EXE), and in most cases it would not be a security problem, as most folders hosting your EXEs are not attacker-writable. The Downloads folder, however, is - to some extent, anyway.
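If you'd rather not eyeball Process Monitor output by hand, the triage can be sketched in Python: export the trace to CSV and flag failed DLL load attempts inside the Downloads folder, each of which is a planting opportunity. The CSV excerpt below is hypothetical example data (the column names follow procmon's defaults, but the process and paths are invented), and hijackable_dll_loads is my own naming, not an existing tool.

```python
import csv
import io

# Hypothetical excerpt of a Process Monitor CSV export.
PROCMON_CSV = """Process Name,Operation,Path,Result
setup.exe,CreateFile,C:\\Users\\joe\\Downloads\\version.dll,NAME NOT FOUND
setup.exe,CreateFile,C:\\Windows\\System32\\version.dll,SUCCESS
setup.exe,CreateFile,C:\\Users\\joe\\Downloads\\dwmapi.dll,NAME NOT FOUND
"""

def hijackable_dll_loads(csv_text, downloads_prefix):
    """Return DLL paths an installer tried (and failed) to load from
    the Downloads folder - each a binary planting opportunity."""
    hits = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        path = row["Path"]
        if (row["Result"] == "NAME NOT FOUND"
                and path.lower().startswith(downloads_prefix.lower())
                and path.lower().endswith(".dll")):
            hits.append(path)
    return hits

print(hijackable_dll_loads(PROCMON_CSV, "C:\\Users\\joe\\Downloads"))
```

In this example the installer would have loaded version.dll and dwmapi.dll from the Downloads folder had they been planted there; the successful load from System32 is the benign fallback.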

So what do we have here? An ability to get malicious EXEs and DLLs into the Downloads folder, where they will in all likelihood remain for a very long time, and at least occasional activities on the user's computer that load EXEs and DLLs from the Downloads folder. This can't be good.

But that's it for now. My presentation will also feature data files (non-installers) launching executables from the Downloads folder in a "classic" binary planting manner, instructions for finding binary planting bugs, recommendations for administrators, developers and pentesters, and more.


What You Should Do Right Now

For those of you who think we might be the first people in the world to have thought of this - we sincerely appreciate your compliments! The rest of you should do the following:
  1. Open your browser's Downloads folder in Windows Explorer or any other file manager.
  2. Look for the presence of msiexec.exe. If you find it there and you don't think you intentionally downloaded it at some point in the past, send it to your favorite malware research (anti-virus) company and delete it from your Downloads folder.
  3. Look for the presence of any *.dll files in the Downloads folder and do the same as in the previous step.
  4. Delete all files from the Downloads folder.
  5. Locate msiexec.exe in your %SystemRoot%\System32 folder and copy it to the Downloads folder. (Note: this will prevent Windows from updating the copy of msiexec.exe used by installers run from the Downloads folder, but won't affect installers launched from other locations. On the upside, it will also block the installer-based attacks described above.)
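Steps 2 and 3 of the checklist above can be automated. Here is a minimal Python sketch (the audit_downloads helper is my own naming, not an existing tool) that lists planting suspects - a stray msiexec.exe or any DLL - in a given folder:

```python
import os
import tempfile

def audit_downloads(downloads_dir):
    """Flag files matching steps 2 and 3 of the checklist:
    a stray msiexec.exe or any *.dll file is a planting suspect."""
    suspects = []
    for name in os.listdir(downloads_dir):
        lower = name.lower()
        if lower == "msiexec.exe" or lower.endswith(".dll"):
            suspects.append(name)
    return sorted(suspects)

# Quick demonstration against a throwaway folder with planted files.
demo = tempfile.mkdtemp()
for planted in ("msiexec.exe", "version.dll", "holiday.jpg"):
    open(os.path.join(demo, planted), "w").close()
print(audit_downloads(demo))   # -> ['msiexec.exe', 'version.dll']
```

In real use you would point it at your actual Downloads folder and, per step 2, submit anything it finds to your anti-virus vendor before deleting it.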


Hope to see you at RSA Conference,
@mkolsek
@acrossecurity

Monday, February 13, 2012

Should We Be Focusing On Vulnerabilities Or Exploits?

Or Maybe Both?

This post was inspired by a recent ZDNet article, "Offensive security research community helping bad guys", and a ThreatPost interview after the Kaspersky security analyst summit, in which Adobe security chief Brad Arkin explains his (and Adobe's) philosophy on addressing software vulnerabilities. The crux of this philosophy can be summarized with Brad's words: "My goal isn't to find and fix every security bug, I'd like to drive up the cost of writing exploits." He subsequently mentioned that offensive security researchers are "driving that cost down when they research a new technique to hack into software, write a paper and publish it to the world."

Although the average sentiment of the comments under the "offensive security" article was, well..., offensive, one thing is true: if the only alternative to driving up the cost of writing exploits were to find and fix every security bug, and one had to choose between the two, the former would be the logical choice - after all, it is the general consensus (or, as some prefer, excuse) that you can never find all security bugs, while one can achieve demonstrable success in driving up the cost of exploitation for many vulnerabilities. (And Adobe, having introduced sandboxing to Reader, has undoubtedly made real progress in this area.)


Reality vs. Perception

If you're in charge of product security, your official job description is probably something like "make our products secure". But in all likelihood, your effective job description, as your employer sees it, is more akin to "make our products perceived as secure". Don't misunderstand this: Your employer won't mind if your product is actually secure, but he will mind if it is not perceived as such and that adversely affects sales. I'm sure most people would do their best - and actually do bend over backwards - to make their products as secure as possible, but what affects a company's bottom line is customers' perception, not reality. And the market's invisible hand (through superiors' and owners' not-so-invisible hands) will make it really clear that perception has priority over reality. Which is, incidentally, not only the case in infosec, but the way things work wherever reality is elusive.

Let's think about that for a while. Where does the difference between perception and reality come from? As already noted, reality is elusive in information security, full of known unknowns (have we missed any buffer overflows or XSSs; is our product being silently exploited?) as well as unknown unknowns (who knows what new attack methods those pesky researchers will come up with tomorrow?). And while you do know that security of your product improves with each identified and fixed vulnerability, you don't know where you are on the scale - there is, alas, no scale.

Perception, on the other hand, is more measurable and more manageable: you can listen to your customers and prospects to see what they think of your security - and this will, in the absence of your marketing material, largely depend on their knowledge of (1) your product's vulnerabilities and (2) publicized incidents involving your product. The former frequently find their way onto public vulnerability lists - and to your customers - but the latter are trickier: I'm confident that an overwhelming majority of break-ins are never even detected (typically: data theft), much less publicized. And for those that are detected, is the exploited vulnerability ever determined at all? As a result, most publicized incidents that are actually linked to vulnerable products involve self-replicating exploits (e.g., worms) that ended up in malware researchers' labs. The point being that we generally only know about incidents involving specific remotely exploitable vulnerabilities suitable for worm-like malware. Others remain unknown.


The Hidden Danger

Developing methods for limiting exploitability is of great value. Sandboxes, ASLR, DEP and other exploit mitigation techniques do drive the cost of exploitation up, and do so for a wide range of different vulnerability types. This is good.

There is, however, a hidden danger in focusing on limiting exploitability instead of exterminating vulnerabilities. Let me illustrate with a (maybe not so) hypothetical dialog:

You: "There is a vulnerability in your product."
Vendor: "Yes, but it's not exploitable."
You: "How do you know it's not exploitable?"
Vendor: "Well, it hasn't been exploited yet."
You: "How do you know it hasn't been exploited yet?"
Vendor: "We're not aware of any related incidents. Are you?"
You: "Uhm..., no, but..."
Vendor: "Case closed."

The danger here is that replacing a determinable value (existence of a known vulnerability) with a non-determinable one (absence of exploits/incidents) when deciding whether to fix a security flaw may result in a better perception of security ("We don't know of any incidents, therefore there aren't any") but worse reality. Why? Because it opens the door to reasoning that it doesn't make sense to fix vulnerabilities if there's a second layer of defense that blocks their exploitability. And then, once someone finds a hole in this second layer of defense, there will be an array of vulnerabilities to choose from for mounting a successful attack.

So let's hope that software vendors don't have to choose between limiting exploitability and exterminating vulnerabilities, but can actually do both. (Google's Chris Evans replied to Brad on Twitter, "Unfortunately, modern security best practice is BOTH 1) sandbox and 2) find/fix bugs aggressively".) I know from personal experience that Adobe is actively finding and fixing bugs in their products in addition to making exploitation harder, so I think Brad is being misunderstood there. But as far as hacking exploit-mitigation mechanisms goes, a flaw in such a mechanism is a vulnerability like any other: it allows an adversary to do something that should have been impossible. As such, it is unreasonable to expect that these vulnerabilities will not be researched, discussed, privately reported, published on mailing lists, sold and bought, and silently or publicly exploited just like any others - depending on who finds them.


P.S.: On a somewhat related note, I will present an out-of-sandbox remote exploitation of a binary planting vulnerability in Adobe Reader X at RSA Conference US on March 1st. There will be no remote shares, no WebDAV and no double-clicking on files, just pure browser-aided code execution. We notified Adobe about this bug in early January, so it won't be alive for long.

@mkolsek