Thursday, May 3, 2012


Just a quick description of what we think may (or may not) become an important attack technique in the future:

User-in-the-Middle (UITM) - A technique where an attacker hides behind a legitimate user of an online service in order to avoid being traced once his/her malicious activities are detected. Applicable where user registration - providing access to said online service and its vulnerabilities - requires exposure of the user's real identity, e.g., in online banking.

In contrast to the well-known Man-in-the-Middle (MITM) technique and its web application derivative, Browser-in-the-Middle, where the attacker intercepts communication between a user and an online service in an attempt to steal the user's physical or digital property while hiding the attack from the user, a user-in-the-middle attack is an attack against the server (not the user), but one that utilizes the user's computer as well as his identity to perform the attack. This allows the attacker to effectively hide behind the user and make it appear as if the user were actually the source of the malicious activity. Even if forensic investigation subsequently discovers that it was not the user who performed the malicious activities, the traces on his computer would be insufficient for tracking down a smart attacker. Furthermore, a cautious attacker would wipe the user's computer after use, deleting all remaining traces of his activities.


Anatomy Of An Online Bank Robbery

This article is partly a summary of, and partly an update to, my presentation titled "How To Rob An Online Bank And Get Away With It," presented at SOURCE Boston last month and previously at DeepSec Vienna.

The subject of our dissection is an online bank robbery. Not the all-too-common attack against an online banking user, his computer and his identity, but an attack against the bank itself - or more precisely, against the bank's online banking application.

An online banking attack has four distinct phases:
  1. Vulnerability Finding Phase
  2. Vulnerability Exploitation Phase
  3. Buying Time Phase
  4. Extraction Phase
Let's look at each of these phases individually.

Phase 1: Vulnerability Finding

Online banking applications are typically custom-made, or at least significantly customized, for every individual bank, and one cannot find their source code on the Internet. A motivated attacker could certainly try to obtain the source code from the developers (finding out who they are should be simple enough) and search for vulnerabilities in it without risking detection, but the application's code may not be enough for identifying actually usable vulnerabilities: for instance, even if the code exhibits a lack of input validation in a critical flow, there may still be some back-end validation in place that would stop attempted exploitation. Therefore, profit-driven attackers who don't care which bank they attack will more likely approach vulnerability finding in a black-box manner, trying a long list of usual tricks that work in online banking until one of them works.

Since this phase is an interactive one, the attacker runs a risk of being detected while looking for vulnerabilities and verifying their exploitability. Furthermore, a typical online banking system only provides access to its functionalities - and consequently, its vulnerabilities - to registered users; in order to become an online banking user, one must undergo a face-to-face interaction with the bank and provide proof of identity while being recorded by on-premise security cameras. The attacker does not particularly like such exposure, as it could reveal his real identity during the inevitable forensic investigation.

To avoid this risk, we expect the attackers will start using (or perhaps already are using?) a technique we call User-in-the-Middle. In contrast to the well-known Man-in-the-Middle technique and its web application derivative, Browser-in-the-Middle, where the attacker intercepts communication between a user and an online banking server in an attempt to steal the user's funds while hiding the attack from the user, a user-in-the-middle attack is an attack against the server (not the user), but one that utilizes the user's computer as well as his identity to perform the attack. This allows the attacker to effectively hide behind the user and make it appear as if the user were actually the source of the attack (or, in this phase, the one looking for vulnerabilities in an online banking application). Even if forensic investigation subsequently discovers that it was not the user who performed the malicious activities, the traces on his computer would be insufficient for tracking down a smart attacker.

Phase 2: Vulnerability Exploitation

Once the vulnerability finding phase has produced an exploitable vulnerability - typically a logical flaw such as acceptance of negative numbers, bypassing of overdraft limits, flawed authorization, etc. - the attacker will exploit it at a convenient time to avoid detection or to delay the bank's response. This timing depends heavily on the functional periods of the particular online bank, as some functionalities may only be available during the bank's operating hours (e.g., between 8:00 AM and 3:00 PM on working days) while others may be available 24/7 (e.g., internal funds transfers). Regardless, the end goal of this phase is to accumulate a large sum of money on one or more of the attacker's accounts, where it will either wait for the last phase (extraction) or - more likely - first be transferred to other banks in other countries to slow down subsequent investigative efforts.
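To make the "acceptance of negative numbers" flaw concrete, here is a toy sketch - hypothetical code, not taken from any real banking application - of how a missing server-side check on the transfer amount turns a "send money" function into a "take money" function:

```python
# Toy sketch (hypothetical handler, not any real bank's code) showing how a
# missing server-side amount check turns a negative "amount" into a money-pull:
# transferring -500 from A to B silently moves 500 from B to A.

def transfer(accounts, src, dst, amount, validate=True):
    """Move `amount` from accounts[src] to accounts[dst]."""
    if validate and amount <= 0:
        raise ValueError("amount must be positive")
    accounts[src] -= amount
    accounts[dst] += amount

accounts = {"attacker": 0, "victim": 1000}

# Flawed flow: validation skipped - the attacker "sends" -500 to the victim.
transfer(accounts, "attacker", "victim", -500, validate=False)
print(accounts)  # the attacker gained 500 at the victim's expense
```

The point is that the check must happen on the server, where the attacker cannot bypass it; a client-side check alone is merely a suggestion.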

Again, these "attacker's accounts" need not actually belong to the attacker or have any association with him. To avoid capture, the attacker can again use the user-in-the-middle technique, storing the stolen funds on an unsuspecting online banking user's account (perhaps while that user is on vacation).

Phase 3: Buying Time

Once the digital money has been stolen and is sitting in one or more attacker-controlled bank accounts, the attacker knows it is just a matter of time before the attack is detected and the hunt for the stolen money begins. He knows that if the hunters locate the money he stole, it will get "frozen" or simply taken away (it is nothing more than bytes in a database, after all). To delay the hunters, one of the most obvious and effective methods is for the attacker to transfer the funds to another country, then to yet another country, and so on. These countries are not chosen arbitrarily, but so as to maximize the delay caused by the (lack of) cooperation between the countries' law enforcement agencies.

Since large funds transfers can quickly trigger fraud detection alerts (in many countries, every transaction above 10K dollars or euros is automatically flagged for money laundering inspection), the attacker will likely use "borrowed" corporate accounts instead of personal ones, since large cross-border transactions are not unusual in the business world.
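As a toy illustration of why the attacker prefers to stay under such per-transaction thresholds - the 10K limit and the single rule are deliberate simplifications of real anti-money-laundering controls:

```python
# Toy illustration (the threshold and the rule are assumptions, not any real
# AML system): a naive per-transaction threshold flags a single large transfer
# but misses the same sum split into amounts just under the limit
# (a practice known as "structuring" or "smurfing").

THRESHOLD = 10_000

def flagged(transactions):
    """Return the transactions that a per-transaction threshold would flag."""
    return [t for t in transactions if t >= THRESHOLD]

print(flagged([50_000]))     # [50000] - a single bulk transfer is caught
print(flagged([9_000] * 6))  # [] - 54,000 moved, nothing flagged
```

Real fraud detection correlates transactions over time and across accounts for exactly this reason; a single-transaction rule is trivially evaded.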

Predictably, the accounts in the abovementioned foreign countries can again be "borrowed" from legitimate users to hide the attacker's identity.

Phase 4: Extraction

The last phase of the attack is about converting the stolen digital money, which is highly traceable as long as it remains in the global digital banking system, into an untraceable form. By and large, the most convenient untraceable form of money is cash. And the safest cash extraction points for attackers are ATMs, as they make it possible to withdraw cash without being recorded by a camera (the face covered with a cap or hood) or seen by a bank employee.

Most of the currently known cases of ATM-based cash extraction seem to involve so-called money mules, i.e., gullible individuals who are duped into "little work, easy money" schemes without knowing they're helping criminals acquire stolen money. Criminals simply transfer stolen digital funds to mules' accounts and instruct them to withdraw cash, keep 10% and send the rest to another country via, say, Western Union. Criminals know that the police will likely catch some or all of the money mules, but make sure that the trail stops cold at that point.

Interestingly, there is again a potential for user-in-the-middle here. Instead of recruiting money mules, who can potentially decide not to send the agreed-upon 90% to the attacker and will inevitably provide useful information to law enforcement when caught, the attacker could hack into multiple online banking users' computers - again, not to steal their money, but to send stolen money through their accounts to remote Western Union extraction points.

How To Block These Attacks?

Clearly, we stand a better chance of blocking attacks once we understand how the attackers operate. While the last two phases (Buying Time and Extraction) are not unique to attacks against online banking servers (we also see them in attacks against online banking clients/users), phases 1 and 2 are distinct, and provide some unique ways of detecting and blocking the attack.

  1. Vulnerability Finding Phase: Detecting activities typical of a vulnerability assessment could block the attack before any significant damage has been done. If the attacker is using the user-in-the-middle technique to hide behind a legitimate online banking user, such early detection can provide law enforcement with more information about the attacker and possibly even allow them to covertly trace him back from the user's computer. Fortunately for defenders, the free OWASP AppSensor project focuses on exactly such early attack detection and can be utilized in online banking with relative ease. In addition, the vulnerability finding phase usually generates an increased number of server-side errors (on the web server, in the application and/or in the database); a heightened number of errors associated with a single user account may be a sign of malicious activity and a reason to investigate that particular user.
  2. Vulnerability Exploitation Phase: A characteristic feature of this phase is the emergence of unusually large funds on accounts that previously did not hold much money or have only recently been opened. While existing fraud detection mechanisms can be useful for detecting suspect patterns, attackers using "borrowed" corporate accounts would have a higher chance of remaining undetected. A careful attacker would also likely distribute stolen funds among several unrelated accounts, making detection all the more difficult.
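The error-counting heuristic from point 1 can be sketched in a few lines - the event format and the threshold below are illustrative assumptions, not AppSensor's actual API:

```python
# A minimal sketch of the per-user error heuristic: count server-side errors
# per account over some time window and flag outliers. The (user, error_type)
# event format and the threshold of 20 are assumptions for illustration.

from collections import Counter

def suspicious_users(error_events, threshold=20):
    """error_events: iterable of (user_id, error_type) tuples from one window."""
    counts = Counter(user for user, _ in error_events)
    return {user for user, n in counts.items() if n >= threshold}

events = ([("alice", "sql_error")] * 3        # occasional errors: normal use
          + [("mallory", "sql_error")] * 40)  # error burst: probing for flaws
print(suspicious_users(events))  # {'mallory'}
```

A production version would weight error types (a database syntax error is far more suspicious than a session timeout) and compare against each user's own baseline rather than a fixed threshold.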


In over 12 years of legally breaking into online banks and following public and not-so-public cases of attacks against banking IT systems, we've accumulated considerable experience with the subject and found many interesting vulnerability types specific to banking operations. We anticipated long ago the shift from attacks against personal online banking users to corporate ones (which we're increasingly seeing today), and we now anticipate an increase in attacks against online banking applications - attacks where hacked online banking user accounts will mainly be used for hiding attackers' identities. We believe it's time for banks to prepare for such attacks, and fortunately this need not be very expensive. Naturally, minimizing the number of vulnerabilities in online banking applications should still be the priority, but the next step is to build the capacity for detecting attackers looking for vulnerabilities and exploiting them.


Tuesday, April 10, 2012

Adobe Reader X (10.1.2) msiexec.exe Planting

Outside The Sandbox, But Not Terribly Critical

Adobe today issued an update for Adobe Reader X (new version is 10.1.3), which, among other issues, fixes the outside-the-sandbox msiexec.exe EXE planting vulnerability (CVE-2012-0776) I roughly demonstrated during my RSA Conference US talk last month titled "Advanced (Persistent) Binary Planting."

This article explains the vulnerability and how it could have been exploited. It builds upon our research already published here, which I recommend you read before proceeding if you haven't already.

The Bug

It is a typical EXE planting: Under certain conditions, Reader (specifically, the AcroRd32.exe process) launches msiexec.exe using the CreateProcess function without providing a full path to the executable. This triggers a search for msiexec.exe through the CreateProcess search path as explained here; consequently, if the current working directory of AcroRd32.exe happens to point to a location under the attacker's control and said attacker has planted a malicious msiexec.exe there, that msiexec.exe would get executed instead of the intended one.
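The search behavior can be sketched with a simplified model of the CreateProcess search order (the real order also includes the 16-bit system directory, the Windows directory and PATH, omitted here for brevity; the paths are invented for illustration):

```python
# A simplified model of the CreateProcess search order for a relative
# executable name. Because the current working directory is searched before
# System32, a planted msiexec.exe in the CWD shadows the legitimate one.

def resolve(exe_name, app_dir, cwd, system_dirs, files_on_disk):
    # Simplified CreateProcess order: application dir, then CWD, then
    # the system directories (16-bit dir, Windows dir and PATH omitted).
    for d in [app_dir, cwd] + system_dirs:
        candidate = d + "\\" + exe_name
        if candidate in files_on_disk:
            return candidate
    return None

disk = {
    r"C:\Users\victim\Downloads\msiexec.exe",  # planted by the attacker
    r"C:\Windows\System32\msiexec.exe",        # the legitimate binary
}
print(resolve("msiexec.exe",
              app_dir=r"C:\Program Files\Adobe\Reader",
              cwd=r"C:\Users\victim\Downloads",
              system_dirs=[r"C:\Windows\System32"],
              files_on_disk=disk))
# -> C:\Users\victim\Downloads\msiexec.exe  (the planted copy wins)
```

Supplying the full path in the CreateProcess call - which is exactly what Adobe's fix amounts to - removes the search entirely and with it the planting opportunity.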

Furthermore, attackers would appreciate that the malicious msiexec.exe gets executed outside Reader's sandbox, i.e., with the unlimited permissions of the user running it.

So how does one get Reader to point its current working directory to the location of a malicious executable? The easiest way is to place a PDF document and msiexec.exe in the same folder and either double-click the PDF or have some application open it. Since the double-clicking scenario has been discussed extensively in the past, and since it is believed to be frustratingly difficult to get a security expert to double-click an unknown document in an Internet-based shared folder, we shall investigate the other vector: application-based document opening.

The Exploit - Stage One

As already mentioned in "Downloads Folder: A Binary Planting Minefield," the Downloads folder - the folder where most leading web browsers store the user's downloads - can be a highly suitable location for what could aptly be called a side-by-side payload. While most browsers require user confirmation when a web site tries to download an executable to the user's computer, Google Chrome was, and still is, willing to do so without any question - and the download process can be made highly unnoticeable. Let's see what this could look like for a user under attack.

The above video demonstrates how a web site can silently get msiexec.exe downloaded to the user's Downloads folder by having the user click a link - in this case, the link took the user to a Google search page, but it could just as well take him anywhere else, or nowhere, for that matter. A keen observer may have noticed that, around the 19th second of the video, a small Chrome pop-up window appears in the bottom right corner: this is where the download actually occurs, in order to prevent the main Chrome window from displaying the downloaded file's button. Some extra work could make this pop-up entirely invisible.

Now that we have a malicious msiexec.exe in our Downloads folder - possibly sitting there for weeks or months - it's time for stage two.

The Exploit - Stage Two

The second stage of the exploit requires a malicious web page to get some PDF document to the Downloads folder (where msiexec.exe is already waiting) and have the browser open it. This sounds easy - but there's a catch: the browser must open the PDF externally, in a separate Reader process, and not in the browser's integrated PDF viewer. There is a way to achieve this in Chrome (and in other browsers, too). In the following video, we use Gmail to deliver an e-mail with a PDF document attached to the targeted user, and use Gmail's download functionality to actually download the PDF to the Downloads folder instead of rendering it directly in Chrome. Let's see what happens.

The video demonstrates one way to get a PDF downloaded (instead of rendered) to the user's Downloads folder. Any malicious web site could do the same - and not even offer a way for users to view the PDF directly in the browser (which Gmail, by the way, does offer).

Also, the video shows what the user has to do in Reader to get the malicious msiexec.exe executed: he has to manually select Help -> Repair Adobe Reader Installation and confirm it by clicking Yes. Hardly something users go about doing on their own all the time, or even occasionally. The attack therefore requires social engineering; I'll leave it to others to continue the "How much social engineering is too much?" debate and shall merely point out the possibility that training users to avoid opening e-mail attachments from unknown sources or downloading executables from the Internet may not do much to prevent those same users from trying to repair their Reader in order to reach some strongly desired content.

Administrators Only

An important mitigating factor limits the exploitability of this issue: the Repair Adobe Reader Installation menu item is only available to administrative users. Therefore, if you're logged in to your desktop as a non-admin user, you now know of one more way in which your computer didn't get owned. On the other hand, if you're logged in to Windows 7 as a "protected administrator," you will have this menu item, and if you chose to repair the Reader installation, you would probably not hesitate much when the malicious msiexec.exe asked you to elevate its privileges.


All in all, this was not a terribly critical vulnerability, as its exploit scenario includes non-trivial social engineering and only applies to users working as administrators, but it serves as a nice reminder that EXE planting issues may continue to expose an application - and its users - even after the more commonly understood DLL planting bugs have been eradicated or mitigated. Meanwhile, attackers are probably happy to know that, with Chrome, it is much easier to silently plant an EXE in the Downloads folder than it is to do the same with a DLL.

Take care,

Friday, February 17, 2012

Downloads Folder: A Binary Planting Minefield

Browser-Aided Remote Binary Planting, Part Deux

This article reveals a bit of our research and provides an advance notification of a largely unknown remote exploit technique on Windows. More importantly, it provides instructions for protecting your computers from this technique while waiting for the affected software to correct its behavior.

Two weeks from now I'll be holding a presentation at RSA Conference US called "Advanced (Persistent) Binary Planting" (Thursday, March 1, 9:30 AM Room 104). The presentation will include demonstrations of "two-step" binary planting exploits where in the first step the attacker silently deploys a malicious executable to user's computer, and the second step gets this executable launched. For those familiar with our past research on binary planting, this removes the need for remote shared folders as well as the need to get the user to double-click on a document in Windows Explorer.

Obviously, the idea is not new: If the attacker manages to somehow get her executable onto user's computer, getting it executed may be just a step away. But in order to deploy the file without heavy-duty social engineering (which invariably works in practice but is frowned upon among security folks) or physical access (which may include an overseas round trip), what is she left with? One ally she may find is the web browser - which lets the user download all sorts of files from all sorts of web sites. Directly to the Downloads folder.

What's In Your Downloads Folder, Anyway?

If you have ever downloaded anything from the Internet, you know that you can always find it in the browser's "Downloads" or "Downloaded files" window. This window also provides a way to delete any downloaded file, or all of them, with just a few clicks. Or so one would think.

Actually, browsers don't delete files from the Downloads folder: they only remove them from the browser's list so that they're no longer visible to the user. In fact, among the latest versions of the top web browsers (Chrome, Firefox, Internet Explorer, Safari and Opera), only Internet Explorer 9 (not 8) and Opera provide a way to actually delete a downloaded file from the Downloads folder through their user interface - and even then you have to do it through a right-click menu, in Opera even a sub-menu. Only Opera allows you to delete all files at once.

As a result, your average Downloads folder is a growing repository of files - new, old and borderline ancient. If anything malicious sneaks past your browser's warnings or your mental safeguards, it is bound to stay there for a long time, waiting for someone or something to launch it.

Do You Really Want To Download This?

But, you may say, all major web browsers will warn the user if he tries to download an executable file, and the user will have to confirm the download. Right?

Not entirely. One major web browser will, under certain conditions (to be explained at the presentation), download an executable to the Downloads folder without asking or notifying the user. Granted, it will not then execute this file, but the file will remain in the Downloads folder - possibly until the user re-installs Windows. Furthermore, the same web browser allows a malicious web page to trick the user into confirming a download attempt using clickjacking (an old trick), which is another way to get an executable into the Downloads folder.

And finally - and this applies to all web browsers - if some extremely (perhaps even obscenely) interesting web site persistently tries to initiate the download of an executable, how many attempts will it take before an average web user tells it to shut up already and accepts the download, knowing that it will not be automatically executed?

Downloaded But Not Executed? Give It Time.

So the Downloads folder tends to host various not-so-friendly executables. Big deal; it's not like the user is going to double-click those EXEs and have them executed. No, not the user directly, but other executables that he downloads and executes - for instance, installers.

We found that a significant percentage of the installers we looked at (especially those created by one leading installer framework) make a call to CreateProcess("msiexec.exe") [simplified for illustration] without specifying the full path to msiexec.exe. This results in the installer first trying to find msiexec.exe in the directory where the installer itself resides - i.e., in the Downloads folder (unless it was saved elsewhere) - and launching it if it finds it there.

And this is just one single executable. If you launch Process Monitor and observe activities in the Downloads folder when any installer is launched, you will find a long series of attempts to load various DLLs. Not surprising: this is how library loading works (first trying to find DLLs in the same folder as the EXE), and in most cases it would not be a security problem, as most folders hosting your EXEs are not attacker-writable. The Downloads folder, however, is - to some extent, anyway.

So what do we have here? The ability to get malicious EXEs and DLLs into the Downloads folder, where they will in all likelihood remain for a very long time, and at least occasional activities on the user's computer that load EXEs and DLLs from the Downloads folder. This can't be good.

But that's it for now. My presentation will also feature data files (non-installers) launching executables from the Downloads folder in a "classic" binary planting manner, instructions for finding binary planting bugs, recommendations for administrators, developers and pentesters, and more.

What You Should Do Right Now

For those of you who think we might be the first people in the world to have thought of this - we sincerely appreciate your compliments! The rest of you should do the following:
  1. Open your browser's Downloads folder in Windows Explorer or any other file manager.
  2. Look for the presence of msiexec.exe. If you find it there and you don't think you intentionally downloaded it at some point in the past, send it to your favorite malware research (anti-virus) company and delete it from your Downloads folder.
  3. Look for the presence of any *.dll files in the Downloads folder and do the same as in the previous step.
  4. Delete all files from the Downloads folder.
  5. Locate msiexec.exe in your %SystemRoot%\System32 folder and copy it to the Downloads folder. (Note: this will prevent Windows from updating the copy of msiexec.exe that is used when launching installers from the Downloads folder, but won't affect installers launched from other locations. On the upside, it will also block the installer-based attacks described above.)
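Steps 2 and 3 can be partly automated with a small script like the following (a sketch only - the Downloads path is yours to supply, and actually submitting samples to a malware research company remains a manual step):

```python
# A small helper sketching steps 2 and 3 above: list any msiexec.exe and
# *.dll files sitting in a given Downloads folder, so they can be reviewed,
# submitted for analysis and removed. Matching is case-insensitive, as
# Windows filenames are.

from pathlib import Path

def suspicious_downloads(downloads_dir):
    """Return names of msiexec.exe / *.dll files found in downloads_dir."""
    d = Path(downloads_dir)
    hits = [p for p in d.iterdir() if p.is_file()
            and (p.name.lower() == "msiexec.exe"
                 or p.suffix.lower() == ".dll")]
    return sorted(p.name for p in hits)

# Example usage (adjust the path to your own Downloads folder):
# print(suspicious_downloads(r"C:\Users\you\Downloads"))
```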

Hope to see you at RSA Conference,

Monday, February 13, 2012

Should We Be Focusing On Vulnerabilities Or Exploits?

Or Maybe Both?

This post was inspired by a recent ZDNET article "Offensive security research community helping bad guys" and this ThreatPost interview after the Kaspersky security analyst summit, in which Adobe security chief Brad Arkin explains his (Adobe's) philosophy on addressing software vulnerabilities. The crux of this philosophy can be summarized in Brad's words: "My goal isn't to find and fix every security bug, I'd like to drive up the cost of writing exploits." He subsequently mentioned that offensive security researchers are "driving that cost down when they research a new technique to hack into software, write a paper and publish it to the world."

Although the average sentiment of the comments under the "offensive security" article was, well..., offensive, one thing is true: if the only alternative to driving up the cost of writing exploits were to find and fix every security bug, and one had to choose between the two, the former would be the logical choice - after all, it is the general consensus (or, as some prefer, excuse) that you can never find all security bugs, while one can achieve demonstrable success in driving up the cost of exploitation for many vulnerabilities. (And Adobe, having introduced sandboxing to Reader, has undoubtedly made real progress in this area.)

Reality vs. Perception

If you're in charge of product security, your official job description is probably something like "make our products secure". But in all likelihood, your effective job description, as your employer sees it, is more akin to "make our products perceived as secure". Don't misunderstand this: your employer won't mind if your product is actually secure, but he will mind if it is not perceived as such and this adversely affects sales. I'm sure most people would do their best - and actually do bend over backwards - to make their products as secure as possible, but what affects a company's bottom line is customers' perception, not reality. And the market's invisible hand (through superiors' and owners' not-so-invisible hands) will make it really clear that perception has priority over reality. This is, incidentally, not only the case in infosec, but the way things work wherever reality is elusive.

Let's think about that for a while. Where does the difference between perception and reality come from? As already noted, reality is elusive in information security, full of known unknowns (have we missed any buffer overflows or XSSs; is our product being silently exploited?) as well as unknown unknowns (who knows what new attack methods those pesky researchers will come up with tomorrow?). And while you do know that security of your product improves with each identified and fixed vulnerability, you don't know where you are on the scale - there is, alas, no scale.

Perception, on the other hand, is more measurable and more manageable: you can listen to your customers and prospects to see what they think of your security - and this will, in the absence of your marketing material, largely depend on their knowledge of (1) your product's vulnerabilities and (2) publicized incidents involving your product. The former tend to find their way to public vulnerability lists - and your customers - quite frequently, but the latter are trickier: I'm confident that an overwhelming majority of break-ins are never even detected (typically: data theft), much less publicized. And for those that are detected, is the exploited vulnerability ever determined at all? As a result, most publicized incidents that are actually linked to vulnerable products involve self-replicating exploits (e.g., worms) that ended up in malware researchers' labs. The point being that we generally only know about incidents involving specific remotely exploitable vulnerabilities suitable for worm-like malware. Others remain unknown.

The Hidden Danger

Developing methods for limiting exploitability is of great value. Sandboxes, ASLR, DEP and other exploit mitigation techniques do drive the cost of exploitation up, and do so for a wide range of different vulnerability types. This is good.

There is, however, a hidden danger in focusing on limiting exploitability instead of exterminating vulnerabilities. Let me illustrate with a (maybe not so) hypothetical dialog:

You: "There is a vulnerability in your product."
Vendor: "Yes, but it's not exploitable."
You: "How do you know it's not exploitable?"
Vendor: "Well, it hasn't been exploited yet."
You: "How do you know it hasn't been exploited yet?"
Vendor: "We're not aware of any related incidents. Are you?"
You: "Uhm..., no, but..."
Vendor: "Case closed."

The danger here is that replacing a determinable value (existence of a known vulnerability) with a non-determinable one (absence of exploits/incidents) when deciding whether to fix a security flaw may result in a better perception of security ("We don't know of any incidents, therefore there aren't any") but worse reality. Why? Because it opens the door to reasoning that it doesn't make sense to fix vulnerabilities if there's a second layer of defense that blocks their exploitability. And then, once someone finds a hole in this second layer of defense, there will be an array of vulnerabilities to choose from for mounting a successful attack.

So let's hope that software vendors don't have to choose between limiting exploitability and exterminating vulnerabilities, but can actually do both. (Google's Chris Evans replied to Brad on Twitter, "Unfortunately, modern security best practice is BOTH 1) sandbox and 2) find/fix bugs aggressively".) I know from personal experience that Adobe is actively finding and fixing bugs in their products in addition to making exploitation harder, so I think Brad is being misunderstood there. But as far as hacking exploit-mitigation mechanisms goes, a flaw in such a mechanism is a vulnerability like any other: it allows an adversary to do something that should have been impossible. As such, it is unreasonable to expect that these vulnerabilities will not be researched, discussed, privately reported, published on mailing lists, sold and bought, and silently or publicly exploited just like the others - depending on who finds them.

P.S.: On a somewhat related note, I will present an out-of-sandbox remote exploitation of a binary planting vulnerability in Adobe Reader X at RSA Conference US on March 1st. There will be no remote shares, no WebDAV and no double-clicking on files, just pure browser-aided code execution. We notified Adobe about this bug in early January, so it won't be alive for long.


Monday, January 9, 2012

Is Your Online Bank Vulnerable To Currency Rounding Attacks?

A Hefty Discount Your Bank Never Intended To Give You

In the 12+ years of doing penetration tests against various critical environments, we've seen numerous online banking servers and found all sorts of vulnerabilities in them, including bugs that allowed users to take money from other users' accounts, make unlimited overdrafts on their own accounts, transfer negative amounts to other accounts (effectively sucking other users' money from these accounts) and even - frightening as it may sound - create unlimited amounts of money out of thin air. These types of critical bugs in financial systems are not nearly as rare as one would hope; in fact, our experience shows that at least one such defect exists in your online bank if it hasn't been thoroughly and regularly reviewed by different researchers with a lot of knowledge about banking vulnerabilities and attacks. (As a rule of thumb, if your bank's penetration testing call for proposals contains the word "scanning" or specifies the number of IP addresses to test instead of focusing on application code and logic, their online systems are likely hosting various logical security flaws.)

While such vulnerabilities can allow an online thief to take a lot of money from the bank's customers or from the bank itself, doing so would positively qualify as a punishable criminal act in most jurisdictions.

Legally Exploitable Security Flaws

There exist, however, other types of logical security flaws in financial systems whose "exploitation" can be perfectly legal. One such flaw lies in the way rounding is done in currency exchange; it allows users to effectively influence the exchange rates to such an extent that, for instance, they get 100 EUR (Euro) for 100 USD (US Dollar) even though 100 EUR would normally cost approximately 130 USD.

To our knowledge, this type of flaw was first described in the 2001 paper titled Asymmetric Currency Rounding by M'Raïhi, Naccache and Tunstall of Gemplus, and later in a 2008 Corsaire paper Breaking the bank - Vulnerabilities in numeric processing within financial applications. We've been regularly detecting it in the online banking systems we review. Here's how it works.

Currency Exchange 101

Banks usually have two exchange rates for currency exchange: the buying rate is the rate at which the bank will buy the foreign currency, while the selling rate is the rate at which it will sell it. These rates can be expressed either as direct quotations (how many units of local currency equal one unit of foreign currency) or indirect quotations (how many units of foreign currency equal one unit of local currency). According to Wikipedia, most countries use direct quotations, while indirect quotations are used in the Euro zone and a few other places.

In this article, we'll be using numerals with decimal comma and a point as a thousands separator. For example, one thousand will be written as 1.000,00. (Apologies to the readers accustomed to different notations.)

Suppose a European bank is providing the following indirect quotation for US Dollar exchange rates:
  • Buying rate: 1,388 (you pay 1,388 USD for 1,00 EUR)
  • Selling rate: 1,364 (you get 1,364 USD for 1,00 EUR)
Obviously, exchange rates effectively provide a margin for the bank even if no separate commission is charged for the exchange. For example, if you buy 1.364,00 USD for 1.000,00 EUR and immediately sell them back to the bank, you get 982,71 EUR, making a loss of 17,29 EUR.
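To make the margin concrete, here is a quick JavaScript sketch of the round trip above. The rates and the `round2` helper are ours, purely for illustration - they are not actual bank code:

```javascript
// Round-trip of the example: buy USD with EUR, then sell the USD back.
// Indirect quotation - rates are USD per 1,00 EUR (values from the text).
const buyingRate = 1.388;  // bank buys your USD: EUR = USD / 1.388
const sellingRate = 1.364; // bank sells you USD: USD = EUR * 1.364

// Final amounts are rounded to two decimals, as the bank would do
const round2 = (x) => Math.round(x * 100) / 100;

const usdBought = round2(1000.00 * sellingRate); // 1.364,00 USD
const eurBack = round2(usdBought / buyingRate);  // 982,71 EUR
const loss = round2(1000.00 - eurBack);          // 17,29 EUR kept by the bank
console.log(usdBought, eurBack, loss);
```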


Banks typically operate with two decimal digits, and while various calculations (e.g., interest or currency exchange) are done with higher precision, the final results are rounded to two decimal digits. If the bank is fair, this rounding is done to the nearest number, i.e., such that 7,237 is rounded up to 7,24 while 2,221 is rounded down to 2,22. Whether a tie-breaking case such as 1,505 is rounded to 1,50 or 1,51 is of little relevance to this article.

How Much Do I Get For One Cent?

While we've seen that the above exchange rates provide a sensible business model for the bank in a typical case, security researchers are usually more interested in atypical cases, especially various corner cases. One such corner case is the lowest possible value, typically 0,01 - which in many currencies means one cent.

What happens if you want to convert one Euro cent to US Dollars? The European online bank will use the selling rate:

0,01 EUR * 1,364 USD/EUR = 0,01364 USD

After rounding, you will get 0,01 USD. Obviously you make a loss (a US cent is worth less than a Euro cent in our example) - nominally insignificant, but actually amounting to a 27% loss.

But what if you reverse the direction and convert one USD cent to EUR? This time, the buying rate is used:

0,01 USD / 1,388 USD/EUR = 0,0072046109510086455331412103746398 EUR

After rounding, you get 0,01 EUR. Now this is better: the rounding just made you an instant profit of 38,8%. Of course, the profit is nominally tiny and probably won't get you a cup of coffee anywhere in the world, but the procedure can be executed repeatedly in a fully automated way. By doing so a hundred times, one whole US Dollar can be exchanged for one whole Euro. A hundred thousand times, and one gets 1.000,00 EUR for 1.000,00 USD (a profit of roughly 280,00 EUR). Even if one first has to buy the 1.000,00 USD at the same bank (at the bank's selling rate) for 733,14 EUR, the total profit equals 266,86 EUR. And this profit comes from a legitimate use of a service provided by the bank, on the bank's terms and in its controlled environment.
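A minimal JavaScript simulation of this repeated one-cent exchange - again just a sketch, with our `round2` helper standing in for the bank's two-decimal rounding and the rates taken from the example above:

```javascript
// Convert one US cent to EUR, 100.000 times, with the bank's rounding
const buyingRate = 1.388;                        // EUR = USD / 1.388
const round2 = (x) => Math.round(x * 100) / 100; // round to two decimals

const perOp = round2(0.01 / buyingRate); // 0,0072... EUR rounds up to 0,01 EUR
let eurTotal = 0;
for (let i = 0; i < 100000; i++) {
  eurTotal += perOp;
}
console.log(round2(eurTotal)); // 1.000,00 EUR obtained for 1.000,00 USD
```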

But How Many Cents Can I Sell?

Suppose an online bank can accept a hundred requests per second (a highly conservative assumption - large banks have to support thousands of requests per second in peak periods), an automated script can make 100*60*60*24 = 8.640.000 one-cent exchange requests per day, making a profit of about 23.000 EUR every 24 hours if this online bank provides currency exchange services around the clock.

Many online banks allow users to upload prepared packages with multiple requests, possibly thousands of them, which the server will then process in a batch. Obviously, this can further expedite the procedure.

Another Example: Japanese Yen

Our above example used two currencies of similar value, but the same method works with any two currencies, for instance Euro and Japanese Yen (JPY). Suppose the bank's buying rate for Yen is 97,5949. Exchanging Yen for Euros would then work like this:

0,50 JPY / 97,5949 JPY/EUR = 0,0051232185288370601332651603721096 EUR

After rounding, you get 0,01 EUR (a profit of over 95%).

Maximizing The Yield

By now it is clear that the profit comes from rounding errors. When working with two decimal digits, the maximum rounding error for any individual operation is 0,005. The closer the converted value is to 0,005 (but above it), the higher the profit when the number is rounded up to 0,01 - and it is easier to maximize the profit using currencies with larger exchange rates (see the Yen example above), as they allow for finer tuning when trying to get as close to 0,005 as possible.

This also means that a single exchange operation can, at best, produce a profit of 0,005 units of the target currency, so obtaining a non-negligible nominal profit would certainly require hundreds of thousands of operations.
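To see how a larger exchange rate enables finer tuning, here is a hypothetical JavaScript helper (our own construction, not taken from any of the cited papers) that finds the smallest one-cent-granularity amount still rounding up to 0,01 in the target currency:

```javascript
const round2 = (x) => Math.round(x * 100) / 100;

// Smallest amount (in hundredths of the source currency) whose converted
// value still rounds up to 0,01 of the target currency
function smallestProfitable(rate) {
  for (let cents = 1; ; cents++) {
    if (round2(cents / 100 / rate) >= 0.01) return cents / 100;
  }
}

console.log(smallestProfitable(1.388));   // USD -> EUR: 0,01 USD suffices
console.log(smallestProfitable(97.5949)); // JPY -> EUR: 0,49 JPY
```

Note that for the Yen rate the helper even finds a slightly smaller amount (0,49 JPY) than the 0,50 JPY used in the example, with a per-operation gain very close to the maximal 0,005 EUR.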

Is This Really Legal?

The essential property of this "exploitation" is that you're really not doing anything to bypass any security mechanisms the bank has put in place; you're simply using the provided functionality the way it was intended. It is inconceivable that exchanging 1,00 USD would be legal, and exchanging 0,02 USD would also be legal, but entering 0,01 in the same form field would not be.

Note that the legality of this can change significantly if one has to actively "help" the system accept the low amount. Even if there is just a client-side JavaScript validation in place preventing the entry of too-small values - which we all know can be easily bypassed - the law (at least European law) could interpret such bypassing as illegal.
Another catch may also be in the Terms and Conditions that a particular bank sets for their users. These could include provisions for annulling certain transactions, e.g., transactions by individual users not done manually through the intended user interface or erroneous transactions due to logical errors in the code.

We've had many conversations with our banking customers who were affected by this logical flaw, some of them losing more than 100.000 EUR or USD, and their final conclusion was always the same: they had no legal grounds for prosecuting the "attacker" and would have to simply swallow the loss and add countermeasures to the code to prevent it from happening again.


After becoming aware of this vulnerability, banks can employ various countermeasures to eliminate it:
  1. Charging a conversion fee. This is the simplest countermeasure; even if the fee is really tiny (e.g., 0,01 EUR), it quickly takes away all the profit provided by the rounding.
  2. Setting a minimum conversion amount. Much like most physical exchange bureaus refuse to accept small change, an online currency exchange can refuse to exchange anything less than one unit of the "larger" currency - say, 1 EUR in the above examples. Note that exchanging an amount larger than 1 EUR can still provide up to 0,005 EUR profit from rounding error but this profit is more than neutralized by the difference between buying and selling rates.
  3. Always rounding to the bank's benefit: While arguably unfair - and possibly disallowed by local banking regulations - this is also an effective countermeasure.
  4. Limiting the number of operations per user: Obviously, harvesting a significant profit requires hundreds of thousands of exchange operations. An online bank can limit the allowed number of exchange operations for any individual user to, say, a thousand per day, without causing any problems to users.
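To illustrate why a minimum conversion amount (countermeasure 2) works, here is a quick JavaScript check of a 1,00 EUR round trip under the example rates - again only a sketch using our `round2` helper:

```javascript
const buyingRate = 1.388, sellingRate = 1.364; // example rates from above
const round2 = (x) => Math.round(x * 100) / 100;

// Round-trip 1,00 EUR: any rounding gain (at most 0,005) is dwarfed
// by the buying/selling spread at this amount
const usd = round2(1.00 * sellingRate); // 1,36 USD
const eur = round2(usd / buyingRate);   // 0,98 EUR - a net loss for the attacker
console.log(usd, eur);
```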

Are There Any Other Legally Exploitable Security Flaws?

There are many. We decided to publish a blog post about this one because it's been publicly known for a decade, is being actively exploited, and we keep finding it in online banks we're reviewing. Many other flaws are not publicly known and it would be a disservice to our existing and future customers to reveal them.

Final Thoughts

This flaw is a great example of how things can go wrong when you take a simple, well-specified physical process such as manual currency exchange - and turn it into an online service. This particular case introduced two critical changes: first, the online service allows hundreds of thousands of operations in a short period of time, while a physical exchange office could probably not make more than two exchanges per minute. And second, the online system accepts the smallest of small change and doesn't find it either suspicious or annoying that someone wants to exchange one cent over and over again - it just dutifully executes its code.

Actually, having looked at many different logical flaws we find in online banking systems, a lot of them exist only because the transformation of processes from the physical banking world into the complexity of an online application introduced many new and unwanted possibilities - possibilities that would have been alarming, suspicious or at least unacceptably irritating to the people involved in the same physical process.

And they say people are the weakest link in security.

Wednesday, January 4, 2012

Google Chrome HTTPS Address Bar Spoofing

The Fixed Bounty Bug Revealed

Last month Google awarded our security analyst Luka Treiber a Chromium Security Reward for a high-severity vulnerability fixed in version 16 of the Chrome web browser. Due to Chrome's automatic update mechanism we expect most browsers to be updated by now, which seems to be supported by StatCounter's Global Stats for January 2012, where Chrome 16 is the only Chrome version in the chart. Luka found this issue in Chrome 14 and we confirmed that Chrome 15 was vulnerable as well. This document presents the vulnerability in detail.

HTTPS-related vulnerabilities tend to rate high on the severity scale as they allow attackers to make web site visitors - even the savvy ones who check for HTTPS evidence such as "https://" at the beginning of the URL - believe they are visiting a legitimate web site when they're actually on a malicious look-alike site. And when users trust a malicious web site, they will give it their credentials and personal data.

This bug is a nice addition to some other HTTPS-related vulnerabilities our security researchers have found in the past: bypassing HTTPS security warnings in Internet Explorer and Netscape Navigator (yes, we were breaking HTTPS back in 1999!), and Poisoning Cached HTTPS Documents in Internet Explorer. None of these break the cryptographic model or implementation supporting HTTPS; rather, they exploit the integration of SSL/TLS into the browser and the browser's presentation of security-relevant information to the user. These are often the weakest link in the security HTTPS is supposed to provide.

Chrome's Trigger-Happy Address Bar

Chrome 14/15 renders some web page redirections inconsistently, in a way that allows an attacker to perform address bar spoofing, resulting in an HTTPS URL being displayed alongside content from some other web site. Let's take, for example, a simple JavaScript redirection page located at http://source/sample.html that looks as follows:
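(The snippet itself did not survive in this copy of the post; judging from the description that follows, sample.html would have contained a trivial redirection script along these lines - a reconstruction, not the original code:)

```html
<script>
  // sample.html: plain JavaScript redirection to the target page
  location.href = "http://target/";
</script>
```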


When observing the above code in execution, all parts of Chrome's user interface behave consistently: as the address bar changes from the URL of the source page to the target URL, the page content changes with it promptly (at once). However, by altering the script like this:
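(Again reconstructed from the description below rather than preserved from the original post, the altered script would navigate through the view-source: scheme instead:)

```html
<script>
  // altered redirection: go through the view-source: scheme and an
  // HTTP 302 redirector script, desynchronizing address bar and content
  location.href = "view-source:http://source/redir.php";
</script>
```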


one can see that the address bar starts changing before the displayed page content is replaced. So for a split second the address bar displays the new address http://target while the DOM still belongs to the old address http://source/sample.html. The given example is composed of two tricks:

  1. Apparently the "view-source:" prefix causes the asynchronous behavior between the address bar and the DOM.
  2. redir.php is an HTTP 302 redirection script used to redirect from view-source:http://source/redir.php to http://target, thus removing the "view-source:" prefix from the URL (the goal is to spoof the address bar into showing a legitimate domain, as will be shown later). After this redirection, the browser no longer displays the source code but renders the HTML of http://target.

All demonstrations provided below employ the above two tricks with http://target replaced by a Gmail login page address. However, to stop the redirection at the exact moment when the inconsistency between the address bar and the page contents is being exhibited, a further trick is used. Each of the three demonstrations employs a different trick to "freeze" the state of inconsistency for a long enough time for a user to enter his credentials.

Demonstration #1: Redirection To HTTPS On Port 80

In the first demonstration, is used as the target of the redir.php script to block the redirection for about 30 seconds. This occurs because Chrome is instructed to establish an HTTPS connection with a server on an HTTP port, which results in a 30-second hopeless handshake attempt between an SSL/TLS client and an HTTP server.

While the redirection using view-source: is being fruitlessly attempted (and Chrome is already showing in the address bar), a fake Gmail login page is displayed from the attacker's web site. If a username and password are entered and the submit button is pressed, the data gets sent to (the "malicious" web site for the purpose of this blog post). Tests have shown that any unresponsive server script can be used instead of, or even an invalid target URL such as view-source:http://xxxxxxxx.

Let's look at a video of this demonstration:

Demonstration #2: Using Google's Open Redirector

In the second demonstration, we avoid the suspicious port 80 in the URL by using an open redirector on https://* as the desired spoof URL. The demonstration is analogous to the previous one, except that an additional redirect is used after the script, and that is replaced with a download-throttling page that delays the loading for an arbitrary amount of time. This time the URL that the redirection gets stuck on is[...].

Let's see:

Demonstration #3: Delaying Redirection With A Blocked Modal Dialog

In the third and final demonstration, we manage to spoof the exact URL of the Gmail login page. To do that, a blocked modal dialog is used to stop the redirection, instead of the "wrong port trick" with or the download-throttling slow.php employed in the previous demonstrations. A precondition in this case, for whatever reason, is that the user has come to the malicious web site from the spoofed-to-be host (in our case or another host in its 2nd level domain) before the address bar spoofing redirection begins. This precondition can easily be fulfilled using any one of the open redirectors on servers.

An intentionally blocked modal dialog (blocked by Chrome's pop-up blocker) is used to stop the redirection after the address bar has already been updated with the new URL but the page content hasn't been refreshed yet. As in the previous demonstrations, a fake login form is displayed, waiting for the user to provide his credentials. Curiously, any requests resulting from the form submission are queued while the modal dialog is blocked. We solved this with a self-destruct script inside the modal dialog that executes after the submit button has been pressed, thus releasing the said queue and allowing the credentials to be sent to the attacker's server.

Let's see how this looks (notice the blocked dialog icon on the right side of the address bar):

Practical Exploitability

While it would certainly be possible to trick some users into logging in to Gmail (or any other targeted web site) through links provided by a 3rd party malicious web site, most users are likely to visit their favorite web sites directly (by typing the host name) or using browser bookmarks. In this case, as long as the initial URL is non-HTTPS, a man-in-the-middle attacker (i.e., the guy sitting next to you in the coffee shop with free WiFi) can actively inject malicious code into the initial web site's HTML to exploit this vulnerability and present a legitimate-looking fake HTTPS login form to the user.

Finally, the hawk-eyed among you may have noticed that the spoofing is not entirely perfect: the icon to the left of the spoofed URL is a grey planet, as is typical for HTTP addresses - not a green lock, as is typical for valid HTTPS addresses. However, while many users may notice the presence of "https" and consider it a guarantee of trust, they are less likely to notice the absence of a lock icon - especially since the visual identification of HTTPS URLs differs between web browsers.

So could this vulnerability realistically be used for defeating HTTPS in actual attacks? We think so, and so does Google - and we're glad this bug is now fixed. As more and more web sites depend on the cryptographic security of HTTPS, this bug is a reminder that HTTPS is much more than just cryptography.