When defense-in-depth failed: Heartbleed

A good benchmark of foresight in security is not avoiding vulnerabilities but having defense-in-depth measures in place to minimize the impact of unforeseen flaws. (Or, failing that, one can quibble over the consolation prize of who was fastest to respond and deploy patches.) In the recent case of the openssl Heartbleed vulnerability, such success stories were mostly absent. Few seemed to speak up and shrug off the vulnerability by saying, in effect, that they were immune because of some brilliant feature implemented years ago as a precaution, without an inkling of the specific vulnerability it would one day guard against. Given how pervasive openssl is, not just on web servers but also in traditional client-side software including mobile operating systems, most of the chuckling seemed to come from the Microsoft camp, continuing their “hat trick” of having steered clear of three epic SSL bugs in 2014: “goto fail” courtesy of Apple, the GnuTLS certificate-validation bug and now Heartbleed.

Everyone else appeared to be in the same boat: catastrophic failure caused by a straightforward implementation bug. Nothing fancy: no subtle ROP/ASLR-bypass techniques required to exploit it, completely platform independent, no massive fuzzing runs required to spot the bug in the first place. In fact, it was so “shallow” in source code that two different groups found it almost simultaneously.

“Show me the exploit”

It did not take long before the armchair philosophizing started around exactly what could be done. There was no question that the vulnerability could be trivially exploited to extract chunks of memory from the affected system. (Thankfully this avoided a throwback to 1990s-style “that vulnerability is purely theoretical” arguments.) Naturally speculation moved to the second-order effects: what exactly could be in those regions of memory graciously returned by openssl to anyone who asked. That sensitive information could be spilled was proved relatively easily. True to form for high-profile security mistakes, Yahoo! reprised its usual role with a dramatic demonstration of how other people’s usernames and clear-text passwords could be recovered from its servers. While damaging and no doubt a big deal for the users involved, these disclosures were relatively contained in scope. A lot more uncertainty lingered around whether the SSL server’s private keys could be extracted using Heartbleed.


This is where some initially put their hopes in “accidental mitigations”: fortunate properties of the heap manager that just might protect private keys because of their position in memory relative to leaked regions. This is what CloudFlare initially argued, complete with pretty pictures of heap layouts visualizing where keys are loaded in memory and how far they sit from the regions allocated for buffers. For a change, CloudFlare was also willing to put its money where its idle speculation was. The company created a challenge website [note: certificate revoked] running the vulnerable nginx/openssl version, inviting anyone to attempt extraction of the SSL keys. That did not take very long: in less than a day multiple users had extracted the supposedly protected private key. (Only much later did it become clear why reality did not agree with CloudFlare’s theories. At least one code path in openssl creates a temporary copy of a prime factor of the modulus, which is sufficient to factor the modulus and recover the private key, and later frees that copy, returning it to the heap without zeroing it out. This is also a poignant reminder of why it is important to properly clear sensitive data from memory when it is no longer needed. openssl had a mechanism in place for doing precisely that, which happens to be skipped in this case.)
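To see why leaking a single prime factor is fatal, consider a toy example (tiny illustrative numbers of my own choosing, not anything from the actual challenge; real moduli are 2048 bits or more):

```python
# Given the public key (n, e) and a prime factor p recovered from
# leaked heap memory, the full private key follows immediately.
n, e = 1022117, 17          # toy public modulus (1009 * 1013) and exponent
p = 1009                    # prime factor assumed leaked via Heartbleed

q = n // p                  # the other factor is one division away
assert p * q == n
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)         # private exponent (Python 3.8+ modular inverse)

msg = 42
assert pow(pow(msg, e, n), d, n) == msg   # full RSA round trip works
```

Once the attacker holds d, they can impersonate the server or decrypt recorded non-forward-secret sessions; the math above is the entire gap between “some memory leaked” and “the key is gone.”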


A more interesting argument was advanced by the content-distribution network Akamai. [Full disclosure: this blogger’s current employer is an Akamai customer] Akamai claimed that due to a custom memory-allocation scheme employed in its modified version of openssl, private keys would be protected from Heartbleed attacks. Unlike the CloudFlare argument based on heap layouts, Akamai was not claiming this was a matter of luck. Instead, we were told, it was a deliberate defense-in-depth feature designed to protect cryptographic keys:

We replace the OPENSSL_malloc call with our own secure_malloc. Our edge server software sets up two heaps. Whenever we’re allocating memory to hold an SSL private key, we use the special “secure heap.” When we’re allocating memory to process any other data, we use the “normal heap.”

The story might have ended there, as that rare success story of security-by-design in the field… until Akamai decided to contribute the patch to the open-source project. Once other people started looking at the code, it became obvious that the modification did not work. Worse, the culprit was an amateurish mistake, demonstrating a complete failure to understand how RSA private-key operations are implemented using the Chinese-remainder-theorem (CRT) representation of keys. Akamai’s patch protected only some of the CRT components; the unprotected ones were still sufficient to recover the key. It did not help that, aside from failing to achieve its stated security objective, the patch contained implementation errors and required multiple iterations to get working.
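To make the failure mode concrete: even if the primes and the private exponent d are all kept in a protected heap, leaking a single CRT exponent such as dp = d mod (p−1) is enough to factor the modulus, via a well-known gcd trick. A sketch with toy numbers of my own (not Akamai’s actual code or parameters):

```python
from math import gcd

# Toy RSA parameters, small enough to follow by hand.
p, q = 1009, 1013
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))
dp = d % (p - 1)            # CRT exponent; suppose only this leaked

# For any a: a^(e*dp) ≡ a (mod p), because e*dp ≡ 1 (mod p-1),
# but the same congruence almost never holds mod q as well.
# Hence gcd(a^(e*dp) - a, n) exposes the factor p.
a = 2
recovered = gcd(pow(a, e * dp, n) - a, n)
assert recovered == p
```

Similar shortcuts exist for the other CRT components, which is why an allocator-based defense has to protect the entire key structure, not just the fields that look important.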

Key-isolation: old is new again

After the dust settled and the benefit of hindsight kicked in, much better ideas for improving the situation were put forward. One example is emulating the privilege separation in SSH: create an out-of-process key agent that holds the private keys and exposes a “decryption oracle,” without processing hostile input directly from incoming connections.
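A minimal sketch of that idea (hypothetical toy key and protocol; a real deployment would run the agent as a separately compiled, hardened binary under a different user, so the key material never appears in the network-facing address space at all):

```python
import multiprocessing as mp

# Toy RSA key for illustration only. In this single-file sketch both
# processes can technically see the constants; the point is the runtime
# separation, which a real agent enforces by being a separate program.
N, E, D = 1022117, 17, 180017

def key_agent(conn):
    # Runs in its own process and answers decryption requests.
    # A memory-disclosure bug in the network-facing process
    # (a la Heartbleed) cannot reach this process's heap.
    while True:
        ciphertext = conn.recv()
        if ciphertext is None:
            break
        conn.send(pow(ciphertext, D, N))   # act as a decryption oracle

if __name__ == "__main__":
    server_end, agent_end = mp.Pipe()
    mp.Process(target=key_agent, args=(agent_end,), daemon=True).start()

    # The network-facing side handles only the public key and the
    # narrow oracle endpoint.
    ct = pow(42, E, N)
    server_end.send(ct)
    assert server_end.recv() == 42
    server_end.send(None)
```

The attacker who leaks the front-end’s memory still gets a decryption oracle for as long as they keep a foothold, but loses it the moment the bug is patched; with the key itself leaked, the damage is permanent until the certificate is revoked and reissued.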

Strangely missing in action were vendors of hardware security modules, or HSMs. In one sense there is no need to invent any novel technology to better protect cryptographic keys. There is a tried-and-true approach, commercially available since at least as far back as 1990: store keys in dedicated cryptographic hardware, never exposing them directly to the main application. SSL has long been one of the markets targeted by HSM vendors, who often rebrand their products as “SSL accelerators.” These early attempts emphasized performance instead of security, trying to capitalize on the perceived costs of SSL when server CPUs struggled to keep up with RSA private-key operations. But they never made much headway in that market, instead remaining confined to special-purpose applications such as banking or certificate authorities. Heartbleed would have been the perfect backdrop for a customer case study, highlighting a website that had deployed HSMs and could now assert confidently that its private keys were not compromised; unlike Akamai, this mythical website would have been correct.

