Taking security seriously, while failing spectacularly at it


It’s become nearly impossible to state “we take security [of our users] seriously” with a straight face.

This week three different companies suffered three unrelated incidents (two of them sharing the same underlying root cause), and all three resorted to the same damage-control clichés.

Here is the MSFT mea culpa on the MD5 collision debacle covered earlier on this blog:

“Microsoft takes the security of its customers seriously”

Here is LinkedIn, responding to 6.5M unsalted password hashes floating around– as an aside, nothing like starting the day discovering your own password hash in a contraband file shared by colleagues before receiving a data breach notification from the clueless web site in question:

“We take the security of our members very seriously.”

Here is the online dating site eHarmony spinning the leak of 1.5 million passwords:

“The security of our customers’ information is extremely important to us, and we do not take this situation lightly.”

With companies taking security this seriously, it is hard to explain why attackers are faring so well. One can imagine the Joker asking: “Why so serious?”

It is difficult to know from the outside how these vulnerabilities came about. (Full disclosure: the author is an ex-MSFT employee, but was not involved in terminal services and possesses no information about the incident beyond what is available from public sources.) Were they unknown to the organization? Is that because that aspect of the system was never reviewed by qualified personnel? Or was it missed because the reviewers assumed this was an acceptable solution? Or perhaps the issue was properly flagged by the security team but postponed/neglected by an engineering organization single-mindedly focused on schedule? It is safe to say there will be a post-mortem at MSFT. If LinkedIn and eHarmony have a culture of accountability and learning from mistakes, perhaps they will also conduct their own internal investigations. But the useful information and frank conclusions reached in such exercises rarely leave the safe confines of the company– and that is assuming the leadership can resist the temptation to turn that post-mortem into an exercise in whitewashing.

So we can only draw conclusions based on public information, without the benefit of mitigating circumstances to absolve the guilty. And these conclusions are not flattering for any of the companies involved.

LinkedIn and eHarmony committed a major design flaw by storing passwords as unsalted SHA1 hashes. This is a clear-cut case of bad judgment to which even the CISSP-friendly excuse of “we follow industry best practices” does not apply– the importance of salting to frustrate dictionary attacks is well known. For comparison, the crypt scheme used for storing UNIX passwords dates to the 1980s and includes salting. (Both sites could also have chosen intentionally slow hashes– for example, by iterating the underlying cryptographic primitive– to further reduce the rate of password recovery, but this is a relatively minor omission in comparison.) The fact that both sites experienced a breach resulting in access to passwords is almost less of a PR black-eye, given the frequency of such mishaps across the industry, than the skeletons in the closet revealed as a result of the breach.
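To make the point concrete, here is a minimal sketch of salted, intentionally slow password hashing– purely illustrative, with no claim about either site’s actual stack. It uses PBKDF2 from the Python standard library; the iteration count is an arbitrary placeholder:

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 100_000) -> tuple[bytes, bytes]:
    """Return (salt, digest). A fresh random salt per password defeats
    precomputed dictionary/rainbow-table attacks, and the iteration count
    slows down every guess the attacker makes by the same factor."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha1", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes,
                    iterations: int = 100_000) -> bool:
    """Recompute with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha1", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)
```

Note that even with SHA1 as the underlying primitive, the salt forces attackers to crack each leaked hash individually instead of attacking the whole table at once, and the iterations multiply the cost of every guess.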

By comparison, the MSFT incident is far more nuanced and complex than the LinkedIn/eHarmony failure to grasp basic cryptography. It was not a single mistake, but a pattern of persistently poor judgment that gave the Flame authors the ability to create a forged signing certificate:

  • Using a subordinate CA chaining up to the MSFT product root for terminal server licensing. The product root is extremely valuable because it is trusted for code signing. As explained by Ryan Hurst, there was already another MSFT root that would have been perfectly suitable for TS purposes.
  • Leaving the code-signing EKU in the certificate, even though terminal server licensing appears to have no conceivable scenario requiring this feature.
  • Attempting to mitigate the excessive privilege from the aforementioned mistake by using critical extensions, such as the Hydra OID. (As an aside: “Hydra” was the ancient codename for terminal services in NT4 days.) This would have worked if Windows XP correctly implemented the concept of critical extensions– if the application does not recognize one, it must reject the certificate as invalid. On XP, Authenticode certificates still pass muster even in the presence of unknown critical extensions– and XP accounts for almost one third of Windows installations, given the relatively “recent” age of Windows 7 and the abysmal uptake of its predecessor Vista.
  • Issuing certificates using the MD5 algorithm in 2010 (!!)– nearly two years after a dramatic demonstration that certificate forgery is feasible against CAs using MD5, and six years after the publication of the first MD5 collisions.
  • Using a sequential serial number for each certificate issued, which was critical to enabling the MD5 collision attack because it allowed attackers to predict exactly what the CA would be signing.
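The critical-extension rule from the third bullet can be sketched in a few lines. This is an illustrative model of the RFC 5280 processing rule, not XP’s actual verification code; the recognized-OID set is arbitrary and the unknown OID below is a stand-in, not the real Hydra OID:

```python
# Illustrative model of RFC 5280 critical-extension processing.
# The recognized set is a hypothetical verifier's; real OIDs shown.
RECOGNIZED_OIDS = {
    "2.5.29.15",  # keyUsage
    "2.5.29.19",  # basicConstraints
    "2.5.29.37",  # extendedKeyUsage
}

def extensions_acceptable(extensions: list[tuple[str, bool]]) -> bool:
    """extensions: (oid, critical) pairs parsed from a certificate.
    Per RFC 5280, an unrecognized extension marked critical MUST cause
    the certificate to be rejected; unrecognized non-critical
    extensions are simply ignored."""
    for oid, critical in extensions:
        if critical and oid not in RECOGNIZED_OIDS:
            return False  # the rejection Windows XP Authenticode skipped
    return True
```

A conforming verifier would thus have rejected a forged certificate carrying the unrecognized critical licensing extension– exactly the check XP failed to perform.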
Individually, any of these five blunders could be attributed to routine oversight. By itself any one mistake would have been survivable, as long as the remaining defenses were correctly implemented. But collectively botching all of them undermines the credibility of the “taking security seriously” PR spin.
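The last blunder has essentially a one-line fix. As a sketch (not a claim about any CA’s actual implementation), drawing serial numbers from a CSPRNG instead of a counter denies the attacker the predictability the chosen-prefix MD5 collision attack depends on:

```python
import secrets

def random_serial_number() -> int:
    """Generate an unpredictable certificate serial number.
    159 random bits fit in 20 bytes with the sign bit clear, keeping
    the DER INTEGER positive. With the serial no longer predictable,
    the attacker cannot precompute the exact bytes the CA will sign,
    which the MD5 collision construction requires."""
    return secrets.randbits(159)
```

Randomizing the serial is cheap insurance: it adds entropy to the signed portion of the certificate that the attacker cannot control or anticipate.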
CP
