Downside to security standards: when vulnerable is better than uncertified


Are security standards effective? The answer depends to a great extent on the standards process and certification process in question. On the one hand, objective third-party evaluations with clearly spelled-out criteria represent a big improvement over vendors' own marketing fiction. In a post-Snowden world it has become common to throw around meaningless phrases such as "military grade" and "NSA proof" to capitalize on increased consumer awareness. At the same time there is a gradual dawning that "certified" does not equal secure. For all the profitable consulting business PCI generates, that particular compliance program could not save Target and Home Depot from massive data breaches. One could argue that this is a symptom of PCI being improperly designed and implemented, rather than any intrinsic problem with information security standards per se. After all, it is the canonical example of fighting last year's war: there is still verbiage in the standard around avoiding WEP and single-DES, as if the authors were addressing time-travelers from the 1990s. Narrowing our focus to more "reputable" standards such as FIPS or Common Criteria, which have long histories and fewer examples of spectacular failures, the question becomes: are consumers justified in getting a warm and fuzzy feeling on hearing the words "FIPS-certified" and "CC EAL5"?

When vendors are left to their own devices

On the one hand, it is easy to point out examples of comparable products where a model not subject to a certification process has demonstrable weaknesses that would have been easily caught during testing. Black Hat briefings recently featured a presentation on extracting 3G/4G cryptographic secrets out of a SIM card using elementary differential power analysis. For anyone working on cryptographic hardware (and SIM cards are just smart-cards manufactured in a compact form factor designed to fit into a mobile device) this was puzzling. These attacks are far from novel; the earliest publications date back to the late 1990s. Even more remarkable, off-the-shelf equipment was sufficient to mount the attack against modern hardware: no fancy lab with exotic equipment required. The attack represents a complete break of the SIM security model. These cards have one job: safeguard the secret keys that authenticate the mobile subscriber to the wireless network. When those keys can be extracted and moved to another device, the SIM card has effectively been "cloned" and has failed at its one and only security objective. How did the manufacturer miss such an obvious attack vector?
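
To make that concrete, below is a minimal sketch of correlation power analysis (CPA), the statistical workhorse behind differential power analysis. Everything here is hypothetical scaffolding rather than the presenters' actual tooling: `traces` stands for power measurements captured while the card computes, `plaintexts` for the known input bytes fed to it, and the leakage model is plain Hamming weight. (Real attacks usually target a nonlinear step, such as the S-box output inside the AES-based Milenage algorithm, for a sharper signal.)

```python
import numpy as np

def hamming_weight(values: np.ndarray) -> np.ndarray:
    """Bits set in each byte: the classic model of power drawn when a
    value crosses a bus or lands in a register."""
    return np.unpackbits(values.astype(np.uint8)[:, None], axis=1).sum(axis=1)

def recover_key_byte(traces: np.ndarray, plaintexts: np.ndarray) -> int:
    """traces: (n_traces, n_samples) array of power measurements.
    plaintexts: (n_traces,) uint8 array of known input bytes."""
    centered = traces - traces.mean(axis=0)
    norms = np.linalg.norm(centered, axis=0) + 1e-12
    best_guess, best_corr = 0, 0.0
    for guess in range(256):
        # Predicted leakage if this key-byte guess were correct.
        model = hamming_weight(plaintexts ^ guess).astype(float)
        model -= model.mean()
        # Pearson correlation of the prediction against every sample point;
        # only the right guess lines up with the real leakage and spikes.
        corr = np.abs(model @ centered) / ((np.linalg.norm(model) + 1e-12) * norms)
        if corr.max() > best_corr:
            best_guess, best_corr = guess, corr.max()
    return best_guess
```

Given a few thousand traces from an unprotected implementation, a loop like this recovers the key one byte at a time; countermeasures such as masking and noise injection exist precisely to destroy that correlation spike.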

It turns out that vanilla SIM cards are not subject to much in the way of security testing. Unless, that is, they are also going to be used for something more critical: mobile payments over NFC. Commonly known as UICCs, these units, which are also designed to hold credit-card data, must go through a series of rigorous evaluations defined by EMVCo. Examples of this QA include passive measurement of side-channel leaks as well as active attacks that deliberately induce faults in the hardware by manipulating the power supply or zapping the chip with laser pulses. Amateur mistakes such as the one covered in the Black Hat presentation are unlikely to survive that level of scrutiny.
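
To give a flavor of what those fault-injection tests are probing for, here is one classic software countermeasure, sketched with the pyca/cryptography library purely as an illustration (a real card implements this in firmware). The Bellcore attack showed that a single glitched RSA-CRT signature is enough to factor the private key, so a careful implementation verifies its own output before releasing it:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

def sign_with_fault_check(key: rsa.RSAPrivateKey, message: bytes) -> bytes:
    pad, digest = padding.PKCS1v15(), hashes.SHA256()
    signature = key.sign(message, pad, digest)
    try:
        # Re-verify with the public half before anything leaves the device;
        # a fault induced mid-computation yields a signature that fails here.
        key.public_key().verify(signature, message, pad, digest)
    except InvalidSignature:
        # Releasing the faulty value is exactly what leaks the private key.
        raise RuntimeError("fault detected during signing; output withheld")
    return signature

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
sig = sign_with_fault_check(key, b"subscriber authentication challenge")
```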

Frozen in time

From an economic perspective this makes sense. In the absence of external standards, manufacturers have little incentive to implement a decent security program. They are still free to make any number of marketing assertions, which is cheaper than actually doing the work or paying a qualified, independent third party to verify those assertions. It is tempting to conclude that more mandatory certification programs can only help bring transparency to this state of affairs: more standards applied to a broader range of products. But there is an often-ignored side effect: inflexible application of certification programs can actually decrease security. Mindless insistence on only using certified products can introduce friction to fixing known vulnerabilities in fielded systems. This is subtly distinct from the problem of delaying time-to-market or raising the cost of innovation. That part goes without saying: introducing a new checklist as a prerequisite to shipping certainly extends the development schedule, while empowering third-party certification labs as gatekeepers paid by the vendor is effectively a tax on new products. Meanwhile some customer with an unsolved problem is stuck waiting while the wheels of the certification process turn. That outcome is certainly suboptimal, but at least the customer is not stuck with a defective product with a known exposure.

By contrast, consider what happens when an actual security flaw is discovered in a product that has already been through a certification process, bears the golden stamp of approval and has been deployed by thousands of customers. Certification systems are not magical; they cannot be expected to catch all possible defects. (Remember the early optimism equating "PCI compliance" with a guarantee against credit-card breaches?) A responsible vendor can act on the discovery and improve the next version of the product to address the weakness. Given that most products are designed with some type of update mechanism, the vendor may even be able to ship a retroactive fix that existing customers can apply to units in the field to mitigate the vulnerability, perhaps a firmware update or a recommended configuration change.

There is a catch: the new version may not have the all-important certification. At least, not right out of the gate. Whether changes reset certification and require another pass varies by standard. Some are more lenient and scale the process to the extent of the changes: they might exempt certain "cosmetic" improvements or offer a fast-tracked "incremental" validation against an already-vetted product. But in all cases there is now a question about what happens to the existing certification status, and that question must be resolved before the update can be delivered to customers.

Certifiably worse-off

That dilemma was impressed on this blogger about 10 years ago on the Windows security team. When research was published describing new cache-based side-channel attacks against RSA implementations, it prompted us to evaluate side-channel protections for the existing Windows stack. This was straightforward for the future version of the cryptography stack, dubbed CNG: slated for Vista, that code was still years from shipping, and there was no certification status to worry about.

Fixing the existing stack (CAPI) proved to be an entirely different story. Based on the BSAFE library, the code running on all those XP and Windows Server 2003 boxes out there already boasted FIPS 140 certification as a cryptographic module. The changes in question were not cosmetic; they went to the heart of the multi-precision integer arithmetic used in RSA. It became clear that changing that code any time soon with a quick update, such as the monthly "patch Tuesday" release, would be a non-starter. While we could get started on FIPS recertification with one of the qualified labs, there was no telling when that process would conclude. The most likely outcome would be a CAPI update stuck in FIPS limbo. That would be unacceptable for customers in regulated industries (government, military, critical infrastructure) who are required by policy to operate systems in FIPS-compliant configurations at all times. Other options were no better. We could exempt some Windows installations from the update, for example using conditional logic that reverts to the existing code paths on systems configured in strict FIPS mode. But that would paradoxically disable a security improvement on precisely those systems whose owners care most about security.
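
In pseudocode, that rejected fallback would have looked something like the sketch below. The names are loudly hypothetical (none of this is actual CAPI code), and base blinding stands in for the side-channel hardening: exponentiating a randomized value decorrelates the computation's timing and memory-access behavior from the secret-dependent input.

```python
import secrets

def rsa_private_op(c: int, d: int, e: int, n: int, strict_fips_mode: bool) -> int:
    """Hypothetical sketch: c is the input, (d, n) the private key, e the
    public exponent; strict_fips_mode mirrors the system-wide policy flag."""
    if strict_fips_mode:
        # Certified code path, left intact so the FIPS 140 validation still
        # applies -- and therefore still carrying the known side-channel
        # weakness. (Plain pow() is just a stand-in for the old code.)
        return pow(c, d, n)
    # Hardened, not-yet-recertified path using base blinding:
    # compute (c * r^e)^d = c^d * r mod n, then strip the factor r.
    r = secrets.randbelow(n - 2) + 2     # coprime to n with overwhelming odds
    blinded = (c * pow(r, e, n)) % n
    result = pow(blinded, d, n)
    return (result * pow(r, -1, n)) % n  # modular inverse needs Python 3.8+
```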

All things considered, delaying the update in this example carried relatively low risk. The vulnerability in question was a local information disclosure, only exploitable in specific scenarios where hostile code coexists on the same OS with trusted applications performing cryptography. While that threat model became more relevant later with the sandboxed web-browser designs pioneered by Chrome, in the mid-2000s the most likely scenario would have been terminal-server installations shared by multiple users.

But the episode serves as a great demonstration of what happens when blind faith in security certifications is taken to its logical conclusion: customers prefer running code with known vulnerabilities, as long as it carries the right seal of approval, over a new version that lives under a cloud of "pending certification."

Designing for updates

The root cause of these situations is not limited to a rigid interpretation of standards that requires running compliant software 100% of the time, even when it has a demonstrable flaw. Security certifications themselves perpetuate a flawed model based on blessing static snapshots. Most certifications pass judgment on a very specific version of a product, evaluated in a very specific configuration, frozen in time. In a world where threats and defenses constantly evolve, one would expect well-designed products to change in lockstep. Even products traditionally considered "hardware", such as firewall appliances, smart-cards and HSMs, contain a significant amount of software that can be updated in the field. Arguably the capability to deliver improvements in response to new vulnerabilities is itself an important security consideration, one that is undervalued by criteria focused on point-in-time evaluations.

So the solution is twofold:

  • More flexible application of existing standards, to accommodate the period of uncertainty after a new update becomes available but before it has received official certification. There is already a race against the clock between defenders trying to patch their systems and attackers trying to exploit known vulnerabilities, and the existence of a new update from the vendor tips off even more would-be adversaries, enlarging the second pool. There is no reason to further handicap defenders by making them wait on the vagaries of a third-party evaluation process.
  • A wider time horizon for security standards. Instead of focusing on point-in-time evaluations, standards have to incorporate the possibility that a product will require updates in the field. That makes sense even for static evaluations: software-update capabilities themselves often create vulnerabilities that can be exploited to gain control over a system. But more importantly, products and vendors need a story around how they can respond to new vulnerabilities and deliver updates (so they do not end up as another Android) without venturing into the territory of "non-certified" implementations; the sketch after this list illustrates one slice of that story.
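
What might that story look like in code? Below is a minimal, hypothetical sketch of an update gate: the device accepts new firmware only if it carries a valid vendor signature over both the image and a monotonically increasing version number, which also blocks rollback to older, vulnerable releases. The Ed25519 calls come from the pyca/cryptography library; `flash_and_reboot` and the wire format are made up for illustration.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)

def flash_and_reboot(image: bytes) -> None:
    pass  # placeholder for the device-specific write-and-reset sequence

def apply_update(image: bytes, signature: bytes, version: int,
                 installed_version: int, vendor_key: Ed25519PublicKey) -> bool:
    if version <= installed_version:
        # Anti-rollback: refuse "updates" to older, vulnerable firmware.
        return False
    try:
        # Only the vendor's signing key may authorize new code; the signed
        # blob covers the version number so it cannot be spliced onto
        # a different image.
        vendor_key.verify(signature, version.to_bytes(4, "big") + image)
    except InvalidSignature:
        return False
    flash_and_reboot(image)
    return True

# Example wiring (vendor side): sign version || image with the private key.
vendor_priv = Ed25519PrivateKey.generate()
image = b"\x90" * 1024  # placeholder firmware blob
sig = vendor_priv.sign((2).to_bytes(4, "big") + image)
assert apply_update(image, sig, version=2, installed_version=1,
                    vendor_key=vendor_priv.public_key())
```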

CP
