Future-proofing software updates: Global Platform and lessons from FBiOS (part III)


[continued from part II]

(Full disclosure: this blogger worked on Google Wallet 2011-2013)

Caveats

Global Platform constrains the power of manufacturers and system operators to insert back-doors into a deployed system after the fact. But there are some caveats to cover before we jump to any conclusions about how that could have altered the dynamics of the FBI/Apple skirmish. There are still ways a rogue card-manager— or more precisely “someone who knows one of the issuer security-domain keys” in GP terminology, a role often outsourced to trusted third-parties after deployment— can try to subvert security policy. These are not universal attacks; they depend on implementation details outside the scope of GP.

Card OS vulnerabilities

Global Platform is agnostic about what operating system is running on the hardware and, for that matter, about the isolation guarantees provided by the OS for restricting each application to its own space. If that isolation boundary is flawed and one application can steal or modify data owned by another, there is room for the issuer to work around GP protections. While there is no way to directly tamper with the internal state of an existing application A, the issuer can install a brand-new application B that exploits the weak isolation to steal private data from A. Luckily most modern card OSes do provide isolation between mutually distrustful applications, along with limited facilities for interaction, provided both sides opt in to exchanging messages with another application. For example, JavaCard-based systems apply standard JVM restrictions around memory access, type-safety and automatic memory management.
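To make that opt-in model concrete, here is a minimal sketch of how a JavaCard applet can expose functionality across the applet firewall. The Shareable interface and getShareableInterfaceObject hook are standard JavaCard; the applet and interface names are invented for illustration, and a real applet would keep session state in transient memory and whitelist callers by AID.

```java
import javacard.framework.*;

// Opt-in contract: only methods declared on a Shareable interface can
// be invoked across the applet firewall by another applet.
interface StatusService extends Shareable {
    boolean isUnlocked();
}

public class VaultApplet extends Applet implements StatusService {
    private boolean unlocked = false; // sketch only; use transient memory in real code

    private VaultApplet() {
        register();
    }

    public static void install(byte[] bArray, short bOffset, byte bLength) {
        new VaultApplet();
    }

    public void process(APDU apdu) {
        if (selectingApplet()) {
            return;
        }
        // ... authenticate the user here and set 'unlocked' on success ...
    }

    public boolean isUnlocked() {
        return unlocked;
    }

    // The card runtime calls this when another applet requests our
    // shareable object; returning null refuses the request. A careful
    // implementation checks clientAID against a whitelist first.
    public Shareable getShareableInterfaceObject(AID clientAID, byte parameter) {
        return this;
    }
}
```

A client applet obtains this object via JCSystem.getAppletShareableInterfaceObject() and can then invoke only the methods declared on StatusService; everything else stays behind the firewall.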

Granted, implementation bugs in these mechanisms can be exploited to breach containment. For example, early JavaCard implementations did not even perform the full range of bytecode checks expected of a typical JVM. Instead they called for a trusted off-card verifier to weed out malformed byte-code prior to installing the application. This is a departure from the security guarantees provided by a standard desktop implementation of the JVM. In theory the JVM can handle hostile byte-code by performing the necessary static and run-time checks to maintain the integrity of the sandbox. (In reality JVM implementations have been far from perfect in living up to that expectation.) The standard excuse for the much weaker guarantees in JavaCard goes back to hardware limitations: performing these additional checks on-card adds to the complexity of a JVM implementation that must run in the limited CPU/memory/storage environment of the card. The problem is, off-card verification is useless against a malicious issuer seeking to install deliberately malformed Java bytecode with the explicit goal of breaking out of the VM.

It’s worth pointing out that this is not a generic problem with card operating systems, but a specific case of cutting corners in some versions of a common environment. Later generations of JavaCard OS have increasingly hardened their JVM and reduced dependence on off-card verification, to the point that at least some manufacturers claim installing applets with invalid byte-code will not permit breaking out of the JVM sandbox.

Global state shared between applications

Another pitfall is shared state across applications. For example, GP defines a card-global PIN object that any application on the card can use for authenticating users. This makes sense from a usability perspective: it would be confusing if every application on the card had its own PIN and users had to remember whether they are authenticating to the SSH app or the GPG app, for instance. But the downside of the global PIN is that applications installed with the right privilege can change it. That means a rogue issuer can install a malicious app designed to reset that PIN, undermining an existing application which relied on that PIN to distinguish authorized access.
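Here is a sketch of what relying on the global PIN looks like from an applet’s perspective. GPSystem.getCVM and the CVM verify call come from the standard org.globalplatform API; the applet itself and its APDU handling are invented for illustration. The final comment marks the exact trust assumption a rogue issuer can violate.

```java
import javacard.framework.*;
import org.globalplatform.*;

// Sketch: gate a sensitive operation on the GP card-global PIN (CVM).
// Convenience has a price: any applet the issuer installs with the
// CVM-management privilege can update() or reset this same PIN.
public class GlobalPinGate extends Applet {
    private GlobalPinGate() {
        register();
    }

    public static void install(byte[] bArray, short bOffset, byte bLength) {
        new GlobalPinGate();
    }

    public void process(APDU apdu) {
        if (selectingApplet()) {
            return;
        }
        byte[] buf = apdu.getBuffer();
        byte len = (byte) apdu.setIncomingAndReceive();
        CVM globalPin = GPSystem.getCVM(GPSystem.CVM_GLOBAL_PIN);
        if (globalPin.verify(buf, ISO7816.OFFSET_CDATA, len,
                             CVM.FORMAT_ASCII) != CVM.CVM_SUCCESS) {
            ISOException.throwIt(ISO7816.SW_SECURITY_STATUS_NOT_SATISFIED);
        }
        // PIN accepted -- but "accepted" only means the *current* global
        // PIN matched, whatever the last privileged applet set it to.
    }
}
```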

There is a straightforward mitigation: each application can instead use its own private PIN object for authorization checks, at the expense of usability. (Factoring out PIN checks into an independent application accessed via inter-process communication is not trivial. A malicious issuer could replace that applet with a back-doored version that always returns “yes” in response to any submitted PIN, while keeping the same application identifier. Some type of authenticated channel is required.) In many scenarios this is already inevitable due to the limited semantics of the global PIN object, including mobile payments such as Apple Pay and Google Wallet, which support multiple interfaces and retain PIN verification state across a reset of the card.
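In JavaCard terms the mitigation is the standard OwnerPIN class, which lives entirely inside the owning applet’s firewall context, out of reach of any issuer privilege. A minimal sketch, with a placeholder initial PIN that would really be set during personalization:

```java
import javacard.framework.*;

// Sketch: an applet-private PIN. OwnerPIN enforces its own try counter
// and belongs to this applet alone; other applets, and the issuer,
// have no handle to read, update or unblock it.
public class PrivatePinApplet extends Applet {
    private OwnerPIN pin;

    private PrivatePinApplet() {
        pin = new OwnerPIN((byte) 5, (byte) 8); // 5 tries, up to 8 bytes
        byte[] initial = { 0x31, 0x32, 0x33, 0x34 }; // placeholder "1234"
        pin.update(initial, (short) 0, (byte) initial.length);
        register();
    }

    public static void install(byte[] bArray, short bOffset, byte bLength) {
        new PrivatePinApplet();
    }

    public void process(APDU apdu) {
        if (selectingApplet()) {
            return;
        }
        byte[] buf = apdu.getBuffer();
        byte len = (byte) apdu.setIncomingAndReceive();
        if (!pin.check(buf, ISO7816.OFFSET_CDATA, len)) {
            ISOException.throwIt(ISO7816.SW_SECURITY_STATUS_NOT_SATISFIED);
        }
        // PIN verified against state only this applet controls.
    }
}
```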

Hardware vulnerabilities

There is another way OS isolation can be defeated: by exploiting the underlying hardware. Some of these attacks involve painstakingly going after the persistent storage, scraping data while the card is powered off and all software checks are out of the picture. Others are more subtle, relying on fault-injection to trigger controlled errors in the implementation of security checks, for example by using a focused laser beam to induce bit flips. Interestingly enough, these exploits can be aided by installing new, colluding applications on the card designed to create a very specific condition (such as a particular memory layout) susceptible to that fault. For example, this 2003 paper describes an attack involving Java byte-code deliberately crafted to take advantage of random bit-flip errors in memory. In other words, while issuer privileges do not directly translate into 0wning the device outright, they can facilitate exploitation of other vulnerabilities in hardware.
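For flavor, here is a toy desktop-Java rendition of that memory-spraying idea (the 2003 paper being Govindavajhala and Appel’s “Using Memory Errors to Attack a Virtual Machine”). This sketch merely detects a flipped pointer bit; the actual exploit goes further, arranging two classes with mismatched field layouts so that a single flipped bit yields a forged, attacker-controlled reference.

```java
// Legal, benign Java: fill the heap with linked object pairs so that a
// random hardware fault is overwhelmingly likely to land in one of our
// pointers, then loop until an invariant that can only be broken by a
// corrupted reference fails.
class Node {
    Node buddy; // each node in a pair points back at the other
}

public class FaultSpray {
    public static void main(String[] args) {
        Node[] spray = new Node[1 << 20];
        for (int i = 0; i < spray.length; i += 2) {
            Node a = new Node();
            Node b = new Node();
            a.buddy = b;
            b.buddy = a;
            spray[i] = a;
            spray[i + 1] = b;
        }
        while (true) {
            for (Node n : spray) {
                // Always true on fault-free hardware; a bit flip in any
                // of the ~2 million pointers above breaks it.
                if (n.buddy.buddy != n) {
                    System.out.println("memory fault landed in a pointer");
                    return;
                }
            }
        }
    }
}
```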

Defending against Apple-gone-rogue

Speaking of Apple, there is a corollary here for the FBiOS skirmish. Manufacturers, software vendors and cloud-service operators all present a clear danger to the safety of their own customers. These organizations can be unknowingly compromised by attackers interested in going after customer data; this is what happened to Google in 2009 when attackers connected to China breached the company. Or they can be compelled by law enforcement, as in the case of Apple, and called on to attack their own customers.

“Secure enclave,” despite the fancy name, is home-brew proprietary technology from Apple without a proven track record or anywhere near the level of adversarial security research aimed at smart-cards. While the actual details of the exploit used by the FBI to gain access to the phone are still unknown, one point remains beyond dispute: Apple could have complied with the order. Apple could have updated the software running in the secure enclave to weaken previously enforced security guarantees on any phone of that particular model. That was the whole reason this dispute went to court: Apple argued that the company was not required to deliver such an update, without ever challenging the FBI’s assertion that it was capable of doing so.

Global Platform mitigates that scenario by offering a different model for managing multiple applications on a trusted execution environment. If disk-encryption and PIN verification were implemented in GP-compliant hardware, Apple would not face the dilemma of subverting that system after the fact. Nothing in Global Platform permits even the most-privileged “issuer” to arbitrarily take control of an existing application already installed. Apple could even surrender the card-manager keys for that particular device to the FBI, and it would not help the FBI defeat PIN verification, absent some other exploit against the card OS or hardware.

SE versus eSE

The strange part: there is already a Global Platform-compliant chip included in newer generation iPhones. It does not look like a “card.” That word evokes images of plastic ID cards with specific dimensions and rounded corners, standardized as ISO 7810 ID-1. While that may have been the predominant form-factor for secure execution environments when GP specifications emerged, these days such hardware comes in different shapes and incarnations. On mobile devices, it goes by the name embedded secure element— another “SE” that has no relationship to the proprietary Apple secure enclave. For all intents and purposes, the eSE is the same type of hardware one would find on the chip & PIN enabled credit-cards being issued by US banks today to improve the security of payments. In fact mobile payments over NFC were the original driver for shipping phones equipped with an eSE, starting with Google Wallet. While Google Wallet (now Android Pay) later ditched the eSE entirely, Apple picked up the same hardware infrastructure, even the same manufacturer (NXP Semiconductors), for its own payments product.

The device at the heart of the FBI/Apple confrontation was an iPhone 5C, which lacks an eSE; Apple Pay is only supported on the iPhone 6 and later iterations. But even on these newer models, eSE hardware is not used for anything beyond payments. In other words, there is already hardware present to help deliver the result Apple is seeking— being in a position where the company cannot break into a device after the fact. But it sits on the sidelines. Will that change?

In fairness, Apple is not alone in under-utilizing the eSE. When this blogger worked on Google Wallet, general-purpose security applications of the eSE were an obvious next step after mobile payments. For example, the original implementation of disk encryption in Android was susceptible to brute-force attacks: it used a key directly derived from a user-chosen PIN/password for encrypting the disk. (It did not help that the same PIN/password would be used for unlocking the screen all the time, all but guaranteeing that it had to be short.) Using the eSE to verify the PIN and output a random key would greatly improve security, in the same way that using a TPM with a PIN check improves the security of disk encryption compared to relying on a user-chosen password directly. But entrenched opposition from wireless carriers meant Android could not count on access to the eSE on any given device, much less a baseline of applications present on the eSE. (Applications can be pre-installed or burnt into the ROM mask at the factory, but that would have involved waiting for a new generation of hardware to reach market.) In the end Google abandoned the secure element, settling instead for the much weaker TrustZone-backed solution for general-purpose cryptography.
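To make the contrast concrete, here is a hedged sketch in desktop Java. The PBKDF2 half uses the standard JCE API; the SecureElementKeyStore interface is hypothetical, standing in for whatever applet or API would expose the eSE’s PIN-gated key release.

```java
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class DiskKeyDerivation {

    // Weak design: the disk key is a pure function of the PIN, so anyone
    // holding the ciphertext can brute-force a short PIN offline.
    static byte[] keyFromPinOnly(char[] pin, byte[] salt) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(pin, salt, 4096, 256);
        SecretKey key = SecretKeyFactory
                .getInstance("PBKDF2WithHmacSHA1")
                .generateSecret(spec);
        return key.getEncoded();
    }

    // Hypothetical eSE interface: the PIN is checked on-card against a
    // hardware-enforced retry counter, and only then does the card
    // release a random, high-entropy key. Nothing derivable from the
    // PIN alone ever leaves the secure element.
    interface SecureElementKeyStore {
        byte[] verifyPinAndReleaseKey(byte[] pin); // fails permanently after N tries
    }

    // Stronger design: offline brute-force gains nothing, because the
    // disk key is not a function of the PIN at all.
    static byte[] keyFromSecureElement(SecureElementKeyStore se, byte[] pin) {
        return se.verifyPinAndReleaseKey(pin);
    }
}
```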

CP
