[continued from part I]
Smart-cards as application platforms differ from ordinary consumer devices in one crucial way: they are locked down against the “user.” Unlike a PC or smart-phone, which derives its value from its owner’s freedom to choose from a large ecosystem of applications, cards are optimized for security: running specific applications selected by the issuer, with the highest level of assurance. While the operating system of modern cards is powerful enough to support multiple applications at once (and even have them running at the same time, although not in traditional multi-tasking fashion), it is not up to the user to decide which applications are installed.
Global Platform in a nutshell
Global Platform formalizes that notion by defining an access-control model around card-management operations such as loading, running and uninstalling applications. It also defines a family of authentication protocols for the appropriate entities to assert those privileges. At a high level, GP relies on the notion of security domains. The issuer security domain, or ISD, is the most privileged one. In earlier versions of Global Platform only the issuer could install applications; later iterations generalized this to allow delegating such privileges to supplementary security domains, or SSDs.
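The privilege model above can be sketched in a few lines. This is an illustrative toy, not the GP specification: all class and method names here are invented, and the key comparison stands in for a real secure-channel handshake.

```python
# Toy model of the Global Platform privilege hierarchy: the ISD is the most
# privileged domain, and it may delegate install rights to SSDs it creates.
# Names and structure are hypothetical, for illustration only.

class SecurityDomain:
    def __init__(self, name, keys, can_install=False):
        self.name = name
        self.keys = keys              # symmetric key set guarding this domain
        self.can_install = can_install

class Card:
    def __init__(self, isd_keys):
        # The issuer security domain (ISD) holds every privilege by default.
        self.isd = SecurityDomain("ISD", isd_keys, can_install=True)
        self.domains = {"ISD": self.isd}
        self.apps = {}                # AID -> owning domain

    def authenticate(self, domain_name, keys):
        # Stand-in for a GP secure-channel handshake against that domain's keys.
        return self.domains[domain_name].keys == keys

    def create_ssd(self, auth_domain, name, keys, delegate_install):
        # Only the ISD may create supplementary security domains (SSDs),
        # optionally delegating card-management privileges to them.
        if auth_domain != "ISD":
            raise PermissionError("only the ISD can create SSDs")
        self.domains[name] = SecurityDomain(name, keys, can_install=delegate_install)

    def install(self, domain_name, keys, app_aid):
        # Installing an app requires authenticating to a domain that holds
        # the install privilege; there is no unauthenticated path.
        if not self.authenticate(domain_name, keys):
            raise PermissionError("authentication failed")
        if not self.domains[domain_name].can_install:
            raise PermissionError("domain lacks install privilege")
        self.apps[app_aid] = domain_name
```

In this model a party holding only SSD keys can install into its own domain, but neither it nor anyone without card-manager keys can touch the rest of the card.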
In practice that means installing new apps requires authenticating to the ISD, which in turn involves knowledge of a secret key unique to each card. (As a historical note, when Global Platform was being developed, the hardware found in smart-cards was too anemic for public-key cryptography. This is why the standard is primarily based on symmetric-key primitives, and often outdated ones at that, such as triple-DES. Recent updates to GP have been introducing public-key based variants of the core protocols.) ISD keys, colloquially “card-manager keys,” are tightly controlled by the organization distributing the hardware, or outsourced to a third party specializing in card management at scale. With the exception of development samples, it is difficult to find hardware shipped with “default” keys known to the end-user.
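That symmetric-key authentication starts with a specific command on the wire. A minimal sketch of the first APDU of an SCP02 secure-channel handshake, INITIALIZE UPDATE, assuming the field layout as commonly documented; exact key-version conventions vary between cards:

```python
# Hedged sketch: opening APDU of a Global Platform SCP02 handshake.
# The card's reply (key diversification data, sequence counter, card
# challenge, card cryptogram) and the follow-up EXTERNAL AUTHENTICATE
# are omitted; without the card's 3DES keys the handshake fails there.
import os

def initialize_update_apdu(key_version: int, host_challenge: bytes) -> bytes:
    assert len(host_challenge) == 8
    return bytes([
        0x80,         # CLA: GP proprietary command class
        0x50,         # INS: INITIALIZE UPDATE
        key_version,  # P1: key-set version (0x00 = card default)
        0x00,         # P2: key identifier
        0x08,         # Lc: 8-byte host challenge follows
    ]) + host_challenge

apdu = initialize_update_apdu(0x00, os.urandom(8))
```

Every session begins this way: without knowledge of the card-manager keys, the subsequent cryptogram verification fails and no management command is accepted.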
This state of affairs can help simplify the security story; it’s difficult to subvert isolation between applications when you can’t install a malicious app to attack others in the first place. But it also complicates and distorts market dynamics around deployment. Google Wallet is an instructive example. The so-called “embedded secure element” on Android devices happens to be GP-compliant smart-card hardware, permanently attached to the phone. It inspired an ugly skirmish between Google and wireless carriers over control of the issuer role. In fact GP has been struggling for years to expand this inflexible model and bootstrap a rich developer ecosystem where multiple vendors can coexist, offering applications on cards controlled by a different party.
But for our purposes, there is a more interesting security property of the GP management model: even the all-powerful issuer role is greatly constrained in what it can do.
There is no “root” here
Here is a short list of what does not exist in Global Platform:
- Reading out the contents of card storage. All data associated with card applications resides in permanent storage, traditionally EEPROM, or flash in more recent devices. GP defines a specific set of structured information that can be retrieved from the card: the list of applications installed, the card unique ID, etc. But there is no command to retrieve a chunk of data from a specific EEPROM offset. There is not even a command to retrieve the executable code for an application after installation.
- Making arbitrary modifications to card storage. Again, GP defines specific structured operations that modify card contents (install a new application, create a security domain, modify keys…), but there is no provision in the protocol for writing arbitrary data at a specific offset.
- Similar restrictions apply to RAM. There is no equivalent of UNIX /dev/mem or /dev/kmem for freely reading and writing memory.
- There are no debugging facilities for getting information about the internal state of card applications, much less “attaching a debugger” in the conventional sense to a running application to control its execution.
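To make the contrast concrete, the “structured information” mentioned in the first bullet is retrieved with GET STATUS, and this is about as much visibility as GP grants. A sketch, assuming the command layout as commonly documented:

```python
# Hedged sketch: GET STATUS, one of the few structured queries GP defines.
# It enumerates installed applications by AID; there is no analogous
# command to dump raw EEPROM contents or installed application code.

def get_status_apdu() -> bytes:
    return bytes([
        0x80,        # CLA: GP proprietary command class
        0xF2,        # INS: GET STATUS
        0x40,        # P1: list applications and supplementary SDs
        0x00,        # P2: first/all occurrences
        0x02,        # Lc: length of search criteria
        0x4F, 0x00,  # criteria: AID tag with empty value = match all
    ])
```

The response is a list of AIDs and life-cycle states; application data and code never appear in it.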
Future-proofing against malicious updates
These may be viewed as limitations on platform functionality, but looked at another way they constitute an interesting defense against the “rogue issuer” threat. Suppose we have a card with a general-purpose encryption application already provisioned. This app is responsible for generating cryptographic keys within the secure card environment and leveraging them to unlock private user information (such as an encrypted disk) conditional on the user entering a correct PIN. Let’s posit that the card issuer started out “honest:” they installed a legitimate copy of the application on the card when they first handed it over to the customer.
Later this issuer experiences a change of heart. Perhaps compelled by law enforcement, or due to a change of business model, they now seek to undermine the security provided by the application and extract those secret keys without knowing the user PIN. If the platform in question is Global Platform compliant, they would be out of luck going through standard mechanisms: there is no command in Global Platform to scrape data associated with an application. Nor can the issuer selectively tamper with application logic, for example skipping the PIN check or sneaking in a new code path that causes the application to cough up secret keys. Cooperative or not, the issuer would have to find another avenue, such as exploiting some weakness in the tamper-resistance of the hardware: an expensive and time-consuming attack that has to be repeated for each device, as opposed to software attacks which scale with minimal effort to any number of devices.
There is no “update” either
A common question that comes up when threat-modelling rogue issuers is software updates. That was the avenue the FBI pursued with Apple: update the operating system with a subverted version that allows an unbounded number of incorrect PIN entries. Even if we grant the premise that GP does not allow the issuer to reach into application state to read secrets or tamper with the code, why can’t the issuer simply replace the legitimate application with a back-doored version designed to leak secrets or otherwise misbehave? Because Global Platform has no concept of “updating” an application. One can delete an application instance and launch a new one under the same ID, but that first step erases all data associated with the instance. For the chip & PIN example above, that means all of the private information associated with that credit card is gone. It is not possible to replace only the code while retaining data.
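The closest GP comes to an “update” is the DELETE command followed by a fresh INSTALL under the same AID. A sketch of the DELETE APDU, assuming the layout as commonly documented; the P2 convention for cascading deletes varies across GP versions:

```python
# Hedged sketch: GP DELETE command. The target AID is wrapped in a TLV;
# deleting the instance destroys all of its associated data, so a
# subsequent reinstall under the same AID starts from a blank slate.

def delete_apdu(aid: bytes) -> bytes:
    return bytes([
        0x80,            # CLA: GP proprietary command class
        0xE4,            # INS: DELETE
        0x00,            # P1
        0x80,            # P2: delete object and related objects
        len(aid) + 2,    # Lc: TLV-wrapped AID follows
        0x4F, len(aid),  # tag 0x4F = AID, then length
    ]) + aid
```

There is no variant of this command that preserves instance data, which is exactly why delete-and-reinstall cannot serve as a stealthy code swap.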
That is why updating card applications in the field is rare, aside from the logistical challenges of delivering updates customized to each card from the ISD. Systems that require “upgrading” in the conventional sense need to plan for it in advance by using a split design. Typically the functionality is divided into two pieces: a very small application holds secrets, cryptographic keys and internal state, while a much larger application contains the business logic for leveraging that state. These two pieces communicate using a suitable IPC mechanism exposed by the environment (for example, JavaCard defines shareable interface objects). The second half can be “updated” by removing it and installing a new version, because it does not contain any state; the first half is not affected. Still, any replacement is bound by the same interface agreement between the two. If the interface allows the business-logic app to request that a message be decrypted, the replacement app can invoke the same functionality. But it will not magically gain a new capability, such as asking the other application to spit out encryption keys.
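The split design can be modeled outside JavaCard as well. A sketch in plain Python, with invented names, playing the role of a shareable interface between the two applets; the XOR “cipher” is a stand-in for real card cryptography:

```python
# Illustrative (non-JavaCard) model of the split design: the state-holding
# half exposes only a narrow interface, analogous to JavaCard shareable
# interface objects. All names here are hypothetical.
from abc import ABC, abstractmethod

class CryptoService(ABC):
    """The agreed interface: request decryption, never export the key."""
    @abstractmethod
    def decrypt(self, ciphertext: bytes) -> bytes: ...

class KeyVault(CryptoService):
    """Small, stable applet holding keys and state; never replaced."""
    def __init__(self, key: int):
        self.__key = key  # private; the interface exposes no accessor

    def decrypt(self, ciphertext: bytes) -> bytes:
        # Toy XOR "cipher" standing in for real on-card crypto.
        return bytes(b ^ self.__key for b in ciphertext)

class BusinessLogicV1:
    """Large, replaceable applet; holds no state of its own."""
    def __init__(self, vault: CryptoService):
        self.vault = vault

    def open_record(self, blob: bytes) -> bytes:
        return self.vault.decrypt(blob)

class BusinessLogicV2(BusinessLogicV1):
    """A 'reinstalled' replacement: bound by the same interface, it can
    still request decryption but has no call that yields the key itself."""
```

Swapping V1 for V2 leaves the vault and its key untouched, and even a malicious V2 is limited to the operations CryptoService chooses to expose.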
That said, there are limitations and edge cases where having issuer privileges does grant an advantage in attacking a preexisting application, although far from guaranteeing success. These will be the topic of the next post.
[Continued: caveats & conclusion]