Security researcher Karsten Nohl recently announced the discovery of vulnerabilities in SIM cards, to be presented at the upcoming BlackHat conference. Since exact details of the problem are scant, this post will cover background material on smart cards (of which SIM cards are a special case) and their management model. Nohl’s summary hints at the existence of two independent problems:
- A cryptographic key used for provisioning code to the SIM card can be brute-forced. Recovering the key allows installing new applications.
- An implementation error in the smartcard operating system running inside the SIM card allowed one application to access data belonging to another.
The first problem is nominally the realm of Global Platform. GP is a standard dating back to the 1990s describing how to carry out content-management operations on a smart card. (Here we will use “smart card” in the generic sense of a tamper-resistant execution environment, regardless of its form factor. By this definition the plastic cards used for chip & PIN, electronic passports, the smaller SIM cards used in mobile phones to connect to the cellular network, embedded secure elements featured in certain Android phones, USB tokens etc. all contain a “card” environment.) Content management includes installing and deleting applications, creating new security domains (roughly corresponding to the notion of user accounts on a desktop operating system) and assigning cryptographic keys and privileges to these domains.

Unlike commodity desktop OSes, card platforms are inherently locked down: these operations are only allowed to well-defined roles such as the “card issuer.” The end-user holding the card rarely has that power. Instead a centralized service called a “trusted services manager,” or TSM, holds the cryptographic keys used to authenticate to the card OS and carry out these privileged operations. GP defines a secure messaging protocol, prescribing how an authenticated channel is established and laying out the format in which individual messages addressed to the card are authenticated and encrypted.

GP also has a notion of a card manager application, which performs a role akin to a software installer. Typically the TSM authenticates to the card manager using the issuer security domain (ISD) keys, informally known as card manager keys, and uploads new application code over the secure channel. Until GP 2.2 the card manager was the only way to install new code; GP 2.2 introduced the notion of supplementary security domains with the privilege to manage code on the card.
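To make the handshake concrete, here is a simplified Java sketch of the mutual-authentication shape of a GP secure channel. This is not the real protocol: actual SCP02 derives per-session keys from a card sequence counter and computes cryptograms as retail MACs over specific padded inputs. The key material, names and cryptogram construction below are all illustrative.

```java
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.security.SecureRandom;
import java.util.Arrays;

public class SecureChannelSketch {
    // Simplified cryptogram: encrypt hostChallenge || cardChallenge under the
    // shared static key. Real SCP02 first derives session keys and uses a
    // retail MAC; this keeps only the challenge-response shape.
    static byte[] cryptogram(byte[] key24, byte[] chalA, byte[] chalB) throws Exception {
        Cipher c = Cipher.getInstance("DESede/ECB/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key24, "DESede"));
        byte[] in = new byte[16];
        System.arraycopy(chalA, 0, in, 0, 8);
        System.arraycopy(chalB, 0, in, 8, 8);
        return c.doFinal(in);
    }

    public static void main(String[] args) throws Exception {
        byte[] isdKey = "ISD-keys-24-bytes-long!!".getBytes(); // shared TSM<->card secret
        SecureRandom rng = new SecureRandom();
        byte[] hostChal = new byte[8];  rng.nextBytes(hostChal); // sent in INITIALIZE UPDATE
        byte[] cardChal = new byte[8];  rng.nextBytes(cardChal); // returned by the card

        // Card proves possession of the key; host verifies the card cryptogram.
        byte[] fromCard = cryptogram(isdKey, hostChal, cardChal);
        boolean hostAccepts = Arrays.equals(fromCard, cryptogram(isdKey, hostChal, cardChal));

        // Host proves possession of the key; card verifies (EXTERNAL AUTHENTICATE).
        byte[] fromHost = cryptogram(isdKey, cardChal, hostChal);
        boolean cardAccepts = Arrays.equals(fromHost, cryptogram(isdKey, cardChal, hostChal));

        System.out.println(hostAccepts && cardAccepts);  // true
    }
}
```

Note that both cryptograms depend on both challenges, so neither side can replay an old session transcript.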
How strong is the secure messaging protocol? The answer is, it depends on the choice of parameters: GP defines many options when it comes to cryptographic primitives. Most of them are based on symmetric keys, with the TSM and card sharing a secret that is typically unique to each card (a “diversified” setup: because each card has different keys, there is no single global secret whose compromise breaks the whole system). This is a historical artifact of public-key cryptography not being fast enough on early generations of hardware, since smart cards are highly constrained in terms of power, CPU speed and memory. Cards used in payment applications typically support 2TDEA, which is to say triple-DES keyed with two independent keys, the first and third keys being equal. (Simply running DES twice with two independent keys would not help, because double-DES is vulnerable to a meet-in-the-middle attack; hence the three encryption operations.) For example 2TDEA is used for over-the-air provisioning of applets to the Android secure element when Google Wallet is set up. The latest amendments to the GP spec also introduced an option for using AES.
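A minimal Java sketch of the diversification idea follows: the TSM derives each card’s 2TDEA key from a master key and the card’s unique ID, so recovering one card’s key reveals nothing about any other card. The derivation function here is made up for illustration; real TSMs follow a published scheme such as EMV CPS with specific derivation data.

```java
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.util.Arrays;

public class KeyDiversification {
    // 2TDEA: a 16-byte key K1||K2 is expanded to K1||K2||K1 for triple-DES (EDE).
    static SecretKeySpec tdea2(byte[] k16) {
        byte[] k24 = new byte[24];
        System.arraycopy(k16, 0, k24, 0, 16);
        System.arraycopy(k16, 0, k24, 16, 8);   // K3 = K1
        return new SecretKeySpec(k24, "DESede");
    }

    // Illustrative diversification: encrypt the card's 8-byte unique ID (and a
    // tweaked copy of it) under the master key to get a 16-byte per-card key.
    static byte[] diversify(byte[] masterK16, byte[] cardId8) throws Exception {
        Cipher c = Cipher.getInstance("DESede/ECB/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, tdea2(masterK16));
        byte[] half1 = c.doFinal(cardId8);
        byte[] tweaked = cardId8.clone();
        tweaked[0] ^= (byte) 0xFF;              // vary the input for the second half
        byte[] half2 = c.doFinal(tweaked);
        byte[] cardKey = new byte[16];
        System.arraycopy(half1, 0, cardKey, 0, 8);
        System.arraycopy(half2, 0, cardKey, 8, 8);
        return cardKey;
    }

    public static void main(String[] args) throws Exception {
        byte[] master = "0123456789abcdef".getBytes();
        byte[] k1 = diversify(master, "cardA-01".getBytes());
        byte[] k2 = diversify(master, "cardB-02".getBytes());
        // Two cards end up with different keys; compromising one card tells the
        // attacker nothing about another card's key without the master key.
        System.out.println(!Arrays.equals(k1, k2));  // true
    }
}
```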
SIM cards however support an alternative way to install applications, using the SIM Application Toolkit. This is a standard relevant only to wireless carriers, independent of Global Platform. Judging by reports, it was this mechanism, protected by a single DES key for authenticating code installation, that was attacked, rather than the card manager.
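A single 56-bit DES key is well within reach of exhaustive search. The Java sketch below shows the shape of such an attack given one known plaintext/ciphertext pair; to keep the demo fast, only the last two key bytes are treated as unknown, but the loop is exactly what an attacker runs over all 2^56 keys with rainbow tables or dedicated hardware. The key and challenge values are made up for illustration.

```java
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.util.Arrays;

public class DesBruteForce {
    static byte[] desEncrypt(byte[] key8, byte[] block8) throws Exception {
        Cipher c = Cipher.getInstance("DES/ECB/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key8, "DES"));
        return c.doFinal(block8);
    }

    // Recover the two unknown key bytes from one plaintext/ciphertext pair by
    // trying every candidate key until the ciphertext matches.
    static byte[] bruteForce(byte[] known6, byte[] plain, byte[] cipher) throws Exception {
        Cipher c = Cipher.getInstance("DES/ECB/NoPadding");
        for (int i = 0; i < 0x10000; i++) {
            byte[] key = Arrays.copyOf(known6, 8);
            key[6] = (byte) (i >> 8);
            key[7] = (byte) i;
            c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "DES"));
            if (Arrays.equals(c.doFinal(plain), cipher)) return key;
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        byte[] secretKey = "otakey42".getBytes();  // stand-in for the carrier's OTA key
        byte[] challenge = "8chars!!".getBytes();
        byte[] response = desEncrypt(secretKey, challenge);
        // The attacker "knows" the first 6 key bytes here only to keep the demo fast.
        byte[] found = bruteForce(Arrays.copyOf(secretKey, 6), challenge, response);
        System.out.println(Arrays.equals(found, secretKey));  // true
    }
}
```

With 2TDEA the same search would have to cover roughly 2^112 keys, which is why the choice of single DES is the weak link.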
Assume that we recovered the card manager keys. (In this case it sounds like the attack did not accomplish that, but we will conservatively look at the worst-case scenario.) Now what? It is tempting to conclude that it is game over. But the GP specification is a bit more nuanced than “ISD equals root.” In fact there is no concept of an all-powerful root or administrator role in GP-compliant cards. Here are examples of things that a TSM can not do, even with complete control over card manager keys:
- Read/write raw EEPROM, Flash or other persistent storage of the card
- Recover private data belonging to an existing application
- Upgrade the code for an existing application. In fact GP has no notion of in-place upgrade. An application can be deleted and reinstalled but that will lose all existing data associated with that instance. There is no way to “backdoor” an already installed application with a malicious version without losing all instance data.
- Read out the cryptographic keys associated with any other security domain. The TSM can overwrite them to take over a supplementary security domain, but can not learn the previous keys.
- Attach debugger or other diagnostic schemes to existing applications; GP has no such concept either.
- Recover the binary for an existing application. Surprisingly, even reading out the executable code is not possible after the application is installed.
This presents the attacker with a dilemma. We have a SIM card with a previously installed application, the GSM applet, which contains valuable cryptographic secrets used to authenticate to the carrier network. (Just to be clear: the GSM key has nothing to do with Global Platform. It is used for obtaining cell phone service, by proving the identity of the subscriber to the base station.) Even with ISD keys we can not scrape that secret out directly. But we are able to install other applications on the card. What can those applications do?
Global Platform is silent on this point; that is up to the underlying card environment. This brings us to the second vulnerability hinted at in Nohl’s summary. JavaCard is a common standard for programming smart cards, running a highly restricted subset of Java. Not all cards are JC-based, but it is popular enough that many SIM products use this platform. JavaCard in principle has a very clean model for supporting multiple, isolated applets running side-by-side on the same virtual machine. (This is unlike the model used for running Java on a commodity OS, where each application typically runs in its own process with its own instance of the JVM. Due to the resource constraints of card environments, there is just one VM.) The short version is that all applets are isolated from each other, and the only way to communicate is by going through an explicit “shareable interface” mechanism to make RPC-style calls from one applet to another. Outside that mechanism, it is not possible for one applet to reach in and read out data associated with another applet in the same VM.
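The pattern can be sketched in plain Java. On a real card the server applet implements `javacard.framework.Shareable` and the applet firewall only hands a client the interface object via `JCSystem.getAppletShareableInterfaceObject`; here ordinary Java visibility stands in for the firewall, and all class and method names are illustrative.

```java
// Plain-Java model of the JavaCard shareable-interface idea: the client can
// make the RPC-style call exposed by the interface but can not reach the
// server applet's private state.
interface BalanceService {                    // the only surface a client may call
    int getBalance();
}

class WalletApplet implements BalanceService {
    private byte[] secretKey = {0x13, 0x37};  // private data: firewall-protected
    private int balance = 100;

    public int getBalance() { return balance; }  // exposed via the interface

    // A real server applet can also check the caller's identity before
    // answering (JCSystem.getPreviousContextAID on card); omitted here.
}

class ClientApplet {
    // The client holds only the interface reference, so it can invoke the
    // exposed call but can not touch secretKey or any other field.
    static int queryBalance(BalanceService svc) {
        return svc.getBalance();
    }
}

public class FirewallSketch {
    public static void main(String[] args) {
        BalanceService svc = new WalletApplet();
        System.out.println(ClientApplet.queryBalance(svc));  // 100
    }
}
```

The security of the whole scheme rests on the VM actually enforcing these visibility rules at runtime, which is exactly what the next section puts in question.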
At least that is the theory. In practice the virtual machine implementations running on card environments have much weaker validation rules compared to their full-fledged desktop incarnations. For example, the applet installation process for JavaCard includes an off-card validation step.
At this point, alarm bells should go off. The whole point of Java is that it promises sandboxing for code, in ways that native execution environments can not easily provide without great overhead. Yet the JVM on the card relies on some external validation of bytecode to guarantee those properties? What happens if someone decides to “skip” that off-card validation step and install deliberately corrupted bytecode?
That we will find out at BlackHat this year.