An interesting story from the past week was IOActive's exposé on counterfeit chips. The researchers found that a “secure microprocessor” ordered from an online marketplace proved to be a lower-end version of the same hardware line, dressed up as the more capable and expensive product, in a clear instance of hardware tampering. While in this case the modifications appear to be motivated by cost savings (selling the lower-end hardware at a higher price point), the authors use the incident as a starting point to ask:
If it is so easy to taint the supply chain and introduce fraudulently marked microprocessors, how hard is it to insert less
obvious – more insidious – changes to the chips? For example, what if a batch of ST19XT34 chips had been modified to weaken the DES/triple-DES capabilities of the chip, or perhaps the random number generator was rigged with a more predictable pseudo random algorithm – such that an organized crime unit or government entity could trivially decode communications or replay transactions?
This is not the first time that the integrity of the supply chain has been questioned, but it is notable for involving cryptographic hardware. What the article does not mention is that many smart cards, embedded secure elements and similar hardware trusted execution environments have additional cryptographic properties that can be used to verify their authenticity.
To take one example: Global Platform is a standard for managing smart cards and, for lack of a better word, “card-like” gadgets such as USB tokens, SIM cards in cell phones and, more recently, the embedded secure elements on Android devices. Global Platform calls for each card to be provisioned with a unique set of keys used to authenticate the Issuer Security Domain, or ISD. (In some instances there can be more than one ISD key; an earlier post about the Android secure element noted that PN65N models have 4 key slots.) The keys are injected by the hardware manufacturer at fabrication time and then transferred over to a Trusted Services Manager, or TSM, responsible for managing the contents of the smart card. This can be done either by handing over a big list of individual keys or, more commonly, by using a diversification scheme. In the latter case, the manufacturer and TSM share a global seed key which is used to derive individual keys based on the card ID.
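The diversification step can be sketched as below. This is a minimal illustration, not the actual diversification algorithm used by any particular manufacturer or TSM; real deployments typically encrypt data derived from the card's unique ID under the seed key with 3DES or AES, while here HMAC-SHA256 stands in as the derivation primitive.

```python
import hashlib
import hmac


def diversify_key(master_key: bytes, card_id: bytes) -> bytes:
    """Derive a per-card ISD key from the shared master (seed) key.

    Illustrative only: HMAC-SHA256 is a stand-in for the 3DES/AES
    derivation used in real diversification schemes. Truncated to
    16 bytes to match a typical symmetric key length.
    """
    return hmac.new(master_key, card_id, hashlib.sha256).digest()[:16]


# Both the manufacturer and the TSM hold the master key, so each can
# independently derive the same per-card key from the card's ID.
master = bytes.fromhex("000102030405060708090a0b0c0d0e0f")
card_id = bytes.fromhex("deadbeef00000001")
assert diversify_key(master, card_id) == diversify_key(master, card_id)
```

The point of the scheme is that no list of individual keys ever needs to change hands: knowing the seed key and a card's ID is enough to reconstruct that card's key on demand.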
When the TSM wants to remotely install an application, GP lays out a mutual authentication process carried out between the TSM and the smart card. After running through the steps of the protocol, the TSM is assured that it is talking to a genuine card provisioned with the right ISD keys, and the card is assured that it is receiving commands from an authorized TSM. The outcome is an authenticated and encrypted channel between two parties who may be separated by a continent; the TSM could be a website hosted in the US while the “smart card” is a SIM inside a phone travelling around Africa. With some caveats, GP secure messaging ensures that commands issued by the TSM cannot be read or tampered with by any other party with access to the communication channel along the way. For example, the TSM can guarantee that an EMV chip & PIN application is being installed on a proper smart card, and that sensitive information such as card details used for payment will only be available inside that locked-down environment. Garden-variety hardware counterfeiting is ruled out in this scenario. If the supply chain had been poisoned with bogus SIM cards, those cards would not have the correct ISD keys. Without being able to authenticate to the TSM, they would never get as far as receiving the credit card data.
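The shape of that mutual authentication can be sketched as follows. This mirrors the challenge/cryptogram exchange of GP's INITIALIZE UPDATE and EXTERNAL AUTHENTICATE commands, but it is a simplification under stated assumptions: HMAC-SHA256 stands in for the 3DES/AES MACs of the real secure channel protocols, and session key derivation and secure messaging are omitted entirely.

```python
import hashlib
import hmac
import os


def mac(key: bytes, data: bytes) -> bytes:
    # HMAC-SHA256 stands in for the block-cipher MACs used by the
    # real GP secure channel protocols.
    return hmac.new(key, data, hashlib.sha256).digest()[:8]


class Card:
    """Toy model of a card's side of the mutual authentication."""

    def __init__(self, isd_key: bytes):
        self.isd_key = isd_key

    def initialize_update(self, host_challenge: bytes) -> tuple[bytes, bytes]:
        # The card generates its own challenge and proves possession of
        # the ISD key by MACing both challenges (the "card cryptogram").
        self.card_challenge = os.urandom(8)
        self.host_challenge = host_challenge
        cryptogram = mac(self.isd_key, host_challenge + self.card_challenge)
        return self.card_challenge, cryptogram

    def external_authenticate(self, host_cryptogram: bytes) -> bool:
        # The card verifies a cryptogram over the reversed challenge
        # order, proving the TSM also holds the ISD key.
        expected = mac(self.isd_key, self.card_challenge + self.host_challenge)
        return hmac.compare_digest(host_cryptogram, expected)


def tsm_authenticate(card: Card, isd_key: bytes) -> bool:
    """Toy model of the TSM's side: returns True only on mutual success."""
    host_challenge = os.urandom(8)
    card_challenge, card_cryptogram = card.initialize_update(host_challenge)
    # Only a card holding the right ISD key could produce this cryptogram,
    # so a bogus substitute fails here, before any content is provisioned.
    expected = mac(isd_key, host_challenge + card_challenge)
    if not hmac.compare_digest(card_cryptogram, expected):
        return False
    # The TSM then proves its own knowledge of the key back to the card.
    return card.external_authenticate(mac(isd_key, card_challenge + host_challenge))


genuine = Card(bytes(16))
counterfeit = Card(b"\x01" * 16)  # bogus card lacking the right ISD key
assert tsm_authenticate(genuine, bytes(16))
assert not tsm_authenticate(counterfeit, bytes(16))
```

Note that fresh random challenges on both sides are what prevent a counterfeit card from simply replaying a cryptogram it observed earlier.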
But this is a far cry from saying that GP solves all hardware tampering issues. In particular, three classes of problems remain:
- Hardware already configured for use. If a card arrives from the manufacturer fully configured, with no additional TSM involvement or provisioning necessary, Global Platform does not help. This is often the case for hardware tokens delivered to an enterprise: the manufacturer simply ships a batch of cards already configured with all necessary functionality. The customer does not have ISD keys for additional confirmation of card integrity. (Granted, having possession of ISD keys can also become a liability, since it allows tampering with card applications. Luckily GP also defines supplementary security domains that can be used to authenticate compliant cards without conferring privileges to modify them.)
- More complex counterfeiting, where the bogus chip contains a fully functional instance of the real hardware. This is easier than it looks because the physical form factor of most smart cards is huge in relation to the size of the circuitry inside. Inert plastic or air makes up the bulk of the volume, leaving plenty of room for sneaking in additional circuits. In this case the counterfeit chip can execute a man-in-the-middle attack on communication with the card. While GP protects the provisioning of functionality and data, it does not protect ordinary usage by the end user. Credit card details remain safe from eavesdropping because the TSM-card communication takes place over encrypted secure messaging sessions. The same is not true of the user PIN, sent in the clear by the ordinary host application using the card.
- Actual breach of tamper resistance. If ISD keys could be recovered from the authentic chip, one could create a full replica with the same keys, indistinguishable for Global Platform purposes. But such an attack is far more difficult than merely substituting a look-alike unit. If the card has decent hardware and software security, key extraction will require substantial time, effort and specialized equipment. This makes it impractical to scale the attack to a large volume of units.