TrustZone, TEE and the delusion of security indicators (part II)


[continued from part I]

The first post in this series reviewed a proposal advocating the use of TrustZone on the ARM architecture to implement trusted paths for payment applications. The advertised design is purportedly safe against malware affecting the host operating system. In this second post we look at why that does not quite work.

First, a digression into some technicalities that are solvable in principle. One requirement is that untrusted applications cannot force a switch out of secure mode; otherwise the display can revert to malware control in the middle of PIN collection. (Going back to the earlier parallel with the Windows secure attention sequence, ordinary applications cannot flip back to the main desktop once the user has pressed CTRL+ALT+DEL, nor draw on the secure desktop.) Similarly, input events from other sensors need to be suppressed, or side-channel leaks may result. For example, a proximity sensor along the lines of the one in the Samsung Galaxy S4 can reveal where the user’s hand was hovering before it touched down to register a key press. Sensitive gyroscopes could hint at the location of touch events, since pushing on different regions of the screen causes the device to tilt slightly about its axes in different ways. More far-fetched, the camera could capture images that include a reflection of the screen in a mirror or glass surface. There is a uniform solution for all of these: during PIN collection, disable all unnecessary sensors and process input events from the remainder directly in privileged mode.
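As a rough illustration, here is a minimal sketch of what that uniform solution could look like from the secure world’s point of view. Every function name below (disable_sensor, secure_touch_read_digit and so on) is a hypothetical placeholder rather than part of any real TrustZone or TEE API; the only point is the ordering: quiesce the side channels, collect the PIN entirely inside the secure world, then restore normal operation.

```c
/* Hypothetical secure-world PIN collection routine (sketch only).
 * Assumes the secure OS exposes simple driver hooks for sensors,
 * the touch controller and the display; none of these names come
 * from an actual TEE API. */
#include <stdbool.h>
#include <stddef.h>

enum sensor { SENSOR_PROXIMITY, SENSOR_GYRO, SENSOR_CAMERA, SENSOR_COUNT };

void disable_sensor(enum sensor s);          /* hypothetical */
void enable_sensor(enum sensor s);           /* hypothetical */
void secure_display_show_pinpad(void);       /* hypothetical */
void secure_display_release(void);           /* hypothetical */
int  secure_touch_read_digit(void);          /* hypothetical: blocks for one key, <0 on cancel */

bool collect_pin(char *pin, size_t len)
{
    /* 1. Quiesce side channels: nothing outside the secure world
     *    should observe anything correlated with touch position. */
    for (enum sensor s = 0; s < SENSOR_COUNT; s++)
        disable_sensor(s);

    /* 2. Take over the display and read touch events directly,
     *    so the normal world never sees raw coordinates. */
    secure_display_show_pinpad();
    bool ok = true;
    for (size_t i = 0; i < len; i++) {
        int d = secure_touch_read_digit();
        if (d < 0) {                  /* user cancelled or timed out */
            ok = false;
            break;
        }
        pin[i] = (char)('0' + d);
    }

    /* 3. Restore normal operation before returning to the
     *    untrusted world; the PIN itself stays in secure memory. */
    secure_display_release();
    for (enum sensor s = 0; s < SENSOR_COUNT; s++)
        enable_sensor(s);
    return ok;
}
```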

What remains is a fundamental, conceptual problem with the design: how does the user know whether the device is operating in privileged mode? Looking at a PIN entry screen, how can one ascertain whether that UI resulted from the payment application initiating a switch into privileged mode? What prevents malware from creating the exact same dialog and displaying it from its own untrusted execution mode?

Such doubts exist as long as the display can switch between untrusted and secure modes of operation, and that switching is not merely a practical constraint. For flexibility, only critical functionality, such as PIN entry and key management, is typically implemented in privileged mode. The bulk of the business logic lives in a vanilla application running in the fully corruptible world of the host OS. But even if the entire payment application lived in privileged mode, it would not help as long as the device also supports running ordinary applications. Once malware starts executing, there is no reason for it to cede control of the display or trigger a switch to trusted input mode.
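To make that split concrete, here is a sketch of what the normal-world side might look like. The interface name tee_pin_entry_request is a hypothetical stand-in for whatever RPC mechanism a real TEE exposes, not an actual API; the point is that the untrusted application merely asks the secure world to collect and verify a PIN, and nothing forces malware to ever make that call.

```c
/* Sketch of the normal-world ("vanilla") payment application.
 * tee_pin_entry_request() is a hypothetical wrapper around the
 * platform's TEE communication channel, not a real API. */
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical: asks the secure world to display its trusted
 * pinpad, collect the PIN and verify it. Returns true if the PIN
 * was accepted; the PIN itself never crosses back into the
 * normal world. */
bool tee_pin_entry_request(const char *transaction_id);

int main(void)
{
    /* All of the business logic - pricing, receipts, networking -
     * runs here, in the corruptible normal world. */
    const char *txn = "txn-0001";

    if (tee_pin_entry_request(txn))
        printf("PIN accepted, completing transaction %s\n", txn);
    else
        printf("PIN rejected or entry cancelled\n");

    /* Nothing in this model compels a malicious app to call
     * tee_pin_entry_request() at all; it can just as easily draw
     * its own convincing pinpad and capture the digits itself. */
    return 0;
}
```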

This is where the TrustZone argument gets hand-wavy. “The device will have security indicators,” the proponents respond. Of course such visual indicators cannot be part of the regular display area, since ordinary applications can render to the entire screen. Perhaps there is a dedicated LED beside the screen, lighting up when the display is operating in its trusted state. Minor problem: will users pay attention?

A growing volume of usability research in other contexts has demonstrated that users do not understand security indicators, even for something as common as the SSL status shown by a web browser. Several papers in usable security have explored the effectiveness of various signals against phishing, including The Emperor’s New Security Indicators, You’ve Been Warned: An Empirical Study of the Effectiveness of Web Browser Phishing Warnings and An Evaluation of Extended Validation and Picture-in-Picture Phishing Attacks. The findings are consistent: passive indicators do not work. Users are not paying attention. Expecting that “this time is different,” that some obscure signal on an unfamiliar device, implemented differently by each type of hardware, will fare any better is unrealistic. Similar problems plague active defenses that depend on users taking some action, such as pressing a CTRL+ALT+DEL equivalent. When users see a PIN collection screen that looks vaguely legitimate, the natural response is to enter their PIN. It is not intuitive to “challenge” the payment application with an additional step to verify that it is indeed operating in a safe state.

Properly addressing such risks requires looking at the whole system. For example, one could imagine connecting the credit card reader to the mobile POS such that its raw input goes directly to code running in privileged mode. Then the act of swiping a card could automatically switch the device into trusted input mode, without requiring cooperation from user applications. (That may still require a hardware change. Typically magnetic-stripe readers are attached via USB or, in the case of Square dongles, via the audio jack. In both cases their output is perfectly accessible to ordinary applications.) Even that simplistic model runs into problems with error cases. For example, when the wrong PIN is supplied and the transaction is declined, the cardholder will be prompted to re-enter their PIN. But it is the same untrusted application that is responsible for making that determination and initiating a new PIN entry sequence.
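Continuing the earlier sketch, here is roughly what such a swipe-triggered secure entry path might look like. The hardware arrangement and every name other than collect_pin (reused from the sketch above) are invented for illustration; the closing comment marks exactly where the untrusted application creeps back into the retry decision.

```c
/* Sketch of a secure-world handler wired directly to the card
 * reader (hypothetical hardware arrangement; all names invented). */
#include <stdbool.h>
#include <stddef.h>

bool collect_pin(char *pin, size_t len);            /* from the earlier sketch */
void read_track_data(char *buf, size_t len);        /* hypothetical reader driver */
void build_encrypted_pin_block(const char *pin,
                               const char *track,
                               unsigned char *out); /* hypothetical crypto helper */
void hand_off_to_normal_world(const unsigned char *pin_block);

/* Invoked when the card-swipe interrupt, routed to the secure
 * world, fires. No cooperation from any normal-world application
 * is needed to reach this point. */
void on_card_swipe_irq(void)
{
    char track[128];
    char pin[4 + 1] = {0};
    unsigned char pin_block[16];

    read_track_data(track, sizeof track);

    if (!collect_pin(pin, 4))
        return;                       /* user cancelled */

    /* Encrypt the PIN inside the secure world; only the opaque
     * PIN block is handed back for transmission to the acquirer. */
    build_encrypted_pin_block(pin, track, pin_block);
    hand_off_to_normal_world(pin_block);

    /* Problem: if the acquirer declines the PIN, the "please try
     * again" signal arrives via the untrusted application. The
     * secure world has no independent way to tell whether a retry
     * request is genuine or manufactured by malware. */
}
```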

This is far from an exhaustive treatment of all possible design challenges, but it is enough to demonstrate the point: establishing a trusted input path is a complex systems problem. It requires a careful understanding of the scenario, as well as of the human-factors limitations in designing usable security. As a solution in search of a problem, TrustZone is understandably pitched as the magical fix for a slew of security challenges. But preventing one specific attack (malware intercepting PIN entry) is not the same as solving the original problem: guaranteeing that users enter payment information into the right place. Perhaps the naiveté is best exemplified by a quote from the slide shown earlier:

“A corresponding reduction in interchange rate is justifiable alongside reduction in risk – possibly approaching cardholder present rates.”

ARM, the chip designer, is telling credit card networks that online purchases on mobile devices with TrustZone are so safe that they deserve to be treated as card-present transactions, as if the user were physically present in person, rather than as higher-risk card-not-present?

One can imagine MasterCard and Visa begging to differ.

CP
