Lessons from Google Wallet: how wireless carriers undermined mobile security

Apple is expected to launch an NFC payments solution for iPhone. For the small community working at the intersection of NFC, payments and mobile devices, Apple’s ambitions in NFC have been one of the worst-kept secrets in the industry. The cast of characters overlaps significantly: there are only so many NFC hardware providers to source from, so many major card networks to partner with and very similar challenges to overcome along the way. Of the many parallel efforts in this space, some played out in full public view. Wireless carriers have been forging ahead with field trials for their ISIS project, now rebranding to Softcard to avoid confusion with the terrorist group. Others subtly hinted at their plans, as when Samsung insisted on specific changes to support its own wallets. Then there was Apple, proceeding quietly until now. With almost three years since the initial launch of Google Wallet, now is a good time to look back on that experience, if only to gauge how the story might play out differently for Apple. [Full disclosure: this blogger worked on Google Wallet 2012-2013. Opinions expressed here are personal.]

Uphill battles

There are many reasons why launching a new payment system is difficult, and for precisely the same reasons it is difficult to pinpoint the root cause when a deployed system is slow to gain traction. Is it the unfamiliar experience for consumers, tapping instead of swiping plastic cards? (But that same novelty can also drive early adopters.) Were there other usability challenges? Is it the lack of NFC-equipped cash registers at all but the largest merchants? Or was that just a symptom of an underlying problem: an unclear value proposition for merchants. Tap transactions have higher security and less fraud risk, yet merchants still pay the same old card-present interchange rate. For that matter, did users perceive sufficient value beyond the gee-whiz factor? The initial product supported only a prepaid card and Citi MasterCards, limiting the audience further. All of these likely contributed to a slow start for Google Wallet.

But there was one additional impediment having nothing to do with technology, design, human factors or the economics of credit cards. It was solely a function of the unique position Google occupies: competing against wireless carriers over key mobile initiatives while courting the very same companies to drive Android market share.

When consumers root their phone to run your app

When the project launched in 2011, it was limited to Sprint phones. That is bizarre, to say the least. All mobile app developers crave more users. Why would any software publisher limit their prospects to a single carrier, and not even the one with the largest customer base at that? There is no subtle technical incompatibility involved; nothing about the choice of wireless carrier unlocks hidden features in the same commodity hardware that would be unavailable to a different user. It was a completely arbitrary restriction, traceable to the strained relationship between Google and the wireless carriers who had cast their lot with ISIS.

Outwardly Verizon stuck to the fiction that it was not blocking the application deliberately. Strictly speaking, that was correct. Google Wallet itself contained a built-in list of approved configurations. At start-up the app would check whether it was running on one of these blessed devices and politely inform the user that they were not allowed to run this application. In effect the application censored itself. This ensured that even if a determined user managed to get hold of the application package (the APK, which was not directly available from the Play Store for Verizon, AT&T and T-Mobile customers) and side-load it, it would still refuse to work. That charade continued to play out for the better part of two years, with occasional grumblings from consumers and Verizon continuing to deny any overt blocking.
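The self-censoring check described above amounts to a simple allowlist lookup at start-up. A minimal sketch in Python, for illustration only (the real list and the way the app identified carrier and model were internal to Google Wallet; the entries below are assumptions based on the devices it launched on):

```python
# Hypothetical allowlist of (carrier, model) pairs the app would accept.
# The real app consulted an internal list of approved configurations.
APPROVED = {
    ("Sprint", "Nexus S 4G"),
    ("Sprint", "Galaxy Nexus"),
}

def allowed(carrier: str, model: str) -> bool:
    """Start-up check: refuse to run unless on a blessed configuration."""
    return (carrier, model) in APPROVED

# Side-loading the APK onto a Verizon phone changes nothing: the check
# runs locally, so the app still declines to work.
assert allowed("Sprint", "Nexus S 4G")
assert not allowed("Verizon", "Galaxy Nexus")
```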

Users were furious. Early reviews on the Play Store were a mix of gushing 5-star praise and angry 1-star rants complaining that the app was not supported on their device. Many opted for rooting their phone or side-loading the application to get it working on the “wrong” carrier. (Die-hard users going out of their way to run your mobile app would have been a great sign of success in any other context.) Interestingly there was one class of devices where it worked even on Verizon: the Galaxy Nexus phones that Google handed out as holiday gifts to employees in 2011. In a rare act of symbolic defiance, it was decided that since Google covered every last penny of these devices with no carrier subsidy, employees were entitled to run whatever application they wanted.

One could cynically argue that capitulating to pressure from carriers was the right call in the overall scheme of things. It may have been a bad outcome for the mobile payments initiative per se, but it was the globally optimal decision for Google shareholders. Android enjoys a decisive edge over iPhone in market share, but that race is far from decided. And US carriers have great control over the distribution of mobile devices. Phones are typically bought straight from the carrier at below cost, subsidized by ongoing service charges. Google made some attempts to rock the boat with its line of unlocked Nexus devices, as did T-Mobile with its recent crusade against hardware subsidies. But these collectively made only a small dent in the prevailing model. Carriers still have a lot of say in which model of phone, running which operating system, gets prime placement on their store shelves and in marketing campaigns. Despite the occasional criticism as surrender monkeys on net neutrality, Google leadership had a keen understanding of these dynamics. They had intuited that a fine line had to be walked: keeping carriers happy was priority #1, while making room for the occasional muck-raking with unlocked devices and spectrum auctions. It was simply not worth alienating AT&T and Verizon over an experiment in mobile payments, an initiative that was neither strategic nor likely to generate significant revenue.

The secure element distraction

Curiously, the original justification for why Google Wallet could be treated differently than all other apps came down to quirks of hardware. During its first two years, NFC payments in Google Wallet required the presence of a special chip called the embedded secure element (SE). This is where sensitive financial information, including credit-card numbers and the cryptographic keys used to complete purchases, was stored. Verizon pinned the blame on the SE when trying to justify its alleged non-blocking of Google Wallet:

Google Wallet is different from other widely-available m-commerce services. Google Wallet does not simply access the operating system and basic hardware of our phones like thousands of other applications. Instead, in order to work as architected by Google, Google Wallet needs to be integrated into a new, secure and proprietary hardware element in our phones. We are continuing our commercial discussion with Google on this issue.

One part of this is undeniably true: the secure element is not an open platform in the traditional sense. Unlike a mobile device or PC, installing new applications on the SE requires special privileges for the developer. This is intentional and part of the security model for this type of hardware; limiting what code can run on a platform can reduce its susceptibility to attacks. But the great irony is that a different type of secure element with the exact same restriction has been present all along on phones: SIM cards. Both the embedded secure element and SIM cards follow the same standard, called Global Platform, which lays down the rules around who gets to control applications on a given chip and exactly what steps are involved. The short version: each chip is configured at the factory with a unique set of cryptographic secrets, informally called “card manager keys.” Possession of these keys is required for installing new applications on the chip.
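Concretely, managing content on a Global Platform chip starts with a mutual-authentication handshake keyed to those card manager keys. A minimal sketch in Python of the first step (APDU layout per the GlobalPlatform card specification; the key-version value is illustrative):

```python
import os

def initialize_update(key_version: int = 0x00) -> bytes:
    """Build the INITIALIZE UPDATE APDU that opens a Global Platform
    secure channel: CLA=0x80, INS=0x50, P1=key version, P2=0, then an
    8-byte host challenge."""
    host_challenge = os.urandom(8)
    return bytes([0x80, 0x50, key_version, 0x00, 0x08]) + host_challenge

# Without the card manager keys, the host cannot compute the cryptogram
# for the EXTERNAL AUTHENTICATE that must follow this command, so it can
# never install or delete applets -- which is exactly the gate carriers
# hold on SIMs and Google held on the embedded SE.
apdu = initialize_update()
assert apdu[:2] == b"\x80\x50" and len(apdu) == 13
```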

For SIM cards the keys are controlled by, you guessed it, wireless carriers. ISIS relies on carriers’ ability to install their mobile wallet applications on SIM cards, in exactly the same way Google Wallet relied on access to the embedded secure element. SIM cards have been around for much longer than embedded secure elements. Curiously, their alleged lack of openness seems to have escaped attention. When was the last time Google threw a temper tantrum over not being allowed to install code on SIMs?

The closer one looks at Global Platform and the SE architecture, the flimsier these excuses about platform limitations begin to sound. The specific hardware used in Android devices supported at least four different card-manager keys. One slot was occupied by Google and used for managing Google Wallet payments code. Another was reserved by the hardware manufacturer to help manage the chip if necessary. The remaining two slots? Unused. Nothing at the technology level would have prevented an arrangement for wireless carriers to attain the same level of access as Google. This is true even for devices already in the field; keys can be rotated over the air. One can envision a system where the consumer decides exactly who will be in charge of their SE, with the current owner responsible for rotating keys to hand off control to the new one. If that sounds like too many cooks in the kitchen, newer versions of Global Platform support an even cleaner model for delegating partial SE access. Multiple companies can each get a virtual slice of the hardware, complete with the freedom to manage their own applications, without being able to interfere with each other. In other words, multiple payment solutions could well have co-existed on the same hardware. There is no reason for users to pledge allegiance to Google or ISIS; they could opt for all of the above, switching wallets on the fly. Those wallets could run alongside applications using NFC to open doors, log in to enterprise systems or access cloud services with 2-factor authentication, all powered by the same hardware.

Who controls the hardware?

But that is all water under the bridge. Google gave up on the secure element and switched to a different NFC technology called “host-card emulation” (HCE) for payments. There is no SE on the Nexus 5, the latest in the Nexus line of flagship devices. With the controversial hardware gone, any remaining excuse to claim Google Wallet was somehow special also went out the door. Newly emboldened, the application launched to all users on all carriers for the first time. “Google gets around wireless carriers” cried the headline on NFC World, with only slight exaggeration. (It probably didn’t hurt that competitive pressure on ISIS had eased, since they were finally ready to launch after multiple setbacks.) Installed base and usage predictably jumped. Play Store reviews improved, and the sharp spread in opinion between angry users denied access and happy ones raving about the technology narrowed. A few questioned whether payments would have been more secure with the SE. Otherwise the quirks of Android hardware were quickly forgotten.

A good contrast here is the TPM, or Trusted Platform Module, on PCs. Much like the secure element, the TPM is a tamper-resistant chip that is part of the motherboard on traditional desktops and laptops. TPMs first made their appearance with the ill-fated Windows Vista release, where they helped protect user data as part of the Bitlocker disk-encryption scheme. Later Windows 7 expanded the use cases, introducing virtual smart-cards to securely store generic credentials for authentication and encryption. The situation with Google Wallet is akin to Microsoft shipping Bitlocker, Dell choosing to include a TPM in its hardware and a consumer buying that model, only to be told by an ISP that customers using its broadband service are not allowed to enable Bitlocker disk encryption. Such an absurd scenario does not play out in the PC market because everyone realizes that ISPs simply provide the pipes for delivering bits. Their control ends at the network port; an ISP has no say over what applications you can run.

In retrospect, NFC payments were an unfortunate choice of first scenario for introducing secure elements. The contemporary notion of “payments” is laden with the expectation that one more middleman can always be squeezed into the process to take their cut of the transaction. It is hardly surprising that wireless carriers wanted a piece of that opportunity. Never mind that Google itself never aspired to be one of those middlemen vying for a few basis points of the interchange. One imagines there would have been much less of a land-grab from carriers if the new hardware had instead been tasked with obscure enterprise security scenarios such as protecting VPN access or hardening disk encryption. (Unless backed by hardware, disk encryption on Android is largely security theater: it is based on a predictable user-chosen PIN or short passphrase.)
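A back-of-envelope calculation illustrates the security-theater claim. Without hardware to throttle or count guesses, an attacker with the raw disk image can try every PIN offline; the numbers below (key-derivation cost per guess) are illustrative assumptions, not measurements:

```python
# Exhaustive search over a 4-digit PIN, assuming (hypothetically) that
# the key-derivation function costs ~100ms of CPU per guess. With no
# secure hardware enforcing a retry counter, nothing slows this down.
pin_space = 10 ** 4          # all 4-digit PINs
cost_per_guess = 0.1         # seconds per guess (assumed)
seconds = pin_space * cost_per_guess

# The entire keyspace falls in well under half an hour on one core.
assert seconds / 60 < 20
print(f"{pin_space} PINs at {cost_per_guess}s/guess: {seconds/60:.0f} minutes")
```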

Collateral damage

Hardware technology with significant potential for mobile security has been forced out of the market by the intransigence of wireless carriers promoting a particular vision of mobile payments. This is by no means the first or only time that wireless carriers have undermined security. The persistent failure to ship Android security updates is a better-known, more visible problem. But at least one can argue that is a sin of omission, of inaction. Integrating security updates from the upstream Android code-base, and verifying them against all the customizations small and large that OEMs and carriers made to differentiate themselves, takes time and effort. It is natural to favor the profitable path of selling new devices to subscribers over servicing ones already sold. But the case of hardware secure elements is a different type of failure. Carriers went out of their way to obstruct Google Wallet. Reasonable persons may disagree on whether that is a legitimate use of existing market power to tilt the playing field in favor of a competing solution. But one thing is clear: that strategy has all but eliminated a promising technology for improving mobile security.


Mobile devices and NFC in public transit (part II)

[continued from part I]

Looking at the progression of NFC hardware on Android devices:

  • Initial devices had the PN65N, and later PN65O, chip from NXP, which combines an NFC controller with an embedded secure element. That secure element is capable of emulating Mifare Classic. (Not surprising: Mifare is an NXP design, so NXP hardware speaking a proprietary NXP protocol is to be expected.)
    As an example application of that feature, Google Wallet originally stored coupons on sectors of the emulated Mifare Classic tag. These coupons were redeemed by the point-of-sale during the NFC transaction as a distinct step from the payment itself. (It’s as if two different NFC tags were presented to the reader.)
  • Later Android devices used a different chipset, combining a Broadcom NFC controller with a secure element from STMicro running a card operating system from Oberthur. (What better demonstrates the inter-dependencies of the hardware industry?)
    Surprisingly, that Broadcom controller cannot read Mifare tags in reader mode. This is the reason for the debacle of Samsung TecTiles, and why Pay-By-Phone NFC stickers pasted on parking meters cannot be read by a good chunk of Android phones. One can only assume Broadcom did not want to fork over the $$ for licensing the protocol from NXP.
    Oddly enough, in card-emulation mode the Oberthur SE can emulate Mifare Classic tags. But the story does not end there: due to unrelated NFC issues revealed during Android testing, that Mifare emulation capability was disabled in firmware. So there is no Mifare emulation possible on devices such as the Nexus 4 and Galaxy S4 based on the Broadcom chipset.
  • Eventually Google dropped support for the secure element altogether in favor of a plain NFC controller. On devices without an SE, the story is simple: there is no Mifare emulation.

2. Reader configuration

Bottom line: none of the hardware options on the market for the past three years were capable of emulating the DESFire or DESFire EV1 cards used in some of the largest transit networks around the world. But this turns out not to be the final word on the problem. Going one level deeper, the DESFire protocol can operate in two different ways: “native” mode or “standards” mode. In standards mode the protocol messages are actually compatible with ISO 7816. Cards support this mode, which explains the existence of Android apps that can interface with a transit card and display your recent travel history and remaining balance.

That sounds like good news, because SE applets and ordinary Android applications alike can act as card-emulation targets for ISO 7816 messages. For example, one could write a plain JavaCard applet to run on the SE, which would be activated by the turnstile to walk through the DESFire protocol. That mode may not perform very well; a general-purpose SE environment lacks the hardware crypto acceleration found in a dedicated DESFire chip. (A user-mode Android application in HCE mode would fare even worse, due to the additional overhead of shuttling messages back and forth between the NFC interface and the process.) But at least it is possible to use standard ISO 7816 messages to communicate with the turnstile.
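The difference between the two modes is just framing. A sketch in Python, for illustration, of the same DESFire "select application" command in both forms (the AID value is a hypothetical placeholder):

```python
def native_select(aid: bytes) -> bytes:
    """Native DESFire framing: bare command byte 0x5A followed by the
    3-byte application ID. Only DESFire-aware hardware can speak this."""
    return bytes([0x5A]) + aid

def wrapped_select(aid: bytes) -> bytes:
    """Standards ("wrapped") framing: the same command carried inside an
    ISO 7816-4 APDU -- CLA=0x90, INS=native command, P1=P2=0x00, Lc,
    data, Le. Any ISO 7816 card-emulation target can answer this."""
    return bytes([0x90, 0x5A, 0x00, 0x00, len(aid)]) + aid + b"\x00"

aid = bytes.fromhex("F48EF1")  # hypothetical application ID
assert len(native_select(aid)) == 4
assert len(wrapped_select(aid)) == 9
```

A reader configured for native mode only will never emit the wrapped form, which is why a JavaCard or HCE emulation that speaks only ISO 7816 cannot talk to it.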

Except it turns out most deployed readers are configured for native mode only. That makes eminent sense from a design perspective. Transit systems have a closed architecture, with both readers and cards under the control of a single operator. If the cards can operate in native mode, why complicate the reader side by adding an extra, less efficient alternative? In principle the operator (which happens to be Cubic for many of these systems) could go around to every subway station, bus and ferry terminal and update the readers. But it is difficult to see any compelling scenario to justify that effort at the moment.

This is the short answer to why we cannot tap an Android phone to walk into BART: the prevailing NFC hardware included on these devices is not capable of communicating with the turnstile. There is no intrinsic reason for this; it will change with the introduction of better hardware. Anticipating that moment, we can pose a different question. Suppose all turnstiles were magically reprogrammed to support standards mode, or all future handsets became capable of full DESFire and EV1 emulation. What other challenges remain for enabling the public transit scenario?




Mobile devices and NFC in public transit (part I)

(Or, why phones still can’t be used to get on BART)

Soon after Google integrated Near Field Communications (NFC) into Android devices, much speculation followed on when existing NFC scenarios could finally be integrated into cell phones instead of requiring separate cards and tokens. Payments was the obvious first choice, and that is where Google started. There was already a nascent infrastructure in place for contactless payments with NFC-enabled cards, using variants of the chip & PIN protocol defined by the major card networks. More than any other existing use case, this one arguably has the most “universal” appeal. Not everyone rides public transit or works in an office building where doors are controlled by NFC badge readers. It’s not every day that people attend a conference or concert issuing fancy NFC attendance badges, or stay in a hotel where rooms are equipped with NFC locks. Commerce, then, was the original focus for Google Wallet, which launched with support for payments and offers.

What’s in your wallet?

Meanwhile real-world wallets contain much more than payment methods and merchant loyalty cards: identity documents such as a driver’s license or eID, employee badges, gym-membership cards. Then there are public-transit cards, such as the Clipper card for the Bay Area, ORCA in Seattle and Oyster in London, which conveniently also happen to be based on NFC. They are used by millions of people commuting daily. Better yet, there is a uniform infrastructure in place, unlike the case for payments: all stations, buses and turnstiles accept the same type of card. That is an improvement over the situation for payments, where many merchants accept credit cards but can only process old-school magnetic-stripe cards, not chip & PIN or NFC.

So why isn’t there an Android app for getting into BART? The answer turns out to be a trifecta of hardware limitations, a trust infrastructure that does not scale and ultimately that same chicken-and-egg problem faced by any new technology: significant upfront costs compared to unclear benefits.

1. Hardware limitations

The phrase “NFC” hides a complex morass of different standards and wildly incompatible technologies, united only by the 13.56MHz frequency they all use. This disparate group of hardware/software specifications has been cobbled together into a single uniform brand under the auspices of the NFC Forum. But no amount of wrangling by a standards committee can make up for the lack of uniformity in the underlying technology. For example, there are four types of tags alone, with different memory capacities, security levels and communication protocols.

Emulating credit-cards

EMV payments use type-4 tags, where the “tag” behaves like an ordinary smart-card except that instead of receiving signals directly through a brass contact plate, the communication takes place over the air. So how does this work on a mobile device, given that an Android phone looks nothing like a plastic card? That’s where the three different NFC modes of operation come in. In card-emulation mode, the NFC controller behaves like an ordinary card, except that it does not handle any of the processing. It is simply a conduit for passing messages to and from a card-emulation target. Originally that target for Google Wallet was a special-purpose chip called the embedded secure element, or eSE, where the payment applications resided. Later iterations favored host-card emulation (HCE), with an ordinary Android application responsible for processing the incoming NFC messages.
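The dispatch an HCE target performs can be sketched in a few lines. On Android this logic lives in a HostApduService subclass; the Python below is an illustration of the message flow only, and the wallet AID is just an example of a well-known payment AID, not Google Wallet's actual configuration:

```python
# The reader opens the conversation with a SELECT-by-AID APDU; the OS
# routes it to whichever app registered that AID, and the app answers
# all subsequent APDUs itself.
WALLET_AID = bytes.fromhex("A0000000041010")  # example: a MasterCard AID
SW_OK, SW_NOT_FOUND = b"\x90\x00", b"\x6A\x82"

def process_apdu(apdu: bytes) -> bytes:
    # SELECT by AID: CLA=0x00, INS=0xA4, P1=0x04
    if apdu[:3] == b"\x00\xA4\x04":
        lc = apdu[4]
        aid = apdu[5:5 + lc]
        return SW_OK if aid == WALLET_AID else SW_NOT_FOUND
    return b"\x6D\x00"  # instruction not supported in this sketch

select = b"\x00\xA4\x04\x00" + bytes([len(WALLET_AID)]) + WALLET_AID
assert process_apdu(select) == SW_OK
```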

Emulating transit cards

Transit, on the other hand, uses Mifare. Which variant of Mifare? It depends. Here are some data points:

  • Hawaii and San Diego use Mifare Classic, the original version based on a proprietary cryptographic design. (Surprisingly, Mifare Classic does not fall into any of the 4 tag types.)
  • Bay Area, Seattle and London are on the more modern DESFire or its successor DESFire EV1.
  • Some systems also use Mifare Ultralight tags for cheaper, single-fare tickets. (These are trivially clonable and forgeable.)
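Readers can tell these card families apart early in the conversation, from the SAK byte returned during ISO 14443-3 anticollision. A rough illustration (values taken from NXP application notes; real detection also looks at the ATQA and the ATS, and this table is far from exhaustive):

```python
# SAK byte -> likely card family (illustrative subset).
SAK_HINTS = {
    0x00: "Mifare Ultralight",
    0x08: "Mifare Classic 1K",
    0x18: "Mifare Classic 4K",
    0x20: "ISO 14443-4 card (e.g. DESFire)",
}

def classify(sak: int) -> str:
    return SAK_HINTS.get(sak, "unknown")

assert classify(0x08) == "Mifare Classic 1K"
assert classify(0x20).startswith("ISO 14443-4")
```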

Can an Android device emulate Mifare? The answer depends on the hardware model and what type of tag we are trying to emulate.

[continue to part II]





SSH with GoldKey tokens on OS X: provisioning (IV)

[continued from part III]

Until now we operated under the assumption that GoldKey tokens were already provisioned with PIV credentials. But that side-steps the question of how those key-pairs and certificates got there in the first place. The tokens arrive in a completely blank state, while the PIV standard NIST SP 800-73 defines several required objects such as the CHUID, security object and X509 certificates, along with multiple containers that can hold cryptographic keys for different algorithms. Compared to the ongoing use of GoldKey, provisioning credentials turns out to be an involved problem, especially when the qualifier “on OS X” is thrown in. On Windows there is a straightforward way using the standard certificate request path, with no manual software installation required in the best case. On other platforms the story is more complicated. Luckily, provisioning and maintenance operations are infrequent. Worst case, they can be punted to the enterprise IT department when special hardware/software requirements are involved, much like printing employee badges.
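For concreteness, here is how one of those mandatory objects is retrieved once provisioned. A sketch in Python of the GET DATA command for the CHUID per NIST SP 800-73 (building the APDU bytes only; actually sending them requires a PC/SC library and a reader):

```python
def piv_get_data(object_tag: bytes) -> bytes:
    """Build the PIV GET DATA APDU for a BER-TLV object tag:
    CLA=0x00, INS=0xCB, P1P2=0x3FFF, data = tag list (0x5C), Le=0x00."""
    tag_list = b"\x5C" + bytes([len(object_tag)]) + object_tag
    return bytes([0x00, 0xCB, 0x3F, 0xFF, len(tag_list)]) + tag_list + b"\x00"

CHUID_TAG = bytes.fromhex("5FC102")  # CHUID object tag from SP 800-73
apdu = piv_get_data(CHUID_TAG)
assert apdu[:4] == bytes.fromhex("00CB3FFF")
```

On a blank token this command returns no data, which is exactly the state the rest of this post is about fixing.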

Hierarchy of tokens

First let’s tackle a simpler problem: clearing an existing token and restoring it to its original blank configuration. This turns out to require not just software but hardware support. GoldKey defines a hierarchy of tokens, with regular tokens managed by a master token and masters in turn managed by a grandmaster token. Master tokens can perform administrative operations on a plain-vanilla token. This functionality is accessed via the “Management” button under GoldKey information:

Accessing token management features


Dialog for managing GoldKey


These all require a master or grandmaster token. And not just any master token, but the specific unit that this particular GoldKey was associated with during the personalization phase. No such foresight or planning during personalization? Not to worry: any master token can be used to completely clear out all data on the GoldKey and return it to a clean-slate configuration for re-personalization from scratch.

Provisioning with a master token

The same UI has an option labelled “manage smart cards,” which sounds promising.

UI for managing certificates


This allows provisioning into any of the four standard key types defined in the standard: PIV authentication, key management, digital signature and card authentication. But going down that path has an important caveat: it can only provision from a PFX file, which is the Microsoft variant of the PKCS#12 standard. This format contains both the X509 certificate and the associated private key. That means key material must already exist outside the token before it can be imported, defeating one of the benefits of cryptographic hardware tokens: secrets are generated on-card and never leave the secure execution environment.

Using the Windows mini-driver

On-board key generation turns out to be fairly straightforward on Windows with the GoldKey smart-card mini-driver installed. Recall that Windows has plug-and-play detection for cards and under normal circumstances will download the correct mini-driver from Windows Update. But in case the system associates a generic PIV driver instead of the custom GoldKey one, it is possible to override that manually from Device Manager. Once the correct driver is mapped, the built-in certreq utility can trigger key generation and certificate loading.

One word of caution: GoldKey has a support article with an example INF for generating self-signed certificates. Curiously, the INF file used in that example does not perform key generation on the token, because it does not override the default cryptographic provider name to point at the smart-card provider. (Fast key generation is a sign that it happened on the host. It takes about 30 seconds for a 2048-bit RSA key to be generated on the token, during which time the blue LED will blink.)
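A minimal INF along these lines, with the provider override in place, might look like the sketch below. This is an illustration rather than GoldKey's sample: the subject name is hypothetical, and the provider line is the key difference from their published example:

```ini
[NewRequest]
Subject = "CN=Test PIV credential"    ; hypothetical subject name
KeyAlgorithm = RSA
KeyLength = 2048
RequestType = Cert                     ; self-signed, as in the sample
; The line the GoldKey sample omits: route key generation through the
; smart-card CSP so the private key is created on the token itself.
ProviderName = "Microsoft Base Smart Card Crypto Provider"
```

With this in place, `certreq -new request.inf` should take the expected ~30 seconds while the token's LED blinks, confirming the key never existed on the host.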

Also worth noting: running certreq twice will not replace the existing PIV authentication key and associated X509 certificate. Instead it will provision one of the other slots compatible with the specified usage, which turns out to be the key-management key. Clearing the original PIV authentication key requires going back to the GoldKey client and using a master token as described above.

What about on-board key generation from OS X or Linux? Not surprisingly, this is not supported by the vendor software, but it is achievable using open-source alternatives. After all, the PIV standard defines a GENERATE ASYMMETRIC KEY PAIR command for doing key generation on the card. Leveraging this via open-source utilities (and some quirks of GoldKey tokens) will be the subject of another post.
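For reference, the wire format of that command is simple. A sketch in Python of the GENERATE ASYMMETRIC KEY PAIR APDU per NIST SP 800-73 (constructing bytes only; the algorithm and key-reference constants are the standard's values):

```python
# Algorithm identifiers and key references from SP 800-73.
ALG_RSA_2048, ALG_ECC_P256 = 0x07, 0x11
PIV_AUTH_KEY = 0x9A  # key reference for the PIV Authentication key

def generate_keypair(key_ref: int, alg: int) -> bytes:
    """Build the GENERATE ASYMMETRIC KEY PAIR APDU: CLA=0x00, INS=0x47,
    P1=0x00, P2=key reference, data = 'AC' control template naming the
    algorithm. The private key is created on-card and never exported;
    only the public key comes back in the response."""
    template = bytes([0xAC, 0x03, 0x80, 0x01, alg])
    return bytes([0x00, 0x47, 0x00, key_ref, len(template)]) + template + b"\x00"

apdu = generate_keypair(PIV_AUTH_KEY, ALG_RSA_2048)
assert apdu[1] == 0x47 and apdu[3] == 0x9A
```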


FakeID, Android NFC stack and Google Wallet (part IV)

[continued from part III]

Wallet substitution attacks

There is a more indirect way of spying on purchase history: replacing the existing wallet with one controlled by the attacker. We noted that payments cannot be relayed remotely because those commands are only honored over the NFC interface. But administrative commands from the Trusted Service Manager (TSM) responsible for managing the SE are delivered via the contact interface. These are used to download, install and uninstall code, and to personalize payment applets after installation. Since FakeID-signed malware can send arbitrary commands over that interface, it can also be used to relay legitimate traffic from the TSM.

Consider an adversary who has already set up Google Wallet on her own local device, trying to attack a remote victim using malware that exploits FakeID.

  • Attacker requests TSM to perform a wallet-reset on the local device.
  • But each time the local TSM agent tries to communicate with the local SE, the commands are relayed to the remote victim device and forwarded to its SE by the malware running there.
  • In particular, when the TSM queries for the unique SE identifier (CPLC, or Card Production Life Cycle data), the attacker returns the value associated with the victim SE. This substitution is necessary because the card-manager keys are unique to each SE.
  • Given the wrong CPLC, TSM now proceeds to configure victim SE with payment applets representing the attacker. TSM generates an install script, a sequence of commands encrypted & authenticated using Global Platform secure messaging.
  • The commands are sent to the attacker device, which relays them to the victim SE. Existing applets are deleted and replaced by new instances personalized for the attacker.

The attacker effectively replaces the victim wallet with her own. Each time the victim makes a transaction, it will bill to the attacker who can observe this spending from the transaction history in the cloud.
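The relay at the heart of these steps can be sketched in a few lines of Python (for illustration; the GET DATA command for the CPLC with tag 0x9F7F is standard, while the victim-SE stand-in and its responses are fabricated placeholders):

```python
# GET DATA for the CPLC: CLA=0x80, INS=0xCA, P1P2 = tag 0x9F7F.
GET_CPLC = bytes([0x80, 0xCA, 0x9F, 0x7F, 0x00])

def relay(apdu: bytes, victim_se) -> bytes:
    """Every command the local TSM agent emits is shipped to the victim
    device and executed against its SE; the response comes back the same
    way. The CPLC substitution falls out for free: asking the victim SE
    for its identifier naturally returns the victim's value."""
    return victim_se(apdu)

# Stand-in for the malware's channel to the victim SE (fabricated data).
fake_victim_se = lambda apdu: b"victim-cplc" if apdu == GET_CPLC else b"\x90\x00"
assert relay(GET_CPLC, fake_victim_se) == b"victim-cplc"
```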

Such an attack is possible but comes with several caveats, not the least of which is that it stretches the definition of “attack” when the victim is getting a blank check to spend attacker funds. Of course the adversary can try to minimize costs by ensuring that the backing instruments do not have sufficient funds. In that case each purchase will still show up as a declined transaction, complete with merchant and amount. But that brings us to the second problem: the attack is not very stealthy. At best the attacker can write the victim a blank check, authorizing every transaction and hoping the victim will not notice that his or her own credit card is never charged.

Finally there is another problem confronting the attacker, glossed over above. After the new applets are installed, they still need to be configured with a PIN. Given access to the SE, the attacker can set any initial PIN desired by sending the appropriate incantation over the contact interface. But the existing user PIN is not known. (Recall that FakeID only permits malware to send its own commands to the SE; it does not allow spying on traffic sent by other applications.) Each time Google Wallet collects a PIN from the user and attempts to verify it, that value will be sent to the SE applets. If the PIN is wrong, the applet will respond with an error and also disable payments. Working around that requires additional assumptions about the malware, such as social-engineering the user into entering the PIN into a fake prompt, or masquerading as the legitimate wallet application instead of coexisting alongside it.


Denial of service is another easy category of attacks available when malware can exchange arbitrary traffic with the secure element. For example:

  • Locking the payment application by making too many failed PIN attempts. Resetting Google Wallet recovers from this state by deleting that instance and installing a new one.
  • Locking the SE itself by issuing a reset command to the controller applet. In later iterations of Google Wallet, the controller applet is installed with card-locking privileges. Following Global Platform, when the card is placed into the “locked” state, no application other than the card manager can be selected. This too is a fully reversible state. In fact it happens by design during wallet reset, with over-the-air commands from the TSM relayed to the SE to unlock it the next time the wallet is initialized.
  • “Bricking” the secure element.** Global Platform-compliant cards typically have a feature designed to defend against brute-forcing of card-manager keys: too many failed authentication attempts will permanently lock up the chip. (Note this is authentication to the card itself for managing all of its contents, as opposed to authentication to one particular applet such as NFC payments, which can be deleted and reinstalled.) Since a failed authentication is defined as an INITIALIZE-UPDATE command not followed by a successful EXTERNAL-AUTHENTICATE, malware can simply loop on the first command to run up the tally.
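The lockout behavior in the last bullet can be modeled as a simple counter. This Python sketch is a simulation of the semantics only; the retry limit of 10 is an assumption, and actual values and failure behavior are chip-specific:

```python
class CardManager:
    """Toy model of a Global Platform card manager's auth-failure counter."""

    RETRY_LIMIT = 10  # assumed for illustration; real limit varies by chip

    def __init__(self):
        self.failures = 0
        self.fused = False

    def initialize_update(self):
        if self.fused:
            raise RuntimeError("card is fused: no further authentication")
        # The counter is bumped up-front; only a subsequent successful
        # EXTERNAL-AUTHENTICATE in the same session would reset it.
        self.failures += 1
        if self.failures >= self.RETRY_LIMIT:
            self.fused = True

    def external_authenticate(self, knows_keys: bool):
        if knows_keys:
            self.failures = 0  # successful session resets the tally


# Malware never completes the handshake; it just loops INITIALIZE-UPDATE.
card = CardManager()
for _ in range(CardManager.RETRY_LIMIT):
    card.initialize_update()
assert card.fused  # chip is now permanently locked against management
```

Note the asymmetry this models: the legitimate TSM, which knows the keys, resets the counter on every session, while malware that merely starts handshakes drives it monotonically toward the limit.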


To summarize then: contrary to speculation about FakeID affecting NFC payments, there is no easy remote attack for getting sensitive financial data out of the embedded secure element, much less for conducting unauthorized transactions. (And that assumes there was something present in the SE worth stealing: Google Wallet no longer relies on secure element hardware for payments. Unless Sprint, Samsung or another enterprising OEM launches new NFC wallets, the hardware will lie dormant.) That said, there are legitimate denial-of-service concerns, including permanently and irreversibly locking up the secure element in a way that most consumers cannot easily recover from.


** Bricking appears in quotes above because the chip does not quite reach the TERMINATED state as defined by Global Platform. For example, NXP SmartMX goes into a “fused” state when the failure count exceeds the limit. Existing applets can still be selected and will continue to function; however, no new applications can be installed or deleted going forward. This is much better than a true “terminated” state, although the chip is still of greatly diminished utility for the user.

FakeID, Android NFC stack and Google Wallet (part III)

[continued from part II]

3. Secure element access

This is probably the most interesting scenario in mind when NFC payments are mentioned as a potential casualty of the Android FakeID vulnerability:

“This can result in a wide spectrum of consequences. For example, the vulnerability can be used by malware to escape the normal application sandbox and take one or more malicious actions: [...] gain access to NFC financial and payment data by impersonating Google Wallet; [...]“

Before exploring this particular threat vector, let’s reiterate that all Google Wallet instances have been using host-card emulation for NFC payments since Q4 2013. That includes both new installations and earlier ones migrated from the secure element architecture. With the SE completely out of the picture, the following discussion is largely hypothetical. Still, for the sake of this exercise, we will consider a concrete version of Google Wallet from two years back, circa August 2012, which did leverage the secure element. Could a malicious Android app access any financial information from that version using FakeID?

Interface detection, revisited

The first observation is that the secure element has two interfaces to the outside world: the “contact” (aka wired) interface attached to the host Android operating system, and the “contactless” route directly attached to the NFC antenna. All communication from host applications arrives via the first path, while external NFC readers communicate over the second. Relevant to the current problem, applications running in the SE can distinguish between these cases and behave differently. Informally this is known as interface detection or interface discrimination. It is one of the cornerstones of the security model against remote-relay attacks.

Payments happen over NFC only

Google Wallet applets are configured to only execute payments over the NFC interface. Barring implementation bugs in the applets’ use of interface detection, it is not possible for an Android application to simulate an external NFC reader and initiate a fraudulent transaction. This holds true even if malware attains root privileges, bypasses the NFC stack and interfaces directly with raw NFC hardware. Incidentally, the FakeID vulnerability does not permit that level of access; it only enables going through the legitimate NFC stack to exchange traffic with the SE.
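In practice, interface detection boils down to checking which medium the current command arrived on before deciding whether to honor it. The following Python sketch models that dispatch; in a real JavaCard applet the check would be done against the APDU's media type, and the command names here are invented for illustration:

```python
CONTACT, CONTACTLESS = "contact", "contactless"


def process_command(command: str, interface: str) -> str:
    """Toy dispatcher mirroring applet-side interface discrimination."""
    if command == "GENERATE_PAYMENT":
        # Payment commands are honored only when they arrive over NFC,
        # i.e. from a physical reader coupled to the antenna.
        if interface != CONTACTLESS:
            return "ERROR: conditions of use not satisfied"
        return "payment cryptogram"
    # Non-payment commands (status queries etc.) work on either interface.
    return "OK"


# Malware on the host can only ever reach the SE over the contact interface:
assert process_command("GENERATE_PAYMENT", CONTACT).startswith("ERROR")
# A genuine point-of-sale reader talks over the antenna:
assert process_command("GENERATE_PAYMENT", CONTACTLESS) == "payment cryptogram"
```

Because the interface is determined by the hardware path the bytes took, this check cannot be spoofed in software, no matter how privileged the malware.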

Information disclosure

While executing a payment or reading out track data containing the credit card number, expiration date etc. will not work, an attacker can still attempt to interact with the SE to gather information. As a starting point, it can survey SE contents by trying to select applets using their well-known application IDs, or AIDs. (It is not possible to directly enumerate installed applets; doing that requires authenticating to the SE with unique card-manager keys.) Each flavor of EMV payments for Visa, MasterCard, American Express and Discover has a unique AID that can be used to fingerprint whether a payment instrument of that brand exists. In the case of Google Wallet, only MasterCard is supported. The ability to use other payment networks is achieved via cloud-based indirection; the SE only contains a single “virtual” MasterCard, so there is no concept of backing instruments at that level. As such the only information gleaned will be whether the device is set up for tap & pay at all. Depending on the SE model, there is also an optional controller applet designed to provide additional constraints on the behavior of the payment applet, and its existence can be used to distinguish between versions of Google Wallet.
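Such a survey amounts to sending ISO 7816 SELECT-by-AID commands for each candidate brand. Here is a Python sketch of the APDU construction; the AIDs shown are the well-known MasterCard and Visa payment values, and the plumbing for actually delivering the bytes to the SE is omitted:

```python
def select_by_aid(aid_hex: str) -> bytes:
    """Build an ISO 7816-4 SELECT (by DF name) APDU for the given AID."""
    aid = bytes.fromhex(aid_hex)
    # CLA=00, INS=A4 (SELECT), P1=04 (select by name), P2=00, Lc, then AID.
    return bytes([0x00, 0xA4, 0x04, 0x00, len(aid)]) + aid


# Well-known payment AIDs to probe for. On the Wallet SE, only the
# MasterCard selection would succeed; the others return "file not found".
CANDIDATES = {
    "mastercard": "A0000000041010",
    "visa": "A0000000031010",
}

apdu = select_by_aid(CANDIDATES["mastercard"])
assert apdu.hex().upper() == "00A4040007A0000000041010"
```

A success or failure status word for each SELECT is exactly the one bit of fingerprinting information described above: whether a payment instrument of that brand is provisioned.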

After locating applets, malware could try interacting with them to gain additional information. But there is very little interesting data to retrieve over the contact interface.** There is a “log” of recent transactions, but it only contains success/failure state. No information about the amount paid or the merchant, let alone line-item data, is stored. (None of that information was even sent to the SE in the original PayPass mag-stripe profile. While the mobile variant MMPP added support for the terminal to relay the amount and currency code, that field is not reliable since it is not authenticated by the CVC3.) In the earliest iterations of Wallet, two different types of MasterCard were supported and the controller applet kept track of the currently active card. In a situation with multiple cards present, malware can observe which card is selected. More interestingly, it can modify the selection and cause the next transaction to be charged to a different card– still belonging to the current user– than the one originally picked.

Active attacks

The ability to change the actively selected card is not particularly useful, but it points toward other ways that SE access can be leveraged to tamper with the state of the payment applications, or for that matter the SE hardware itself. The final post in this series will look at denial-of-service risks as well as more subtle substitution attacks around replacing the user's wallet with one controlled by the adversary.



** Over NFC one can read out the full credit card number– that is how the mag-stripe profile of EMV payments works. But that capability is only available to external NFC readers and not to local malware, because of interface detection in the SE applets.

SSH with GoldKey tokens on OS X: fine-print (part III)

The second post in this series reviewed the basics of getting SSH working with GoldKey tokens. Here we focus on edge cases and Mac-specific problems.

Fine-print: compatibility with other applications

Invoking ssh from the command line is not the only use case for SSH authentication. Many applications leverage SSH tunnels to protect otherwise unauthenticated or weakly-authenticated connections. One example already covered is the source-control system git. git invokes ssh in a straightforward manner, with some customization possible via an environment variable. Because all of this happens in a terminal, the PIN prompt also appears inline in the same terminal session, and everything works as expected after the user enters the correct PIN.

Such automatic compatibility does not hold for more complex use cases, and depends to some extent on the way SSH authentication is invoked. Interestingly, GUI applications can also work with cryptographic hardware if they are designed properly. For example, Sequel Pro is a popular SQL client for OS X. It supports accessing databases over SSH tunnels through bastion hosts. When using a GoldKey for the SSH credential, the PIN prompt will appear in a modal dialog:

PIN prompt from Sequel Pro on OS X


This is courtesy of the custom ssh-agent built into OS X, which includes changes from Apple to interface with the system keychain for collecting the PIN/passphrase to unlock keys. Unfortunately this “value-added” version from Apple has a downside: it does not work correctly with hardware keys.

Macintrash strikes back: key-agent troubles

One problem quickly discovered by Airbnb engineers during this pilot concerned the SSH key-agent: adding credentials from the token to the SSH agent does not work. In fact, it is worse than simply failing: it fork-bombs the machine with copies of the ssh-pkcs11-helper process until no additional processes can be created. (pkill to the rescue, for terminating all of them by name.) No one can accuse Apple of having tested the enterprise scenario.

This creates two problems:

  1. Usability: the GoldKey PIN must be entered each time. Arguably this is beneficial for security; guidelines for PIV clients even discourage PIN caching, since it can lead to problems when a computer is left unattended with the token attached. But there is also an inconvenience element to frequent PIN entry. This is more of a concern when the token is used for scenarios such as accessing a git repo, where frequent server interactions are common. (SSH to multiple targets in quick succession is rare, although there are applications such as massively parallel SSH designed to do exactly that.)
  2. Functional limitation: Putting aside the security vs usability debate for PIN caching, there is a more fundamental problem with key-agent not supporting hardware tokens: it breaks scenarios relying on SSH agent-forwarding. For example, when the user connects to server A and from there tries connecting to server B, they need an agent running on their local machine and that agent to be forwarded to server A in order to authenticate to B using credentials on the GoldKey token. A less obvious example of “nested” connection involves logging into a remote machine and then executing a git pull there to sync a repository, which involves SSH authentication under the covers.

Luckily the limitation turns out to be a function of the openssh binaries included out of the box with Mavericks. Upgrading to 6.6p1 and making sure that all of the binaries executed– including ssh-agent and ssh-pkcs11-helper– point to this latest version solves the problem. After the upgrade, adding and removing the token via its PKCS#11 module works as expected:

$ ssh-add -s /usr/lib/opensc-pkcs11.so
Enter passphrase for PKCS#11: 
Card added: /usr/lib/opensc-pkcs11.so
$ ssh-add -e /usr/lib/opensc-pkcs11.so
Card removed: /usr/lib/opensc-pkcs11.so

The collateral damage, however, is losing PIN collection for GUI applications, such as the Sequel Pro example above. That functionality depends on the custom OS X keychain integration which the vanilla open-source ssh-agent lacks. When the application is invoked manually from the terminal, the PIN prompt appears on stdout instead. Not good.

One work-around is to manually add the key to the agent before invoking GUI applications. A better solution is provided by the Homebrew version of openssh, which combines the keychain patch with the latest openssh sources to satisfy both use cases.

The final post in this series will conclude with a discussion of setup, key-generation/credential provisioning and maintenance operations.



