Android Pay: proxy no more

[Full disclosure: this blogger worked on Google Wallet 2011-2013]

This blogger recently upgraded to Android Pay, one half of the mobile wallet offering from Google. The existing Google Wallet application retains functionality for peer-to-peer payments, while NFC payments move to the new Android Pay. At least that is the theory. In this case NFC tap-and-pay stopped working altogether. More interestingly, trying to add cards from scratch showed that this was not just a routine bug or testing oversight. It was a deliberate shift in product design: the new version no longer relies on “virtual cards” but instead integrates directly with card issuers. That means not all cards are created equal; only some banks are supported. (Although there may have been an ordinary bug too; according to Android Police, an exception is supposed to be made for existing cards already used for NFC payments.)

Not all cards are welcome in Android Pay

Let’s revisit the economic aspects of the virtual-card model and outline why this shift was inevitable. Earlier blog posts covered the technical aspects of virtual cards and how they are used by Google Wallet. To summarize: starting in 2012, Google Wallet did not in fact use the original credit card when making payments. Instead a virtual card issued by Google** was provisioned to the phone for use in all NFC transactions. That card is “virtual” in two senses. First, it is an entirely digital payment option; there is no plastic version that can be used for swipe transactions. (There is a separate plastic card associated with Google Wallet; confusingly, that card does not have NFC and follows a different funding model.) Less obvious, consumers do not have to fill out an application or pass a credit-history check to get the virtual card; in fact, it never shows up in their credit history. At the same time the card is very “real” in the sense of being a full-fledged MasterCard with a 16-digit card number, expiration date and all the other attributes that make up a standard plastic card.

Proxying NFC transactions with a virtual card

When the virtual card is used for a purchase at some brick-and-mortar retailer, the point-of-sale terminal tries to authorize the charge as it would for any other vanilla card. By virtue of how transactions are routed on payment networks such as Visa/MasterCard, that charge eventually reaches the “issuing bank”, which happens to be Google. In effect the network is asking the issuer: “Can this card be used to authorize payment of $21.55 to merchant Acme?” This is where the real-time proxying occurs. Before answering that question from the merchant, Google first attempts to place a charge for the same amount on one of the credit cards supplied ahead of time by that Google Wallet user. Authorization for the virtual-card transaction is granted only if the corresponding charge on the backing instrument goes through.

(Interestingly enough, that design makes the NFC virtual card a prepaid card on steroids. Typically a prepaid card is loaded ahead of time with funds, and that balance is later drawn down to pay for purchases. The virtual card does the same thing on a highly compressed timeline: the card is “funded” by charging the backing credit card, and the exact same balance is withdrawn to pay the merchant. The difference: it happens in a matter of seconds in order to stay within the latency requirements of payment networks.)
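The proxy decision described above can be sketched in a few lines; all names here are invented for illustration, and a real implementation would call out to the card network rather than inspect a dictionary:

```python
# Hypothetical sketch of the proxy authorization flow: approve the in-store
# virtual-card charge only if the matching backing charge clears first.

def charge_backing_card(card, amount_cents):
    """Stand-in for a card-not-present charge on the user's real card."""
    return card["available_cents"] >= amount_cents

def authorize_virtual_card(amount_cents, merchant, backing_card):
    """Answer the network's question: can this card pay this merchant?"""
    if not charge_backing_card(backing_card, amount_cents):
        return False  # backing charge declined, so decline the merchant too
    backing_card["available_cents"] -= amount_cents
    return True

card = {"available_cents": 5000}
print(authorize_virtual_card(2155, "Acme", card))  # True: $21.55 within $50.00
print(authorize_virtual_card(9999, "Acme", card))  # False: insufficient funds
```

The key property is the ordering: the virtual card never approves a charge that the backing instrument has not already funded.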

Why proxy?

So why all this complexity? Because it allows supporting any card without additional work required by the issuer. Recall that setting up a mobile NFC wallet requires provisioning a “payment instrument” capable of NFC transactions to that device. That is not just a matter of recording a bunch of details such as card number and CVV code once. Contactless payments use complex protocols involving cryptographic keys that generate unique authorization codes for each transaction. (This is what makes them more resistant to fraud involving compromised point-of-sale terminals, as in the case of Target and Home Depot breaches.) Those secret keys are only known to the issuing bank and can not be just read off the card-face by the consumer. That means provisioning a Citibank NFC card requires the mobile wallet to integrate with Citibank. That is how the original Google Wallet app worked back in 2011.

The problem is that such integrations do not scale easily, especially when there is a hardware secure element in the picture. There is a tricky three-way dance between the issuing bank who controls the card, mobile-wallet provider that authors the software on the phone and trusted service manager (TSM) tasked with managing over-the-air installation of applications on a family of secure elements. Very little about the issuer side is standardized, meaning that proliferating issuers also means a proliferation in the number of one-off integrations required to support each card.

Virtual cards cleanly solve that problem. Instead of having to integrate against incompatible provisioning systems from multiple banks, there is exactly one type of card provisioned to the phone. Integration with existing payment networks is instead relegated to the cloud, where ordinary online charges are placed against a backing instrument.

Proxy model: the downsides

While the design is interesting from a technical perspective and addresses one of the more pressing obstacles to NFC adoption, namely users stymied by their particular financial institution not being supported, it also has clear downsides.


As industry observers were quick to point out soon after launch, the proxy model is not economically viable at scale. Revisiting the picture above, there is a pair of transactions:

  1. Merchant places a charge against a virtual card
  2. Issuer of the virtual card places a charge for the identical amount against the consumer’s credit card

Both are going through credit-card networks. By virtue of how these networks operate, each one incurs a fee, which is divvied up among various parties along the way. While the exact distribution varies based on the network as well as the bargaining positions of issuer/acquirer banks, the issuing bank typically receives the lion’s share, but less than 100%. For the above transactions, the provider of the virtual card acts as issuer for #1 and as merchant for #2. Google can expect to collect fees from the fronting-instrument transaction while paying a fee for the backing charge.

In fact the situation is worse due to the different transaction types. The second charge is card-not-present or CNP, the same way online purchases are done by typing card numbers into a webpage. Due to the increased risk of fraud, CNP transactions carry higher fees than in-store purchases where the customer presents a physical card and signs a receipt. So even if one could recoup 100% of the fee from the fronting instrument, that would typically not cover the cost of the backing transaction. (In reality, the fees are not fixed; variables such as card type can greatly complicate the equation. For example, while the fronting instrument is always a MasterCard, the backing instrument could be an American Express, which typically boasts some of the highest fees, putting the service deeper into the red.)
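To make the arithmetic concrete, here is a sketch with invented fee rates; real interchange schedules vary widely by network, card type and merchant category, so these figures only illustrate the structure of the problem:

```python
# Illustrative-only numbers: the point is that the card-not-present fee paid
# on the backing charge exceeds the issuer's cut of the in-store fee.

amount = 100.00
card_present_fee_rate = 0.015   # fee on the fronting (in-store) charge
issuer_share = 0.80             # fraction of that fee reaching the issuer
cnp_fee_rate = 0.022            # higher card-not-present fee on backing charge

income = amount * card_present_fee_rate * issuer_share  # Google as issuer of #1
cost = amount * cnp_fee_rate                            # Google as merchant of #2
print(f"income ${income:.2f}, cost ${cost:.2f}, net ${income - cost:+.2f}")
# net is negative under these made-up rates: a loss on every transaction
```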

Concentrating risk

The other problem stems from in-store and CNP modes having different consequences in the event of disputed transactions. Suppose the card-holder alleges fraud or otherwise objects to a charge. For in-store transactions with a signed receipt, the benefit of the doubt usually goes to the merchant, and the issuing bank is left eating the loss. For card-not-present transactions, where merchants can only perform minimal verification of the card-holder, the opposite holds: the process favors the consumer and the merchant is left holding the bag. Looking at the proxy model:

  • Google is the card issuer for an in-store purchase
  • Google is the merchant of record for the online, card-not-present charge against the backing instrument

In other words, the model also concentrates fraud risk with the party in the middle.

Great for users, bad for business?

In retrospect the virtual-card model was a great boon for consumers. From the user perspective:

  • You can make NFC payments with a credit card from any bank, even if your bank is more likely to associate NFC with football
  • You can use your preferred card, even at merchants who claim not to accept it. A merchant may not honor American Express, but the magic of virtual cards has a MasterCard channeling that AmEx. Meanwhile the merchant has no reason to complain, because they are still paying ordinary commissions rather than the higher AmEx fee structure.
  • You continue earning rewards from your credit card. In fact that applies even to category-specific rewards, thanks to another under-appreciated feature of the proxy model. Some rewards programs are specific to merchant types, such as getting 2x cash-back only when buying gas. To retain that behavior, the proxy model can use a different merchant-category code (MCC) for each backing charge. When the virtual card is used to pay at a gas station, Google can relay the appropriate MCC for the backing transaction.

From a strategic perspective, these subsidies can be lumped in with customer-acquisition costs, similar to Google Wallet offering a free $10 when it initially launched, or Google Checkout doing the same a few years earlier. Did picking up the tab for three years succeed in bootstrapping an NFC ecosystem? Here the verdict is mixed. Virtual cards successfully addressed one of the major pain-points with early mobile wallets: limited selection of supported banks. Unfortunately there was an even bigger adoption blocker for Google Wallet: limited selection of wireless carriers. Because Verizon, AT&T and T-Mobile had cast their lot with the competing (and now failed) ISIS mobile wallet, these carriers actively blocked alternative NFC solutions on their devices. The investment in virtual cards paid limited dividends because of a larger reluctance to confront wireless carriers. It’s instructive that Apple pursued a different approach, painstakingly working to get issuers on board to cover a large percentage of cards on the market, but decidedly less than the 100% coverage achievable with virtual cards.


** More precisely issued by Bancorp under contractual agreement with Google.

Getting by without passwords: local login (part II)

[Part of a series on getting by without passwords]

Before diving into implementation details, a word of caution: local login with hardware tokens is more of a convenience feature than a security improvement. Counter-intuitive as that might sound, using strong public-key cryptography buys very little against the relevant threat model for local access: proximity to the device. A determined adversary who has already gotten within typing distance of the machine and is staring at a login prompt does not need to worry about whether it is requesting a password or some new-fangled type of authentication. That screen can be bypassed altogether, for example by yanking the drive out of the machine, connecting it to another computer and reading all of the data. Even information stored temporarily in RAM is not safe against cold-boot attacks or malicious USB devices that target vulnerable drivers. For these reasons there is a very close connection between local access-control and disk encryption. Merely asking for a password to login is not enough when that is a “discretionary” control implemented by the operating system, “discretionary” in the sense that all of the data on disk is accessible without knowing that password. It would be trivially bypassed by taking that OS out of the equation and accessing the disk as a collection of bits without any of the OS-enforced access controls.

For these reasons local smart-card logon is best viewed as a usability feature. Instead of long and complex passwords, users type a short PIN. That applies not only for login, but every time the screen is unlocked after the screen-saver kicks in due to inactivity. By contrast, applying the same formula to remote authentication over a network does represent a material improvement: it mitigates common attacks that can be executed without proximity to the device, such as phishing or brute-forcing passwords.

With that disclaimer in mind, here is an overview of login with hardware tokens across three common operating systems. Emphasis is on finding solutions that are easily accessible to an enthusiast; no recompiling of kernels or switching to an entirely different Linux distribution. One note on terminology: the discussion will use “smart-card” as a catch-all phrase to refer to cryptographic hardware, with the understanding that its physical manifestation can assume many forms, including cards, NFC tags and even smart-phones.


Windows

Windows supports smart-card login using Active Directory out-of-the-box. AD is the MSFT solution for centralized administration of machines in a managed environment, such as a large company or branch of government with thousands of devices being maintained by an IT department. While AD bundles many different features, the interesting one for our purposes is the notion of centralized identity. While each machine joined to a particular AD domain still has local accounts, accounts that are only meaningful to that machine and not recognized anywhere else, it is also possible to login using a domain account that is recognized across multiple devices. That involves verifying credentials with the domain controller using Kerberos. Kerberos in turn has an extension called PKINIT to complete the initial authentication using public-key cryptography. That is how Windows implements smart-card logon.

That’s all good and well for enterprises, but what about the average home user? A typical home PC is not joined to an AD domain; in fact the low-end flavors of Windows typically bundled with consumer devices are not even capable of being joined to a domain. It’s one of those “enterprise features” reserved for the more expensive SKUs of the operating system, a great example of discriminatory pricing to ensure that companies don’t get away with buying one of the cheaper versions. Even if the user were running the right OS SKU, they would still need some other machine running Windows Server to act as the domain controller. In theory one can devise kludges, such as a local virtual machine running on the same box to serve as the domain controller, but these fall short of our criteria of being within reach of a typical power-user.

Enter third-party solutions. These extend the Windows authentication framework for local accounts to support smart cards without introducing the extra baggage of Active Directory or Kerberos. A past blog post from January 2013 described one solution using eIDAuthenticate.

eIDAuthenticate entry

Choosing a certificate from attached card

eIDAuthenticate allows associating a trusted certificate with a local Windows account via the control panel. The user can later login to the account using a smart-card that has the corresponding private-key for that certificate. (eIDAuthenticate can also disable the password option and require smart-cards. But the same effect can be obtained less elegantly by randomizing the password.)

eIDAuthenticate setup— trial run

Using smart-card for local logon


OS X

This case is already covered in a post from February on smart-card authentication in OS X which includes screenshots of the user-experience.

Linux / BSD

Unix flavors have by far the most complete story for authentication with smart-cards. Similar to the extensible credential-provider architecture in Windows, Linux has pluggable authentication modules, or PAMs. One of these is pam-pkcs11, for using hardware tokens. As the name implies, this module relies on the PKCS #11 standard as an abstraction for interfacing with cryptographic hardware. That means whatever gadget we plan to use must have a PKCS #11 module. Luckily the OpenSC project ships modules covering a variety of hardware, including the PIV cards of interest for our purpose. The outline of a solution combining these raw ingredients looks like:

  • Install necessary packages for PC/SC, OpenSC & pam-pkcs11
  • Configure system to include the new PAM in the authentication sequence
  • Configure certificate mappings for pam-pkcs11 module
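On a Debian-derived system, the first two steps might look like the following; package names and configuration paths are illustrative and can differ across distributions:

```
# Install middleware: PC/SC daemon, OpenSC and the PAM module
sudo apt-get install pcscd opensc libpam-pkcs11

# Then add the module to the PAM authentication stack,
# e.g. near the top of /etc/pam.d/common-auth:
#
#   auth  sufficient  pam_pkcs11.so
```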

While the first two steps are straightforward, the last one calls for a more detailed explanation. Going back to our discussion of smart-card login on OS X, the process breaks down into two stages:

  • Mapping: retrieving a certificate from the smart-card, verifying its validity and determining which user that certificate represents
  • Proof-of-possession: Verifying that the user holds the private-key corresponding to the public-key in the certificate. Typically this involves prompting for a PIN and asking the card to perform some cryptographic operation using the key.
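The two stages above can be sketched in miniature. The names below are invented, and this uses textbook RSA with tiny numbers purely for exposition, never for real use:

```python
# Stage 1 maps a credential to a user; stage 2 is a challenge-response
# proving the card actually holds the matching private key.
import secrets

n, e, d = 3233, 17, 413            # n = 61*53; (e*d) mod lcm(60, 52) == 1
registry = {(n, e): "alice"}       # whitelist binding a public key to a user

def map_certificate(pubkey):
    """Stage 1 (mapping): which user does this credential represent?"""
    return registry.get(pubkey)

def prove_possession(pubkey, sign):
    """Stage 2: fresh challenge, signed by the card, checked with the public key."""
    challenge = secrets.randbelow(pubkey[0])
    return pow(sign(challenge), pubkey[1], pubkey[0]) == challenge

user = map_certificate((n, e))
ok = prove_possession((n, e), lambda c: pow(c, d, n))  # the "card" holds d
print(user, ok)
```

Real implementations use full-size keys and standardized signature padding, but the division of labor between the two stages is the same.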

While there is some variety in the protocol used for step #2 (even within PKINIT there are two different ways to prove possession of the private-key), the biggest difference across implementations concerns the first part. Case in point:

  • Active Directory determines the user identity from attributes of the certificate, such as the email address or User Principal Name (UPN). Note that this is an open-ended mapping. Any certificate issued by a trusted certificate authority with the right set of fields will do the trick. There is no need to register each such certificate with AD ahead of time. If a certificate expires or is revoked, a new one can be issued with the same attributes without having to inform all servers about the change.
  • eIDAuthenticate has a far more rudimentary, closed-ended model. It requires explicitly enumerating certificates that are trusted for a local account.
  • Ditto for OSX, which white-lists recognized certificates by hash but at least can support multiple certificates per account.**

Flexible mapping

By contrast, pam-pkcs11 has by far the most flexible and well-developed model. In fact it is not a single model so much as a collection of different mapping options. These include the OSX/eIDAuthenticate model as a special case, namely enumerating trusted certificates by their digest. Other mappers allow trusting an open-ended set of credentials, such as any certificate issued by a specific CA with a particular email address or distinguished name, effectively subsuming the Active Directory trust model. But pam-pkcs11 goes far beyond that. There are mappers for interfacing with LDAP directories and even a mapper for integrating with Kerberos. In theory it can even function without X509 certificates at all; there is an SSH-based mapper that looks at raw public-keys retrieved from the token and compares them against the user’s authorized_keys file.
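As an illustration, a trimmed excerpt of what /etc/pam_pkcs11/pam_pkcs11.conf might contain when combining the digest and mail mappers; exact option names and defaults should be checked against the example configuration shipped with the package:

```
pam_pkcs11 {
  use_pkcs11_module = opensc;
  use_mappers = digest, mail;

  # OSX/eIDAuthenticate-style: whitelist individual certificates by hash
  mapper digest {
    module = internal;
    algorithm = "sha1";
    mapfile = file:///etc/pam_pkcs11/digest_mapping;
  }

  # Active Directory-style: trust any valid certificate bearing the email
  mapper mail {
    module = internal;
    mapfile = file:///etc/pam_pkcs11/mail_mapping;
    ignorecase = true;
  }
}
```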

UI after detecting smart-card

Command line smart-card usage

Assuming one or more mappers are configured, smart-card login changes the user experience for initial login, screen unlocking and command-line login, including sudo. Screenshots above give a flavor of this on Ubuntu 14.


** OSX also appears to have hard-coded rules around what types of certificates will be accepted; for example, self-signed certificates and those with an unrecognized critical extension are declined. That makes little sense, since trust in the key does not come from X509 (unlike the case of Active Directory) but from explicit whitelisting of individual public-keys. The certificate is just a glorified container for public keys in this model.

Getting by without passwords: the case for hardware tokens


  1. OSX local login sans passwords
  2. Linux local login sans passwords
  3. LUKS disk-encryption sans passwords
  4. PGP email encryption sans passwords
  5. SSH using PIV cards/tokens sans passwords

For an authentication technology everyone loves to hate, there is still plenty of activity around passwords:

  • May 7 was international password day. Sponsored by the likes of MSFT, Intel and Dell, the unofficial website urges users to “pledge to take passwords to the next level.”
  • An open competition to select a better password hashing scheme has recently concluded and crowned Argon2 as the winner.

Dedicated cryptographic hardware

What better time to shift gears from an endless stream of advice on Choosing Better Passwords? This series of posts will look at some pragmatic ways one can live without passwords. “Getting rid of passwords” is a vague objective and calls for some clarification— lest trivial/degenerate solutions become candidates. (Don’t like typing in a password? Enable automatic login after boot and your OS will never prompt you.) At a high level, the objective is: replace use of passwords by compact hardware tokens that both improve security and reduce cognitive burden on users.

The scope of that mission goes beyond authentication. Typically passwords are used in conjunction with access control: logging into a website, connecting to a remote computer etc. But there are other scenarios: for example, full-disk encryption or protecting an SSH/PGP key stored on disk typically involves typing a passphrase chosen by the user. These are equally good candidates in search of better technology. For that reason we focus on using cryptographic hardware, instead of biometrics or “weak 2-factor” systems such as OTP, which are trivially phishable. Aside from their security deficiencies, those are only usable for authentication and by themselves cannot provide a full suite of functionality such as data encryption or document signing.

Hardware tokens have the advantage that a single token can be sufficient to displace passwords in a variety of scenarios, and even across multiple instances of the same scenario, such as logging into multiple websites each with their own authentication system. (In other words, no identity federation or single sign-on; otherwise the solution is trivial.) While reusing passwords is a dangerous practice that users are constantly cautioned against, reusing the same public-key credentials across multiple sites presents minimal risk.

It turns out the exact model of hardware token or its physical form-factor (card vs USB token vs mobile-device vs wearable) is not all that important, as long as it implements the right functionality, specifically public-key cryptography. More important for our purposes is support from commodity operating systems with device-drivers, middleware etc. to provide the right level of interoperability with existing applications. The goal is not overhauling the environment from top to bottom by replacing every app, but working within existing constraints to introduce security improvements. For concrete examples here, we will stick with smart-cards and USB tokens that implement the US government PIV standard, which enjoys widespread support across both proprietary and open-source solutions.

PIN vs password: rearranging deck chairs?

One of the first objections might be that such tokens typically have a PIN of their own. In addition to physical possession of the token, the user must supply a secret PIN to convince it to perform cryptographic operations. That appears to contradict the original objective of getting rid of passwords. But there are two critical differences.

First, as noted above, a single hardware token can be used for multiple scenarios. For example it can be used to log in to any number of websites, whereas sharing a password across multiple sites is a bad idea. In that sense the user only has to carry one physical object and remember one PIN.

More importantly, the security of the system is much less dependent on the choice of PIN than in single-factor systems based on a password. The PIN is only stored on and checked by the token itself; without physical possession of the token it is meaningless. That is why short numeric codes are deemed sufficient: the bulk of the security is provided by tamper-resistant hardware managing complex, random cryptographic keys that users could not be expected to memorize. You will not see elaborate guidelines on choosing an unpredictable PIN by combining random dictionary words. The PIN is only an incremental barrier gating logical access, after the much stronger requirement of physical access to the hardware. (Strictly speaking, “access” includes the possibility of remote control over a system connected to the token; in other words, compromising a machine where the token is used. While this is a very realistic threat model, it still relies on the user physically connecting their token to an attacker-controlled system.)

Here is a concrete example comparing two designs:

  • First website uses passwords to authenticate users. It stores password hashes in order to be able to validate logins.
  • Second website uses public-key cryptography to authenticate users. Their database stores a public-key for each user. (Corresponding private-key lives on a hardware token with a PIN, although this is an implementation detail as far as the website is concerned.)

Suppose both websites are breached and bad guys walk away with the contents of their databases. In the first scenario, the safety of a user account is at least partially a function of their skill at choosing good passwords. You would hope the website used proper password-hashing to make life more difficult for attackers, by making each guess costly to verify. But there is a limit to that game: the costs of verifying hashed passwords increase alike for attackers and defenders. Some users will pick predictable passwords, and given enough computing resources these can be cracked.

In the second case, the attacker is out of luck. The difficulty of recovering a private-key from the corresponding public-key is the mathematical foundation on which modern cryptography rests. There are certainly flaws that could aid such an attack: for example, weak randomness during key generation has been implicated in producing predictable keys. But those factors are a property of the hardware itself and independent of user skill. In particular, the quality of the user’s PIN, whether it was 1234 or 588429301267, does not enter into the picture.
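The contrast can be made concrete with a sketch. The hash and toy key below are for illustration only; a real site would also salt and use a deliberately slow hash, which delays but does not prevent this outcome for weak passwords:

```python
# What a database breach yields under each design.
import hashlib

# Site 1 stores a password hash; a weak password falls to offline guessing.
stolen_hash = hashlib.sha256(b"hunter2").hexdigest()
wordlist = [b"123456", b"password", b"hunter2"]
cracked = next((g for g in wordlist
                if hashlib.sha256(g).hexdigest() == stolen_hash), None)
print(cracked)  # the password is recovered from the dump alone

# Site 2 stores only a public key. The dump gives the attacker nothing to
# guess at: recovering the private key means breaking the underlying math,
# and the user's PIN never appears in the database at all.
stolen_pubkey = (3233, 17)   # toy values; real keys are thousands of bits
```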

User skill at choosing a PIN only becomes relevant if an attacker gains physical access to the token. Even in that scenario, attacks against the PIN are far more difficult.

  • Well-designed tokens implement rate limiting, so it is not possible to try more than a handful of guesses via the “official” PIN verification interface.
  • Bypassing that avenue calls for attacking the tamper-resistance of the hardware itself. This is certainly feasible given proper equipment and sufficient time. But assuming the token had appropriate physical protections, it is a manual, time-consuming attack that is far more costly than running an automated cracker on a password dump.
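The effect of that retry counter can be illustrated with a toy model; the class below is invented for illustration and is not a real token API:

```python
# Token-side PIN rate limiting: a retry counter enforced by the hardware,
# so guessing cannot be scaled up the way offline password cracking can.
class Token:
    def __init__(self, pin, max_tries=3):
        self._pin = pin
        self._max = max_tries
        self._tries_left = max_tries

    def verify_pin(self, guess):
        if self._tries_left == 0:
            raise RuntimeError("token locked: retry counter exhausted")
        if guess == self._pin:
            self._tries_left = self._max   # success resets the counter
            return True
        self._tries_left -= 1
        return False

token = Token("4821")
print([token.verify_pin(g) for g in ("0000", "1111", "4821")])  # [False, False, True]
```

After the counter is exhausted, further attempts fail outright; real cards typically require an unblock code or become permanently unusable at that point.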

Starting out local

With that context, the next series of posts will walk through examples of replacing passwords with a hardware token in each scenario. In keeping with the maxim “be the change you want to see in the world,” we focus on use-cases that can be implemented unilaterally by end-users, without requiring other people to cooperate. If you want to authenticate to your bank using a hardware token and their website only supports passwords, you are out of luck; there isn’t much that can be done to meaningfully implement a scheme offering comparable security within that framework. You could build a password manager that uses credentials on the card to encrypt the password, but at the end of the day the protocol still involves submitting the same fixed secret over and over. By contrast, local uses of hardware tokens can be implemented without waiting for any other party to become enlightened. Specifically we will cover:

  1. Logging into a local machine. Spoiler alert: due to how screen unlocking works, this also answers that vexing question of “how do I unlock my screen using NFC?”
  2. Full-disk encryption
  3. Email encryption and signing with PGP

Also worth pointing out: SSHing to a remote server with a PIV token was covered earlier for OSX. Strictly speaking this is not “local” usage, but it satisfies the criterion of not requiring changes to systems outside our control. Assuming the remote server is configured for public-key authentication, how users manage the private-key on their end remains transparent to other peers.


[Updated 05.14.16 with table of contents]