Sprints and marathons: why developer interviews miss the mark


The New York City marathon is one of the preeminent long-distance races in the world. Not surprisingly, it comes with stringent qualification criteria, based on having completed another full or half marathon at an eligible event. Imagine an alternate universe instead: runners qualify for the marathon based on their 100-meter sprint times. No doubt some of the usual entrants could still meet this unusual bar. But there would also be many false positives: people who have no problem sprinting blazingly fast over short distances but run out of fuel and drop out of the race after a couple of miles. There would also be many false negatives— remarkable endurance athletes who never make it to the start line because the qualifiers screened for a criterion largely uncorrelated with what they are expected to perform.

That absurd hypothetical is not far from the state of the art in interviewing software engineers at leading technology companies. This blogger is certainly not the first, or the most eloquent, to offer a scathing criticism of the prevailing paradigm for conducting developer interviews. (In fact he was a happy beneficiary of that broken model early on, one of its false positives.) But faith in measuring candidates by their performance on contrived problems remains unshakable in many corners of the industry, from garden-variety startups in the Valley to hedge funds in NYC.

Introducing “The Problem”

With minor variations, the setup is the same everywhere. The candidate walks into a room. An interviewer is sitting there, occasionally flanked by a shadow: a silent colleague who is there to observe the proceedings before he/she can conduct similar interrogations in the future. They engage in nervous chit-chat and pleasantries for a few minutes, before the interviewer walks over to a white-board and presents the candidate with “The Problem.” Invariably The Problem has two aspects— in fact novice interviewers often conflate the two when providing feedback about the candidate:

  • Algorithmic: This is the abstract logic piece, focused on puzzle-solving. It skews heavily towards the theory side, calling for insights into the underlying structure of a problem (“this is just an instance of network flows in disguise”) and benefits from familiarity with algorithms.
  • Coding: Translating the abstract solution into code expressed in a programming language. Oftentimes the choice of language is dictated by the company (“We are strictly a Rails shop and require that you are already familiar with Ruby.”) In more enlightened circumstances, it may be negotiated between interviewer and candidate, on the reasonable assumption that a good developer can easily become proficient in a new language or framework on the job.

Sometimes code is written on a white-board, without much attention paid to formatting or the occasional bogus syntax. Other times candidates are seated in front of an actual computer, provided some semblance of a work environment and asked to produce working code. (There are even websites such as CoderPad designed for remote coding interviews, with fancy syntax highlighting in several popular languages.) This is supposed to be an improvement both for the accuracy of the interview and the “user-friendliness” of the process for the candidate. On the one hand, with a real compiler scrutinizing every semi-colon and variable reference, the candidate must produce syntactically valid code. No sketching vague notions in pseudo-code. At the same time an IDE makes it much easier to write code the way it is normally done: jumping back and forth, moving lines around, deleting them and starting over, as opposed to strictly top-to-bottom on a white-board. (Of course it is difficult to replicate the actual workspace an engineer is accustomed to. It involves everything from the available screen real estate: some will swear by multiple monitors, others prefer a single giant monitor rotated into portrait mode. Experienced developers can be surprisingly finicky about that, having over-adapted to a very specific setup. Try asking an Emacs user to work in Vim.)


Putting aside the logistics of catering to developer preferences, what is the fundamental problem with this approach?

First, the supply of good problems is limited. An ideal instance of “The Problem” provides more than a binary success/failure outcome. It permits a wide range of solutions, from the blindingly obvious and inefficient to the staggeringly elegant/difficult/subtle. A problem with a single “correct” solution requiring a flash of insight is a poor signal, similar to a pass/fail grade. Far more useful are problems with complex trade-offs (faster but using more memory, slower but permitting parallelization, etc.) where candidates can make progress incrementally and continue to refine their answers throughout.
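
To make those trade-offs concrete, here is a toy example, far simpler than a realistic interview question: decide whether any two numbers in a list sum to a given target. A quick Python sketch of three solutions occupying different points on the curve:

    # Toy interview problem: does any pair in `xs` sum to `target`?
    # Three solutions at different points on the trade-off curve.

    def has_pair_bruteforce(xs, target):
        """Blindingly obvious: O(n^2) time, O(1) extra memory."""
        for i in range(len(xs)):
            for j in range(i + 1, len(xs)):
                if xs[i] + xs[j] == target:
                    return True
        return False

    def has_pair_hashset(xs, target):
        """Faster but hungrier: O(n) time, O(n) extra memory."""
        seen = set()
        for x in xs:
            if target - x in seen:
                return True
            seen.add(x)
        return False

    def has_pair_twopointer(xs, target):
        """Middle ground: O(n log n) time, sorts a copy of the input."""
        xs = sorted(xs)
        lo, hi = 0, len(xs) - 1
        while lo < hi:
            s = xs[lo] + xs[hi]
            if s == target:
                return True
            if s < target:
                lo += 1
            else:
                hi -= 1
        return False

A candidate can start with the brute-force version and refine from there, which is exactly the kind of incremental progress a pass/fail puzzle never reveals.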

Second, there is an arms race between interviewers coming up with questions and websites trying to prep candidates by publishing those questions. At Google we used to have an internal website where employees could submit these problems and others could vote on them. Some of the best ones inevitably got stamped with a big red “banned” after they had been leaked on interview websites such as Glassdoor. That even gives rise to the occasional outright dishonest performance: the candidate who professes never to have encountered your question, struggles through the interview and yet amazingly proceeds to parrot out the exact solution from the website at the last minute.

Sprinting vs running marathons

Focusing on the difficulty of choosing an “optimal interview problem” is still missing the forest for the trees. A fundamental problem is that the entire setup is highly contrived and artificial, with no relationship to the day-to-day grind of software development in commercial settings.

To put it bluntly: most commercial software development is not about advancing the state of the art in algorithms or bringing new insights into complex problems in theoretical computer science— the type of “puzzle solving” celebrated at nano-scale by interview problems. That is not to say interesting problems never come up. But to the extent such innovations happen, they are motivated by work on real-world applications, and their solution is the result of deliberate, principled work in that domain. Few engineers accidentally stumble into a deep theory question while fixing random bugs and then proceed to innovate their way to a solution (or, failing that, give up on the challenge) in 45 minutes.

Much larger chunks of developer time are spent implementing solutions to problems that are well-understood and fully “solved” from a theoretical perspective. Solved in quotes, because a robust implementation makes all the difference between a viable, successful product and one that goes nowhere. This is where the craft of engineering shines. That craft calls for a much broader set of skills than abstract puzzle-solving. It means working within pragmatic constraints, choosing to build on top of existing frameworks, however buggy or quirky they may be. (That is the antithesis of writing a solution from scratch out of thin air. So-called greenfield projects, where one gets the luxury of a clean slate, are the exception rather than the norm in development.) It means debugging: investigating why a piece of code, already written by someone else with an entirely different approach, is not performing as expected. It means plenty of bug fixes and small tweaks. Making judgment calls on when to rewrite faulty components from scratch and when to continue making scoped, tactical fixes until there is more room in the schedule for an overhaul. Last but not least: working as part of a team, with complex dependencies and ever-shifting boundaries of responsibility.

Commercial software engineering is an endurance event over a highly uneven and unpredictable terrain full of obstacles, unfolding on the scale of weeks and months. (It is no accident that one of the earliest and better known books about software project management was titled “Death March.”) Many fashionable paradigms have come and gone— extreme programming, test-driven development, scrum with its emphasis on sprints— but none have altered the fundamental dynamics. Programming interviews by contrast are sprints on a perfectly flat indoor track, no matter how hard they strive to recreate a superficial similarity to actual software engineering.

The mismeasure of a developer

Why does the industry continue to evaluate engineers in this artificial way? One explanation is that the process is a relic of the 1990s, pioneered by companies such as Microsoft before there was a better way to evaluate the output of a developer.

In the days of shrink-wrap software, it was difficult to evaluate individual contributions in isolation from the larger project that person worked on. Imagine a candidate with resume credits on very large-scale applications such as MSFT Word or IBM OS/2. (Putting aside the question of how one feels about the relative quality of either of those products.) Thousands of people contributed to such a large code base. Did the engineer play a pivotal role, or did they simply tweak a few lines of code at the periphery and fix a few inconsequential bugs? The opaqueness of proprietary development ensures this will be difficult to ascertain. Meanwhile the Dunning-Kruger effect makes even the most sincere candidate self-evaluation suspect. It is not only successes that are difficult to highlight; failures are also easily swept under the rug. If you were the engineer in charge of enthusiastically integrating Clippy into MSFT Office, or your code was riddled with critical security vulnerabilities that were later patched by other people, those details may be suspiciously absent from your resume.

Proprietary development has not gone away, but the good news is that there is a growing amount of open-source software and it is easier to find than ever. It is no longer a matter of downloading tarballs of source code from an obscure FTP site. With a GitHub handle listed on a resume, it is possible to browse and search through an individual developer’s contributions in a very systematic fashion. Those public repositories may still be the proverbial tip of the iceberg relative to total output: for many people, open-source contributions remain a small fraction of the work undertaken on private, commercial products. But they still paint a far more accurate picture of that person’s talent as a developer: someone engaged in the craft of writing code, as opposed to solving logic puzzles. That is a picture no 45-minute whiteboard interview can do justice to.



Who actually pays for credit card fraud? (part III)


The previous two posts [part I, part II] reviewed the counter-intuitive dynamics of how payment networks externalize and disperse fraud by setting issuers and merchants against each other over direct losses. Those actors in turn incorporate expected losses into the cost of services rendered to consumers.

As long as fraud costs are predictable and already factored into existing prices, they can be viewed as a cost of doing business. Granted, such losses incorporate an element of chance and perfect projections are not possible. Some years losses may come in below projections, adding to the bottom line, while in other years “black-swan” events such as the Target breach result in much higher losses than expected. But as long as the risks are manageable, the result is a stalemate, one that goes a long way towards explaining why card networks have not been particularly motivated to institute a crash upgrade to EMV across the board. Issuers and merchants are locked into an archaic technology— the ubiquitous magnetic stripe and CVC2 codes— trivially defeated by vanilla PC malware, card-skimmers, compromised point-of-sale terminals and phishing scams. Yet the cost of upgrading point-of-sale terminals (and payment-processor connections— this is not a simple matter of attaching an NFC reader and relying on backwards compatibility) may exceed the projected reduction in fraud. As late as 2012 Glenbrook stated:

“In the US the common assumption is that the current level of counterfeit card fraud is too low to merit an industry-wide investment in EMV technology.”

Placing incentives on the right actors

That brings up the question of who should be paying to improve security and reduce fraud in the system. There is a certain logic to exempting card-holders from the fight over who absorbs fraud losses: there is very little individuals can do to reduce risk. In fact most “intuitive” responses can be counter-productive for the overall payments ecosystem: avoiding plastic in favor of cash— as consumers were reportedly doing in response to recent breaches— may indeed reduce fraud figures, but only at the expense of issuer and network revenue. (It may also increase cash losses, which unlike credit-card fraud are not distributed across other individuals in the risk pool. As consumers in Greece noticed, when households began to stockpile cash and valuables, thefts and burglaries increased.)

Issuers and acquirers have more direct control over the risk levers in a given transaction. The issuing bank has the final say in authorizing every transaction, with knowledge of the amount and some notion of the merchant involved. To improve their odds, issuers develop or, more commonly, outsource sophisticated risk-management systems that sift through the haystack of card-holder transactions in real-time and flag suspicious patterns. Similarly merchants can institute policies based on their level of risk tolerance. Examples for retail outlets include:

  • Accepting thresholds set by networks for when signatures are required. Starbucks prefers to keep the line moving for smaller amounts, since quick customer service is critical in a volume business, but takes time to pause on larger purchases
  • Requiring a valid government-issued ID with a name matching the payment card
  • Prompting for additional card-holder information that can be checked during the authorization process. Gas station pumps requesting a ZIP code are the canonical example

Likewise e-commerce merchants subject to card-not-present risks can set certain policies such as:

  • Collecting CVC2 codes for additional card verification (Interestingly many large merchants including Amazon did not do this for the longest time)
  • Not shipping to PO boxes. These are easy to obtain and favored by shipping-mules to hide their true address
  • Checking billing address during authorization against information entered by the customer
  • Requiring additional verification when shipping address is different from billing address

All of these are actionable for the issuer/merchant and, more importantly, decisions can be made independently by each actor. Set the rules too conservatively, and legitimate customers are driven away because their purchases are declined. Set them too liberally, and an entire underground crew of professional crooks decides to prey on that merchant.


Clearly something did change in the intervening years, because Visa and MasterCard set a deadline of October 1st, 2015 for EMV adoption in the US. The timing of the announcement coincided with the aftermath of large-scale breaches at Target and Home Depot. While correlation is not causation, one could speculate that card networks capitalized on the climate of heightened fear among consumers to accelerate an EMV agenda that had been slow to gain traction until that point. Mainstream media also latched onto the simplistic rhetoric that chip-and-PIN equals no more data breaches, creating a perfect environment to push EMV migration. With merchants backed into a corner, too busy explaining why their shoddy security resulted in frequent data breaches, there was no room for the type of careful cost/benefit analysis that has otherwise been the hallmark of risk management in payments.

The so-called “upgrade” plan itself took the form of an ultimatum to issuers and merchants. Past the deadline, the rules for disputing in-store transactions change. If the issuing bank has provided the customer with a chip&PIN card but the merchant terminal is only capable of swipe transactions, it is now the merchant who gets to eat the loss if that charge is later disputed by the card-holder. Conversely, in those few situations where the merchant would have been on the hook— such as skimping on signed receipts in the interest of quick customer turnover— the bank may be stuck with the loss if the merchant POS has been upgraded to process EMV transactions but the customer's card only has swipe capability. (Interestingly enough, if neither side has upgraded then business-as-usual rules apply. In that sense not upgrading is the optimal outcome for both merchant and issuer when viewed as a prisoner's dilemma, but the threat that the other side may “defect” inspires both to settle for the more costly option of going through with the upgrade.)
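
Those dispute rules fit in a small decision table. A simplified Python sketch (real network rules have many more special cases than this):

    # Simplified sketch of post-deadline liability for a disputed
    # in-store charge, keyed on who upgraded to EMV. Real network
    # rules have many more special cases.
    LIABLE = {
        # (card has chip, terminal accepts chip): who eats the loss
        (False, False): "business as usual",  # neither upgraded
        (True,  False): "merchant",           # issuer upgraded, merchant did not
        (False, True):  "issuer",             # merchant upgraded, issuer did not
        (True,  True):  "business as usual",  # both upgraded, standard rules
    }

    print(LIABLE[(True, False)])  # chip card, swipe-only terminal: "merchant"

The off-diagonal entries are what turn the upgrade into a prisoner's dilemma: each side upgrades mainly to avoid being the one left exposed.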

This is another great example of cost diffusion at work. Note that Visa and MasterCard are not on the hook for the vast majority of upgrade costs. The letters V and M in “EMV” may stand for their names, but past research & development on EMV is a sunk cost at this point. It is the issuing bank that must shell out for printing, personalizing and mailing millions of cards to their customers. Similarly the merchant is responsible for purchasing or leasing new equipment to accept EMV transactions. On the surface, consumers are off the hook again, much like their indemnification against fraudulent purchases. But to echo Robert Heinlein, TANSTAAFL: those issuer and merchant expenses must be paid from somewhere. Absent government subsidies to encourage better security— which have never played a significant role in this space— that source is the price of goods and services. Consumers will indirectly pay for this upgrade cycle too.



Who actually pays for credit card fraud? (part II)

[continued from part I, which provides background]

The answer to the burning question of who gets to pay for fraudulent credit-card transactions is influenced by many factors. On the one hand there are the particulars of the situation that vary between incidents: whether the card was stolen, where the charges took place and how quickly the card-holder contacted their financial institution. At the other extreme, there are large-scale policy issues decided for the entire ecosystem by regulatory regimes for consumer protection. For example in Europe, part of the reason EMV adoption happened in a hurry is that banks seized the opportunity to shift the presumption of guilt to consumers. This so-called “liability shift” was predicated on the assumption that because EMV cards are very unlikely to be cloned or used without knowledge of the PIN (an incorrect assumption on many levels, it turns out, due to vulnerabilities in the design that are being exploited in the wild), the burden of proof is on the card-holder to show that they did not in fact authorize the charge.

In the US, there is a belief that consumers are not liable for credit-card fraud. It is a simple message to communicate, which makes it a common refrain for advertising/PR campaigns encouraging consumers to swipe those cards liberally without fear. It sounds reassuring. It is also not entirely accurate.

On the one hand, it is true that the US model starts out with a presumption of innocence. When the card-holder contests a charge, the bank temporarily suspends it while an investigation is under way. But more importantly, the burden of proof on the consumer side is much lower. Unless the retailer can prove that the customer in fact made the purchase, or at least show they have done due diligence by producing a signed receipt, they are on the hook. (That also means that for card-not-present purchases, such as those happening on the Internet, the merchant is very likely going to be the one eating the loss.) If there is evidence of card-holder participation, it is then between issuer and consumer to sort out. The signature on the receipt could have been forged, indicating a cloned card, or perhaps the merchant authorized a different amount than originally agreed. In all cases, unless the parties in question can prove conclusively that the card-holder knowingly authorized that exact charge, the losses are absorbed by the issuing bank or the merchant.

In theory this is a very consumer-friendly regime. It is all the more surprising that it has gained traction in the US, while Europe, with its tradition of consumer protection, favors the opposite. It places incentives for combating fraud on the parties most capable of taking action. Issuers can refine their statistical models to better distinguish legitimate from fraudulent activity, meanwhile merchants can implement policies based on their own risk/benefit calculations. For example, online merchants may refuse to ship to addresses other than the billing address on the card, retailers may ask to check ID for large purchases, meanwhile Starbucks can define its own threshold above which signatures are required, even if it means slowing down the line. That still leaves open one question: what happens to the losses that issuers and merchants still incur after all of these mitigations have been implemented?

Indiscriminate insurance

Imagine a car insurance company that charges all drivers the same rate, regardless of their demographics (no over-charging young people living alone to subsidize older married couples), past driving record or the current value of their vehicle. This is in effect how credit-card losses are distributed throughout the payment system.

Not being directly liable for fraudulent charges is not the same as being completely off the hook. US regulatory frameworks may have conspired with the card networks’ own business model to off-load losses away from card-holders and onto merchants and issuers. But no rule dictates that those parties may not pass those costs on to consumers in the form of higher prices. In fact this concern comes up for merchants even in the absence of fraud. Recall that a credit-card purchase can involve upwards of a 3% fee compared to a cash purchase. (If that sounds negligible, consider that some retailers such as grocery stores have razor-thin profit margins of less than 5%. In effect they are giving up half of their profit, which goes a long way towards explaining why Wal-Mart, Target etc. were highly motivated to spearhead a merchant consortium to create alternative payment rails.) The economically rational behavior would be to introduce a surcharge for credit-card purchases. The reason that did not happen in practice is that it ran afoul of Visa/MasterCard rules until recently; a court settlement in 2013 finally allowed merchants to start passing on costs to consumers, but only in certain states.

A similar situation applies to dispersing the effect of fraud. If merchants are setting prices based on the expectation that they will lose a certain percent of revenues to fraud, all customers are sharing in that cost. The bizarre part is that customers are not even subsidizing each other any longer, but complete strangers with no business relationship to the retailer. Imagine consumer Bob has his credit-card information stolen and used at some electronics retailer for a fraudulent purchase, even though Bob himself never shops there. When consumer Alice later frequents the same store, she is in effect paying a slightly higher price to make up for the charge-back caused by crooks using Bob’s card.
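
As a toy illustration of that cost-spreading (the fraud rate below is a made-up figure, purely for the sake of arithmetic):

    # Toy illustration of fraud costs baked into prices. The 0.2%
    # charge-back rate is a made-up figure for the sake of arithmetic.
    expected_fraud_rate = 0.002        # fraction of revenue lost to charge-backs
    break_even_price = 50.00           # price absent any fraud losses

    posted_price = break_even_price / (1 - expected_fraud_rate)
    surcharge = posted_price - break_even_price
    print(f"Alice pays ${surcharge:.2f} extra to cover crooks using Bob's card")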

Moral hazard?

The same calculus applies on the issuer side, except there is arguably a greater element of individual responsibility. This time it is not a specific “price” charged to consumers per se, but subtle adjustments to the terms of credit to accommodate expected losses. For example, the annual fee for the privilege of carrying the card might be a little higher, the APR on balances set a few basis points higher, or the rewards program a little less generous. If Alice and Bob are both customers of the same bank and Bob experiences fraudulent charges because he typed his credit-card information into a phishing page, Alice is indirectly paying for that moment of carelessness.

Whatever one might say about the virtues of this system, fairness is not one of its defining features. The system provides Bob with peace of mind in the same way that insurance will pay for repairing a car after the owner repeatedly drives it into a ditch. But unlike car insurance, costs are not reflected back on specific individuals through increased premiums. Instead fraud losses are socialized across the entire customer base. Now in fairness to Bob, he may not have been responsible for the breach. Even the most cautious and responsible card-holder has little control over whether Target or Home Depot point-of-sale terminals have been compromised by malware that captures card details in the course of a routine purchase. What could be more routine than using a credit card at a reputable nation-wide retailer in an actual bricks-and-mortar store? Neither can Bob compensate for fundamental design weaknesses in payment protocols, such as the ease of cloning magnetic stripes, by unilaterally upgrading himself to a chip&PIN card.




Who actually pays for credit card fraud? (part I)

In the aftermath of a credit-card breach, an intricate dance of finger-pointing begins. The merchant is already presumed guilty, because the breach typically happened due to some vulnerability in their systems. Shifting the blame is difficult, but one can take a cue from innovative strategies such as the one Target employed: suggesting that fraud could have been mitigated if only US credit-card companies had switched to chip & PIN cards, which are far more resilient to cloning by malicious point-of-sale terminals. (In reality the story is not that simple, because the less secure magnetic stripe is still present even on chip cards for backwards compatibility.) But credit-card companies will not take that sitting down: it is all the merchants’ fault— in other words, the Targets of the world— they will respond. What is the point of issuing chip cards when stores have archaic cash registers that can only process old-fashioned “swipe” transactions where the chip is not involved?

They have a point. October 1st, 2015 was the deadline set by Visa/MasterCard, partly in response to large-scale breaches such as Target and Home Depot, for all US retailers and banks to switch to chip cards. Payment networks may have thrown down the gauntlet, but by all appearances their bluff was called: less than half the cards in circulation have chips and barely a quarter of merchants can leverage them, according to a Bloomberg report. That state of affairs sows a great deal of confusion around why so little has been done to improve the security of the payment system. After all, EMV adoption happened in Europe a decade earlier and at a much faster clip. This feeds conspiracy theories to the effect that banks/merchants/name-your-villain do not care because they are not on the hook for losses. This post is an attempt to look into how economic incentives for security are allocated in the system.

Payment networks

Roles in a typical payment network

Quick recap of the roles in a typical credit card transaction:

  • Card-holder is the person attempting to make a payment with their card
  • Merchant is the store where they are making a purchase. This could be a bricks-and-mortar store in meatspace with a cash register or an online ecommerce shop accepting payments from a web page.
  • Issuing bank: This is the financial institution that provided the consumer with their card. Typically the issuer also extends the credit line, taking on the risk that the card-holder never pays their bill
  • Acquiring bank: The counterpart to the issuer on the merchant side, this is the institution that holds funds in custody for the merchant when payments are made
  • Payment network, in other words Visa or MasterCard. This is the glue holding all of the issuers and acquirers together, orchestrating the flow of funds from acquirers to issuers. One note about American Express and Discover: in these networks, the network itself also operates as issuer and acquirer. While they partner with specific banks to issue co-branded cards (such as a “Fidelity AmEx” card) with revenue-sharing on issuer fees, the transaction processing is still handled by the network itself.

In reality there can be many more middle-men in the transaction vying for a cut of the fees, such as payment processors who provide merchants with one-stop solutions that include all the hardware and banking relationships.

Following the money

Before delving into what happens with fraudulent transactions, let’s consider the sunny-day path. The merchant pays some percentage of the purchase, typically 2-3% for credit transactions depending on the type of card (much lower for debit cards routed through a different PIN-debit network), for the privilege of accepting cards, in return for the promise of higher sales and reduced overhead managing unwieldy bundles of cash. The lion’s share of that goes to the issuer— after all, they are the ones on the hook for actual credit risk: the possibility that, having made a purchase and walked out of the store with their shiny object on borrowed money, the consumer later defaults on the loan and does not pay their credit-card bill. (That also explains why debit cards can be processed with much lower overhead and why retailers are increasingly pushing for debit: in that case the transaction only clears if the card-holder already has sufficient funds deposited in their bank account. There is no concern about trying to recoup the payment down the road with interest.) The remainder of the fee is divvied up between the acquirer, the payment network and any payment processors facilitating the transaction along the way.
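
As a rough worked example (the split below uses made-up but representative numbers; actual interchange schedules vary by network, card type and merchant category):

    # Hypothetical split of the fee on a $100 credit purchase.
    # All rates are illustrative, not actual network schedules.
    purchase = 100.00
    total_fee = purchase * 0.025                         # merchant keeps $97.50
    issuer_cut = purchase * 0.018                        # lion's share: interchange
    network_cut = purchase * 0.002                       # network assessment
    acquirer_cut = total_fee - issuer_cut - network_cut  # acquirer + processors
    print(f"issuer ${issuer_cut:.2f}, network ${network_cut:.2f}, "
          f"acquirer/processors ${acquirer_cut:.2f}")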

When things go wrong

What about fraudulent transactions? First note that the issuing bank itself is in the loop for every transaction, so the bank has an opportunity to decline any purchase if it decides the charge is suspicious and unlikely to have been authorized by the legitimate card-holder. (That alone should be a cue that issuers in fact have a stake in preventing fraud: otherwise they could cynically take the position that every purchase is a revenue opportunity (the higher the sum, the greater the commission), green-light everything and let someone else worry about fraud.) But those systems are statistical in nature, predicated on identifying large deviations from spending patterns while also trying to avoid false positives. If a customer based in Chicago suddenly starts spending large sums in New York, is that a stolen card or are they on vacation? Some amount of fraud inevitably gets past the heuristics. When the consumer calls up their bank at the end of the month and contests a particular charge appearing on their bill, the fundamental question stands: who will be left holding the bag?

[continued in part II]



Android Pay: proxy no more

[Full disclosure: this blogger worked on Google Wallet 2011-2013]

This blogger recently upgraded to Android Pay, one half of the mobile-wallet offering from Google. The existing Google Wallet application retains functionality for peer-to-peer payments, while NFC payments move to the new Android Pay. At least that is the theory. In this case NFC tap-and-pay stopped working altogether. More interestingly, trying to add cards from scratch showed that this was not just a routine bug or testing oversight. It was a deliberate shift in product design: the new version no longer relies on “virtual cards” but instead integrates directly with card issuers. That means all cards are not created equal; only some banks are supported. (Although there may have been an ordinary bug too— according to Android Police, there is supposed to be an exception made for existing cards already used for NFC payments.)

Not all cards are welcome in Android Pay

Let’s revisit the economic aspects of the virtual-card model and outline why this shift was inevitable. Earlier blog posts covered the technical aspects of virtual cards and how they are used by Google Wallet. To summarize: starting in 2012, Google Wallet did not in fact use the original credit card when making payments. Instead a virtual card issued by Google** was provisioned to the phone for use in all NFC transactions. That card is “virtual” in two senses. First, it is an entirely digital payment option; there is no plastic version that can be used for swipe transactions. (There is a separate plastic card associated with Google Wallet; confusingly, that card does not have NFC and follows a different funding model.) Less obviously, consumers do not have to fill out an application or pass a credit-history check to get the virtual card; in fact, it never shows up in their credit history. At the same time the card is very “real” in the sense of being a full-fledged MasterCard with a 16-digit card number, expiration date and all the other attributes that make up a standard plastic card.

Proxying NFC transactions with a virtual card

When the virtual card is used for a purchase at some brick-and-mortar retailer, the point-of-sale terminal tries to authorize the charge as it would for any other vanilla card. By virtue of how transactions are routed on payment networks such as Visa/MasterCard, that charge is eventually routed to the “issuing bank”— which happens to be Google. In effect the network is asking the issuer: “Can this card be used to pay $21.55 to merchant Acme?” This is where the real-time proxying occurs. Before answering that question from the merchant, Google first attempts to place a charge for the same amount on one of the credit cards supplied ahead of time by that Google Wallet user. Authorization for the virtual-card transaction is granted only if the corresponding charge on the backing instrument goes through.

(Interestingly enough, that design makes the NFC virtual card a prepaid card on steroids. Typically a prepaid card is loaded ahead of time with funds and that balance is later drawn down to pay for purchases. The virtual card does the same thing on a highly compressed timeline: the card is “funded” by charging the backing credit card, and the exact same balance is withdrawn to pay the merchant. The difference: it happens in a matter of seconds, in order to stay under the latency requirements of payment networks.)
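
In pseudo-Python the proxy logic amounts to something like the sketch below. Every name here is a hypothetical stand-in rather than a real Google Wallet API, and the production system obviously involves risk checks, retries and strict latency budgets:

    # Schematic sketch of the proxy model; all names are hypothetical.
    APPROVE, DECLINE = "approve", "decline"

    def charge_backing_card(card, amount, merchant_category):
        """Stand-in for a card-not-present charge on the backing
        instrument. Relaying the merchant-category code (MCC) is what
        keeps category-specific rewards working (discussed later)."""
        return card["available_credit"] >= amount   # toy approval rule

    def authorize_virtual_card(user, amount, mcc):
        """Invoked, in effect, when the network asks the virtual card's
        issuer (Google) to approve an in-store charge."""
        backing = user["backing_card"]              # supplied ahead of time
        ok = charge_backing_card(backing, amount, merchant_category=mcc)
        return APPROVE if ok else DECLINE

    user = {"backing_card": {"available_credit": 500.00}}
    print(authorize_virtual_card(user, amount=21.55, mcc="5411"))  # "approve"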

Why proxy?

So why all this complexity? Because it allows supporting any card without additional work required from the issuer. Recall that setting up a mobile NFC wallet requires provisioning a “payment instrument” capable of NFC transactions to that device. That is not just a matter of recording a bunch of details such as the card number and CVV code once. Contactless payments use complex protocols involving cryptographic keys that generate unique authorization codes for each transaction. (This is what makes them more resistant to fraud involving compromised point-of-sale terminals, as in the Target and Home Depot breaches.) Those secret keys are known only to the issuing bank and cannot simply be read off the card face by the consumer. That means provisioning a Citibank NFC card requires the mobile wallet to integrate with Citibank. That is how the original Google Wallet app worked back in 2011.
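
The real EMV cryptogram derivation is considerably more involved, but the core idea can be sketched in a few lines: a per-card secret, shared only with the issuer, authenticates the transaction data along with a counter, so every authorization code is unique:

    # Toy sketch of per-transaction authorization codes. Actual EMV
    # cryptogram generation is far more involved; this only shows why
    # data captured by a compromised terminal cannot be replayed.
    import hashlib
    import hmac

    card_key = b"secret known only to the card and its issuing bank"
    counter = 42   # incremented by the card on every transaction

    msg = b"amount=21.55|merchant=Acme|counter=%d" % counter
    code = hmac.new(card_key, msg, hashlib.sha256).hexdigest()[:16]
    # A terminal that records `code` gains nothing: the next purchase
    # uses counter=43 and yields an entirely different value.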

The problem is that such integrations do not scale easily, especially when there is a hardware secure element in the picture. There is a tricky three-way dance between the issuing bank that controls the card, the mobile-wallet provider that authors the software on the phone, and the trusted service manager (TSM) tasked with managing over-the-air installation of applications on a family of secure elements. Very little about the issuer side is standardized, meaning that a proliferation of issuers also means a proliferation of one-off integrations required to support each card.

Virtual cards cleanly solve that problem. Instead of having to integrate against incompatible provisioning systems from multiple banks, there is exactly one type of card provisioned to the phone. Integration with existing payment networks is instead relegated to the cloud, where ordinary online charges are placed against a backing instrument.

Proxy model: the downsides

While the proxy model is interesting from a technical perspective and addresses one of the more pressing obstacles to NFC adoption— namely users stymied by their particular financial institution not being supported— there are also clear downsides to this design.


As industry observers were quick to point out soon after launch, the proxy model is not economically viable at scale. Revisiting the picture above, there is a pair of transactions:

  1. Merchant places a charge against a virtual card
  2. Issuer of the virtual card places a charge for the identical amount against the consumer’s credit card

Both go through credit-card networks. By virtue of how these networks operate, each one incurs a fee, which is divvied up among the various parties along the way. While the exact distribution varies based on the network as well as the bargaining positions of the issuer and acquirer banks, the issuing bank typically receives the lion’s share, but less than 100%. For the above transactions, the provider of the virtual card is acting as issuer for #1 and merchant for #2. Google can expect to collect fees from the fronting-instrument transaction while paying a fee for the backing charge.

In fact the situation is worse, due to the different transaction types. The second charge is card-not-present or CNP, the same way online purchases are done by typing card numbers into a webpage. Due to the increased risk of fraud, CNP transactions carry higher fees than in-store purchases where the customer presents a physical card and signs a receipt. So even if one could recoup 100% of the fee from the fronting instrument, that would typically not cover the cost of the backing transaction. (In reality the fees are not fixed; variables such as card type greatly complicate the equation. For example, while the fronting instrument is always a MasterCard, the backing instrument could be an American Express, which typically boasts some of the highest fees, putting the service deeper into the red.)
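
Back-of-the-envelope, with made-up but plausible rates:

    # Hypothetical fee math for one $100 purchase through the proxy.
    # Both rates are illustrative; real interchange varies widely.
    amount = 100.00
    earned = amount * 0.018   # interchange collected as issuer of the
                              # fronting card (a share of the total fee)
    paid = amount * 0.025     # CNP fee paid on the backing charge, higher
                              # still if the backing card is an AmEx
    print(f"net per transaction: ${earned - paid:+.2f}")   # about -$0.70

Multiply a per-transaction loss on the order of dimes by millions of transactions, and the subsidy becomes very real.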

Concentrating risk

The other problem stems from in-store and CNP modes having different consequences in the event of disputed transactions. Suppose the card-holder alleges fraud or otherwise objects to a charge. For in-store transactions with a signed receipt, the benefit of the doubt usually goes to the merchant, and the issuing bank is left eating the loss. For card-not-present transactions, where merchants can only perform minimal verification of the card-holder, the opposite holds: the process favors the consumer and the merchant is left holding the bag. Looking at the proxy model:

  • Google is the card issuer for an in-store purchase
  • Google is the merchant of record for the online, card-not-present charge against the backing instrument

In other words, the model also concentrates fraud risk with the party in the middle.

Great for users, bad for business?

In retrospect the virtual-card model was a great boon for consumers. From the user perspective:

  • You can make NFC payments with a credit card from any bank, even if your bank is more likely to associate NFC with football
  • You can use your preferred card, even at merchants who claim not to accept it. A merchant may not honor American Express, but the magic of virtual cards has a MasterCard channeling that AmEx. Meanwhile the merchant has no reason to complain, because they are still paying ordinary commissions and not the higher AmEx fee structure.
  • You continue earning rewards from your credit card. In fact that applies even to category-specific rewards, thanks to another under-appreciated feature of the proxy model. Some rewards programs are specific to merchant types, such as getting 2x cash-back only when buying gas. To retain that behavior, the proxy model can use a different merchant-category code (MCC) for each backing charge. When the virtual card is used to pay at a gas station, Google can relay the appropriate MCC on the backing transaction.

From a strategic perspective, these subsidies can be lumped in with customer-acquisition costs, similar to Google Wallet offering a free $10 when it initially launched, or Google Checkout doing the same a few years earlier. Did picking up the tab for three years succeed in boot-strapping an NFC ecosystem? Here the verdict is mixed. Virtual cards successfully addressed one of the major pain-points of early mobile wallets: the limited selection of supported banks. Unfortunately there was an even bigger adoption blocker for Google Wallet: the limited selection of wireless carriers. Because Verizon, AT&T and T-Mobile had cast their lot with the competing (and now failed) ISIS mobile wallet, these carriers actively blocked alternative NFC solutions on their devices. The investment in virtual cards paid limited dividends because of a larger reluctance to confront wireless carriers. It is instructive that Apple pursued a different approach, painstakingly working to get issuers on board to cover a large percentage of cards on the market— a percentage decidedly less than the 100% coverage achievable with virtual cards.


** More precisely issued by Bancorp under contractual agreement with Google.


Getting by without passwords: local login (part II)

[continued from part I]

Before diving into implementation details, a word of caution: local login with hardware tokens is more of a convenience feature than a security improvement. Counter-intuitive as that might sound, using strong public-key cryptography buys very little against the relevant threat model for local access: proximity to the device. A determined adversary who has already gotten within typing distance of the machine and is staring at a login prompt does not need to worry about whether it is requesting a password or some new-fangled type of authentication. That screen can be bypassed altogether, for example by yanking the drive out of the machine, connecting it to another computer and reading all of the data. Even information stored temporarily in RAM is not safe against cold-boot attacks or malicious USB devices that target vulnerable drivers. For these reasons there is a very close connection between local access control and disk encryption. Merely asking for a password at login is not enough when that is a “discretionary” control implemented by the operating system— discretionary in the sense that all of the data on disk is accessible without knowing that password. It would be trivially bypassed by taking the OS out of the equation and accessing the disk as a collection of bits, without any of the OS-enforced access controls.

For this reason local smart-card logon is best viewed as a usability feature. Instead of long and complex passwords, users type a short PIN. That applies not only to login, but to every screen unlock after the screen-saver kicks in due to inactivity. By contrast, applying the same formula to remote authentication over a network does represent a material improvement: it mitigates common attacks that can be executed without proximity to the device, such as phishing or brute-forcing passwords.

With that disclaimer in mind, here is an overview of login with hardware tokens across three common operating systems. The emphasis is on finding solutions that are easily accessible to an enthusiast; no recompiling of kernels or switching to an entirely different Linux distribution. One note on terminology: the discussion will use “smart-card” as a catch-all phrase for cryptographic hardware, with the understanding that its physical manifestation can assume other forms, including USB tokens, NFC tags and even smart-phones.


Windows

Windows supports smart-card login using Active Directory out of the box. AD is the MSFT solution for centralized administration of machines in a managed environment, such as a large company or branch of government with thousands of devices maintained by an IT department. AD bundles many different features, but the interesting one for our purposes is the notion of centralized identity. While each machine joined to a particular AD domain still has local accounts— accounts that are only meaningful to that machine and not recognized anywhere else— it is also possible to log in using a domain account that is recognized across multiple devices. That involves verifying credentials with the domain controller using Kerberos. Kerberos in turn has an extension called PKINIT for completing the initial authentication using public-key cryptography. That is how Windows implements smart-card logon.

That is all well and good for enterprises, but what about the average home user? A typical home PC is not joined to an AD domain— in fact the low-end flavors of Windows typically bundled with consumer devices are not even capable of being joined to a domain. It is one of those “enterprise features” reserved for the more expensive SKUs of the operating system, a great example of discriminatory pricing to ensure companies don’t get away with buying one of the cheaper versions. Even if the user were running the right OS SKU, they would still require some other machine running Windows Server to act as the domain controller. In theory one can devise kludges, such as a local virtual machine running on the same box serving as the domain controller, but these fall short of our criteria of being within reach of a typical power-user.

Enter third-party solutions. These extend the Windows authentication framework for local accounts to support smart cards without introducing the extra baggage of Active Directory or Kerberos. A past blog post from January 2013 described one solution using eIDAuthenticate.

eIDAuthenticate entry
Choosing a certificate from attached card

eIDAuthenticate allows associating a trusted certificate with a local Windows account via the control panel. The user can later login to the account using a smart-card that has the corresponding private-key for that certificate. (eIDAuthenticate can also disable the password option and require smart-cards. But the same effect can be obtained less elegantly by randomizing the password.)

eIDAuthenticate setup— trial run
Using smart-card for local logon


OS X

This case is already covered in a post from February on smart-card authentication in OS X, which includes screenshots of the user experience.

Linux / BSD

Unix flavors have by far the most complete story for authentication with smart-cards. Similar to the extensible architecture of credential providers in Windows, Linux has pluggable authentication modules, or PAMs. One of these is pam-pkcs11, for using hardware tokens. As the name implies, this module relies on the PKCS #11 standard as an abstraction for interfacing with cryptographic hardware. That means whatever gadget we plan to use must have a PKCS #11 module. Luckily the OpenSC project ships modules covering a variety of different hardware, including the PIV cards of interest for our purpose. The outline of a solution combining these raw ingredients looks like this (a rough sketch of the commands follows the list):

  • Install necessary packages for PC/SC, OpenSC & pam-pkcs11
  • Configure system to include the new PAM in the authentication sequence
  • Configure certificate mappings for pam-pkcs11 module
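
On a Debian/Ubuntu system, the first two steps look roughly like the following. Treat this as a sketch rather than a recipe: package names, module paths and PAM stack details vary by distribution and release.

    # Sketch for Debian/Ubuntu; names and paths vary by distribution.
    sudo apt-get install pcscd opensc libpam-pkcs11

    # Add pam_pkcs11 to the authentication stack, e.g. by inserting a
    # line like this in /etc/pam.d/common-auth ahead of the password
    # module:
    #
    #   auth  sufficient  pam_pkcs11.so
    #
    # Mapper configuration (step three) lives in
    # /etc/pam_pkcs11/pam_pkcs11.conf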

While the first two steps are straightforward, the last one calls for a more detailed explanation. Going back to our discussion of smart-card login on OS X, the process breaks down into two stages:

  • Mapping: retrieving a certificate from the smart-card, verifying its validity and determining which user that certificate represents
  • Proof-of-possession: Verifying that the user holds the private key corresponding to the public key in the certificate. Typically this involves prompting for a PIN and asking the card to perform some cryptographic operation using the key (a schematic sketch of this step follows the list).
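
The second stage can be sketched with the pyca/cryptography library. This is schematic: in reality the private key never leaves the token, the challenge format is protocol-specific, and here both sides are simulated in one process:

    # Schematic proof-of-possession: the host sends a fresh challenge,
    # the "card" signs it, and the host verifies the signature against
    # the public key taken from the mapped certificate.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    private_key = ec.generate_private_key(ec.SECP256R1())  # lives on the card
    public_key = private_key.public_key()                  # from the certificate

    challenge = os.urandom(32)                             # fresh nonce
    signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

    # Raises InvalidSignature unless the right private key was used
    public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("proof-of-possession succeeded")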

While there is occasional variety in the protocol used for step #2 (even within PKINIT, there are two different ways to prove possession of the private key), the biggest difference across implementations concerns the first part. Case in point:

  • Active Directory determines the user identity from attributes of the certificate, such as the email address or User Principal Name (UPN). Note that this is an open-ended mapping. Any certificate issued by a trusted certificate authority with the right set of fields will do the trick. There is no need to register each such certificate with AD ahead of time. If a certificate expires or is revoked, a new one can be issued with the same attributes without having to inform all servers about the change.
  • eIDAuthenticate has a far more rudimentary, closed-ended model. It requires explicitly enumerating certificates that are trusted for a local account.
  • Ditto for OSX, which white-lists recognized certificates by hash but at least can support multiple certificates per account.**

Flexible mapping

By contrast, pam-pkcs11 has by far the most flexible and well-developed model. In fact it is not a single model so much as a collection of different mapping options. These include the OSX/eIDAuthenticate model as a special case, namely enumerating trusted certificates by their digest. Other mappers allow trusting an open-ended set of credentials, such as any certificate issued by a specific CA with a particular email address or distinguished name, effectively subsuming the Active Directory trust model. But pam-pkcs11 goes far beyond that. There are mappers for interfacing with LDAP directories and even a mapper for integrating with Kerberos. In theory it can even function without X509 certificates at all; there is an SSH-based mapper that looks at raw public keys retrieved from the token and compares them against the user’s authorized_keys file.

UI after detecting smart-card
Command line smart-card usage

Assuming one or more mappers are configured, smart-card login changes the user experience for initial login, screen unlocking and command-line authentication, including sudo. The screenshots above give a flavor of this on Ubuntu 14.



** OSX also appears to have hard-coded rules about which types of certificates will be accepted; for example, self-signed certificates and those with an unrecognized critical extension are declined. That makes little sense, since trust in the key does not come from X509— unlike the case of Active Directory— but from explicit whitelisting of individual public keys. The certificate is just a glorified container for public keys in this model.


Getting by without passwords: the case for hardware tokens (part I)

For an authentication technology everyone loves to hate, there is still plenty of activity around passwords:

  • May 7 was international password day. Sponsored by the likes of MSFT, Intel and Dell, the unofficial website urges users to “pledge to take passwords to the next level.”
  • An open competition to select a better password hashing scheme has recently concluded and crowned Argon2 as the winner.

Dedicated cryptographic hardware

What better time to shift gears from the endless stream of advice on choosing better passwords? This series of posts will look at some pragmatic ways one can live without passwords. “Getting rid of passwords” is a vague objective and calls for some clarification— lest trivial/degenerate solutions become candidates. (Don’t like typing in a password? Enable automatic login after boot and your OS will never prompt you.) At a high level, the objective is to replace passwords with compact hardware tokens that both improve security and reduce the cognitive burden on users.

The scope of that project goes beyond authentication. Typically passwords are used in conjunction with access control: logging into a website, connecting to a remote computer, etc. But there are other scenarios: for example, full-disk encryption or protecting an SSH/PGP key stored on disk typically involves typing a passphrase chosen by the user. These are equally good candidates in search of better technology. For that reason we focus on cryptographic hardware, instead of biometrics or “weak 2-factor” systems such as OTP, which are trivially phishable. Aside from their security deficiencies, those are usable only for authentication and by themselves cannot provide a full suite of functionality such as data encryption or document signing.

Hardware tokens have the advantage that a single token can be sufficient to displace passwords in a variety of scenarios, and even in multiple instances of the same scenario, such as logging into multiple websites each with their own authentication system. (In other words, no identity federation or single sign-on; otherwise the solution is trivial.) While reusing passwords is a dangerous practice that users are constantly cautioned against, reusing the same public-key credentials across multiple sites presents minimal risk.

It turns out the exact model of hardware token and its physical form factor (card vs USB token vs mobile device vs wearable) are not all that important, as long as it implements the right functionality, specifically public-key cryptography. More important for our purposes is support from commodity operating systems— device drivers, middleware, etc.— providing the right level of interoperability with existing applications. The goal is not to overhaul the environment from top to bottom by replacing every app, but to work within existing constraints to introduce security improvements. For concrete examples, we will stick with smart-cards and USB tokens that implement the US government PIV standard, which enjoys widespread support across both proprietary and open-source solutions.

PIN vs password: rearranging deck chairs?

One of the first objections might be that such tokens typically have a PIN of their own. In addition to physical possession of the token, the user must supply a secret PIN to convince it to perform cryptographic operations. That appears to contradict the original objective of getting rid of passwords. But there are two critical differences.

First, as noted above, a single hardware token can be used across multiple scenarios. For example it can be used to log in to any number of websites, whereas sharing a password across multiple sites is a bad idea. In that sense the user only has to carry one physical object and remember one PIN.

More importantly, the security of the system is much less dependent on the choice of PIN than single-factor systems based on a password. The PIN is only stored on and checked by the token itself; without physical possession of the token it is meaningless. That is why short numeric codes are deemed sufficient: the bulk of the security is provided by tamper-resistant hardware managing complex, random cryptographic keys that users could not be expected to memorize. You will not see elaborate guidelines on choosing an unpredictable PIN by combining random dictionary words. The PIN is only an incremental barrier gating logical access, layered on the much stronger requirement of access to the hardware. (Strictly speaking, “access” includes the possibility of remote control over a system connected to the token— in other words, compromising a machine where the token is used. While this is a very realistic threat model, it still relies on the user physically connecting their token to an attacker-controlled system.)

Here is a concrete example comparing two designs:

  • First website uses passwords to authenticate users. It stores password hashes in order to be able to validate logins.
  • Second website uses public-key cryptography to authenticate users. Their database stores a public-key for each user. (Corresponding private-key lives on a hardware token with a PIN, although this is an implementation detail as far as the website is concerned.)

Suppose both websites are breached and the bad guys walk away with the contents of their databases. In the first scenario, the safety of a user account is at least partially a function of the user’s skill at choosing good passwords. One would hope the website used proper password-hashing to make life more difficult for attackers, by making it costly to verify each guess. But there is a limit to that game: the cost of verifying hashed passwords increases alike for attacker and defender. Some users will pick predictable passwords, and given enough computing resources those can be cracked.
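
A few lines suffice to illustrate the point. This is a toy demonstration: real attacks use GPUs and large wordlists, and a proper slow hash such as Argon2 raises the cost per guess without changing the principle:

    # Toy offline attack against a leaked password hash. A slow hash
    # like Argon2 raises the cost per guess but cannot save a password
    # that appears in every attacker's wordlist.
    import hashlib

    leaked_hash = hashlib.sha256(b"hunter2").hexdigest()  # from the breach

    for guess in [b"123456", b"password", b"hunter2", b"letmein"]:
        if hashlib.sha256(guess).hexdigest() == leaked_hash:
            print("cracked:", guess.decode())
            break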

In the second case, the attacker is out of luck. The difficulty of recovering a private key from the corresponding public key is the mathematical foundation on which modern cryptography rests. There are certainly flaws that could aid such an attack: for example, weak randomness during key generation has been implicated in creating predictable keys. But those factors are a property of the hardware itself, independent of user skill. In particular, the quality of the user’s PIN— whether it was 1234 or 588429301267— does not enter into the picture.

User skill at choosing a PIN only becomes relevant if an attacker gains physical access. Even in that scenario, attacks against the PIN are far more difficult:

  • Well-designed tokens implement rate limiting, so it is not possible to try more than a handful of guesses via the “official” PIN verification interface.
  • Bypassing that avenue calls for attacking the tamper-resistance of the hardware itself. This is certainly feasible given proper equipment and sufficient time. But assuming the token had appropriate physical protections, it is a manual, time-consuming attack that is far more costly than running an automated cracker on a password dump.

Starting out local

With that context, the next series of posts will walk through examples of replacing passwords with a hardware token in each scenario. In keeping with the maxim “be the change you want to see in the world,” we focus on use-cases that can be implemented unilaterally by end-users, without requiring other people to cooperate. If you wanted to authenticate to your bank using a hardware token and their website only supports passwords, you are out of luck. There is not much that can be done to meaningfully implement a scheme that offers comparable security within that framework. (You could implement a password manager that uses credentials on the card to encrypt the password, but at the end of the day the protocol still involves submitting the same fixed secret over and over.) By contrast, local uses of hardware tokens can be implemented without waiting for any other party to become enlightened. Specifically we will cover:

  1. Logging into a local machine. Spoiler alert- due to how screen unlocking works, this also covers that vexing question of “how do I unlock my screen using NFC?”
  2. Full-disk encryption
  3. Email encryption/signing with PGP
  4. Connecting to a remote server over SSH (Strictly speaking this is not “local” usage, but it satisfies the criteria of not requiring changes to systems outside our control. Assuming the remote host is configured for public-key authentication, how the user manages their private-key is transparent to other peers.)

