As security professionals we are often guilty of focusing single-mindedly on one aspect of risk management, namely preventing vulnerabilities, to the exclusion of others: detection and response. This bias seems to have dominated discussion of the recent “goto fail” debacle in iOS/OS X and its wildly improbable close cousin in GnuTLS. Apple has been roundly criticized and mocked for this self-explanatory flaw in SecureTransport, its homebrew SSL/TLS implementation. The bug voided all security guarantees the SSL/TLS protocol provides, rendering supposedly “protected” communications vulnerable to eavesdropping.
But much of the conversation and unofficial attempts at post-mortems (true to its secretive nature, Apple never published an official explanation, but conveniently created a well-timed distraction in the form of a whitepaper touting iOS security) focused on the low-level implementation details as root cause. Why is anyone using goto statements in this day and age, when the venerable Edsger Dijkstra declared way back in 1968 that they ought to be considered harmful? Why did they not adopt a coding convention requiring braces around all if/else conditionals? How could any intelligent compiler not flag the remainder of the function as unreachable code, which is exactly what the spurious goto statement was causing? Why was the duplicate line missed in code reviews when it stands out blatantly in the delta? Did Apple not have a good change-control system for introducing code changes? Speaking of sane software engineering practices, how is it possible that code-flow jumps to a point labelled “fail” and yet still returns success, misleading callers into believing that the function completed successfully? To step back one more level, why did Apple decide to maintain its own SSL/TLS implementation instead of leveraging open-source libraries such as NSS or OpenSSL, which have benefited from years of collective improvement and cryptographic expertise that Apple does not have in-house?
All good questions, partly motivated by a righteous indignation that such a catastrophic bug could be hiding in plain sight. But what about the aftermath? Once we accept the premise that a critical vulnerability exists, the focus shifts to response. Putting aside questions around why the flaw existed in the first place, let’s ask how well Apple handled its resolution.
- There was no prior announcement that an important update was about to be released. Compare this to the advance warning MSFT provides for upcoming bulletins.
- A passing mention in the release notes about the vulnerability, with an ominous statement to the effect that “an attacker with a privileged network position may capture or modify data in sessions protected by SSL/TLS.” Not a word about the critical nature of the flaw or a plea for users to upgrade urgently. One would imagine that an implementation error that defeats SSL– the most widely deployed protocol for protecting communications on the Internet– and allows eavesdropping on millions of users’ traffic would hit a raw nerve in this post-Snowden world of global surveillance. Compare Apple’s nonchalance and brevity to the level of detail in a past critical security update from Debian or even routine MSFT bulletins released every month.
- The update was released on a Friday afternoon Pacific time. This is the end of the work-week in North America, and well into the weekend in Europe. Due to the lack of upfront disclosure by Apple, the exact nature of the vulnerability was not reverse-engineered publicly until several hours later. That is suboptimal timing, to say the least, for dropping a critical fix, especially in a managed enterprise IT environment with a large Mac fleet and a security team tasked with trying to ensure that all employees upgrade their devices. (Granted, Apple never seems to have cared much for the enterprise market, as evidenced by weak support for centralized management compared to Windows or even Linux with third-party solutions.)
- The update addressed the vulnerability only for iOS, leaving Mavericks, the latest and greatest desktop operating system, vulnerable. In other words, Apple 0-dayed its own desktop/laptop users with an incomplete update aimed at mobile users. Why? At least three possibilities come to mind.
- Internal disconnect: Apple may not have realized the exact same bug existed in the OS X code base– but this is a stretch, given the extent of code sharing between them.
- Optimism/naiveté: Perhaps they were aware of the cross-platform nature of the vulnerability but assumed nobody would figure out exactly what had been fixed, giving Apple a leisurely time-frame to prepare an OS X update before the issue posed a risk to users. To anyone familiar with shrinking time-windows between patch release and exploit development, this is delusional thinking. There is a decade’s worth of research on reverse-engineering vulnerabilities from patches, even when the vendor remains silent on details of the vulnerability or even the existence of any vulnerability in the first place.
- Deliberate risk-taking / cost-minimization: The final possibility is that Apple did not care, or prioritized mobile platforms over traditional laptops & desktops. Some speculated that Apple was already planning to release an update to Mavericks incorporating this fix and saw no reason to rush an out-of-band patch. (Compare this to the approach MSFT has taken towards critical vulnerabilities. When there is evidence of ongoing or imminent exploitation in the wild, the company has departed from the monthly cycle to deliver updates immediately, as with MS13-008.)
- No explanation after the fact about the root cause of the vulnerability or steps taken to reduce chances of similar mistakes in the future. This is perhaps the most damning part. The improbable nature of the bug– one line of code mysteriously duplicated, looking so obviously incorrect on even the most cursory review– fueled much speculation and conspiracy theories around whether it had been a deliberate attempt to introduce a backdoor into Apple products. Companies are understandably reluctant to release internal postmortems out of fear that they may reveal proprietary information or portray individual employees in an unflattering light. But in this case even an official blog post summarizing the results of an investigation could have sufficed to quell wild theories.
Coincidentally the same Friday this bug was exposed, this blogger gave a presentation at Airbnb arguing that OS X is a mediocre platform for enterprise security, citing lack of TPM, compatibility issues with smart-cards and a dubious track record in delivering security updates. For the next four days of the goto-fail fiasco, Apple piled on the evidence supporting that last point. In some ways the continuing silence out of Cupertino represents an even bigger failure to comprehend what it takes to maintain trust when vulnerabilities, even critical ones, are inevitable.
As described in earlier posts, Android 4.4 “KitKat” has introduced host-based card emulation or HCE for NFC as a platform feature, opening this functionality up to third-party developers in ways that were not quite possible with the embedded secure element. In tandem with the platform API change, Nexus 5 launched without an embedded secure element, ending a run going back to the Nexus S where the hardware spec included that chip coupled to the NFC controller. Google Wallet was one of the first applications to migrate from using the eSE to HCE for its NFC use case, namely contactless payments.
An earlier four-part series compared HCE and hardware secure elements from a functional perspective, concluding that the current Android implementation is close to (but not at 100%) feature parity with the previous architecture, where the card-emulation route pointed to the eSE. The next set of posts will focus on security, looking at what additional risks are introduced by using HCE instead of dedicated hardware coupled to the NFC controller.
Another way to phrase this question: what did the embedded SE buy in terms of security, and what was lost when Android gave up on the SE due to opposition from wireless carriers? Can HCE achieve a similar level of security assurance, or are there scenarios that inherently depend on special hardware incorporated into the device, regardless of its form factor as eSE, UICC or micro-SD?
Broadly speaking, there are 4 significant benefits ranging from the obvious to more subtle:
- Physical tamper resistance
- Reduced attack surface
- Taking Android out of the trusted computing base (TCB)
- Interface separation
Each of the following posts will tackle one of these aspects.
[continued from part I]
In a WSJ article, a representative from MasterCard describes the plan for incentivizing EMV adoption:
When the liability shift happens, what will change is that if there is an incidence of card fraud, whichever party has the lesser technology will bear the liability. [...] So if a merchant is still using the old system, they can still run a transaction with a swipe and a signature. But they will be liable for any fraudulent transactions if the customer has a chip card. And the same goes the other way – if the merchant has a new terminal, but the bank hasn’t issued a chip and PIN card to the customer, the bank would be liable.
This is an interesting approach. It leaves the card-holder out of the equation– no pesky consumer protection agencies to worry about. Instead banks and merchants square off against each other in a race to adopt EMV before the other party does, lest they be left holding the bag for losses.
While the MasterCard representative quoted in the article disclaims any attempt to move liability around, there is no question that the proposed scheme amounts to disrupting the current equilibrium temporarily. The way dispute resolution for charge-backs is handled today, the merchant typically gets the benefit of the doubt for card-present transactions– in other words, in-store payments where there is a signed receipt proving that the merchant performed due diligence to confirm the transaction. Conversely, for card-not-present transactions the benefit of the doubt goes to the issuer and the merchant eats the fraud loss, which explains many of the misguided schemes such as Verified-by-Visa desperately trying to make a dent in the incidence of such fraud. For now CNP is unlikely to play much of a role in EMV adoption. From a technology stance, all of the elements are in place to enable NFC payments over the web using mobile devices/tablets. Yet business/regulatory hurdles remain before such systems can be deployed broadly.
With the new incentive structure proposed by the card networks, merchants may find themselves on the losing side of an unauthorized transaction dispute even for card-present transactions, if they are dealing with chip & PIN cards. (One amusing consequence may be that such customers become persona non grata; merchants may decline to accept cards with chip & PIN, although such discrimination would almost certainly run afoul of network regulations.) In theory this gives merchants incentives to upgrade their POS and payment processing systems, in order to maintain the status quo vis-à-vis issuers. Dangling before issuers on the other side is the lure of a temporary reprieve from card-present fraud. Any bank that issues chip & PIN cards may enjoy an advantage against merchants if the merchant still processes the transaction the old-fashioned way.
The problem is that all such gains are temporary. In equilibrium, after issuers have upgraded all of their customers to carrying chip & PIN cards and all merchant terminals process payments via EMV protocols, the exact same liability regime as today is restored.
This leads to a bizarre state of affairs. In game-theoretic terms, either merchants or issuers can benefit in the short run by adopting EMV first, before the other actor does. (This assumes that savings from fraud exceed the capital investments required for upgrading, whether that means the cost of buying new POS hardware or reissuing new cards to existing customers.) Such benefits need not correspond to an actual decrease in fraud as experienced by consumers. After all, chip & PIN cards still have magnetic stripes, so they can be cloned for fraudulent transactions at merchants still relying on swipe technology. The operative question for the merchant/issuer is not whether fraud exists but who is picking up the tab. From that perspective, preemptive EMV adoption pays off, leaving the “other” side on the hook. But once both sides have upgraded, that advantage vanishes.
Put another way, the card networks have almost set up a textbook experiment in behavioral economics. A crash upgrade to chip & PIN pays off unilaterally for each player as long as the other one has not upgraded, but such benefits disappear once the opponent also upgrades. What is the rational choice in this situation? Racing to upgrade is no doubt the outcome the card networks are hoping for. In the short term, merchants could pass on the capital investment to consumers in the form of higher prices. (It would be particularly amusing, and a certain measure of poetic justice, if a special surcharge applied to chip & PIN card payments only. But card networks would likely crack down on such blatant attempts to single out the EMV mandate for higher prices.) Curiously there is another equilibrium point: the status quo. Both sides can delay upgrades, betting that no one else is deploying EMV, and consequently there is no additional liability incurred from the redistribution mandate. Another wild card here is how international transactions are treated. Even if US banks move slowly on issuing EMV cards, merchants can still be exposed to a significant downside from transactions involving cards from other countries. Card fraud is very much a global business. To the extent chip & PIN frustrates certain types of fraud in Europe, it has also served to redirect the criminals’ attention to the US market, where it is easier to monetize stolen EMV cards using traditional magnetic stripes. Cracking down on that by shifting liability to US merchants alone could be enough to tip the scales. Time will tell.
The Target data-breach has resurrected interest in the deployment of chip & PIN technology in the US. Part of the EMV suite of protocols dating back to the 1990s, this scheme aims to supplement the ubiquitous magnetic-stripes on credit and debit cards with a small embedded chip, capable of providing greater resilience against common threats against payment systems, such as compromised point-of-sale terminals– what appears to have been the root cause for Target’s headaches.
While chip & PIN is common in Europe, it remains something of a rarity in the US, both on the issuer and merchant side. Few banks issue cards containing chips, a market niche limited to travelers planning to spend significant time overseas, where some merchants may not accept a signature-based transaction. The acceptance story for merchants is worse for understandable reasons: there is little incentive for merchants to undertake the cost of upgrading the installed base of readers. Chip & PIN cards still have a plain magnetic stripe on the back usable for traditional swipe transactions, which means that even merchants catering to tourists from abroad can continue to accept card payments without upgrading. (In fairness even an enthusiastic merchant could not upgrade unilaterally. There is usually a third-party payment processing service connected to those terminals and handling the back-end of transactions. Without upgrades in that system, installing new point-of-sale terminals is not enough.) No wonder that articles going back to 2001 bemoan the fact that all the sophisticated hardware going into chipped cards is mostly sitting idle.
Ironically contactless payments using NFC may have done more to facilitate the adoption of EMV protocols than chip & PIN. Despite being a newer technology, NFC has faced the exact same uphill battle for adoption because the incentives for issuers and merchants have been unclear. Issuers benefit in theory by having less fraud, since NFC eliminates some of the weaknesses of traditional magnetic stripes– provided users are actually transacting over NFC instead of swiping the cards, which is in turn a function of the installed base at merchants. So the issuer savings depend on merchant adoption rate. If merchants also stood to gain from increased NFC issuance, this circular dependency could have at least created a positive feedback loop. Yet none of the savings are passed on to merchants. They are still paying the same interchange fee for every payment transacted using the card networks; there is no discount over plastic for accepting NFC. At best one could argue that tap & pay transactions are faster than traditional swipe, which matters mainly for a small category of merchants who stand to gain considerably from shaving a few seconds from the time for serving each customer: coffee shops, fast-food outlets and similar high-volume, low-margin businesses looking to squeeze more orders per hour.
There is of course an undeniable PR/reputation gain from being on the cutting edge of new technology, and this applies to all actors in the system: issuer, merchants and card-holders. Google Wallet arguably provided some of that momentum, by packaging the technology in smart phone form-factor and appealing to technology savvy early adopters with a virtual card proxying transactions in real-time. But even that remained limited by the installed base of NFC readers, prompting Google to offer the same virtual card in old-school plastic format.
Given that chip & PIN faces the same uphill battle, how will the card networks encourage adoption?
In the UK the answer was a unilateral mandate from issuing banks, accompanied by a liability shift. The banks adopted the convenient stance that because chip & PIN technology is so robust, any transaction authorized by PIN must have been carried out by the original card-holder. In case of disputed transactions, consumers are presumed guilty until proven innocent. Not surprisingly, this has led to a strong backlash, coupled with a growing literature in security research suggesting that EMV protocols are far from invincible; in fact basic design flaws allow fraudulent transactions without the PIN.
Either chastened by the contentious PR battle or perhaps reluctant to directly challenge protections afforded by federal laws around consumer liability, card networks have decided to take a different approach in the US: pitting merchants and issuers against each other.
[continued from part IV]
The first problem is that attackers are still limited in the number of purchases they can make with captured data. Each piece of simulated track-data contains a counter known as the ATC which is unique to a transaction, limiting this scam to the number of “extra” transactions performed at the malicious POS. Given that a full transaction requires on the order of ~100 milliseconds minimum– due to limited processing capabilities of smart card hardware, particularly for contactless transactions when all power required for computing must be drawn from the induction field– we are looking at no more than a dozen spurious protocol executions before either the checkout delays become suspicious or customers tire of holding their cards against the reader. Granted this may not be too much of a problem for the crooks interested in perpetrating fraud. Between issuer back-ends searching for anomalous spending patterns and card-holders noticing strange charges on their card, even old-school plastic card fraud may not get much farther than a handful of unauthorized uses before it is caught.
When counters jump around
But the same transaction counter presents a different problem for attackers. To pick a concrete example, suppose that the card starts out with ATC at 10. The compromised POS carries out 5 NFC transactions, receiving track-data with ATC=11, 12, 13, 14, 15. One of these must be used to authorize the current purchase, with others saved for future fraudulent transactions.
Suppose the attacker submits ATC=15 to the issuer. The issuer will notice a gap, a sudden jump in ATC from the last seen value of 10 without any intervening values observed. This is often explained by incomplete or failed transactions where the card/terminal only complete some of the protocol steps. (ATC is incremented fairly early on, typically during the execution of the GET PROCESSING OPTIONS command.) But when the remaining track-data samples are used in fraudulent transactions, the issuer will observe something even stranger: ATC out of order, with lower values of the counter such as 11 and 12 appearing at a later date than higher values. By itself this is not sufficient to deem the transaction fraudulent. Occasionally transactions get submitted in one large “batch” after a delay, instead of being submitted incrementally in real-time, especially when charge amounts are small. That could explain isolated instances of ATC appearing to jump back and forth over short periods, until missing values in the sequence are finally submitted to the payment processor. But the same pattern repeated over a longer stretch, with low ATC values continuing to appear after higher ones have been observed, could be either a signal for flagging the transaction as suspicious or grounds for outright rejecting it as a processing error. In other words, the useful lifetime of information captured by the malicious POS has been bounded.
The exact same pattern occurs if the attacker chooses to submit ATC=10 to complete the original purchase at the compromised POS. In this case, the remaining track-data can be used for fraudulent transactions in correct order, as a strictly monotone increasing counter: 11, 12, etc. But there is a catch: if the card-holder herself starts making additional purchases, the issuer will receive higher ATC values starting at 16, giving rise to the same out-of-sequence ATC signal. This creates a race condition between the crooks and the legitimate user: they need to quickly monetize the stolen information before actions by the card-holder render that information useless because the ATC has advanced too far for the issuer to honor lower values. That does not render the attack impossible per se, but like all good mitigations, it raises costs for the attacker and blocks certain avenues of exploitation. Much like a stolen OTP in attacks against two-factor authentication systems, captured track-data from NFC must be cashed in quickly or it may become worthless. Squirreling it away to resell on the black market days or weeks later is no longer a viable strategy. (Note the extreme case of trying to win the race is a real-time relay attack. The attacker can have an accomplice stand in front of another NFC reader with a mobile device and simply relay APDUs to the victim card via the compromised POS. The unauthorized transaction takes place almost simultaneously or even before the intended one.)
Part III in this series sketched a picture of how the most basic EMV protocol over NFC– backwards compatible with the ubiquitous magnetic stripes– can resist passive attacks taking place at the point-of-sale. If the miscreants’ strategy involves capturing transaction data as it is passing through a compromised POS and trying to stash it away for future use, card issuers can combat such fraud by checking for replay indicators. What about more sophisticated attacks, where the POS attempts to influence the exchange between NFC reader and card, or tries to monetize the stolen data immediately?
The price is always right
Looking back at how track-data and CVC3 are computed during NFC payments, there are two inputs conspicuously absent from the exchange: price and merchant identifier. That means the emulated magnetic stripe is not in any way bound to a particular purchase or even a specific merchant. In fact it is relatively easy to verify the first part experimentally. When paying with Google Wallet in-store for a purchase having multiple items, you can initiate the tap & pay before the cashier has finished ringing up all of them– exactly as one could with traditional plastic cards. The POS will typically display an interim message indicating that it is still waiting for the final amount, but the buyers will not have to tap again or otherwise confirm the details of the transaction.
That suggests a new avenue of attack:
- Trick the cardholder into performing multiple NFC transactions.
- Use one of the resulting track-data to complete the intended purchase at the specific merchant whose POS devices have been compromised
- Stash track-data from others, encode them on plain magnetic-stripe cards and rely on backwards compatibility to monetize them via swipe transactions at other merchants. There is nothing in the track-data per se limiting its use to the original merchant or identical transaction amount.
- Profit. Granted, fraud detection by the issuer can still flag transactions as suspicious based on merchant/amount patterns but that is based on statistical models of cardholder behavior. There is nothing in the track-data itself that indicates an attempt to divert captured track-data from a different merchant.
What is the feasibility of carrying out such an attack? First note that multiple transactions are required for this plan. As noted earlier, each emulated track-data contains an incrementing counter, the ATC, which is authenticated by the CVC3 and allows the issuer to detect replay attempts. At least one transaction is required for the cardholder to complete the legitimate purchase, and that one can not be reused by the miscreants. Racing the merchant to use that data first requires that attackers already have a purchase lined up, greatly limiting their options– good for our defensive position. More significantly it means that the card-holder gets a declined transaction and the issuer sees a repeated ATC value, raising suspicion about what is going on. With full control over the POS, the bad guys can try more subtle options where the compromised POS pretends that an authorization succeeded and prints out a receipt, without submitting anything to the payment processor. But such an attack equally risks being found out quickly, due to accounting discrepancies; the merchant does not get paid.
Step #1 turns out not to be a major obstacle. As long as the card is in the induction field of the NFC reader, the reader can likely repeat the payment protocol without any additional user action required. Plastic cards with NFC have no built-in clock or other means of detecting that a terminal is requesting multiple transactions in quick succession. Once the field is removed– powering off the chip inside the card, which draws its current from that external field– and reintroduced, the card has no way to know whether it has been days or milliseconds since its last activation.
Smartphone-based implementations do have access to an actual clock and could in principle detect such rapid-burst attempts.** Yet they are typically configured to require a PIN or other explicit user confirmation based on time intervals, rather than transaction count. As long as the user has “primed” the application in the last 15 minutes or whatever interval is configured, no additional confirmation is required for individual transactions to proceed. This is partially driven by a desire to optimize for usability and handle flaky readers: in case a transaction failed, consumers can try again by holding the phone against the reader, without fiddling with the screen yet again.
Instead what stops this attack from working is the way ATC (application-transaction counter) frustrates steps #2 and #3.
** Even this part is tricky when a mobile wallet operates in NFC card-emulation mode with a secure element directly talking to the reader, bypassing the host operating system. The SE has no concept of wall-clock time either, and can only limit transactions based on a signal received from the host device. Mobile versions of NFC payment protocols such as PayPass make provisions for the host to grant single-shot approval to the SE, instead of the more common model of allowing transactions indefinitely until the host revokes that permission.
[continued from part II]
Armed with the background from previous posts on NFC payment protocols and how the magnetic-stripe profiles for EMV protocols achieve backwards compatibility, we can now look at what would happen in a hypothetical Target-type attack. The threat model assumes that point-of-sale (POS) terminals at the retailer stores have been compromised by attackers. Consumers will be making payments at these registers using their contactless cards, using NFC instead of swiping.
NFC at point-of-sale
One important distinction: we have spoken of POS devices having NFC as an integrated capability. In practice these can be distinct pieces of hardware. For example, tap-and-pay capability can be added to legacy terminals via an external accessory, connected to the POS over a serial link. This peripheral contains the NFC reader and has knowledge of the payment protocols. It is responsible for abstracting away differences and quirks between different card networks and emitting familiar track-data to the POS. This is part of the interoperability benefit of the mag-stripe profile: the POS may not have been designed for NFC, but the output from the attached peripheral still looks indistinguishable from that of an old-school swipe reader.
One corollary is that the POS does not micro-manage the NFC protocol or specify what bits to transmit. Instead there is a higher-level command structure, with the POS signaling that it is ready for payment and the NFC reader handling all exchange with the card, returning track-data after completion. Compromising the POS software– as in the case of the Target breach– does not automatically translate to running arbitrary code on the NFC reader or otherwise controlling NFC traffic. For the purposes of this discussion, we will assume the worst-case scenario and still consider attacks where the NFC reader is behaving maliciously instead of following the expected protocol.
The simplest attack involves capturing the emulated magnetic stripe and trying to reuse it for another transaction. There are two reasons this is unlikely to work. As noted earlier, that track-data incorporates a challenge issued by the NFC reader, the so-called “unpredictable number” generated randomly. Captured data from a compromised POS contains a CVC3 corresponding to the choice of UN dictated by the attached reader for that particular transaction. Trying to reuse the same information at another reader will fail unless the exact same UN is chosen, which is unlikely when these are generated randomly. Granted, there have been reports in the literature of catastrophic failures of the random number generator in EMV hardware [Anderson et al.] in the context of ATMs. Since attackers can choose which retailers to target as well as where they will monetize the stolen information, they could carefully pick stores employing such flawed hardware in their NFC terminals, to maximize the chances that UN values will be repeated.
But there is a secondary mechanism to prevent such misuse: the application transaction counter or ATC. This value also appears in track-data and can be optionally incorporated into the CVC3 calculation. If ATC were not included in CVC3, then a repeated UN would permit replay. Attackers could even try to repeat the protocol multiple times with the card and sample multiple CVC3 values corresponding to different values of the UN. Fortunately “sane” configurations include ATC in the CVC3 computation. But there is another security check that is left up to the issuer: enforcing that ATC is indeed acting as a unique counter.
Suppose that our enterprising crooks also observed that NFC track-data can be encoded on a plain magnetic-stripe and swiped, due to the lenient behavior of payment processors and issuers. Could they create a new credit-card using the data captured from compromised POS and use it repeatedly for purchases?
If the issuer is not paying attention to the payment mode and willing to apply CVC3 validation logic to a swipe transaction, the cryptographic check will be satisfied. But the issuer will observe something strange: a repeated ATC value. When the consumer made a purchase at Target, suppose their ATC was at 10. Since it is supposed to increment for each transaction, the next time the issuer receives an authorization request containing NFC track-data, they would expect to see a value greater than 10. It could be 11 but it could also be 12 or higher due to incomplete transactions which caused ATC to be incremented without ever resulting in an authorization request sent to the card issuer. But observing the same ATC twice for two different transactions? There is no legitimate scenario for that. Provided the issuer declines a second transaction using the same counter, attackers can not monetize track-data passively captured at the POS.
Passive being the operative word in the previous statement. So far we have focused on attacks where the adversary watches an ordinary tap-and-pay transaction taking place, waits for the simulated NFC “track-data” to arrive at the POS, copies this information and exfiltrates it for future use. But it is not useful to hypothesize new defenses without also allowing for the presumed attackers to adapt their strategy in response. Knowing that they are dealing with contactless cards, the adversary could also adopt a different strategy such as sampling multiple transactions. For example they could program their POS malware to instruct the NFC reader to repeat the transaction five times, ending up with not just one but five samples of track-data with different UN/ATC/CVC3 combinations. The final post in this series will discuss how such attacks can be frustrated by proper issuer configuration.