Cloud storage and encryption revisited: Bitlocker attacks (part I)

Interesting research from iSEC Partners on attacking the weakened Bitlocker design in Windows 8 has implications for the problem of trying to apply full-disk encryption to protect cloud storage. Last year we sketched out a series of experiments on creating private storage by combining an existing full-disk encryption (FDE) scheme with standard cloud storage providers such as Dropbox or Google Drive. The simplest design is creating a virtual disk image (VHD) backed by an ordinary file, syncing that file to the cloud, mounting the VHD as a drive and enabling Bitlocker-To-Go on the volume the same way one would enable it on a USB thumb-drive.

As noted these prototypes suffer from an intrinsic problem: FDE provides confidentiality but not integrity. Encryption alone stops the cloud storage service from finding out what is stored, but it will not prevent the service from making changes. (Or, for that matter, unauthorized changes by adversaries on the network path between user and cloud provider. That attack vector was clearer with an alternative design involving iSCSI targets stored in the cloud: iSCSI has no transport-level security, unlike the SSL-based sync used by most cloud storage services.) Because there is no redundancy in FDE, any ciphertext will decrypt to something even after it has been tampered with. Granted, the resulting garbled plaintext may cause trouble further up in the application layer. For example in the case of an encrypted VHD, if the file-system structures stored on disk have been corrupted, the VHD will no longer mount correctly as an NTFS volume. Perhaps the file-system is intact but the first few hundred bytes of a particular file have been scrambled, with the result that it is no longer recognized as the correct type, because file formats typically have a fixed preamble. Other formats such as plain text are more resilient, but they can still end up with corrupted data in the middle resulting from incorrect decryption.

It’s clear such attacks can result in random data corruption. Less clear is whether controlled changes can be made to an encrypted volume to achieve more dangerous outcomes. The iSEC research demonstrates exactly such an outcome against Bitlocker in Windows 8. While their proof-of-concept works against a local boot volume, the same techniques would apply to disk images stored in the cloud, although the attack will be more difficult to reproduce there for reasons described in the next post.

Critical to this exploit is a weakening of Bitlocker in Windows 8. Recall that block ciphers are designed to encrypt a fixed amount of data at a time. For example AES operates on 16 bytes at a time. Encrypting larger amounts of plaintext requires choosing a mode of operation: a way to invoke that single-block encryption repeatedly as a black box until all of the data has been processed. The naive way of doing this, namely encrypting each block independently, is known as electronic codebook or ECB mode. It has significant drawbacks, dramatically illustrated with the penguin pictures. Cipher-block chaining or CBC mode does a better job of hiding patterns in plaintext, such as repeated blocks, by mixing one block into the encryption of the next one. But CBC still does not provide integrity. In fact it becomes easier to modify ciphertext to achieve desired changes. With ECB mode attackers are usually limited to replacing blocks with other known plaintext blocks. By contrast CBC mode allows getting full control over any block by making changes to the preceding one. More specifically, any difference XORed into the preceding ciphertext block will result in the same difference XORed into the plaintext when the current block is decrypted. This capability comes with a caveat: the decryption of that preceding block is corrupted and replaced with junk that we cannot control in the general case.
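To make the bit-flipping concrete, here is a minimal sketch in Python using the pyca/cryptography package; the key, IV and two-block plaintext are invented for illustration.

```python
# A sketch of CBC malleability: flipping bits in ciphertext block N produces the
# same bit flips in plaintext block N+1, at the cost of garbling plaintext block N.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, iv = os.urandom(32), os.urandom(16)
plaintext = b"amount: $100.00 " + b"pay to acct 1234"   # two 16-byte blocks

enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ciphertext = bytearray(enc.update(plaintext) + enc.finalize())

# The attacker knows (or guesses) the original second block and picks a target.
original, desired = b"pay to acct 1234", b"pay to acct 9999"
for i in range(16):
    ciphertext[i] ^= original[i] ^ desired[i]    # XOR the difference into block 1

dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
recovered = dec.update(bytes(ciphertext)) + dec.finalize()
assert recovered[16:] == desired    # controlled change landed in block 2
print(recovered[:16])               # block 1 now decrypts to uncontrollable junk
```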

For these reasons simple block cipher modes are combined with a separate integrity check. Alternatively one can use authenticated encryption modes such as GCM, which compute an integrity check along the way while processing plaintext blocks. Because disk-encryption schemes cannot add an integrity check (the encryption of one disk sector must fit in exactly one sector) they have to defend against these risks by other means. The original version of Bitlocker in Vista used a reversible diffuser dubbed Elephant to “mix” the contents of a sector before encryption, such that errors introduced in decryption are amplified across the entire sector unpredictably, instead of being neatly confined to single blocks. While Elephant was an ad hoc design, later theoretical treatments of the subject in the literature introduced the notions of “narrow-block” and “wide-block” ciphers to properly capture this intuition: it should be difficult to make controlled changes across a large chunk of plaintext, much larger than the block size of the underlying primitive such as AES. The IEEE Security in Storage Working Group then standardized a number of modes such as XTS that have provable security properties based on assumptions about the underlying block cipher.
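For a sense of how sector-level encryption is constrained, here is a minimal sketch of per-sector AES-XTS using the pyca/cryptography package; the 512-byte sector size and the little-endian encoding of the sector number into the tweak are assumptions for illustration, not Bitlocker’s actual parameters.

```python
# Each sector encrypts to exactly one sector: no room for an integrity tag.
# The tweak ties ciphertext to its sector number, so identical plaintext
# sectors at different offsets produce different ciphertext.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

SECTOR_SIZE = 512
key = os.urandom(64)   # AES-256-XTS takes two 256-bit keys concatenated

def xts_sector(sector_number: int, data: bytes, encrypt: bool = True) -> bytes:
    assert len(data) == SECTOR_SIZE
    tweak = sector_number.to_bytes(16, "little")      # illustrative tweak encoding
    cipher = Cipher(algorithms.AES(key), modes.XTS(tweak))
    ctx = cipher.encryptor() if encrypt else cipher.decryptor()
    return ctx.update(data) + ctx.finalize()

sector = os.urandom(SECTOR_SIZE)
assert xts_sector(7, xts_sector(7, sector), encrypt=False) == sector
assert len(xts_sector(7, sector)) == SECTOR_SIZE      # same size, no authentication tag
```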

[continued]

CP

EMV: end of the road for dynamic mag-stripes?

A 2012 post on this blog discussed the renewed commercial interest in programmable magnetic-stripe (also known as “dynamic mag-stripe”) technology that allows a single plastic card to represent multiple credit/debit/loyalty/rewards cards by changing the information encoded on the fly. At the time, crowd-funded Coin was leading the charge. ArsTechnica recently reviewed other entrants in this increasingly crowded space, including Wocket and Plastc, which have more advanced features including NFC. But the article still fails to answer a simple question: can these cards work with the upcoming shift to chip & PIN in the US? (More precisely, chip & signature: true to their conservative and cautious nature, US banks are planning to roll out EMV without changing the user experience, at least initially. The chip will participate in the transaction with the point-of-sale terminal but card-holders do not have to authenticate by entering their PIN.)

Somewhat confusing matters is that at least Plastc is described as being EMV compatible. On the surface that would suggest one could take a chip card, somehow “load” it onto Plastc and use the Plastc device in place of the original card for EMV transactions going forward. But that cannot work short of a serious vulnerability in the design of the original card. While Plastc may possess all the hardware and software required to participate in EMV transactions, emulating an existing EMV card requires access to the information provisioned on that card. Therein lies the problem: some of that information, such as the card number and expiration date, is readily obtained; the rest is designed to be very difficult to extract.

A plain magnetic-stripe card is trivially copied: readers can be bought for a few dollars, or even obtained free thanks to Square handing them out to anyone who asks. Card-writers that can encode new information onto the magnetic stripe are more expensive but within reach; entire kits complete with blank cards can be purchased for a few hundred dollars off the shelf. “Cloning” such a card is as simple as reading the information on the magnetic stripe from the original card and writing it onto a new one. (A convincing replica for fraudulent purposes would also need to recreate visual design features such as the embossed numbers, hologram and background image. Coin and similar programmable cards are deliberately designed to look distinctive; they are not attempting to pass for a perfect copy of the original card.)

EMV cards by design resist such cloning. Unlike the fixed information encoded on the magnetic stripe, the chips produce slightly different responses for every interaction with the point-of-sale terminal. These responses are generated using secret cryptographic keys stored inside the chip. They are deliberately difficult to extract: the keys themselves are never output as part of the protocol and there is no “reader” to view the contents of internal storage. The card-holder cannot look them up on a web page or swipe the card through a gadget; that would defeat the point of keeping the keys secret. Absent vulnerabilities in the card software responsible for wielding those keys, only esoteric hardware-level attacks would allow extracting them: monitoring RF output or power consumption with high precision, inducing calculated faults by aiming precisely timed laser pulses, peeling away the circuitry layer by layer under high magnification. It’s a safe bet consumers will not be asked to repeat those procedures at home.

That rules out the type of do-it-yourself provisioning possible with magnetic stripes. While card-holders can easily “load” plain magnetic stripe cards into Coin without involvement from the issuer (whether issuers condone or object to that practice is another story), the same approach will not fly for chip cards. Succeeding with the EMV provisioning model requires buy-in from banks and card networks. Plastc could pursue agreements with issuers to provision card data directly on the Plastc hardware, or it could pursue the tokenization approach of creating a proxy card that forwards transactions to the original. But both options require getting buy-in from banks, one risk-averse institution at a time. Apple took that route for Apple Pay, and despite its market power has not achieved 100% coverage among issuers. The odds look daunting for a start-up.

CP

Bitcoin transaction fees: beyond the 1% barrier

Here is a recent Bitcoin transaction:

8f1d3a8ef6b2d4a25d2f499279e01518b4770819ccbc39a765c4c326170c61b3

It stands out for several reasons. First is the number of inputs into the transaction: while most transactions spend funds from a handful of other transactions, this one is aggregating from dozens. More interesting is that it moved an astonishing 217500 bitcoins total, valued at roughly 81 million dollars based on the exchange rate at the time of this transaction. That figure stands in sharp contrast to another one: the total commission (or “miner’s fee,” as the terminology goes in Bitcoin) paid for moving this amount was one ten-thousandth of a bitcoin, which comes out to about 4¢, or less than 0.000001% of the amount transferred. Where nuclear energy once promised “electricity too cheap to meter,” Bitcoin seems to hold out the possibility of funds movement with negligible fees.
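A quick sanity check on those figures; the exchange rate below is back-computed from the approximate $81M valuation, so treat the numbers as rough:

```python
amount_btc = 217_500
fee_btc = 0.0001                        # the miner's fee paid on this transaction
usd_per_btc = 81_000_000 / amount_btc   # roughly $372 per BTC at the time

print(f"fee in USD: ${fee_btc * usd_per_btc:.2f}")            # about $0.04
print(f"fee as share: {fee_btc / amount_btc * 100:.8f}%")      # about 0.00000005%
```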

That rosy picture changes quickly once we look at fees charged for more typical transactions, which are often mediated by third-party payment processors, rather than going through the blockchain directly. For example Coinbase— one of the more prominent Bitcoin payment processing services— does not take any commission for accepting Bitcoin, but charges 1% to merchants who choose to convert their funds to fiat currency. (It is a safe bet that this is what most large merchants are doing, since the problem of vaulting Bitcoin is unfamiliar.) That rate is several orders of magnitude above the rate achievable using the blockchain directly. What accounts for such high margins? That may be the wrong question to ask. Perhaps a better approach is to look at garden-variety payment systems and ask what justifies their fee schedule.

  • Credit cards: A fixed cost on the order of 30-40 US cents, plus an ad valorem component ranging from 2-3% of the transaction depending on the type of card. For example American Express has historically had higher processing fees than Visa or MasterCard. Similarly rewards cards tend to have higher fees. Large merchants such as Walmart have better leverage with card networks for negotiating fees— although not enough to stop being disgruntled about it— while smaller businesses tend to pay higher effective rates. (A rough numeric comparison of these fee schedules appears after this list.)
  • Debit networks: The 2010 Durbin amendment ushered in a steep drop in debit fees. While small community banks were exempted from the regulation, for the majority of large banks in the US debit transactions now carry a minimal fee of 0.05% plus roughly a quarter. There is a different fee schedule with a higher rate for “small-ticket” purchases under $15, but for amounts in that range both formulas converge to similar totals.
  • Square: Square handles credit-card processing for the long tail of small businesses, except that the “merchant of record” as far as the network goes is Square itself. Meanwhile the actual retailer transacting with the customer pays Square a flat 2.75% for swipe transactions regardless of the type of card used. (Keyed-in cards, Square’s analog of card-not-present transactions, incur a higher 3.5% rate.)
  • PayPal: Anywhere from 2.2% to 2.9% of the transaction based on monthly volume, plus a small fixed cost. Higher rates and currency-conversion fees apply for international transactions.
  • Western Union: Complex schedule based on amount, funding source (credit card vs bank account) and whether the funds are sent online, from a mobile app or in person. Two sample data points along the spectrum: sending $100 to New York by credit card costs $12, a staggering 12% rate. But sending ten times as much to the same ZIP code funded by a bank account costs less in absolute terms, at an effective 1% fee.
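To put these schedules on a common footing, here is a rough comparison for a hypothetical $100 purchase; the specific percentages and fixed fees are representative values picked from the ranges quoted above, not any particular contract:

```python
amount = 100.00   # hypothetical purchase

fees = {
    "credit card":    0.025 * amount + 0.35,    # ~2-3% plus a 30-40 cent fixed fee
    "debit (Durbin)": 0.0005 * amount + 0.22,   # 0.05% plus roughly a quarter
    "Square swipe":   0.0275 * amount,          # flat rate, no fixed component
    "PayPal":         0.029 * amount + 0.30,    # top of the 2.2-2.9% range
}
for name, fee in fees.items():
    print(f"{name:>14}: ${fee:.2f}  ({fee / amount:.2%})")
```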

These numbers are all over the place, but one can explain at least why the orders of magnitude are stacked this way. For example, in the case of a credit card transaction, the fee is split between the acquiring bank, the issuing bank (the one that gave the customer their credit card) and the network itself. (In the case of Discover and AmEx, the “issuer” can be the same entity as the network.) Of these three parties, the most notable is the issuer, which takes on consumer credit risk. The cardholder can spend the money but never pay back the issuer for the charges. There are additional costs the issuer must contend with— dealing with disputes, absorbing the cost of fraud in case of charge-backs, reissuing cards etc.— but credit risk is by far the most significant one, hence the triumvirate of credit-reporting bureaus compiling massive dossiers to estimate the creditworthiness of every American. There is also a large operational expense for the issuer: rewards programs. Those frequent-flier miles, cash-back awards and airline status upgrades all cost money, which the issuer hopes to recoup from its share of the transaction fees along with interest earned on customers carrying balances.

The absence of such frills explains why PIN-debit networks are able to operate with much lower overhead. A debit transaction only clears if the customer has funds present in their bank account; there is no credit extended with the expectation of future payment. Second, debit cards also have nominally better security compared to credit cards: entering a PIN at the point-of-sale terminal is required for spending, and there is no “card-not-present” model where simply typing a number and expiration date into a website— easily compromised— allows debit transactions. This is why debit-card skimmers have to capture both the magnetic stripe and the PIN entry. Debit transactions can also be reversed and disputed, but the process and timelines involved are different from credit cards.

What about Square? Square pays one transaction fee to Visa/MasterCard/AmEx while charging the merchant a different one, presumably profiting from the difference. That means Square’s profitability is critically dependent on the actual mix of cards that customers are using and on transaction amounts. If every customer of a Square merchant spent only a few dollars and paid with an American Express card, Square’s margins would be completely squeezed out, potentially putting the company in the red on each transaction. (In the extreme, this reduces to the time-honored dot-com business model: “We lose money on every transaction, but we make up for it in volume.”)
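A toy calculation of that squeeze on a small-ticket purchase; the interchange figure here is an assumption chosen to resemble a premium-card schedule, not Square’s actual negotiated cost:

```python
amount = 4.00                               # small-ticket purchase
square_revenue = 0.0275 * amount            # what the merchant pays Square
network_cost = 0.035 * amount + 0.10        # assumed premium-card interchange
margin = square_revenue - network_cost
print(f"revenue ${square_revenue:.2f}, cost ${network_cost:.2f}, margin ${margin:+.2f}")
# -> revenue $0.11, cost $0.24, margin $-0.13 : a loss on this particular swipe
```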

Returning to the problem of processing Bitcoin transactions for merchants: there is no “issuing bank,” nor is there an “acquirer.” The closest analog to the network is the set of Bitcoin miners competing to produce the next block in the public ledger that records all transactions, but their fees are nominal— dramatically illustrated by the $80M transaction above. So what is the risk that Coinbase is taking? There is no dispute resolution or fraud investigation to speak of: Bitcoin transactions are famously irreversible. It is the ultimate in self-reliance, where your cryptographic keys are your funds: lose control of the keys and there is no bank to call, no 1-800 number to report the loss, no FDIC insurance to cover it. There is no notion of charge-backs or penalties levied by the network for having too many transactions reversed. There is some currency risk, considering most merchants seem to prefer getting paid in US dollars immediately instead of vaulting Bitcoin— likely due to a lack of expertise in storing and handling cryptocurrency. That means whenever there is a precipitous drop in the Bitcoin-to-USD exchange rate, Coinbase is left holding the bag, having acquired bitcoins from the customer while paying out fiat currency to the merchant. And there are certainly no lavish vacations or frequent-flier status upgrades being awarded to anybody: Coinbase does not have any relationship with the consumer. (While Coinbase also provides a consumer wallet service, there is no guarantee that an incoming Bitcoin remittance originates from one of those wallets.)

In short, there is little reason to expect that merchant processing fees for Bitcoin need to stay in the 1% range (including currency exchange) where they are today. These rates are more likely a reflection of a nascent market, high volatility in exchange rates, lack of merchant expertise in directly handling Bitcoin and the first-mover advantage enjoyed by initial participants; they are likely to face downward competitive pressure as the risk models mature.

CP

How to fail at code-signing, the Sony edition

Another massive corporate security breach? Readers well-versed in the contemporary history of data breaches may not be surprised to learn that the target this time was Sony, a repeat customer for enterprising attackers since 2011. But they might still be surprised to learn both the extent of the damage and the surprisingly weak internal processes used by Sony in managing critical assets. Leaked internal documents and emails are making the rounds, despite best efforts by Sony to put the genie back in the bottle, both via alleged denial-of-service attacks and by enlisting an army of attorneys to mail out nastygrams to journalists as Christmas presents.

Here we will focus on just one aspect of the breach: theft of code-signing keys. These were the credentials used by Sony for digitally signing applications authored by the company, proving their origin. Granted, “authored by Sony” does not automatically translate into “safe” or “not riddled with vulnerabilities.” Knowledge of authorship attested by the digital signature is at best intended to be a proxy for the reputation of the software publisher. A reputable company would not be in the business of publishing harmful malware, goes the theory. Ironically Sony itself has provided a spectacular counter-example to that assumption: in 2005 the company willfully and deliberately published a malicious rootkit designed to implement gratuitous DRM restrictions against copying CDs on personal computers. Granted, at the time digital signatures were not commonly required for Windows kernel-mode drivers, which is the specific form this rootkit took. In that sense, the question of code-signing was decoupled from the problem of honesty on the part of the software vendor. In principle the terms of use required by CAs prior to issuing a code-signing certificate include warm-and-fuzzy promises by the publisher that they will not use the certificate to sign malicious code.

Back to Sony: this is the organization which, we learned, keeps passwords in cleartext files and stores encryption keys alongside the content they are protecting. It would not be out of character for such an enterprise to also mismanage its code-signing keys. Sure enough, signing keys were among the reported casualties of the breach, with signed malware turning up not long after reports of the incident surfaced. (One report claims the sample discovered was just a proof-of-concept created by a security researcher, as opposed to actual malware found in the wild for offensive purposes.) Regardless, the certificate in question was quickly revoked by the certificate authority. At least for the Authenticode scheme, which is the de facto standard on Windows, revocation checking is enabled by default— unlike the half-baked way it is implemented for SSL by most web browsers— providing at least a modicum of assurance that this particular certificate is unlikely to be used for hiding malware.

What about all of the other Sony certificates? Such a large organization is likely to have multiple certificates used for different applications and different teams. What happened to the rest? For that matter, what about Sony’s SSL certificates? In a data breach the probability of various assets getting compromised is highly correlated. It would be astonishing if an organization so negligent at managing cryptographic keys somehow exercised utmost caution to protect SSL keys. Why aren’t certificate authorities promptly revoking all Sony certificates and forcing the company to reissue? It is worth pointing out that a CA does not have to wait for Sony to request revocation. The terms-of-use for most CAs include provisions that allow the CA to unilaterally revoke a certificate when it is believed to have been compromised; here is the example from GlobalSign. So are CAs being negligent in waiting for a “smoking gun” in the form of malware signed by the compromised key, instead of proactively taking action?

This question exposes one of the basic conflicts of interest in code-signing systems with trust mediated by third-party certificate authorities (Authenticode being an example of such an open model). There are risks associated with revoking as well as risks associated with not taking action:

  • Revoking a certificate could mean applications relying on it break. In the case of code-signing, previously signed binaries will no longer validate. Doomsday predictions range from bricked devices to webpages not loading correctly.
  • Not revoking, on the other hand, could mean that malicious binaries are signed under the Sony name and users are tricked into installing such applications. In that case the CA would be guilty of negligence, aiding and abetting attacks against third parties who relied on the CA to vouch for Sony’s identity.

It turns out the first problem was already taken into account by Authenticode very early on, via time-stamping. Binaries signed by the software publisher are further counter-signed by a trusted third party vouching for the time the signature was observed. Revocation standards in turn allow specifying the point in time from which onwards the certificate is to be considered untrusted. All signatures made before that time— based on the third-party timestamp— continue to validate. Signatures bearing a later time-stamp, or none at all, are voided.
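In pseudocode terms the relying party’s decision reduces to comparing the trusted counter-signature time against the revocation date; a minimal sketch with invented field names, not the actual Authenticode API:

```python
from datetime import datetime, timezone
from typing import Optional

def signature_trusted(countersign_time: Optional[datetime],
                      revoked_from: datetime) -> bool:
    """Decide whether a code signature survives revocation from a given date."""
    if countersign_time is None:
        # No trusted timestamp: the signature could have been produced after
        # the key was stolen, so it is voided along with the certificate.
        return False
    # Signatures provably made before the revocation date keep validating.
    return countersign_time < revoked_from

revoked = datetime(2014, 12, 4, tzinfo=timezone.utc)   # illustrative revocation date
print(signature_trusted(datetime(2013, 6, 1, tzinfo=timezone.utc), revoked))  # True
print(signature_trusted(None, revoked))                                       # False
```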

So why are certificate authorities still erring on the side of not revoking certificates, when there is every reason to believe that Sony is not in control of its private keys? Put bluntly, the CA gets paid by the software publisher and at the end of the day is primarily responsible to the publisher, while occasionally throwing a bone to the CAB Forum, a balkanized consortium of certificate authorities and browser vendors attempting to set baseline standards for issuance. The CA has very little responsibility to anyone else. Those users who ended up installing malware signed by “Sony,” thinking it was covered by a legitimate certificate? Tough luck. They are not the ones paying $$$ to the certificate authority.
Granted, this question of liability has never been tested in an actual court case. The CA could plausibly pass the buck by pointing the finger at Sony, arguing that it is the certificate “subject”— in other words the customer paying the CA— that is solely responsible for initiating revocation. (There is no denying that Sony should have been quicker on the draw here.) But the fact that CAs go out of their way to reserve unilateral revocation rights undermines that argument: why reserve that right in the first place if the CA is too conflicted to exercise it when the customer is clearly negligent?

CP

NFC payments and powered-by-the-field mode (part II)

[continued from part I]

The security problem with running NFC payments in PBTF mode is that it takes user consent out of the picture. This defeats one of the main security improvements mobile wallets have over traditional plastic NFC cards: better control over when and which payments are authorized. In addition to the physical proximity requirement created by the short range of NFC, a number of additional pure-software checks exist in the mobile application to guarantee that any payment taking place is one the user fully intended. For example:

  • Most implementations require that the screen is on
  • A correct PIN must have been entered “recently” into the mobile application, based on a configurable time window. For example the original Google Wallet design using secure elements capped this at a conservative 5 minutes. (Interestingly enough, the more recent host-card-emulation release defaults to 24 hours, effectively removing this control.)
  • Some implementations display a confirmation dialog with the requested amount/currency. The consumer must acknowledge this before the transaction is finalized.

The key observation is that such controls are implemented partly or completely by the mobile Android/iOS application. For example the PIN time-out in Google Wallet uses Android timers set to kick in after the configured time elapses. Why can’t this logic reside in the SE applet itself? The first reason is a hardware limitation: the SE cannot maintain a true wall clock because it has no power source of its own and is only powered up for brief periods to perform specific tasks such as payments or authentication. The second is a usability/complexity trade-off: in theory payment applets could revert to an “unauthenticated” state after every transaction, requiring a new PIN entry. But that would make it difficult to deal with two common scenarios. First is retrying the same transaction that did not go through, a common problem since consumers are not used to interacting with NFC readers. Second is completing several purchases in succession, for example at large shopping centers with multiple merchants. In order to avoid manual PIN re-entry in these situations, the mobile application would have to cache the PIN outside the secure element for the same duration, creating a riskier situation than just relying on a timer to issue a lock command to the SE.
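A minimal sketch of that host-side arrangement; the timeout mirrors the original Google Wallet default, while the function names and the idea of a lock command are illustrative:

```python
import time

PIN_TIMEOUT_SECONDS = 5 * 60    # original secure-element Google Wallet default
_last_pin_entry = None          # host-side record; the SE has no clock of its own

def on_pin_verified():
    """Called after the SE applet accepts the PIN."""
    global _last_pin_entry
    _last_pin_entry = time.monotonic()

def payments_should_stay_enabled() -> bool:
    """Host-side timer check; on expiry the app would send a lock command to the SE."""
    if _last_pin_entry is None:
        return False
    return time.monotonic() - _last_pin_entry < PIN_TIMEOUT_SECONDS
```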

Even if time-outs and deactivation could be implemented reliably with future hardware changes, there is still one fundamental problem with PBTF: the consumer has no inkling when a transaction is taking place or, for that matter, what exactly happened. Recall that PBTF can only generate enough juice to power up the secure element and NFC controller chips; the small burst of electricity supplied by the reader is not enough to power up the far more resource-hungry mobile device itself. (Inductive charging exists for mobile devices such as the Nexus 4 from 2012. But even if point-of-sale terminals supplied that much power, it would take around half a minute for a phone to boot up and reach the point of running the payments application, far too long to be useful for tap payments where deadlines are measured in fractions of a second.) With the main processor powered off, there is no way to inform the user with a beep or vibration that a transaction has taken place, much less ask them for confirmation ahead of time. Given those constraints, the secure element applet responsible for payments must fend for itself when deciding whether to proceed. Imagine the code waking up to find itself powered by the external field and faced with a request for payment. (By the way, neither NXP nor Broadcom/Oberthur chips at the time provided any means to distinguish that state from regular powered-by-host mode, but let’s assume future hardware will solve that.) Is that because the consumer is holding their phone against a real point-of-sale terminal at a store to check out? Or is it an unauthorized person holding a rogue NFC reader against a victim standing on a bus with the phone in their pocket, oblivious to the virtual pick-pocketing? PBTF takes away one of the main advantages of coupling a dual-interface secure element to a smartphone: a trusted user interface for communicating user intent.

That determination cannot be made without the host OS participating, which it cannot do while powered off. One can imagine various heuristics to permit “low-risk” transactions: authorize a maximum of 10 transactions, approve only if the requested payment amount is small (but don’t forget about the currency in addition to the numeric amount, as this paper demonstrates) or approve only if the merchant is recognized. The problem is that only the first of these can be implemented reliably. Merchant authentication is rudimentary at best in EMV. Meanwhile the mag-stripe profile of EMV employed for backwards compatibility does not authenticate purchase amounts at all; in fact there is not even a provision to send the payment amount to the phone. Amusingly MasterCard MMPP added that capability, but it is window dressing: the amount/currency does not factor into the calculation of CVV3. The POS can claim to be charging 5¢ while interacting with the phone and then authorize $1000.
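Of those heuristics, only the transaction counter can be enforced inside the secure element without trusting unauthenticated data; a minimal sketch of that idea, with the limit and persistence model invented for illustration:

```python
MAX_FIELD_POWERED_TRANSACTIONS = 10
_field_powered_count = 0    # would live in the applet's persistent storage

def approve_field_powered_payment(claimed_amount_cents: int) -> bool:
    """Decide whether to proceed when powered by the reader's field alone."""
    global _field_powered_count
    # claimed_amount_cents is ignored on purpose: in the mag-stripe profile the
    # amount is neither authenticated nor bound to the cryptogram, so a rogue
    # reader can claim 5 cents and authorize a different amount afterwards.
    if _field_powered_count >= MAX_FIELD_POWERED_TRANSACTIONS:
        return False
    _field_powered_count += 1
    return True
```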

Finally there is the principle of least surprise: consumers do not expect their phones to be capable of doing anything— much less spending money— when they are powered off. (Contrary to claims that the NSA can continue tracking phones when powered off.) That argues for making PBTF an opt-in choice for users to enable once they are familiar with the technology. This fact was impressed on this blogger’s colleague during the early days of Google Wallet. While trying to demonstrate how the new technology was perfectly safe by tapping a powered-off Nexus S against a Vivotech 4500 reader, the engineer was shocked to hear the familiar beep of a successful transaction emitted from the reader. It turned out that he had powered off the phone by yanking the battery. Because of that abrupt shutdown, the NFC stack never had a chance to disable card-emulation mode. With the NFC controller set to card emulation and PBTF enabled by default in early NFC controller firmware, the secure element was left “armed” to conduct payments. (Luckily the reader was not connected to any POS, so no actual money changed hands during this instructive, if somewhat unexpected, demonstration.) That experience promptly inspired a firmware update to disable PBTF by default for Android devices.

CP

When wallets run out of juice: NFC with external power

One of the common refrains of criticism about NFC payments on mobile devices is what happens when the phone runs out of power. The common response is that users can always fall back on their “regular wallet,” with the implicit assumption that nobody is foolish enough to rely only on their mobile wallet just yet. But unless the payment options are identical between their plastic and mobile incarnations (which is rarely the case, because of tokenization and virtual cards) that strategy has limitations. For example refunds are usually required to go back to the same credit card used in the purchase. The worst-case scenario is public transit, where the same card has to be used on both entry to and exit from the system, as with BART turnstiles in the Bay Area. If the consumer taps in with their phone and the phone runs out of battery before they get a chance to tap out on the other side, they have some explaining to do.

But it turns out that running out of battery power is not necessarily an impediment to making payments. The trick is a feature known as “powered-by-the-field,” or PBTF for short. This is not exactly a new capability: consider how ordinary plastic smart-cards operate. There is no internal battery powering most of these devices, which also explains why they do not have persistent clocks to track time. Instead they draw current from the reader to power their internal circuitry for those few seconds when the card is called on to perform some cryptographic task. Initially that power was conducted via the brass plate which comes into direct contact with corresponding metal leads inside the reader. Later RFID cards were introduced, which can operate without direct metal-on-metal contact with the reader surface, hence the name “contactless.” These cards instead draw power from the induction field generated by the reader, using an antenna that is typically placed around the outside perimeter of the card to maximize its surface area.

Since NFC is effectively a specific type of RFID technology, it is not surprising that the same principle applies to contactless cards used for tap-and-pay purchases. What is less obvious is that this capability carries over to their mobile incarnation, with some caveats. We covered earlier how NFC-equipped phones can be viewed as having the internals of a dual-interface smart card embedded inside, called a “secure element” in these scenarios. (This is an over-simplification: cards usually have a simple analog antenna, while mobile versions are equipped with more complex NFC controller chips that support additional NFC functionality.) That same hardware can draw enough power from the external field to complete payments and similar NFC scenarios such as physical access or public transit. In theory, then, running out of battery is not a problem, although consumers still need the battery present because the antenna is usually incorporated into the battery itself.

So why isn’t this feature advertised more prominently? Because it is optional and typically disabled, for good reason. (Google Wallet designers also opted to disable PBTF mode.) The next post will go into why conducting payments this way can be problematic.

CP