In the wake of the Apple Pay launch, one interesting document captured a rare occasion: an insider described as a “key developer at a major bank” having a moment of honesty, voicing blunt opinions on Apple, ISIS/Softcard, Google Wallet and the upcoming chip-and-PIN transition in the US. According to this anonymous source, one of the key selling points for Apple Pay from the banks’ perspective is privacy. Apple provides the wallet software and operates a service, but it does not learn about user transactions. Given how often this presumed advantage is mindlessly repeated in other articles, it is worth drilling into the question: who knows exactly what the card-holder purchased? [Full disclosure: this blogger worked on Google Wallet security]
For the paranoid, cash remains the most privacy-friendly solution, perhaps to be rivaled by Bitcoin one day when it is more widely accepted as a payment scheme. It is peer-to-peer in the old-fashioned way: from customer to merchant. All other mainstream payment methods involve some intermediary, giving that third party an opportunity to learn about transactions. Some of the participants are visible in plain sight; others operate behind the scenes. Checks involve the bank where the customer maintains an account. Less obviously, they also involve a clearing-house responsible for settling checks among banks.
Credit card networks have more moving pieces. There is the issuing bank underwriting the card used by the consumer. There is the acquiring bank receiving the payment on behalf of the merchant. Typically there is also a payment processor linking the merchant to the acquirer. In the middle, orchestrating this flow of funds on a massive scale, is the card network: Visa, MasterCard, American Express or Discover. These players all have visibility into the total payment amount and the merchant. In most cases they also know the identity of the card-holder. Most banks fall under some type of know-your-customer (KYC) rules designed to deter terrorist financing, which involve validating the identity of customers prior to doing business. This is true even for gift/prepaid cards that are funded ahead of time with no credit risk for the issuer. That means at a minimum the issuing bank knows the true identity of the person involved in a transaction, which is not surprising. Less obvious is the fact that this identity is frequently exposed to other parties along the way as part of the “track data” encoded on the card, since track data travels from the merchant point-of-sale to the payment processor and eventually into the network. (The cardholder name need not be identical to the legal name, in which case there is a slight privacy improvement in the use of a pseudonym.) At the end of the day, it is a safe bet that Visa knows consumer Jane Smith has purchased $38.67 worth of goods from a corner grocery store in Tribeca on Saturday, October 18th. No wonder the NSA is on friendly terms with payment networks, tapping into this vast data-set for intelligence purposes.
But equally important is what none of these participants know: line-level transaction data. They have no visibility into the contents of Jane’s shopping-cart. There are some qualifications to this general principle, edge cases where contents can be inferred. For example, if there is only one item at a coffee-shop that rings up for $3.47, that is exactly what the customer ordered. Fortunately the more general case of trying to come up with a combination of goods tallying up to a given total is a surprisingly intractable computer-science challenge, known as the subset-sum problem. It is further complicated by the fact that many goods have identical prices, a given sum could correspond to millions of different combinations of items, and goods priced by weight, such as produce, do not even have a single assigned price. There is also some information leaked by the pattern of transactions. For example, restaurants will often perform two transactions when settling a bill: one to authorize the original amount and one that collects the final tab including the gratuity. In other words, the bad news is that both your bank and Visa can see whether you have been stingy or generous on tips. Yet they still have no idea exactly what you ordered, whether it was in line with FDA dietary recommendations (although there is likely a strong correlation with the establishment involved, which explains why insurers are peeking into patient credit-card records) or how many glasses of wine accompanied that meal.
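The ambiguity of a bare total can be illustrated with a short sketch. The menu and prices below are invented for illustration; the point is that as soon as prices repeat, even a tiny catalog yields multiple carts for the same total:

```python
from itertools import combinations_with_replacement

# Hypothetical price list in cents; a real store carries thousands of
# SKUs, many of which share the same price.
menu = {"coffee": 347, "muffin": 275, "bagel": 199,
        "juice": 347, "fruit cup": 275}

def matching_carts(total_cents, max_items=4):
    """Brute-force every cart of up to max_items that rings up to the total."""
    matches = []
    for n in range(1, max_items + 1):
        for cart in combinations_with_replacement(sorted(menu), n):
            if sum(menu[item] for item in cart) == total_cents:
                matches.append(cart)
    return matches

# A $3.47 total already matches two single-item carts...
print(matching_carts(347))        # [('coffee',), ('juice',)]
# ...and a $6.22 total matches four different two-item carts.
print(len(matching_carts(622)))   # 4
```

A real catalog makes the search space astronomically larger; this exhaustive search is exactly the subset-sum style computation that becomes intractable at scale.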
That information flow is not changed by chip-and-PIN, or for that matter NFC payments using a phone.** There is still a bank responsible for issuing the “card,” even when the physical manifestation of the card on a piece of plastic has been replaced by cryptographic keys residing on a mobile device. There is a more elaborate payment “protocol”: before, the cashier swiped the magnetic stripe on the card, reading static information; now there is an interactive process where the point-of-sale register communicates back-and-forth with the phone over NFC. Does that somehow disclose more information? It turns out the answer is no. The EMV standards prescribing that complicated song-and-dance have no provision for item-level information to be communicated. The phone can not receive this information from the POS, and neither can the POS transmit it upstream when requesting payment authorization.
So why the persistent harping by our banker friend on the contrast between Apple Pay and Google Wallet? Because in the latter design, Google is effectively the “issuer” of the card used for payment. As described in earlier posts, Google Wallet switched to using virtual cards in 2012. These cards are only “virtual” in the sense that the consumer does not have to know about them. But as far as the merchant and payment network are concerned, they are very real. From the merchant point-of-view, the consumer paid with a MasterCard issued by Google. Behind the scenes Google proxies this transaction in real-time to the actual card the consumer added to their wallet. This is why Google has the same visibility into purchases as an issuing bank such as Bank of America has with Apple Pay or Softcard: transaction amounts and patterns, but not line-level items.
This answer is still unsatisfactory for one reason: we have limited ourselves to information that is exchanged as part of the purchase transaction defined by EMV specifications. Could a mobile wallet application on a smart-phone obtain more information out-of-band?
** There is one privacy improvement in that cardholder names are typically redacted from the simulated “track data” created by NFC transactions.
[continued from part I]
Returning to the problem of provisioning a PIV card given the PIV administrator key, we can now enlist the help of open-source solutions. The venerable OpenSC suite includes a piv-tool utility for performing administrative operations on PIV cards. One cautionary warning: this approach is only suitable for very primitive key-management schemes. In an enterprise-grade credential management system, card keys would typically be diversified: each card gets a different administrator key, computed as a cryptographic function of some master secret and an immutable property of the card, such as its serial number. In high-security applications that master secret is confined to a hardware security module (HSM), leaving the provisioning system to handle only individual card keys. (For an additional level of paranoia, one can implement the entire PIV administrator authentication challenge-response in the HSM itself, with no keys ever relinquished outside the secure execution environment.)
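A minimal sketch of that diversification step, with HMAC-SHA256 standing in for whatever derivation function a real credential management system would use (the function name and label string below are invented for illustration):

```python
import hashlib
import hmac

def diversified_admin_key(master_secret: bytes, card_serial: bytes) -> bytes:
    """Derive a per-card administrator key from a master secret.

    Sketch only: key = PRF(master secret, immutable card property),
    truncated to the 24 bytes a 3DES key occupies. In a high-security
    deployment this computation would live inside the HSM.
    """
    prf = hmac.new(master_secret, b"piv-admin-key:" + card_serial,
                   hashlib.sha256)
    return prf.digest()[:24]

master = bytes(range(32))   # stand-in for the HSM-held master secret
k1 = diversified_admin_key(master, b"\x00\x01\x02\x03")
k2 = diversified_admin_key(master, b"\x00\x01\x02\x04")
assert k1 != k2             # each card serial gets a distinct admin key
```

The provisioning workstation then only ever sees the per-card key, so compromising it does not expose every card in the fleet.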
piv-tool takes a slightly more cavalier attitude about where administrator keys come from: a local file. Assuming that one is willing to live with this approach to key management, here is how key generation would work:
- Save the PIV card administrator key to a local file. The format is similar to how the GoldKey client UI accepts keys, with one cosmetic difference: each hex byte is separated by a colon character.
$ cat admin_key
01:02:03:04:05:06:07:08:01:02:03:04:05:06:07:08:01:02:03:04:05:06:07:08
- Export the name of the file containing the key into a specific environment variable.
$ export PIV_EXT_AUTH_KEY=./admin_key
- Invoke piv-tool with the “-A” option to authenticate as administrator and “-G” option to generate keys:
$ piv-tool -A A:9B:03 -G 9C:07
In this example, the -G option indicates generating a new signature key, identified by key reference 0x9C as defined in NIST SP 800-78-3, of type 2048-bit RSA, which is algorithm reference 07 defined in the same standard. The -A option indicates administrator authentication, using key ID 0x9B of type 3DES; this is a function of the card configuration, which users typically have little control over. Note that contrary to what one might optimistically assume about extensibility, PIV does not allow generating keys over arbitrary named curves or curves with user-defined parameters.
The command will trigger key-generation on the token and then output a binary encoding of the public half. It is easier to make sense of that output by having openssl parse the ASN.1. For example, here is the output for generating a new ECDSA signature key over the NIST P-256 curve (also known as prime256v1), which happens to be algorithm reference 11:
$ piv-tool -A A:9B:03 -G 9C:11 | openssl asn1parse -inform DER -i -dump
Using reader with a card: GoldKey PIV Token 00 00
    0:d=0  hl=2 l=  89 cons: SEQUENCE
    2:d=1  hl=2 l=  19 cons:  SEQUENCE
    4:d=2  hl=2 l=   7 prim:   OBJECT    :id-ecPublicKey
   13:d=2  hl=2 l=   8 prim:   OBJECT    :prime256v1
   23:d=1  hl=2 l=  66 prim:  BIT STRING
      0000 - 00 04 78 95 ac 64 63 7f-9d 4d a8 b5 5d 2f 36 27   ..x..dc..M..]/6'
      0010 - bf 73 6e fc ee bf de 29-6f ca 06 ee 85 a9 c5 42   .sn....)o......B
      0020 - 83 cf 12 3f eb f6 ff eb-0a a8 78 f4 de 68 40 a4   ...?......x..h@.
      0030 - 87 c9 81 2d 06 f0 5b 9b-a5 64 46 b5 12 3e 61 55   ...-..[..dF..>aU
      0040 - 99 09                                             ..
It is helpful to save the public-key because this is one of the few times it will be available directly: there is no APDU to retrieve a public-key in PIV. In steady state, the public-key exists in a certificate already loaded on the card; except that we have not yet obtained that certificate. (In some cases it is possible to indirectly recover the public-key. For example, given a valid ECDSA signature on a known message, the public key can be derived, as implemented in Bitcoin libraries.)
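Since there is no way to ask the card for the key again later, one option is to parse and stash the DER output at generation time. Below is a stdlib-only parser sketch written for this post, hard-coded to the exact SubjectPublicKeyInfo shape shown in the dump above (short-form lengths, P-256 sizes):

```python
def ec_point_from_spki(der: bytes) -> bytes:
    """Extract the uncompressed EC point from a DER SubjectPublicKeyInfo.

    Handles only the shape piv-tool emits for P-256 keys:
    SEQUENCE { SEQUENCE { OID, OID }, BIT STRING }, short-form lengths.
    """
    def read_tlv(buf, off):
        tag, length = buf[off], buf[off + 1]   # short-form length only
        start = off + 2
        return tag, buf[start:start + length], start + length

    tag, body, _ = read_tlv(der, 0)
    assert tag == 0x30                         # outer SEQUENCE
    tag, _, next_off = read_tlv(body, 0)       # algorithm identifier SEQUENCE
    assert tag == 0x30
    tag, bits, _ = read_tlv(body, next_off)    # BIT STRING holding the key
    assert tag == 0x03 and bits[0] == 0x00     # zero unused bits
    point = bits[1:]
    assert point[0] == 0x04                    # uncompressed: 04 || X || Y
    return point

# Synthetic SubjectPublicKeyInfo with a dummy 65-byte point, mirroring
# the ASN.1 layout in the asn1parse dump (89-byte outer SEQUENCE etc.):
point = b"\x04" + bytes(range(64))
spki = (b"\x30\x59"                                   # SEQUENCE, 89 bytes
        b"\x30\x13"                                   #  SEQUENCE, 19 bytes
        b"\x06\x07\x2a\x86\x48\xce\x3d\x02\x01"       #   OID id-ecPublicKey
        b"\x06\x08\x2a\x86\x48\xce\x3d\x03\x01\x07"   #   OID prime256v1
        b"\x03\x42\x00") + point                      #  BIT STRING, 66 bytes
assert ec_point_from_spki(spki) == point
```

In practice one would simply redirect the piv-tool output to a file; this sketch shows what would be involved in picking the raw point back out of it.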
Next: generating a certificate-signing request, which runs into an interesting circularity with the OpenSC middleware.
This is a follow-up to an earlier series on using GoldKey hardware tokens on OS X for cryptographic applications such as SSH. One question left unanswered in the last post was how to provision certificates to the token on OS X and Linux, where there is no official support provided by GoldKey. It turns out the implications of that question reach far beyond the implementation details of a specific hardware vendor such as GoldKey. It forces a closer look at the PIV standard itself and a canvas of the web for open-source utilities available for working with PIV cards.
Provisioning from a high-level
First a quick recap on exactly what “provisioning” means for smart-cards that support PKI. Starting with a blank card, the desired end state is to have a private-key residing on the card (ideally generated on-card, never having left the secure execution environment) along with an X509 digital certificate containing the corresponding public-key. (Repeat as desired: there can be more than one key/certificate slot. Case in point: PIV defines four types of keys with different use-cases, such as signature and encryption.)
Provisioning, nuts and bolts
That process typically breaks down into four steps:
- Request the card to generate a key-pair and output the public key.
- Create a certificate signing request (CSR) containing that public key and information about the user, and obtain a signature over it from the card.
- Submit the CSR to a certificate authority (CA) and have the CA issue a signed certificate.
- Load the returned certificate on the card.
This is the most generic version of the flow. In particular it covers the self-signed certificate scenario as a special-case, where instead of signing a CSR we can sign a certificate to collapse steps #2 and #3. (In principle this could even have been implemented on-card but typically the complexity of ASN1 parsing and construction has discouraged smart-card developers from trying to do much with X509 on-card.) It is also worth noting that the CSR generation is a practical necessity because most CA implementations are designed to take a valid CSR as input. “Valid” being the operative adjective: that includes checking that the CSR is signed by a private-key which corresponds to the public key appearing in it. One could imagine a hypothetical CA that simply issued certificates given a public key– the signature on the CSR does not factor into the final X509 certificate issued in any way. In real life CA implementations are a bit more demanding so it is important to build that flexibility into the card provisioning flow.
Returning to these four steps, the first observation is that #3 is the odd one out. It takes place out-of-band without any interaction with the card. So we focus our attention on the remaining three. Of these, #1 and #4 actually change the state of the card by updating its contents, while #2 only involves using existing key material. This turns out to be an important distinction for many smart-card standards including PIV, because the authentication model for those two modes is different. For example, in PIV using a key typically requires user authentication, in other words entering a PIN. Card management on the other hand requires authenticating as the card administrator, which involves a more complex challenge-response protocol using cryptographic keys, described in NIST SP 800-73 part 2.**
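The shape of that administrator challenge-response can be sketched as follows. This is purely conceptual: the real exchange happens over GENERAL AUTHENTICATE APDUs with 3DES as the cipher, whereas this dependency-free sketch substitutes HMAC-SHA256 and invents the class and function names:

```python
import hashlib
import hmac
import os

class Card:
    """Toy model of the card side of PIV external authentication."""

    def __init__(self, admin_key: bytes):
        self._key = admin_key
        self._challenge = None

    def get_challenge(self) -> bytes:
        # Card hands the host a fresh random challenge.
        self._challenge = os.urandom(8)
        return self._challenge

    def external_authenticate(self, response: bytes) -> bool:
        # Card checks that the host transformed the challenge with the
        # shared 0x9B key. (HMAC stands in for 3DES in this sketch.)
        expected = hmac.new(self._key, self._challenge,
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

def host_respond(admin_key: bytes, challenge: bytes) -> bytes:
    # Host-side (or HSM-side) computation proving knowledge of the key,
    # without the key itself ever crossing the wire.
    return hmac.new(admin_key, challenge, hashlib.sha256).digest()

admin_key = bytes(24)   # placeholder all-zero key for the demo
card = Card(admin_key)
assert card.external_authenticate(host_respond(admin_key, card.get_challenge()))
assert not card.external_authenticate(host_respond(b"\xff" * 24, card.get_challenge()))
```

The key point is that only the challenge and its transformation travel between host and card, which is why the computation can be pushed into an HSM as described earlier.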
Card-administrator keys for GoldKey
That brings us to the first problem: in order to perform key-generation or load certificates on the card, we must be in possession of the card management key (also referred to as the PIV card application administrator key, with key reference 0x9B per NIST SP 800-78-3) for that specific card. PIV allows this key to use any of the algorithms defined in the standard, including 3DES, AES, and even the public-key options RSA or ECDSA. But in practice most vendors, including Gemalto, Oberthur and GoldKey, appear to have settled on old-fashioned 3DES, in keeping with the retro-chic commitment to DES in smart-cards and financial applications.
In the case of GoldKey, all tokens are provisioned with an unspecified “default key” from the factory. Luckily the proprietary client software allows changing this key to one controlled by the user. This step requires a master-token, and that the current token has been associated with that particular master.
3DES keys are entered as 48 hex digits, or 24 bytes total. While that amounts to 192 bits, 3DES keys are effectively only 168 bits: in keeping with traditional DES key representation, only 7 bits out of each byte are significant, with the least-significant bit reserved for a parity check that is ignored in modern implementations. GoldKey appears to follow that model. Somewhat confusingly, the PIV standard refers to these as “card management keys” but they are not related to the Global Platform issuer-security-domain, aka “card manager,” keys. This is strictly a PIV application-level concept, which is independent of whether the underlying card is Global Platform compliant. (Of course in reality PIV applications are typically implemented on GP-compliant cards.)
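Adjusting those parity bits is a one-liner per byte; here is a sketch assuming the usual DES convention of odd parity in the least-significant bit (the helper name is invented):

```python
def with_odd_parity(key: bytes) -> bytes:
    """Force DES-style odd parity on every byte of a key.

    The top 7 bits of each byte are the significant key bits; the
    least-significant bit is set so the byte's total popcount is odd.
    """
    out = bytearray()
    for b in key:
        high7 = b & 0xFE
        # If the 7 significant bits already have an odd popcount, the
        # parity bit is 0; otherwise it is 1.
        parity = 0 if bin(high7).count("1") % 2 == 1 else 1
        out.append(high7 | parity)
    return bytes(out)

key = with_odd_parity(bytes(range(24)))   # 24 bytes = one 3DES key
assert all(bin(b).count("1") % 2 == 1 for b in key)
```

Since modern implementations ignore the check anyway, this matters only when interoperating with stricter, older stacks that reject ill-formed keys.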
Assuming we now have a token provisioned with a card-administration key we control– or perhaps we saved ourselves the effort by using a brand that comes preloaded with well-known default keys from the manufacturer– we can tackle steps #1 and #4 above.
[continue to part II]
** Incidentally this is why provisioning via the GoldKey Windows driver can not possibly be compliant with FIPS 201: it requires users to enter a PIN. There is no such mode in standard PIV; user PIN is neither sufficient nor necessary for administrative actions.
In No Place to Hide Glenn Greenwald presents a harsh critique of mass-surveillance conducted by the NSA as revealed in the massive stash of documents from Edward Snowden. In addition to the obligatory George Orwell 1984 references, the author also evokes the 18th-century British philosopher Jeremy Bentham’s Panopticon, a hypothetical prison design optimized for constant observation of inmates. Its unfortunate denizens occupy the outer rings of a circular structure, with an inner one-way transparent tower reserved for the wardens. These wardens can look out and observe the inmates at any time, but the inmates can not see what is going on inside that tower. They can not even be certain at any given time whether there is anyone on the other side watching. Yet the suspicion is always there– and that is the point. The mere possibility of being under observation will silently coerce the otherwise unruly denizens into behaving themselves, goes the argument, far more effectively than any random meting out of punishment or violence.
This tried-and-true dystopian construct is recycled yet again by Greenwald to argue that ubiquitous mass-surveillance will have the effect of chilling speech, smothering dissent and turning Americans into obedient conformists, afraid to challenge their government:
… Bentham’s solution was to create “the apparent omnipresence of the inspector” in the minds of the inhabitants. “The persons to be inspected should always feel themselves as if under inspection, at least as standing a great chance of being so.” They would thus act as if they were always being watched, even if they weren’t. The result would be compliance, obedience and conformity with expectations.
A later paragraph quotes Foucault:
Those who believe they are watched will instinctively choose to do that which is wanted of them without even realizing that they are being controlled– the Panopticon induces “in the inmate a state of conscious and permanent visibility that assures the automatic functioning of power.” With the control internalized, the overt evidence of repression disappears because it is no longer necessary: …
There is one problem with the parallel: until Edward Snowden, few people realized that they were living in the Panopticon. There has never been a shortage of tinfoil-hat-wielding conspiracy theorists convinced that the FBI is tuning into our brain patterns from outer space. Occasionally we were given glimpses into the extent of data collection: the New York Times exposé in 2005 on warrantless surveillance, an AT&T whistle-blower coming forward with the existence of NSA monitoring equipment on AT&T premises. But until the Guardian, the New York Times and the Washington Post started publishing from the cache of leaked documents, there was no convincing proof, no reason to suspect that every email, phone conversation and text message was being scanned as part of a massive communications dragnet, or that every routine credit-card purchase feeds into an operation designed to ferret out terrorist financing networks. Delusions aside, it is difficult for a society to be intimidated or cowed into submission by something it does not believe exists. Not only was the extent of surveillance unknown, but intelligence agencies very much preferred to keep it that way, using every opportunity to refute the allegations.
This is a key difference from Bentham’s frequently misused Panopticon: in the Panopticon inmates lived every moment fully aware of the constant possibility– if not actual reality of– being under observation. The architecture not only enabled constant surveillance, but it was fully transparent about that objective. Manifesting the presence of surveillance to its targets is an integral part of the design. There is no question about whether privacy exists in this system, no warm-fuzzy concepts of due process, 4th amendment protections and warrant requirements.
Intelligence operations by contrast thrive on secrecy. Extraordinary measures are taken to keep their existence hidden. A surveillance target aware of being watched, the theory goes, will modify his/her behavior, exercise greater caution or attempt to hide their tracks. In the extreme scenario, they might even refrain from communicating the information or carrying out the actions the operation hoped to uncover. If surveillance were perfectly ubiquitous, one could argue that state of affairs is just as good as having collected actionable intelligence. If all terrorists gave up on conspiring to plan new attacks out of fear that they are being watched, fewer attacks may result and society is safer. But frequent exaggeration and hyperbole aside, no surveillance system yet built is quite that omnipresent. Knowing that communications in one medium are carefully watched motivates the targets to conduct their activities over different channels where greater privacy is assumed to exist. Surveillance exposed is surveillance rendered ineffective. Distrust in US technology companies in the aftermath of the PRISM revelations is one example of this effect. Fewer people entrusting their data to Google, MSFT and Yahoo is counterproductive for a system that relied on those companies holding a wealth of information.
The Internet became a true Panopticon only after Edward Snowden came forward and his message reached a mass audience. Paradoxically then, to the extent that chilling effects on political speech have resulted in the wake of NSA revelations, Greenwald, Poitras and Gellman were key players in creating the Panopticon. In true ignorance-as-bliss fashion, our previous state of misguided expectations around privacy may have been far more conducive to free expression after all. But in this case we can only be grateful to Snowden for resetting those expectations and starting the debate on intelligence reform.
[continued from part I]
Handshakes with PFS
Perfect forward secrecy precludes decrypting past traffic that was collected passively. But we can still pull off a real-time active attack with the help of our friend CloudFlare. Suppose we have man-in-the-middle capability, controlling the network around our victim. When the victim tries to connect to the CDN, we impersonate the site and start a bogus handshake. Given access to a decryption oracle as in #1, we could always downgrade the choice of ciphersuite to avoid PFS, but that is not very elegant. Users might get suspicious about why they are not seeing the higher-security option. (Not that any web browser actually surfaces the distinction to users: while the address bar turns green for extended-validation certificates, a purely cosmetic touch with little security benefit, there is no reassuring icon to mark the presence of PFS.)
Luckily we can carry out a forged SSL handshake with PFS intact by enlisting the help of CloudFlare. This time, instead of asking our friendly CDN to decrypt an arbitrary ciphertext, we ask for assistance with signing an opaque message. CloudFlare will turn around and pass on this request to the origin site. Once again the origin is oblivious to the fact that this request is for MITMing a user as opposed to a new legitimate connection. Unlike simple RSA decryption, the transcript getting signed (more accurately, its hash) is different each time, so there is not even a way for a careful origin implementation to distinguish.
One could object that this approach is highly inefficient. Why not let the target connect directly to CloudFlare and ask them to store a transcript of decrypted traffic for later retrieval? Because that would reveal the target of surveillance. Network MITM combined with a well-defined interaction with the CDN (“sign this hash”) avoids divulging such information.
It’s worth emphasizing again that in neither case does the origin site do anything extra or different to enable interception. As far as that customer is concerned, they are simply holding up their part of the CloudFlare “keyless SSL” bargain. There is no need to send national-security letters to secure cooperation from the origin site. They can remain blissfully ignorant, publishing a squeaky-clean transparency report where they can boast about never having received requests for customer data. That’s because such requests are routed to the CDN, which is then legally obligated to keep its own customers in the dark about what is going on. (In fact CloudFlare claims to have received “between 0-249” NSLs in its own transparency report, which is not broken down by customer.)
This is why one of the touted benefits, around revoking trust, is moot. In principle the customer can instantly revoke access by refusing to decrypt for CloudFlare if the CDN is suddenly considered untrusted. (Of course they could have achieved the same effect in the traditional setup by revoking the certificates given to the CDN, but that runs into the vagaries of botched and half-baked revocation checking in various browsers.) Minor problem: there is no way to know whether the CDN is operating as advertised or helping third parties intercept private communications to the origin. There is no accountability in this design.
This blogger is not asserting such things are happening routinely at CloudFlare. The point is that they can happen, and in spite of best intentions, a CDN can not provide guarantees against such compelled assistance. Even the NSL canary in the CloudFlare transparency report is fully consistent with offering such on-demand decryption assistance:
- CloudFlare has never turned over our SSL keys or our customers SSL keys to anyone.
- CloudFlare has never installed any law enforcement software or equipment anywhere on our network.
- CloudFlare has never provided any law enforcement organization a feed of our customers’ content transiting our network.
Providing a controlled interface for law-enforcement to request decryption/signing does not violate the letter or spirit of any of these assertions. When the origin site provides an API for CloudFlare to call and request decryption, surely that does not count as the origin site installing CloudFlare software or equipment on its network. By the same token, if CloudFlare were to provide an API for law-enforcement to call and request decryption (which must be proxied over to the origin site for “keyless SSL”), it does not count as installing law-enforcement software. Neither does it count as providing a feed of content transiting the network: that “content” is captured by the government in encrypted form as part of its intelligence activities, and CloudFlare simply provides tactical assistance in decryption. There is of course the question of whether such canaries are meaningful to begin with. If it turns out that CloudFlare was in fact colluding with the US government all along in violation of the above statements, would the FTC, a different part of that same government, go after CloudFlare for deceptive advertising?
Clarifying threat models
This is not to say keyless SSL has no benefits. Having only one location containing sensitive keys, as opposed to two, reduces attack surface. (This is true even if the origin uses a different hostname than the externally visible one to avoid having another key that can enable MITM attacks. The links between CDN and origin are highly concentrated targets for surveillance.) It protects the origin from mistakes and vulnerabilities on the part of the CDN that lead to disclosure of the private key, such as the Heartbleed scenario that affected CDNs in April. But there is a clear difference between incompetence and malice. Keyless SSL provides no defense against a CDN colluding with governments to enable surveillance while keeping its customers in the dark.
CloudFlare recently announced the availability of keyless SSL for serving SSL traffic without having direct access to the cryptographic keys used to establish those SSL connections. This post takes a closer look at the implications of this architecture for security and compelled interception by governments.
Content distribution networks
Quick recap: a content distribution network or CDN is a distributed service for making a website available to users with higher availability, reduced latency and lower load on the website itself. This is accomplished by having CDN servers sit in front of the origin site, acting as a proxy by fielding requests from users. Since many of these requests involve the same piece of static content such as an image, the CDN can serve that content without ever having to turn around and interact with the origin site. Also CDN systems are typically located around the world on optimized network connections, with much faster paths to end-users than the typical service can afford to build out itself. Over time CDNs have expanded their offerings to everything from DDoS protection to image rescaling and optimizing sites for mobile browsers.
There is one hitch to using a CDN with SSL: the CDN infrastructure must terminate the connection. For example, MSFT’s Bing search engine uses Akamai. When users type “https://www.bing.com” into their browser, that request is in fact going to Akamai infrastructure rather than MSFT. But SSL uses digital certificates and associated secret keys for authentication. That means either the CDN obtains a new certificate on behalf of the customer (with CDN-generated keys and the customer vouching for the CDN), or the customer provides the CDN with their existing certificate and key.
Getting by without keys
“Keyless SSL” is a misnomer, since it is unavoidable for the SSL/TLS protocol to rely on cryptographic keys for security. But the twist is that the CDN no longer has direct control of the private-key. Instead the specific parts of the SSL protocol that call for using the private-key are forwarded to the origin site, which performs that particular operation (either decryption or signing, depending on whether PFS is enabled). Everything else involved in the request is still handled by the CDN. There is a slight regression in performance: public-key operations in an SSL handshake are among the more computationally demanding parts of the protocol, and the origin site must once again be involved in handling each of them, forfeiting one of the benefits of using a CDN in the first place. What do we get in return?
CloudFlare goes to great lengths to emphasize that this design guarantees they can not be compelled to reveal customer keys to law enforcement, because they do not have those keys. This is a legitimate concern. CDNs create a centralized single point of failure for mass surveillance. A CDN might be the best friend of data-hungry intelligence agencies: instead of having to issue multiple requests to tap into traffic for different websites, they can work directly with one CDN serving those customers and get access to all content going through it. To what extent does keyless SSL change that picture? It turns out the answer is: not much.
The first observation is that the ability to use a cryptographic key without restriction can be just as good as having direct access to the raw key bits. Recall that CloudFlare can make requests to the origin site and ask for arbitrary operations to be performed using the key. In other words, the origin presents an “oracle” interface for performing arbitrary operations. In other contexts this is enough to inflict serious damage. Here is a parallel from the Bitcoin world: Bitcoin wallets are represented by cryptographic keys, and moving funds involves digitally signing transactions using those keys. If you do not trust someone with all of your money, you would not give them access to your wallet keys. But would you be comfortable with a system where that same person can submit opaque messages to you for signing? Clearly this would not end well: they could craft a series of Bitcoin transactions to transfer all funds out of your wallet into a new one that they control. You would become an accessory to the theft of your own funds by rubber-stamping these transactions with a cryptographic signature whenever asked. A different low-tech example is withholding your checkbook from an associate who is not trusted with spending authority, but being perfectly happy to hand them a signed blank check whenever asked. Strictly speaking the checkbook itself is “safe,” but your associate can still empty out your account.
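The danger of such an oracle can be made concrete with toy numbers. The sketch below uses a deliberately tiny RSA key, purely for illustration, to show that an interface signing arbitrary opaque values is, from an interceptor's point of view, as good as holding the private key:

```python
# Toy RSA parameters -- far too small for real use, chosen so the
# arithmetic is visible. The oracle pattern is what matters here.
p, q, e = 1009, 1013, 65537
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

def origin_signing_oracle(opaque_hash: int) -> int:
    """Origin faithfully signs whatever opaque value it is handed.

    A transcript hash from a legitimate handshake and one crafted by a
    man-in-the-middle both look like random bytes; the origin has no
    basis for refusing either.
    """
    return pow(opaque_hash, d, n)

# An interceptor forging a handshake simply asks the oracle...
forged_transcript_hash = 123456
signature = origin_signing_oracle(forged_transcript_hash)
# ...and the result verifies under the site's public key, exactly as a
# browser would verify it mid-handshake.
assert pow(signature, e, n) == forged_transcript_hash
```

The origin never sees anything it could refuse; only the party choosing what gets signed knows whether the signature serves a handshake or an interception.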
Building on that first observation, we note that possession of private keys is a sufficient but not a necessary condition for intercepting communications. Putting ourselves in the position of a government trying to monitor a particular user, let's consider how we can enlist CloudFlare to achieve that objective even when keyless SSL is employed.
For simple RSA-based key exchange, suppose our intelligence agency has collected and stored some SSL traffic in the past and now wants to go back and decrypt a connection. All we need to do is decrypt the client key-exchange message that appears near the beginning of the SSL handshake. This message contains the so-called "premaster secret," encrypted with the origin site's RSA public key. So we take that message and enlist the help of our friendly CDN to decrypt it. When keyless SSL is in effect, CloudFlare cannot perform that decryption locally, but it can ask the origin site to do so using the exact same API it uses when terminating SSL connections for legitimate use cases. Given the premaster secret, we can derive the session keys used for bulk data encryption over the remainder of the connection, unraveling all of its contents. Meanwhile the origin site is none the wiser about what just took place. There is no indication anywhere that past traffic is being decrypted under coercion as opposed to a new SSL connection being negotiated with a legitimate user.** The operations are identical.
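A toy sketch of this replay, using textbook RSA with tiny parameters and a bare hash in place of the TLS PRF. Both are illustrative simplifications: real handshakes use large keys, PKCS#1 v1.5 padding and client/server nonces.

```python
import hashlib

# Toy textbook-RSA parameters (illustration only).
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

def origin_decrypt_oracle(ciphertext: int) -> int:
    """The origin's keyless-SSL endpoint: it decrypts whatever it is
    sent, unable to tell a live handshake from a replayed one."""
    return pow(ciphertext, d, n)

def derive_session_keys(premaster: int) -> bytes:
    # Stand-in for the TLS PRF: session keys are a deterministic
    # function of the premaster secret (nonces omitted here).
    return hashlib.sha256(str(premaster).encode()).digest()

# 1. A client once encrypted a premaster secret to the origin's key...
premaster = 42
recorded_ciphertext = pow(premaster, e, n)  # captured off the wire
# 2. Later, the eavesdropper replays that ciphertext to the oracle...
recovered = origin_decrypt_oracle(recorded_ciphertext)
# 3. ...and derives the same session keys the client used.
assert derive_session_keys(recovered) == derive_session_keys(premaster)
```

Nothing in the exchange distinguishes step 2 from an ordinary handshake, which is exactly the problem.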
[continue to part II]
** A diligent origin implementation could notice that it is being asked to decrypt a handshake message that has already been observed in the past. Such a collision is extremely unlikely to happen between messages chosen by different users.
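A minimal sketch of the diligence the footnote describes, assuming the origin keeps a digest of every handshake message it has been asked to decrypt (the cache and its policy are hypothetical, and a real deployment would need to bound its growth):

```python
import hashlib

seen_digests: set = set()

def decrypt_handshake(message: bytes, rsa_decrypt):
    """rsa_decrypt is the origin's private-key operation, passed in
    here only to keep the sketch self-contained."""
    digest = hashlib.sha256(message).digest()
    if digest in seen_digests:
        # Honest clients essentially never produce identical messages,
        # so a repeat suggests a replayed (possibly coerced) decryption.
        raise RuntimeError("handshake message seen before")
    seen_digests.add(digest)
    return rsa_decrypt(message)

decrypt_handshake(b"msg-1", lambda m: m)  # first time: fine
# calling decrypt_handshake(b"msg-1", ...) again would raise RuntimeError
```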
Apple is expected to launch an NFC payments solution for the iPhone. For the small community working at the intersection of NFC, payments and mobile devices, Apple's ambitions in NFC have been one of the worst-kept secrets in the industry. The cast of characters overlaps significantly: there are only so many NFC hardware providers to source from, so many major card networks to partner with, and very similar challenges to overcome along the way. Of the many parallel efforts in this space, some played out in full public view. Wireless carriers have been forging ahead with field trials for their ISIS project, now rebranding as Softcard to avoid confusion with the terrorist group. Others subtly hinted at their plans, as when Samsung insisted on specific changes to support its own wallets. Then there was Apple, proceeding quietly until now. Almost three years after the initial launch of Google Wallet, now is a good time to look back on that experience, if only to gauge how the story might play out differently for Apple. [Full disclosure: this blogger worked on Google Wallet 2012-2013. Opinions expressed here are personal.]
There are many reasons why launching a new payment system is difficult and, for precisely the same reasons, it is difficult to pinpoint the root cause when a deployed system has been slow to gain traction. Is it the unfamiliar experience for consumers, tapping instead of swiping plastic cards? (But that same novelty can also drive early adopters.) Were there other usability challenges? Is it the lack of NFC-equipped cash registers at all but the largest merchants? Or was that just a symptom of an underlying problem: an unclear value proposition for merchants? Tap transactions have better security and less fraud risk, yet merchants still pay the same old card-present interchange rates. For that matter, did users perceive sufficient value beyond the gee-whiz factor? The initial product supported only a prepaid card and Chase MasterCards, limiting the audience further. All of these likely contributed to a slow start for Google Wallet.
But there was one additional impediment having nothing to do with technology, design, human factors or the economics of credit cards. It was solely a function of the unique position Google occupies: competing against wireless carriers over key mobile initiatives while courting the very same companies to drive Android market share.
When consumers root their phone to run your app
When the project launched in 2011, it was limited to Sprint phones. That is bizarre, to say the least. All mobile app developers crave more users. Why would any software publisher limit their prospects to a single carrier, and not even the one with the largest customer base at that? There was no subtle technical incompatibility involved; nothing magical about the choice of wireless carrier unlocks hidden features in the same commodity hardware for one user but not another. It was a completely arbitrary restriction that can be traced to the strained relationship between Google and the wireless carriers who had cast their lot with ISIS.
Outwardly Verizon stuck to the fiction that they were not blocking the application deliberately. In a narrow sense, that was correct: Google Wallet itself contained a built-in list of approved configurations. At start-up the app would check whether it was running on one of these blessed devices and, if not, politely inform the user that they were not allowed to run the application. In effect the application censored itself. This ensured that even if a determined user managed to get hold of the application package (the APK, which was not directly available from the Play Store for Verizon, AT&T and T-Mobile customers) and side-load it, it would still refuse to work. That charade played out for the better part of two years, with occasional grumblings from consumers and Verizon continuing to deny any overt blocking.
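The self-censoring check amounts to something like the following sketch. The device and carrier names, and the list itself, are invented for illustration; the actual allow-list and its lookup logic were internal to the app.

```python
# Hypothetical approved (device, carrier) configurations baked into the app.
APPROVED_CONFIGS = {
    ("Nexus S 4G", "Sprint"),
    ("Galaxy Nexus", "Sprint"),
}

def may_run(device_model: str, carrier: str) -> bool:
    # The app refuses to start unless the (device, carrier) pair is on
    # the list, so side-loading the APK on a "wrong" carrier changes
    # nothing: the check runs on-device, not in the store.
    return (device_model, carrier) in APPROVED_CONFIGS

assert may_run("Nexus S 4G", "Sprint")
assert not may_run("Galaxy Nexus", "Verizon")  # blocked, even side-loaded
```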
Users were furious. Early reviews on Play Store were a mix of gushing praise with 5-stars, and angry 1-star rants complaining that it was not supported on their device. Many opted for rooting their phone or side-loading the application to get it working on the “wrong” carrier. (Die-hard users going out of their way to run your mobile app would have been a great sign of success in any other context.) Interestingly there was one class of devices where it worked even on Verizon: the Galaxy Nexus phones that Google handed out as holiday gifts to employees in 2011. In a rare act of symbolic defiance, it was decided that since Google covered every last penny of these devices with no carrier subsidy, our employees were entitled to run whatever application they wanted.
One could cynically argue that capitulating to pressure from carriers was the right call in the overall scheme of things. It may have been a bad outcome for the mobile payments initiative per se, but it was the globally optimal decision for Google shareholders. Android enjoys a decisive edge over the iPhone in market share, but that race is far from decided, and US carriers have great control over the distribution of mobile devices. Phones are typically bought straight from the carrier below cost, subsidized by ongoing service charges. Google made some attempts to rock the boat with its line of unlocked Nexus devices, as did T-Mobile with its recent crusade against hardware subsidies, but these collectively made only a small dent in the prevailing model. Carriers still have a lot of say in which phone running which operating system gets prime placement on their store shelves and in their marketing campaigns. Despite occasional criticism as surrender monkeys on net neutrality, Google leadership had a keen understanding of these dynamics. They had intuited that a fine line had to be walked: keeping carriers happy was priority #1, while leaving room for the occasional bit of boat-rocking with unlocked devices and spectrum auctions. It was simply not worth alienating AT&T and Verizon over an experiment in mobile payments, an initiative that was neither strategic nor likely to generate significant revenue.
The secure element distraction
Curiously, the original justification for why Google Wallet could be treated differently from all other apps came down to quirks of hardware. During its first two years, NFC payments in Google Wallet required the presence of a special chip called the embedded secure element. This is where sensitive financial information, including credit-card numbers and the cryptographic keys used to complete purchases, was stored. Verizon pinned the blame on the SE when trying to justify its alleged non-blocking of Google Wallet:
Google Wallet is different from other widely-available m-commerce services. Google Wallet does not simply access the operating system and basic hardware of our phones like thousands of other applications. Instead, in order to work as architected by Google, Google Wallet needs to be integrated into a new, secure and proprietary hardware element in our phones. We are continuing our commercial discussion with Google on this issue.
One part of this is undeniably true: the secure element is not an open platform in the traditional sense. Unlike a mobile device or PC, installing new applications on the SE requires special privileges for the developer. This is intentional and part of the security model for this type of hardware; limiting what code can run on a platform reduces its susceptibility to attacks. But the great irony is that a different type of secure element with exactly the same restriction has been present all along on phones: SIM cards. Both the embedded secure element and SIM cards follow the same standard, called Global Platform, which lays down the rules around who gets to control applications on a given chip and exactly what steps are involved. The short version is that each chip is configured at the factory with a unique set of cryptographic secrets, informally called "card manager keys." Possession of these keys is required to install new applications on the chip.
For SIM cards the keys are controlled by, you guessed it, wireless carriers. ISIS relies on the carriers' ability to install its mobile wallet application on SIM cards, in exactly the same way Google Wallet relied on access to the embedded secure element. SIM cards have been around far longer than embedded secure elements, yet their alleged lack of openness seems to have escaped attention. When was the last time Google threw a temper tantrum over not being allowed to install code on SIMs?
The closer one looks at Global Platform and the SE architecture, the flimsier these excuses about platform limitations begin to sound. The specific hardware used in Android devices supported at least four different card-manager key slots. One was occupied by Google and used for managing the Google Wallet payments code. Another was reserved by the hardware manufacturer to help manage the chip if necessary. The remaining two slots? Unused. Nothing at the technology level would have prevented an arrangement giving wireless carriers the same level of access as Google. This is true even for devices already in the field; keys can be rotated over the air. One can envision a system where the consumer decides exactly who is in charge of their SE, with the current owner responsible for rotating keys to hand off control to the new one. If that sounds like too many cooks in the kitchen, newer versions of Global Platform support an even cleaner model for delegating partial SE access: multiple companies can each get a virtual slice of the hardware, complete with the freedom to manage their own applications, without being able to interfere with each other. In other words, multiple payment solutions could well have coexisted on the same hardware. There is no reason for users to pledge allegiance to Google or ISIS; they could opt for all of the above, switching wallets on the fly. Those wallets could run alongside applications using NFC to open doors, log in to the enterprise network or access cloud services with two-factor authentication, all powered by the same hardware.
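The key-slot arrangement can be modeled in a few lines. This is a conceptual sketch only: the four-slot layout, the method names and the direct key sharing at the bottom are stand-ins for Global Platform's actual provisioning and secure-channel protocols.

```python
import secrets

class SecureElement:
    """Toy model: one card-manager key per slot; installing an applet
    requires proving possession of a slot key, and rotating a slot's
    key hands control of that slot to a new owner."""
    def __init__(self, num_slots: int = 4):
        # The factory provisions a unique key for each slot.
        self._keys = [secrets.token_bytes(16) for _ in range(num_slots)]
        self.applets: list = []

    def install(self, slot: int, key: bytes, applet: str) -> bool:
        if secrets.compare_digest(self._keys[slot], key):
            self.applets.append(applet)
            return True
        return False  # no card-manager key, no install

    def rotate_key(self, slot: int, old_key: bytes, new_key: bytes) -> bool:
        # The current owner hands off control by rotating the slot key.
        if secrets.compare_digest(self._keys[slot], old_key):
            self._keys[slot] = new_key
            return True
        return False

se = SecureElement()
google_key = se._keys[0]  # shared out-of-band with the slot owner
assert se.install(0, google_key, "google-wallet")
carrier_key = secrets.token_bytes(16)
assert not se.install(2, carrier_key, "isis-wallet")  # no key yet...
assert se.rotate_key(2, se._keys[2], carrier_key)     # ...until handoff
assert se.install(2, carrier_key, "isis-wallet")
assert se.applets == ["google-wallet", "isis-wallet"]
```

Two wallets coexist because each party controls only its own slot, which is the point the paragraph above makes.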
Who controls the hardware?
But that is all water under the bridge. Google gave up on the secure element and switched to a different NFC technology called "host-card emulation" for payments. There is no SE on the Nexus 5, the latest in the Nexus line of flagship devices. With the controversial hardware gone, any remaining excuse to claim Google Wallet was somehow special also went out the door. Newly emboldened, the application was launched to all users on all carriers for the first time. "Google gets around wireless carriers" cried the headline on NFC World, with only slight exaggeration. (It probably did not hurt that competitive pressure on ISIS had eased, since they were finally ready for launch after multiple setbacks.) Installed base and usage predictably jumped. Play Store reviews improved, and the sharp split between angry users denied access and happy ones raving about the technology narrowed. A few questioned whether payments would have been more secure with the SE. Otherwise the quirks of Android hardware were quickly forgotten.
A good contrast here is the TPM, or Trusted Platform Module, on PCs. Much like the secure element, the TPM is a tamper-resistant chip that is part of the motherboard on traditional desktops and laptops. TPMs first made their appearance with the ill-fated Windows Vista release, where they were used to help protect user data as part of the BitLocker disk-encryption scheme. Later Windows 7 expanded the use cases, introducing virtual smart-cards to securely store generic credentials for authentication and encryption. The carrier situation is akin to Microsoft shipping BitLocker, Dell choosing to include a TPM in their hardware and a consumer buying that model, only to be told by an ISP that customers using their broadband service are not allowed to enable BitLocker disk encryption. Such an absurd scenario does not play out in the PC market because everyone realizes that ISPs simply provide the pipes for delivering bits. Their control ends at the network port; an ISP has no say over what applications you can run.
In retrospect NFC payments were an unfortunate choice of first scenario for introducing secure elements. The contemporary notion of "payments" is laden with the expectation that one more middleman can always be squeezed into the process to take their cut of the transaction. It is hardly surprising that wireless carriers wanted a piece of that opportunity. Never mind that Google itself never aspired to be one of those middlemen vying for a few basis points of the interchange. One imagines there would have been much less of a land-grab from carriers if the new hardware had instead been tasked with obscure enterprise security scenarios such as protecting VPN access or hardening disk encryption. (Unless backed by hardware, disk encryption on Android is largely security theater: it is based on a predictable user-chosen PIN or short passphrase.)
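The parenthetical is easy to demonstrate: a key derived from a four-digit PIN falls to offline brute force in moments when no hardware throttles guesses. The KDF choice, iteration count and salt below are illustrative; raising the iteration count only changes the constant factor, not the outcome.

```python
import hashlib

def derive_key(pin: str, salt: bytes = b"per-device-salt") -> bytes:
    # 100 iterations keeps the demo fast; real systems use far more,
    # but 10,000 candidate PINs is a trivial search space regardless.
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100)

def brute_force(target_key: bytes):
    # An offline attacker simply tries all 10,000 four-digit PINs.
    for n in range(10000):
        pin = f"{n:04d}"
        if derive_key(pin) == target_key:
            return pin
    return None

target = derive_key("4932")  # key derived from the user's PIN
assert brute_force(target) == "4932"
```

Hardware-backed schemes defeat this by keeping the real key in the secure element and rate-limiting or wiping after bad guesses, which is exactly the capability the SE offered.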
Hardware technology with significant potential for mobile security has been forced out of the market by the intransigence of wireless carriers in promoting a particular vision of mobile payments. This is by no means the first or only time wireless carriers have undermined security. The persistent failure to ship Android security updates is a better-known, more visible problem. But at least one can argue that is a sin of omission, of inaction: integrating security updates from the upstream Android code-base, and verifying them against all the customizations, small and large, that OEMs and carriers made to differentiate themselves, takes time and effort, and it is natural to favor the profitable path of selling new devices to subscribers over servicing ones already sold. The case of hardware secure elements is a different type of failure. Carriers went out of their way to obstruct Google Wallet. Reasonable persons may disagree on whether that is a legitimate use of existing market power to tilt the playing field in favor of a competing solution. But one thing is clear: that strategy has all but eliminated a promising technology for improving mobile security.