Continuing the exploration of privacy in NFC payments, we next focus on the code that runs on the smart phone and what those applications can learn about user transactions. Unlike an inert plastic smart-card, a smart phone is a general-purpose computer connected to a network. Given that mobile devices have been implicated in many privacy intrusions, it is natural to ask whether applications can snoop on purchase history. Due to the variation in mobile wallet architectures there is no clear-cut answer, but we can explore the design space for privacy.
First it is worth distinguishing between the wallet application itself and third-party apps. For example, if a bank provided a mobile application for NFC payments, the privacy question is largely moot: the bank already has visibility into payments by virtue of authorizing them. (Unless of course the application allows the bank to learn additional information, such as line-level item data, above and beyond what the bank is entitled to observe as card issuer.) On the other hand, if the wallet provider is distinct from the issuing bank, as in the case of Apple Pay, it is fair game to ask what the mobile application sees. There is an even greater concern with third-party applications, unrelated to the wallet provider or financial institution, trying to snoop on purchase history.
The wallet application is typically involved in the purchase transaction for user-experience reasons. This holds true even when the actual payment credentials live on a dedicated secure element chip and all NFC communication takes place between the point-of-sale terminal and that secure element, as is the case when card emulation is enabled. Recall that the secure element itself has no user interface and can not communicate with the user to walk through various steps of the flow, such as entering a PIN, displaying a dollar amount for confirmation or signaling that a transaction has completed. Since NFC communication from the POS is normally routed directly to the SE, keeping the wallet in the loop requires some other mechanism for notifying the host operating system. For example on Android the NFC controller generates notifications for specific events, such as detecting the presence of an external NFC field or observing a command to SELECT a particular application on the secure element. Such notifications are delivered to relevant applications, such as Google Wallet in the case of Android. If these notifications were more broadly available, third-party applications could also observe a transaction taking place. Typically this is prevented by delivering the notifications only to privileged applications.
At a minimum then the wallet application knows that a transaction was attempted. In fact the payment applets on the secure element will typically have even more information that can be queried by the host, including the purchase amount and status of the last transaction. This is all part of the specification for SE applets mandated by the payment network for interoperability, such as MasterCard Mobile PayPass. What the host application does not learn is also a direct consequence of what the protocol communicates to the card from the point-of-sale terminal:
- Identity of the merchant. Typically the merchant is not authenticated in the transaction.
- Line-level item information; in other words, the contents of the shopping cart.
The first one is not too difficult to infer. A mobile device is equipped with a number of radios including GPS, Wi-Fi and Bluetooth. Approximate GPS location, the identity of a nearby wireless network or the presence of an iBeacon can all be used to identify a particular merchant as the setting for a shopping excursion. (This is easier in the current situation when few merchants support NFC payments, making it easier to pinpoint the only possibility given an approximate area.) In principle then the wallet application supervising NFC transactions can have at least as much visibility into transaction history as the issuing bank. Apple could in principle compile the same information on Apple Pay users that our banker friend was worried about Google getting as the issuer-of-record for virtual cards. That is not to say that Apple is collecting that information currently. The iOS security guide emphasizes that “Payment transactions are between the user, the merchant and the bank,” no doubt to allay any concerns of Apple becoming yet another middleman. But it is technically possible for a mobile wallet application– and by extension its provider– to help itself to that information.
The second one however is not as easy to work around. Unless the core EMV payment protocol is somehow augmented with out-of-band data provided by the cash register, there is no way for a mobile device to divine this information using its own sensors. For all the worrying about keeping data away from Google, there is one other leading company that does have visibility into this highly coveted consumer purchase history: Square. By providing the complete point-of-sale terminal software, Square controls the checkout experience. That includes the way items are rung up. Square in that sense is the ultimate intermediary. It acts as “merchant” as far as the payment network is concerned, while also tapping into a rich data stream of line-level item information from every purchase.
Continuing to the next step of PIV provisioning with open-source utilities, we need to generate a certificate signing request (CSR) corresponding to the key-pair generated on the card. This is a signed document generated by the person enrolling for a certificate, containing information they would like to have appear on the certificate, including their name, organization affiliation and public-key. The certificate authority verifies this information, makes changes/amendments as necessary and issues a certificate. This ritual for using CSRs is largely a matter of convention. Nothing stops a CA from issuing any certificate to anyone given just the public key. But most CA implementations are designed to accept only valid CSRs as input to the issuance procedure.
Offloading crypto in openssl
That calls for constructing the CSR with the proper fields, including the public-key that exists on the card, and getting that object signed using the PIV card. openssl provides both of these ingredients:
- “req” subcommand for working with CSRs in PKCS #10 format
- Ability to offload cryptographic operations such as signing to external hardware using the concept of “engines.” An engine is the openssl abstraction layer for cryptographic tokens, similar to PKCS #11 or Windows smart-card minidrivers.
Our strategy then is to find a suitable engine implementation that can interop with PIV cards and use it for signing the CSR. Fortunately OpenSC project provides precisely such an engine, building on top of its PKCS #11 library that figured in previous PIV scenarios.
It is possible to script openssl to load an engine first and perform an operation using keys associated with that engine. Here is an example gist for CMS decryption. It issues two commands to openssl. The first one loads the engine by specifying the shared-library path and assigning it a name for future reference. The second one performs the CMS decryption using a private-key associated with that engine, in this case key ID 01 corresponding to the PIV authentication key. (Somewhat confusingly, the PKCS #11 layer does not use the PIV key-reference identifiers.)
It would appear straightforward to create a similar script to load the engine and invoke CSR generation while referencing a key associated with the card. But this runs into a subtle problem: OpenSC middleware assumes there is no key present in a particular slot unless there is also an associated certificate. This is a reasonable assumption for PIV cards in steady-state: each key-type defined in the standard such as PIV authentication or digital signature has both an X509 certificate and associated private key material. But during provisioning that assumption is violated temporarily: a key-pair must be generated first before we can have a certificate containing the public piece.
That leads to a boot-strapping problem: OpenSC middleware can not use the private-key on card without an associated certificate also present on the same card. But we can not obtain a certificate unless we first sign a CSR using the private-key.
It turns out OpenSC is not to blame for this either. The Windows PIV minidriver has the same behavior; it would report no credentials present on a card without certificates, even if keys had been generated for all four slots. The root cause is a more subtle problem with the PIV command interface itself: there is no mechanism for retrieving a stand-alone public-key. The assumption is that public-keys are extracted from the certificate. In the absence of the certificate, there is no uniform way to even learn the public-key for the purpose of placing it into the CSR.**
Signing the hard way
The above observation also rules out a kludge that at first sight seems promising: provision a dummy certificate containing an unrelated public-key to make OpenSC happy. That would allow signing to “work”– it turns out the middleware does not check the signature for consistency against the public key in the certificate. But the CSR generated this way would still have the wrong public-key, drawn from the placeholder certificate.
Boot-strapping requires a more complex sequence involving manual ASN1 manipulation:
- Save the output from key generation step
- Craft an unsigned CSR containing that public-key
- This is easiest to accomplish by taking an existing CSR with all of the other fields correct, removing the signature and “grafting” a new public-key in place of the existing field.
- Get a signature from the card.
- This can be done either by using the placeholder-certificate approach described above or directly issuing a GENERAL AUTHENTICATE command to the PIV application.
- Combine the signature with the modified request from step #2 above to create a valid signed CSR.
As a side-note, generating a new CSR once a valid certificate exists is much easier. In that case the key is not changed and the certificate is consistent with the key. It is much easier to go from a card already set up with self-signed certificates (for example, provisioned on Windows via certreq) to getting new certificates issued from a different CA using off-the-shelf open source utilities.
Once the CSR is created and exchanged for a valid certificate, the final step is straightforward: load the certificate onto the card. This is done using piv-tool, authenticating with the card management key, similar to key generation.
** For ECDSA the public-key can be derived from one valid signature and associated hash. For RSA there is no similar quick fix, but most cards will give an error when asked to operate on an input larger than the modulus. This can be used to binary-search for the correct modulus.
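That binary search is straightforward to sketch. In the snippet below the card is simulated by an oracle callback that accepts inputs strictly smaller than the modulus; a real implementation would instead issue GENERAL AUTHENTICATE commands to the card and treat an error status as rejection. The function name and toy 64-bit modulus are ours, for illustration only:

```python
def recover_modulus(accepts, bits):
    """Binary-search for an RSA modulus n, given an oracle that
    returns True iff the card accepts input x (i.e. x < n).
    Assumes n has exactly `bits` bits and is not a power of two."""
    lo, hi = 1 << (bits - 1), 1 << bits   # invariant: lo < n <= hi
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if accepts(mid):
            lo = mid                      # mid < n: modulus is larger
        else:
            hi = mid                      # mid >= n: modulus is at most mid
    return hi                             # smallest rejected input is n

# Simulated "card" with a toy 64-bit modulus; a real 2048-bit key would
# take only about 2048 oracle queries to pin down.
n = (1 << 63) | 0x0123456789ABCDEF
assert recover_modulus(lambda x: x < n, 64) == n
```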
In the wake of the Apple Pay launch, one interesting document captured a rare occasion: an insider, described as a “key developer at major bank,” having a moment of honesty, voicing blunt opinions on Apple, ISIS/Softcard, Google Wallet and the upcoming chip-and-PIN transition in the US. According to this anonymous source, one of the key selling points for Apple Pay from the banks’ perspective is privacy. Apple provides the wallet software and operates a service, but it does not learn about user transactions. Given how often this presumed advantage is mindlessly repeated in other articles, it is worth drilling into the question: who knows exactly what the card-holder purchased? [Full disclosure: this blogger worked on Google Wallet security]
For the paranoid, cash remains the most privacy-friendly solution— perhaps to be rivaled by Bitcoin one day when it is more widely accepted as a payment scheme. It is peer-to-peer in the old-fashioned way: from customer to merchant. All of the other mainstream payment methods involve some other intermediary, giving that third party an opportunity to learn about transactions. Some of the participants are visible in plain sight, others operate behind the scenes. Checks involve the bank where the customer maintains an account. Less obviously they also involve a clearing-house responsible for settling checks among banks.
Credit card networks have more moving pieces. There is the issuing bank underwriting the card used by the consumer. There is the acquiring bank receiving the payment on behalf of the merchant. Typically there is also a payment processor linking the merchant to the acquirer. In the middle, orchestrating this flow of funds on a massive scale, is the card network: Visa, MasterCard, American Express or Discover. These players all have visibility into the total payment amount and the merchant. In most cases they also know the identity of the card-holder. Most banks fall under some type of know-your-customer (KYC) rules designed to deter terrorist financing, which involve validating the identity of customers prior to doing business. This is true even for gift/prepaid cards that are funded ahead of time with no credit risk for the issuer. That means at a minimum the issuing bank knows the true identity of the person involved in a transaction, which is not surprising. Less obvious is the fact that it is frequently exposed to other parties along the way as part of the “track data” encoded on the card, since track data travels from the merchant point-of-sale to the payment processor and eventually into the network. (The cardholder name need not be identical to the legal name, in which case there is a slight privacy improvement in the use of a pseudonym.) At the end of the day, it is a safe bet that Visa knows consumer Jane Smith has purchased $38.67 worth of goods from a corner grocery store on Saturday, October 18th in Tribeca. No wonder the NSA is on friendly terms with payment networks, tapping into this vast data-set for intelligence purposes.
But equally important is what none of these participants know: line-level transaction data. They have no visibility into the contents of Jane’s shopping-cart. There are some qualifications to this general principle, edge cases where contents can be inferred. For example if there is only one item at a coffee-shop that rings up for $3.47, that is exactly what the customer ordered. Fortunately the more general case of trying to come up with a combination of goods tallying up to a given total is a surprisingly intractable computer-science challenge, known as the subset-sum problem. It is further complicated by the fact that many goods have identical prices, a given sum could correspond to millions of different combinations of items, and goods priced by weight— such as produce— do not even have a single assigned price. There is also some information leaked by the pattern of transactions. For example, restaurants will often perform two transactions when settling a bill: one to authorize the original amount and one that collects the final tab including the gratuity. In other words, the bad news is both your bank and Visa can see whether you have been stingy or generous on tips. Yet they still have no idea exactly what you ordered, whether it was in line with FDA dietary recommendations (although there is likely a strong correlation with the establishment involved, which explains why insurers are peeking into patient credit-card records) or how many glasses of wine accompanied that meal.
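To make that intractability concrete, even a toy price list yields multiple plausible baskets for a single observed total. The items and prices below are invented for illustration; a real store with thousands of SKUs makes inference dramatically worse:

```python
from itertools import combinations

# A tiny hypothetical price list, in cents. Real stores carry thousands
# of SKUs, many sharing identical prices.
catalog = [
    ("coffee", 347), ("bagel", 250), ("muffin", 347),
    ("juice", 499), ("banana", 97), ("yogurt", 153),
    ("sandwich", 749), ("chips", 250), ("apple", 97),
]

def baskets_matching(total_cents):
    """Enumerate every combination of distinct items ringing up to
    exactly the observed total: a brute-force subset-sum search."""
    matches = []
    for r in range(1, len(catalog) + 1):
        for combo in combinations(catalog, r):
            if sum(price for _, price in combo) == total_cents:
                matches.append([name for name, _ in combo])
    return matches

# Even this nine-item shop makes a $3.47 charge ambiguous:
for basket in baskets_matching(347):
    print(basket)
```

The brute-force search is exponential in the number of items, which is exactly why the issuer observing only the total learns so little.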
That information flow is not changed by chip-and-PIN, or for that matter NFC payments using a phone.** There is still a bank responsible for issuing the “card,” even when the physical manifestation of the card on a piece of plastic has been replaced by cryptographic keys residing on a mobile device. There is a more elaborate payment “protocol”: before, the cashier swiped the magnetic stripe on the card, reading static information; now there is an interactive process where the point-of-sale register communicates back-and-forth with the phone over NFC. Does that somehow disclose more information? It turns out the answer is no. The EMV standards prescribing that complicated song-and-dance have no provision for item-level information to be communicated. The phone can not receive this information from the POS, and neither can the POS transmit it upstream when requesting payment authorization.
So why the persistent harping by our banker friend on the contrast between Apple Pay and Google Wallet? Because in the latter design, Google is effectively the “issuer” of the card used for payment. As described in earlier posts, Google Wallet switched to using virtual cards in 2012. These cards are only “virtual” in the sense that the consumer does not have to know about them. But as far as the merchant and payment network are concerned, they are very real. From the merchant point of view, the consumer paid with a MasterCard issued by Google. Behind the scenes Google proxies this transaction in real-time to the actual card the consumer added to their wallet. This is why Google has the same visibility into purchases as an issuing bank such as Bank of America has with Apple Pay or Softcard: transaction amounts and patterns, but not line-level items.
This answer is still unsatisfactory for one reason: we have limited ourselves to information that is exchanged as part of the purchase transaction defined by EMV specifications. Could a mobile wallet application on a smart-phone obtain more information out-of-band?
** There is one privacy improvement in that cardholder names are typically redacted from the simulated “track data” created by NFC transactions.
[continued from part I]
Returning to the problem of provisioning a PIV card given the PIV administrator key, we can now enlist the help of open source solutions. The venerable OpenSC suite includes a piv-tool utility for performing administrative operations on PIV cards. One cautionary warning: this approach is only suitable for very primitive key-management schemes. In an enterprise-grade credential management system, card keys would typically be diversified: each card gets a different administrator key, computed as a cryptographic function of some master secret and an immutable property of the card, such as its serial ID. In high-security applications that master secret is relegated to a hardware-security module, leaving the provisioning system handling only individual card keys. (For an additional level of paranoia, one can implement the entire PIV administrator authentication challenge-response in the HSM itself, with no keys ever relinquished outside the secure execution environment.)
piv-tool takes a slightly more cavalier attitude about where administrator keys come from: a local file. Assuming that one is willing to live with this approach to key management, here is how key generation would work:
- Save the PIV card administrator key to a local file. The format is similar to how the GoldKey client UI accepts keys, with one cosmetic difference: each hex byte is separated by a colon character.
$ cat admin_key
01:02:03:04:05:06:07:08:01:02:03:04:05:06:07:08:01:02:03:04:05:06:07:08
- Export the name of the file containing the key into a specific environment variable.
$ export PIV_EXT_AUTH_KEY=./admin_key
- Invoke piv-tool with the “-A” option to authenticate as administrator and “-G” option to generate keys:
$ piv-tool -A A:9B:03 -G 9C:07
In this example, the second option indicates generating a new signature key, identified by key reference 0x9C as defined in NIST SP 800-78-3, of type 2048-bit RSA, which is algorithm reference 07 also defined in the same standard. The first option indicates administrator authentication, using key ID 0x9B of type 3DES; this is a function of the card configuration, which users typically have little control over. Note that contrary to what one might optimistically assume about extensibility, PIV does not allow generating keys over arbitrary named curves or curves with user-defined parameters.
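As an aside, the colon-separated key file from the first step is easy to produce programmatically. A short sketch follows; the helper name is ours, and the demo value is the insecure example key shown earlier:

```python
def format_piv_admin_key(key):
    """Render a 24-byte 3DES card management key in the
    colon-separated hex format expected in the key file."""
    if len(key) != 24:
        raise ValueError("3DES card management key must be 24 bytes")
    return ":".join(f"{b:02X}" for b in key)

# The insecure, demo-only key from the example above:
demo = bytes(range(1, 9)) * 3
print(format_piv_admin_key(demo))
# 01:02:03:04:05:06:07:08:01:02:03:04:05:06:07:08:01:02:03:04:05:06:07:08
```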
The command will trigger key generation on the token and then output a binary encoding of the public half. It is easier to make sense of that output by having openssl parse the ASN1. For example, here is the command for generating a new ECDSA signature key over the NIST P-256 curve (also known as prime256v1), which happens to be algorithm reference 11:
$ piv-tool -A A:9B:03 -G 9C:11 | openssl asn1parse -inform DER -i -dump
Using reader with a card: GoldKey PIV Token 00 00
    0:d=0  hl=2 l=  89 cons: SEQUENCE
    2:d=1  hl=2 l=  19 cons:  SEQUENCE
    4:d=2  hl=2 l=   7 prim:   OBJECT            :id-ecPublicKey
   13:d=2  hl=2 l=   8 prim:   OBJECT            :prime256v1
   23:d=1  hl=2 l=  66 prim:  BIT STRING
      0000 - 00 04 78 95 ac 64 63 7f-9d 4d a8 b5 5d 2f 36 27   ..x..dc..M..]/6'
      0010 - bf 73 6e fc ee bf de 29-6f ca 06 ee 85 a9 c5 42   .sn....)o......B
      0020 - 83 cf 12 3f eb f6 ff eb-0a a8 78 f4 de 68 40 a4   ...?......x..h@.
      0030 - 87 c9 81 2d 06 f0 5b 9b-a5 64 46 b5 12 3e 61 55   ...-..[..dF..>aU
      0040 - 99 09
It is helpful to save the public-key because this is one of the few times it will be available directly. There is no APDU to retrieve a public-key in PIV. In steady state, the public-key exists in a certificate already loaded on the card; except that we have not yet obtained that certificate. (In some cases it is possible to indirectly recover the public-key. For example assuming a valid ECDSA signature on a known message, public key can be derived as implemented in Bitcoin libraries.)
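When scripting this step it can be handy to pull the raw curve coordinates out of that output without a full ASN1 toolkit. Below is a minimal sketch, our own helper rather than part of OpenSC or openssl, that walks just enough DER to reach the BIT STRING; it assumes short-form lengths and an uncompressed point, which matches the piv-tool output shown above:

```python
def ec_point_from_spki(der):
    """Extract the (x, y) coordinates from a DER-encoded
    SubjectPublicKeyInfo holding an uncompressed EC point.
    Handles short-form (single-byte) lengths only."""
    assert der[0] == 0x30              # outer SEQUENCE
    i = 2                              # skip tag and one-byte length
    assert der[i] == 0x30              # AlgorithmIdentifier SEQUENCE
    i += 2 + der[i + 1]                # skip over it entirely
    assert der[i] == 0x03              # BIT STRING holding the point
    n = der[i + 1]
    point = der[i + 2:i + 2 + n][1:]   # drop the unused-bits octet
    assert point[0] == 0x04            # uncompressed-point marker
    size = (len(point) - 1) // 2
    x = int.from_bytes(point[1:1 + size], "big")
    y = int.from_bytes(point[1 + size:], "big")
    return x, y

# Synthetic P-256 SubjectPublicKeyInfo with x=7, y=11 for illustration:
algid = bytes.fromhex("301306072a8648ce3d020106082a8648ce3d030107")
point = b"\x04" + (7).to_bytes(32, "big") + (11).to_bytes(32, "big")
spki = b"\x30\x59" + algid + b"\x03\x42\x00" + point
print(ec_point_from_spki(spki))        # (7, 11)
```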
Next: generating a certificate-signing request, which runs into an interesting circularity with the OpenSC middleware.
This is a follow-up to the earlier series on using GoldKey hardware tokens on OS X for cryptographic applications such as SSH. One question left unanswered in the last post was how to provision certificates to the token on OS X and Linux, where there is no official support provided by GoldKey. It turns out the implications of that question reach far beyond the implementation details of a specific hardware vendor such as GoldKey. It forces a closer look at the PIV standard itself and a canvass of the web for open-source utilities available for working with PIV cards.
Provisioning from a high level
First a quick recap on exactly what “provisioning” means for smart-cards that support PKI. Starting with a blank card, the desired end state is to have a private-key residing on the card– ideally generated on-card and never having left the secure execution environment– along with an X509 digital certificate containing the corresponding public-key. (Repeat as desired: there can be more than one key/certificate slot. Case in point: PIV defines four types of keys with different use-cases, such as signature and encryption.)
Provisioning, nuts and bolts
That process typically breaks down into four steps:
- Request the card to generate a key-pair and output the public key.
- Create a certificate signing-request (CSR) which contains that public key along with information about the user, and obtain a signature over it from the card.
- Submit the CSR to a certificate-authority and have the CA issue a certificate signed by the CA.
- Load the returned certificate on the card.
This is the most generic version of the flow. In particular it covers the self-signed certificate scenario as a special-case, where instead of signing a CSR we can sign a certificate to collapse steps #2 and #3. (In principle this could even have been implemented on-card but typically the complexity of ASN1 parsing and construction has discouraged smart-card developers from trying to do much with X509 on-card.) It is also worth noting that the CSR generation is a practical necessity because most CA implementations are designed to take a valid CSR as input. “Valid” being the operative adjective: that includes checking that the CSR is signed by a private-key which corresponds to the public key appearing in it. One could imagine a hypothetical CA that simply issued certificates given a public key– the signature on the CSR does not factor into the final X509 certificate issued in any way. In real life CA implementations are a bit more demanding so it is important to build that flexibility into the card provisioning flow.
Returning to these four steps, the first observation is that #3 is the odd man out. It takes place out-of-band, without any interaction with the card. So we focus our attention on the remaining three. Of these, #1 and #4 actually change the state of the card by updating its contents, while #2 only involves using existing key material. This turns out to be an important distinction for many smart-card standards including PIV, because the authentication model for those two modes is different. For example, in PIV using a key typically requires user authentication, in other words entering a PIN. Card management on the other hand requires authenticating as the card administrator, which involves a more complex challenge-response protocol based on cryptographic keys, described in NIST SP 800-73 part 2.**
Card-administrator keys for GoldKey
That brings us to the first problem: in order to perform key generation or load certificates on the card, the card management key (also referred to as the PIV card application administrator key, with key reference 0x9B per NIST SP 800-78-3) must be known for that specific card. PIV allows this key to use any of the algorithms defined in the standard, including 3DES, AES, and even the public-key options RSA or ECDSA. But in practice most vendors, including Gemalto, Oberthur and GoldKey, all appear to have settled on old-fashioned 3DES, in keeping with the retro-chic commitment to DES in smart-cards and financial applications.
In the case of GoldKey, all tokens are provisioned with an unspecified “default key” from the factory. Luckily the proprietary client software allows changing this key to one controlled by the user. This step requires a master token, and the current token must first have been associated with that particular master.
3DES keys are entered as 48 hex digits, or 24 bytes total. While such a key nominally occupies 192 bits, in keeping with the traditional DES key representation only 7 bits out of each byte are significant, for an effective key length of 168 bits. The least-significant bit is reserved for a parity check, which is ignored in modern implementations. GoldKey appears to follow that model. Somewhat confusingly, the PIV standard refers to these as “card management keys” but they are not related to the Global Platform issuer-security-domain, aka “card manager,” keys. This is strictly a PIV application-level concept, independent of whether the underlying card is Global Platform compliant. (Of course in reality PIV applications are typically implemented on GP-compliant cards.)
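The parity convention above can be made concrete with a few lines of code. This sketch (our own helper, not part of any PIV tooling) normalizes a key to the traditional odd-parity form that implementations accept but no longer verify:

```python
def fix_des_parity(key):
    """Set the least-significant bit of each byte so every byte has
    odd parity, per the traditional DES key representation."""
    out = bytearray()
    for b in key:
        high7 = b & 0xFE                       # the 7 significant bits
        ones = bin(high7).count("1")
        out.append(high7 | (0 if ones % 2 else 1))
    return bytes(out)

# Flipping parity bits never changes the effective key material:
print(fix_des_parity(b"\x00\x01\xfe\xff").hex())   # 0101fefe
```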
Assuming we now have a token provisioned with a card-administration key we control– or perhaps we saved ourselves the effort by using a brand that comes preloaded with well-known default keys from the manufacturer– we can tackle steps #1 and #4 above.
[continue to part II]
** Incidentally this is why provisioning via the GoldKey Windows driver can not possibly be compliant with FIPS 201: it requires users to enter a PIN. There is no such mode in standard PIV; user PIN is neither sufficient nor necessary for administrative actions.
In No Place to Hide Glenn Greenwald presents a harsh critique of mass surveillance conducted by the NSA, as revealed in the massive stash of documents from Edward Snowden. In addition to the obligatory George Orwell 1984 references, the author also evokes the 18th-century British philosopher Jeremy Bentham’s Panopticon, a hypothetical prison design optimized for constant observation of inmates. Its unfortunate denizens occupy the outer rings of a circular structure, with an inner one-way transparent tower reserved for the wardens. These wardens can look out and observe the inmates at any time, but the inmates can not see what is going on inside that tower. They can not even be certain at any given time whether there is anyone on the other side watching. Yet the suspicion is always there– and that is the point. The mere possibility of being under observation will silently coerce the otherwise unruly denizens into behaving themselves, goes the argument, far more effectively than any random meting out of punishment or violence.
This tried-and-true dystopian construct is recycled yet again by Greenwald to argue that ubiquitous mass-surveillance will have the effect of chilling speech, smothering dissent and turning Americans into obedient conformists, afraid to challenge their government:
… Bentham’s solution was to create “the apparent omnipresence of the inspector” in the minds of the inhabitants. “The persons to be inspected should always feel themselves as if under inspection, at least as standing a great chance of being so.” They would thus act as if they were always being watched, even if they weren’t. The result would be compliance, obedience and conformity with expectations.
A later paragraph quotes Foucault:
Those who believe they are watched will instinctively choose to do that which is wanted of them without even realizing that they are being controlled– the Panopticon induces “in the inmate a state of conscious and permanent visibility that assures the automatic functioning of power.” With the control internalized, the overt evidence of repression disappears because it is no longer necessary: …
There is one problem with the parallel: until Edward Snowden, few people realized that they were living in the Panopticon. There has never been a shortage of tinfoil-hat-wielding conspiracy theorists convinced that the FBI is tuning into our brain patterns from outer space. Occasionally we were given glimpses into the extent of data collection: the New York Times exposé in 2005 on warrantless surveillance, an AT&T whistle-blower coming forward with the existence of NSA monitoring equipment on AT&T premises. But until the Guardian, New York Times and Washington Post started publishing from the cache of leaked documents, there was no convincing proof, no reason to suspect that every email, phone conversation and text message would be scanned as part of a massive communications dragnet, that every routine credit-card purchase feeds into an operation designed to ferret out terrorist financing networks. Delusions aside, it is difficult for a society to be intimidated or cowed into submission by something it does not believe exists. Not only was the extent of surveillance unknown, but intelligence agencies very much preferred to keep it that way, using every opportunity to refute the allegations.
This is a key difference from Bentham’s frequently misused Panopticon: in the Panopticon inmates lived every moment fully aware of the constant possibility– if not actual reality of– being under observation. The architecture not only enabled constant surveillance, but it was fully transparent about that objective. Manifesting the presence of surveillance to its targets is an integral part of the design. There is no question about whether privacy exists in this system, no warm-fuzzy concepts of due process, 4th amendment protections and warrant requirements.
Intelligence operations by contrast thrive on secrecy. Extraordinary measures are taken to keep their existence hidden. A surveillance target aware of being watched, the theory goes, will modify his/her behavior, exercise greater caution or attempt to hide their tracks. In the extreme scenario, they might even refrain from communicating the information or carrying out the actions the operation hoped to uncover. If surveillance were perfectly ubiquitous, one could argue that state of affairs is just as good as having collected actionable intelligence. If all terrorists gave up on conspiring to plan new attacks out of fear that they are being watched, fewer attacks may result and society is safer. But frequent exaggeration and hyperbole aside, no system of surveillance yet built is quite that omnipresent. Knowing that communications in one medium are carefully watched motivates the targets to conduct their activities over different channels where greater privacy is assumed to exist. Surveillance exposed is surveillance rendered ineffective. Distrust in US technology companies in the aftermath of the PRISM revelations is one example of this effect. Fewer people entrusting their data to Google, MSFT and Yahoo is counterproductive for a system that relied on those companies providing a wealth of information.
The Internet became a true Panopticon only after Edward Snowden came forward and his message reached a mass audience. Paradoxically then, to the extent that chilling effects on political speech have resulted in the wake of the NSA revelations, Greenwald, Poitras and Gellman were key players in creating the Panopticon. In true ignorance-as-bliss fashion, our previous state of misguided expectations around privacy may have been far more conducive to free expression after all. But in this case we can only be grateful to Snowden for resetting those expectations and starting the debate on intelligence reform.
[continued from part I]
Handshakes with PFS
Perfect forward secrecy precludes decrypting past traffic that was passively collected earlier. But we can still pull off a real-time active attack with the help of our friend CloudFlare. Suppose we have man-in-the-middle capability, controlling the network around our victim. When the victim tries to connect to the CDN, we impersonate the site and start a bogus handshake. Given access to a decryption oracle as in #1, we could always downgrade the choice of ciphersuite to avoid PFS, but that is not very elegant. Users might get suspicious about why they are not seeing the higher-security option. (Not that any web browser actually surfaces the distinction to users. While the address bar turns green for extended-validation certificates– purely cosmetic, since they have little security benefit– there is no reassuring icon to mark the presence of PFS.)
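To make the downgrade option concrete, here is a minimal sketch of the ciphersuite-stripping step. The suite names are standard TLS 1.2 identifiers; the `strip_pfs` helper and the surrounding logic are purely illustrative, not any real MITM tool:

```python
# Hypothetical sketch: a MITM that can rewrite the victim's ClientHello
# drops every forward-secret suite, steering the server toward static-RSA
# key exchange, whose premaster secret the decryption oracle of #1 can
# recover after the fact.
PFS_PREFIXES = ("TLS_ECDHE_", "TLS_DHE_")

def strip_pfs(offered_suites):
    """Return the ClientHello ciphersuite list with PFS options removed."""
    return [s for s in offered_suites if not s.startswith(PFS_PREFIXES)]

client_hello = [
    "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",  # forward secret
    "TLS_DHE_RSA_WITH_AES_256_CBC_SHA",       # forward secret
    "TLS_RSA_WITH_AES_128_CBC_SHA",           # static RSA: oracle-friendly
]

downgraded = strip_pfs(client_hello)
# Only the static-RSA suite survives, so the session key is recoverable
# by anyone who can get the premaster secret decrypted.
assert downgraded == ["TLS_RSA_WITH_AES_128_CBC_SHA"]
```

As the text notes, this is the inelegant option: anyone comparing the suites offered against the suite negotiated could notice the downgrade.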
Luckily we can carry out a forged SSL handshake with PFS intact by enlisting the help of CloudFlare. This time, instead of asking our friendly CDN to decrypt an arbitrary ciphertext, we ask for assistance with signing an opaque message. CloudFlare will turn around and pass this request on to the origin site. Once again the origin is oblivious to the fact that the request is for MITMing a user as opposed to a new legitimate connection. Unlike simple RSA decryption, this time the transcript being signed (more accurately, its hash) is different each time, so there is not even a way for a careful origin implementation to distinguish the two cases.
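The reason this works is that in an ECDHE handshake, the server's signature covers the two handshake randoms plus the server's ephemeral curve parameters– values the MITM is free to choose itself. The sketch below illustrates the idea under loud assumptions: `origin_sign_digest` is a hypothetical stand-in for the keyless-SSL signing API, and an HMAC key stands in for the origin's RSA/ECDSA private key so the example runs without external crypto libraries:

```python
import hashlib
import hmac
import os
import struct

# Stand-in for the origin's private key. In real keyless SSL this is an
# RSA or ECDSA key that never leaves the origin; HMAC is used here only
# to model "an operation requiring the secret" in pure stdlib Python.
ORIGIN_PRIVATE_KEY = b"origin-only-secret"

def origin_sign_digest(digest: bytes) -> bytes:
    """Hypothetical keyless-SSL oracle: the origin signs an opaque digest.

    It cannot tell whether the digest came from a legitimate CDN
    handshake or from a MITM's forged one -- the input is just a hash.
    """
    return hmac.new(ORIGIN_PRIVATE_KEY, digest, hashlib.sha256).digest()

def server_key_exchange_tbs(client_random, server_random, curve_point):
    """Build the to-be-signed blob for TLS 1.2 ECDHE: both handshake
    randoms followed by the server's ephemeral curve parameters."""
    # curve_type=named_curve (3) || secp256r1 (23) || opaque point
    params = struct.pack("!BHB", 3, 23, len(curve_point)) + curve_point
    return client_random + server_random + params

# --- MITM side: forging a PFS handshake for a victim ---
client_random = os.urandom(32)   # sent by the victim's browser
server_random = os.urandom(32)   # chosen by the MITM
mitm_ephemeral = os.urandom(65)  # MITM's own ECDHE public point (stand-in bytes)

tbs = server_key_exchange_tbs(client_random, server_random, mitm_ephemeral)
digest = hashlib.sha256(tbs).digest()

# The MITM never touches the private key; the digest is simply relayed
# through the CDN's signing API to the oblivious origin.
signature = origin_sign_digest(digest)

# The victim's browser verifies this signature against the site's
# certificate and it checks out, even though the ephemeral key belongs
# to the attacker. Fresh randoms make every digest unique, so the
# origin has nothing to flag as suspicious.
assert hmac.compare_digest(signature, origin_sign_digest(digest))
```

Because the randoms change on every connection, each signing request looks exactly like a fresh legitimate handshake– which is precisely the point made above.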
One could object that this approach is highly inefficient. Why not let the target connect directly to CloudFlare and ask the CDN to store a transcript of decrypted traffic for later retrieval? Because that would reveal the target of surveillance. Network MITM combined with a narrowly defined interaction with the CDN (“sign this hash”) avoids divulging such information.
It’s worth emphasizing again that in neither case does the origin site do anything extra or different to enable interception. As far as that customer is concerned, they are simply holding up their part of the CloudFlare “keyless SSL” bargain. There is no need to send national-security letters to secure cooperation from the origin site. They can remain blissfully ignorant, publishing a squeaky-clean transparency report where they can boast about never having received requests for customer data. That’s because such requests are routed to the CDN, which is then legally obligated to keep its own customers in the dark about what is going on. (In fact CloudFlare claims to have received “between 0-249” NSLs in its own transparency report, a figure that is not broken down by customer.)
This is why one of the touted benefits around revoking trust is moot. In principle the customer can instantly revoke access by refusing to decrypt for CloudFlare if the CDN is suddenly considered untrusted. (Of course they could have achieved the same effect in the traditional setup by revoking the certificates given to the CDN, but that runs into the vagaries of botched and half-baked revocation checking in various browsers.) Minor problem: there is no way to know whether the CDN is operating as advertised or helping third parties intercept private communications to the origin. There is no accountability in this design.
This blogger is not asserting that such things are happening routinely at CloudFlare. The point is that they can happen, and in spite of best intentions, a CDN can not provide guarantees against such compelled assistance. Even the NSL canary in CloudFlare’s transparency report is fully consistent with offering such on-demand decryption assistance:
- CloudFlare has never turned over our SSL keys or our customers SSL keys to anyone.
- CloudFlare has never installed any law enforcement software or equipment anywhere on our network.
- CloudFlare has never provided any law enforcement organization a feed of our customers’ content transiting our network.
Providing a controlled interface for law enforcement to request decryption/signing does not violate the letter or spirit of any of these assertions. When the origin site provides an API for CloudFlare to call and request decryption, surely that does not count as the origin site installing CloudFlare software or equipment on its network. By the same token, if CloudFlare were to provide an API for law enforcement to call and request decryption (which must be proxied over to the origin site for “keyless SSL”), it does not count as installing law-enforcement software. Neither does it count as providing a feed of content transiting the network– that “content” is captured by the government in encrypted form as part of its intelligence activities, and CloudFlare simply provides tactical assistance in decryption. There is of course the question of whether such canaries are meaningful to begin with. If it turns out that CloudFlare was in fact colluding with the US government all along in violation of the above statements, would the FTC– a different part of that same government– go after CloudFlare for deceptive advertising?
Clarifying threat models
This is not to say keyless SSL has no benefits. Having sensitive keys in only one location as opposed to two reduces attack surface. (This is true even if the origin uses a different hostname than the externally visible one to avoid having another key that can enable MITM attacks. The links between CDN and origin are highly concentrated targets for surveillance.) It protects the origin from mistakes and vulnerabilities on the part of the CDN that lead to disclosure of the private key– such as the Heartbleed scenario that affected CDNs in April. But there is a clear difference between incompetence and malice. Keyless SSL provides no defense against a CDN colluding with governments to enable surveillance while keeping its customers in the dark.