PIV provisioning on Linux and Mac: getting certificates (part III)

Continuing to the next step of PIV provisioning with open-source utilities, we need to generate a certificate signing request (CSR) corresponding to the key-pair generated on the card. This is a signed document produced by the person enrolling for a certificate, containing the information they would like to appear on the certificate, including their name, organizational affiliation and public-key. The certificate authority verifies this information, makes changes/amendments as necessary and issues a certificate. This ritual of using CSRs is largely a matter of convention. Nothing stops a CA from issuing any certificate to anyone given just the public key, but most CA implementations are designed to accept only valid CSRs as input to the issuance procedure.
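
For concreteness, here is what the same flow looks like with an ordinary software key and no card involved; file names and subject fields below are purely illustrative:

$ openssl req -new -newkey rsa:2048 -nodes -keyout sw_key.pem -subj "/CN=Jane Smith/O=Example Corp" -out request.csr
$ openssl req -in request.csr -noout -text

The second command pretty-prints the request, showing the subject fields, the embedded public-key and the self-signature that a CA would verify before issuance. The challenge in the rest of this post is reproducing that flow when the private-key lives on a PIV card instead of in a local file.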

Offloading crypto in openssl

That calls for constructing a CSR with the proper fields, including the public-key that exists on the card, and getting the resulting object signed using the PIV card. openssl provides both ingredients: the req command for constructing CSRs, and an “engine” abstraction for delegating private-key operations to external implementations such as hardware tokens.

Our strategy then is to find a suitable engine implementation that can interoperate with PIV cards and use it for signing the CSR. Fortunately the OpenSC project provides precisely such an engine, built on top of the same PKCS #11 library that figured in previous PIV scenarios.

It is possible to script openssl to load an engine first and then perform an operation using keys associated with that engine. Here is an example gist for CMS decryption. It issues two commands to openssl. The first loads the engine by specifying the shared library path and assigning it a name for future reference. The second performs the CMS decryption using a private-key associated with that engine, in this case key ID 01 corresponding to the PIV authentication key. (Somewhat confusingly, the PKCS #11 layer does not use the PIV key-reference identifiers.)
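
Here is a minimal sketch of that two-command sequence; the shared-library paths vary by distribution, and the input/output file names are illustrative:

$ openssl
OpenSSL> engine dynamic -pre SO_PATH:/usr/lib/engines/engine_pkcs11.so -pre ID:pkcs11 -pre LIST_ADD:1 -pre LOAD -pre MODULE_PATH:/usr/lib/opensc-pkcs11.so
OpenSSL> cms -decrypt -inform DER -engine pkcs11 -keyform engine -inkey 01 -in encrypted.p7 -out plaintext.txt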

Boot-strapping problem

It would appear straightforward to create a similar script to load the engine and invoke CSR generation while referencing a key associated with the card. But this runs into a subtle problem: OpenSC middleware assumes there is no key present in a particular slot unless there is also an associated certificate. This is a reasonable assumption for PIV cards in steady-state: each key-type defined in the standard, such as PIV authentication or digital signature, has both an X509 certificate and associated private key material. But during provisioning that assumption is temporarily violated: a key-pair must be generated first, before we can have a certificate containing the public piece.

That leads to a boot-strapping problem: OpenSC middleware can not use the private-key on card without an associated certificate also present on the same card. But we can not obtain a certificate unless we first sign a CSR using the private-key.

It turns out OpenSC is not to blame for this either. The Windows PIV minidriver has the same behavior: it reports no credentials present on a card without certificates, even if keys have been generated for all four slots. The root cause is a more subtle problem with the PIV command interface itself: there is no mechanism for retrieving a stand-alone public-key. The assumption is that public-keys are extracted from certificates. In the absence of a certificate, there is no uniform way to even learn the public-key for the purpose of placing it into the CSR.**

Signing the hard way

The above observation also rules out a kludge that at first sight seems promising: provision a dummy certificate containing an unrelated public-key to make OpenSC happy. That would allow signing to “work”– it turns out the middleware does not check the signature for consistency against the public key in the certificate. But the CSR generated this way would still contain the wrong public-key, drawn from the placeholder certificate.

Boot-strapping requires a more complex sequence involving manual ASN1 manipulation:

  1. Save the output from the key-generation step
  2. Craft an unsigned CSR containing that public-key
    • This is easiest to accomplish by taking an existing CSR with all of the other fields correct, removing the signature and “grafting” the new public-key in place of the existing field.
  3. Get a signature from the card.
    • This can be done either by using the placeholder-certificate approach described above or by directly issuing a GENERAL AUTHENTICATE command to the PIV application.
  4. Combine the signature with the modified request from step #2 above to create a valid signed CSR, as sketched below.
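
Here is a rough sketch of steps #3 and #4, assuming the to-be-signed portion of the modified request (the certificationRequestInfo) has been extracted into a file. File names and the key ID are illustrative, and the placeholder-certificate trick is used so that the key is visible through the PKCS #11 layer:

$ openssl dgst -sha256 -binary request_info.der > request_info.hash
$ pkcs11-tool --module /usr/lib/opensc-pkcs11.so --login --sign --mechanism ECDSA --id 02 --input-file request_info.hash --output-file request_info.sig

The raw signature then has to be wrapped in the appropriate ASN1 structure (for ECDSA, a SEQUENCE of two INTEGERs) and appended, along with the signature-algorithm identifier, to the request from step #2 to form the final CertificationRequest.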

As a side-note, generating a new CSR once a valid certificate exists is much easier. In that case the key is not changed and the certificate on the card is consistent with it. Going from a card already set up with self-signed certificates (for example, provisioned on Windows via certreq) to certificates issued by a different CA is straightforward with off-the-shelf open-source utilities.

Completing provisioning

Once the CSR is created and exchanged for a valid certificate, the final step is straightforward: load the certificate onto the card. This is done using piv-tool, authenticating with the card management key just as in the key-generation step.
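
Here is a sketch of that final step; the option letters follow the piv-tool man page, but exact syntax may vary across OpenSC versions, and the file name is illustrative:

$ export PIV_EXT_AUTH_KEY=./admin_key
$ piv-tool -A A:9B:03 -C 9C -i issued_cert.der

As during key generation, 9C identifies the digital-signature slot and 9B:03 the 3DES card-administration key.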

CP

** For ECDSA the public-key can be derived from one valid signature and associated hash. For RSA there is no similar quick fix, but most cards will give an error when asked to operate on an input larger than the modulus. This can be used to binary-search for the correct modulus.


They know what you bought last summer: privacy in NFC payments

In the wake of the Apple Pay launch, one interesting document captured a rare occasion: an insider, described as a “key developer at a major bank,” having a moment of honesty and voicing blunt opinions on Apple, ISIS/Softcard, Google Wallet and the upcoming chip & PIN transition in the US. According to this anonymous source, one of the key selling points for Apple Pay from the banks’ perspective is privacy. Apple provides the wallet software and operates a service, but it does not learn about user transactions. Given how often this presumed advantage is mindlessly repeated in other articles, it is worth drilling into the question: who knows exactly what the card-holder purchased? [Full disclosure: this blogger worked on Google Wallet security]

For the paranoid, cash remains the most privacy-friendly solution— perhaps to be rivaled by Bitcoin one day when it is more widely accepted as a payment scheme. It is peer-to-peer in the old-fashioned way: from customer to merchant. All of the other mainstream payment methods involve some other intermediary, giving that third-party an opportunity to learn about transactions. Some of the participants are visible in plain-sight, others operate behind the scenes. Checks involve the bank where the customer maintains an account. Less obviously they also involve a clearing-house responsible for settling checks among banks.

Credit card networks have more moving pieces. There is the issuing bank underwriting the card used by the consumer. There is the acquiring bank receiving the payment on behalf of the merchant. Typically there is also a payment processor linking the merchant to the acquirer. In the middle, orchestrating this flow of funds on a massive scale, is the card network: Visa, MasterCard, American Express or Discover. These players all have visibility into the total payment amount and the merchant. In most cases they also know the identity of the card-holder. Most banks fall under some type of know-your-customer (KYC) rules designed to deter terrorist financing, which involve validating the identity of customers prior to doing business. This is true even for gift/prepaid cards that are funded ahead of time with no credit risk for the issuer. That means at a minimum the issuing bank knows the true identity of the person involved in a transaction, which is not surprising. Less obvious is the fact that this identity is frequently exposed to other parties along the way as part of the “track data” encoded on the card, since track data travels from the merchant point-of-sale to the payment processor and eventually into the network. (The cardholder name need not be identical to the legal name, in which case there is a slight privacy improvement in the use of a pseudonym.) At the end of the day, it is a safe bet that Visa knows consumer Jane Smith has purchased $38.67 worth of goods from a corner grocery store in Tribeca on Saturday, October 18th. No wonder the NSA is on friendly terms with payment networks, tapping into this vast data-set for intelligence purposes.

But equally important is what none of these participants know: line-level transaction data. They have no visibility into the contents of Jane’s shopping-cart. There are some qualifications to this general principle, edge cases where contents can be inferred. For example, if there is only one item at a coffee-shop that rings up for $3.47, that is exactly what the customer ordered. Fortunately the more general case of trying to come up with a combination of goods tallying up to a given total is a surprisingly intractable computer-science challenge, known as the subset-sum problem. It is further complicated by the fact that many goods have identical prices, that a given sum could correspond to millions of different combinations of items, and that for goods priced by weight– such as produce– there is not even a single assigned price. There is also some information leaked by the pattern of transactions. For example, restaurants will often perform two transactions when settling a bill: one to authorize the original amount and one that collects the final tab including the gratuity. In other words, the bad news is that both your bank and Visa can see whether you have been stingy or generous on tips. Yet they still have no idea exactly what you ordered, whether it was in line with FDA dietary recommendations (although there is likely a strong correlation with the establishment involved, which explains why insurers are peeking into patient credit-card records) or how many glasses of wine accompanied that meal.

That information flow is not changed by chip-and-PIN, or for that matter NFC payments using a phone.** There is still a bank responsible for issuing the “card,” even when the physical manifestation of the card on a piece of plastic has been replaced by cryptographic keys residing on a mobile device. There is a more elaborate payment “protocol”: before, the cashier swiped the magnetic stripe on the card, reading static information; now there is an interactive process where the point-of-sale register communicates back-and-forth with the phone over NFC. Does that somehow disclose more information? It turns out the answer is no. The EMV standards prescribing that complicated song-and-dance have no provision for item-level information to be communicated. The phone can not receive this information from the POS, and neither can the POS transmit it upstream when requesting payment authorization.

So why the persistent harping by our banker friend on the contrast between Apple Pay and Google Wallet? Because in the latter design, Google is effectively the “issuer” of the card used for payment. As described in earlier posts, Google Wallet switched to using virtual cards in 2012. These cards are only “virtual” in the sense that the consumer does not have to know about them. But as far as the merchant and payment network are concerned, they are very real. From the merchant point-of-view, the consumer paid with a MasterCard issued by Google. Behind the scenes Google proxies this transaction in real-time to the actual card the consumer added to their wallet. This is why Google has the same visibility into purchases that an issuing bank such as Bank of America has with Apple Pay or Softcard: transaction amounts and patterns, but not line-level items.

This answer is still unsatisfactory for one reason: we have limited ourselves to information that is exchanged as part of the purchase transaction defined by EMV specifications. Could a mobile wallet application on a smart-phone obtain more information out-of-band?

[continued]

CP

** There is one privacy improvement in that cardholder names are typically redacted from the simulated “track data” created by NFC transactions.


PIV provisioning on Linux and Mac: key generation (part II)

[continued from part I]

Key generation

Returning to the problem of provisioning a PIV card given the PIV administrator key, we can now enlist the help of open-source solutions. The venerable OpenSC suite includes a piv-tool utility for performing administrative operations on PIV cards. One cautionary warning: this approach is only suitable for very primitive key-management schemes. In an enterprise-grade credential management system, card keys would typically be diversified: each card has a different administrator key, computed as a cryptographic function of some master secret and an immutable property of the card, such as its serial number. In high-security applications that master secret is relegated to a hardware-security module, leaving the provisioning system handling only individual card keys. (For an additional level of paranoia, one can implement the entire PIV administrator authentication challenge-response in the HSM itself, with no keys ever relinquished outside the secure execution environment.)
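
As a toy illustration of diversification (not a production recipe), a per-card 3DES key can be derived by encrypting serial-number-based derivation data under the master key; all values below are made up:

$ MASTER=000102030405060708090A0B0C0D0E0F1011121314151617
$ echo -n 0123456789ABCDEF | xxd -r -p | openssl enc -des-ede3 -K $MASTER -nopad | xxd -p

Each such encryption yields 8 bytes; repeating it with varied derivation blocks (for example, the serial number XORed with fixed constants) produces the 24 bytes needed for the card’s own 3DES administrator key.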

piv-tool takes a slightly more cavalier attitude about where administrator keys come from: a local file. Assuming that one is willing to live with this approach to key management, here is how key generation would work:

  • Save the PIV card administrator key to a local file. The format is similar to how the GoldKey client UI accepts keys, with one cosmetic difference: each hex byte is separated by a colon character.
$ cat admin_key
01:02:03:04:05:06:07:08:01:02:03:04:05:06:07:08:01:02:03:04:05:06:07:08
  • Export the name of the file containing the key into a specific environment variable.
$ export PIV_EXT_AUTH_KEY=./admin_key
  • Invoke piv-tool with the “-A” option to authenticate as administrator and “-G” option to generate keys:
$ piv-tool -A A:9B:03 -G 9C:07

In this example, the -G option indicates generating a new signature key, identified by key reference 0x9C, of type 2048-bit RSA, which is algorithm reference 07; both values are defined in NIST SP 800-78-3. The -A option indicates administrator authentication, using key ID 0x9B of type 3DES; this is a function of the card configuration, over which users typically have little control. Note that contrary to what one might optimistically assume about extensibility, PIV does not allow generating keys over arbitrary named curves or curves with user-defined parameters.
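
For reference, the commonly encountered algorithm identifiers from the NIST SP 800-78 table are: 03 for 3-key 3DES, 06 for 1024-bit RSA, 07 for 2048-bit RSA, 08/0A/0C for AES-128/192/256, 11 for ECC over P-256 and 14 for ECC over P-384.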

The command will trigger key-generation on the token and then output a binary encoding of the public half. It is easier to make sense of that output by having openssl parse the ASN1. Here is another example, this time generating a new ECDSA signature key over the NIST P-256 curve (also known as prime256v1), which happens to be algorithm reference 11:

$ piv-tool -A A:9B:03 -G 9C:11 | openssl asn1parse -inform DER -i -dump
Using reader with a card: GoldKey PIV Token 00 00
   0:d=0  hl=2 l=  89 cons: SEQUENCE          
   2:d=1  hl=2 l=  19 cons:  SEQUENCE          
   4:d=2  hl=2 l=   7 prim:   OBJECT            :id-ecPublicKey
  13:d=2  hl=2 l=   8 prim:   OBJECT            :prime256v1
  23:d=1  hl=2 l=  66 prim:  BIT STRING        
     0000 - 00 04 78 95 ac 64 63 7f-9d 4d a8 b5 5d 2f 36 27   ..x..dc..M..]/6'
     0010 - bf 73 6e fc ee bf de 29-6f ca 06 ee 85 a9 c5 42   .sn....)o......B
     0020 - 83 cf 12 3f eb f6 ff eb-0a a8 78 f4 de 68 40 a4   ...?......x..h@.
     0030 - 87 c9 81 2d 06 f0 5b 9b-a5 64 46 b5 12 3e 61 55   ...-..[..dF..>aU
     0040 - 99 09

It is helpful to save the public-key, because this is one of the few times it will be available directly. There is no APDU to retrieve a public-key in PIV. In steady state the public-key exists in a certificate already loaded on the card; except that we have not yet obtained that certificate. (In some cases it is possible to indirectly recover the public-key. For example, given a valid ECDSA signature on a known message, the public key can be derived, as implemented in Bitcoin libraries.)
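
Assuming the DER blob is redirected to a file (the informational banner about the reader does not corrupt the output, as the asn1parse pipeline above demonstrates), it can be converted to the more convenient PEM encoding for later use in constructing a CSR:

$ piv-tool -A A:9B:03 -G 9C:11 > pubkey.der
$ openssl pkey -pubin -inform DER -in pubkey.der -out pubkey.pem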

Next: generating a certificate-signing request, which runs into an interesting circularity with the OpenSC middleware.

[continued]

CP


PIV provisioning on Linux and Mac (part I)

This is a follow-up to the earlier series on using GoldKey hardware tokens on OS X for cryptographic applications such as SSH. One question left unanswered in the last post was how to provision certificates to the token on OS X and Linux, where there is no official support provided by GoldKey. It turns out the implications of that question reach far beyond the implementation details of a specific hardware vendor such as GoldKey. Answering it forces a closer look at the PIV standard itself and a canvass of the web for open-source utilities available for working with PIV cards.

Provisioning from a high-level

First a quick recap on exactly what “provisioning” means for smart-cards that support PKI. Starting with a blank card, the desired end state is to have a private-key residing on the card– ideally generated on-card and never having left the secure execution environment– along with an X509 digital certificate containing the corresponding public-key. (Repeat as desired: there can be more than one key/certificate slot. Case in point: PIV defines four types of keys with different use-cases such as signature and encryption.)

Provisioning, nuts and bolts

That process typically breaks down into four steps:

  1. Request the card to generate a key-pair and output the public key.
  2. Create a certificate signing-request (CSR) which contains that public key and information about the user, and obtain a signature over it from the card.
  3. Submit the CSR to a certificate-authority and have the CA issue a certificate signed by the CA.
  4. Load the returned certificate on the card.

This is the most generic version of the flow. In particular it covers the self-signed certificate scenario as a special-case, where instead of signing a CSR we sign a certificate directly, collapsing steps #2 and #3. (In principle this could even have been implemented on-card, but typically the complexity of ASN1 parsing and construction has discouraged smart-card developers from trying to do much with X509 on-card.) It is also worth noting that CSR generation is a practical necessity because most CA implementations are designed to take a valid CSR as input. “Valid” being the operative adjective: that includes checking that the CSR is signed by a private-key which corresponds to the public key appearing in it. One could imagine a hypothetical CA that simply issued certificates given a public key– the signature on the CSR does not factor into the final X509 certificate in any way. In real life CA implementations are a bit more demanding, so it is important to build that flexibility into the card provisioning flow.
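
With ordinary software keys, openssl can demonstrate that collapsed self-signed variant in a single command, producing a certificate directly instead of a CSR (key file and subject fields are illustrative):

$ openssl req -x509 -new -key sw_key.pem -days 365 -subj "/CN=Jane Smith" -out selfsigned_cert.pem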

Returning to these four steps, the first observation is that #3 is the odd man out. It takes place out-of-band without any interaction with the card, so we focus our attention on the remaining three. Of these, #1 and #4 actually change the state of the card by updating its contents, while #2 only involves using existing key material. This turns out to be an important distinction for many smart-card standards including PIV, because the authentication model for those two modes is different. For example, in PIV using a key typically requires user authentication, in other words entering a PIN. Card management on the other hand requires authenticating as the card administrator, which involves a more complex challenge-response protocol using cryptographic keys, described in NIST 800-73 part #2.**

Card-administrator keys for GoldKey

[Screenshot: accessing PIV settings from the GoldKey client]

That brings us to the first problem: in order to perform key-generation or load certificates on the card, we must know the card management key (also referred to as the PIV card application administrator key, with key reference 0x9B per NIST SP 800-78-3) for that specific card. PIV allows this key to use any of the algorithms defined in the standard, including 3DES, AES and even the public-key options RSA or ECDSA. But in practice most vendors, including Gemalto, Oberthur and GoldKey, appear to have settled on old-fashioned 3DES, in keeping with the retro-chic commitment to DES in smart-cards and financial applications.

In the case of GoldKey, all tokens are provisioned with an unspecified “default key” from the factory. Luckily the proprietary client software allows changing this key to one controlled by the user. This step requires a master-token, and the current token must first have been associated with that particular master.

[Screenshot: configuring the PIV card-administration key]

3DES keys are entered as 48 hexadecimal digits, or 24 bytes total. While that amounts to 192 bits, 3DES keys are effectively only 168 bits: in keeping with traditional DES key representation, only 7 bits out of each byte are significant, with the least-significant bit reserved for a parity check that modern implementations ignore. GoldKey appears to follow that model. Somewhat confusingly the PIV standard refers to these as “card management keys,” but they are not related to the Global Platform issuer-security-domain aka “card manager” keys. This is strictly a PIV application-level concept, independent of whether the underlying card is Global Platform compliant. (Of course in reality PIV applications are typically implemented on GP-compliant cards.)

Assuming we now have a token provisioned with a card-administration key we control– or perhaps we saved ourselves the effort by using a brand that comes preloaded with well-known default keys from the manufacturer– we can tackle steps #1 and #4 above.

[continue to part II]

CP

** Incidentally this is why provisioning via the GoldKey Windows driver can not possibly be compliant with FIPS 201: it requires users to enter a PIN. There is no such mode in standard PIV; user PIN is neither sufficient nor necessary for administrative actions.


NSA, Panopticon and paradox of surveillance exposed

In No Place to Hide Glenn Greenwald presents a harsh critique of mass-surveillance conducted by the NSA, as revealed in the massive stash of documents from Edward Snowden. In addition to the obligatory George Orwell 1984 references, the author also evokes the 18th-century British philosopher Jeremy Bentham’s Panopticon, a hypothetical prison design optimized for constant observation of inmates. Its unfortunate denizens occupy the outer rings of a circular structure, with an inner one-way transparent tower reserved for the wardens. These wardens can look out and observe the inmates at any time, but the inmates can not see what is going on inside that tower. They can not even be certain at any given time whether there is anyone on the other side watching. Yet the suspicion is always there– and that is the point. The mere possibility of being under observation will silently coerce the otherwise unruly denizens into behaving themselves, goes the argument, far more effectively than any random meting out of punishment or violence.

This tried-and-true dystopian construct is recycled yet again by Greenwald to argue that ubiquitous mass-surveillance will have the effect of chilling speech, smothering dissent and turning Americans into obedient conformists, afraid to challenge their government:

… Bentham’s solution was to create “the apparent omnipresence of the inspector” in the minds of the inhabitants. “The persons to be inspected should always feel themselves as if under inspection, at least as standing a great chance of being so.” They would thus act as if they were always being watched, even if they weren’t. The result would be compliance, obedience and conformity with expectations.

A later paragraph quotes Foucault:

Those who believe they are watched will instinctively choose to do that which is wanted of them without even realizing that they are being controlled– the Panopticon induces “in the inmate a state of conscious and permanent visibility that assures the automatic functioning of power.” With the control internalized, the overt evidence of repression disappears because it is no longer necessary: …


There is one problem with the parallel: until Edward Snowden, few people realized that they were living in the Panopticon. There has never been a shortage of tinfoil-hat wielding conspiracy theorists convinced that the FBI is tuning into our brain patterns from outer space. Occasionally we were given glimpses into the extent of data collection: the New York Times exposé in 2005 on warrantless surveillance, an AT&T whistle-blower coming forward with the existence of NSA monitoring equipment on AT&T premises. But until the Guardian, New York Times and Washington Post started publishing from the cache of leaked documents, there was no convincing proof, no reason to suspect that every email, phone conversation and text message would be scanned as part of a massive communications dragnet, that every routine credit-card purchase feeds into an operation designed to ferret out terrorist financing networks. Delusions aside, it is difficult for a society to be intimidated or cowed into submission by something it does not believe exists. Not only was the extent of surveillance unknown, but intelligence agencies very much preferred to keep it that way, using every opportunity to refute the allegations.

This is a key difference from Bentham’s frequently misused Panopticon: in the Panopticon inmates lived every moment fully aware of the constant possibility– if not actual reality of– being under observation. The architecture not only enabled constant surveillance, but it was fully transparent about that objective. Manifesting the presence of surveillance to its targets is an integral part of the design. There is no question about whether privacy exists in this system, no warm-fuzzy concepts of due process, 4th amendment protections and warrant requirements.

Intelligence operations by contrast thrive on secrecy. Extraordinary measures are taken to keep their existence hidden. A surveillance target aware of being watched, the theory goes, will modify his/her behavior, exercise greater caution or attempt to hide their tracks. In the extreme scenario, they might even refrain from communicating the information or carrying out the actions the operation hoped to uncover. If surveillance were perfectly ubiquitous, one could argue that state of affairs is just as good as having collected actionable intelligence: if all terrorists gave up on conspiring to plan new attacks out of fear that they are being watched, fewer attacks may result and society is safer. But frequent exaggeration and hyperbole aside, no system of surveillance yet built is quite that omnipresent. Knowing that communications in one medium are carefully watched motivates the targets to conduct their activities over different channels where greater privacy is assumed to exist. Surveillance exposed is surveillance rendered ineffective. Distrust in US technology companies in the aftermath of the PRISM revelations is one example of this effect. Fewer people entrusting their data to Google, MSFT and Yahoo will be counterproductive for a system that relied on those companies providing a wealth of information.

The Internet became a true Panopticon only after Edward Snowden came forward and his message reached a mass audience. Paradoxically then, to the extent that chilling effects on political speech have resulted in the wake of NSA revelations, Greenwald, Poitras and Gellman were key players in creating the Panopticon. In true ignorance-as-bliss fashion, our previous state of misguided expectations around privacy may have been far more conducive to free expression after all. But in this case we can only be grateful to Snowden for resetting those expectations and starting the debate on intelligence reform.

CP


CloudFlare and keyless SSL: far from NSL-proof (part II)

[continued from part I]

Handshakes with PFS

Perfect-forward secrecy precludes decrypting past traffic which was collected passively earlier. But we can still pull off a real-time active attack with the help of our friend CloudFlare. Suppose we have man-in-the-middle capability, controlling the network around our victim. When the victim tries to connect to the CDN, we impersonate the site and start a bogus handshake. Given access to a decryption oracle as in #1, we could always downgrade the choice of ciphersuite to avoid PFS, but that is not very elegant. Users might wonder why they are not seeing the higher-security option. (Not that any web browser actually surfaces the distinction to users. While the address bar turns green for extended validation certificates– purely cosmetic, since they have little security benefit– there is no reassuring icon to mark the presence of PFS.)

Luckily we can carry out a forged SSL handshake with PFS intact by enlisting the help of CloudFlare. This time, instead of asking our friendly CDN to decrypt an arbitrary ciphertext, we ask for assistance with signing an opaque message. CloudFlare will turn around and pass on this request to the origin site. Once again the origin is oblivious to the fact that this request is for MITMing a user as opposed to a new legitimate connection. Unlike the simple RSA decryption case, the transcript being signed (more accurately, its hash) is different for each connection, so there is not even a way for a careful origin implementation to distinguish.

One could object that this approach is highly inefficient. Why not let the target connect directly to CloudFlare and ask them to store a transcript of decrypted traffic for later retrieval? Because that would reveal the target of surveillance. A network MITM combined with a well-defined interaction with the CDN (“sign this hash”) avoids divulging such information.

Oblivious customers

It’s worth emphasizing again that in neither case does the origin site do anything extra or different to enable interception. As far as that customer is concerned, they are simply holding up their part of the CloudFlare “keyless SSL” bargain. There is no need to send national-security letters to secure cooperation from the origin site. They can remain blissfully ignorant, publishing a squeaky-clean transparency report boasting about never having received requests for customer data. That’s because such requests are routed to the CDN, who is then legally obligated to keep its own customers in the dark about what is going on. (In fact CloudFlare claims to have received “between 0-249″ NSLs in its own transparency report, which is not broken down by customers.)

This is why one of the touted benefits around revoking trust is moot. In principle the customer can instantly revoke access by refusing to decrypt for CloudFlare if the CDN is suddenly considered untrusted. (Of course they could have achieved the same effect in the traditional setup by revoking the certificates given to the CDN, but that runs into the vagaries of botched and half-baked revocation checking in various browsers.) Minor problem: there is no way to know if the CDN is operating as advertised or helping third-parties intercept private communications to the origin. There is no accountability in this design.

NSL canaries?

This blogger is not asserting such things are happening routinely at CloudFlare. The point is that they can happen, and in spite of best intentions, a CDN can not provide guarantees against such compelled assistance. Even the NSL canary in the CloudFlare transparency report is fully consistent with offering such on-demand decryption assistance:

  • CloudFlare has never turned over our SSL keys or our customers’ SSL keys to anyone.
  • CloudFlare has never installed any law enforcement software or equipment anywhere on our network.
  • CloudFlare has never provided any law enforcement organization a feed of our customers’ content transiting our network.

Providing a controlled interface for law-enforcement to request decryption/signing does not violate the letter or spirit of any of these assertions. When the origin site provides an API for CloudFlare to call and request decryption, surely that does not count as the origin site installing CloudFlare software or equipment on its network. By the same token, if CloudFlare were to provide an API for law-enforcement to call and request decryption (which must be proxied over to the origin site for “keyless SSL”), it does not count as installing law-enforcement software. Neither does it count as providing a feed of content transiting the network– that “content” is captured by the government in encrypted form as part of its intelligence activities, and CloudFlare simply provides tactical assistance in decryption. There is of course the question of whether such canaries are meaningful to begin with. If it turns out that CloudFlare was in fact colluding with the US government all along in violation of the above statements, would the FTC– a different part of that same government– go after CloudFlare for deceptive advertising?

Clarifying threat models

This is not to say keyless SSL has no benefits. Having only one location containing sensitive keys as opposed to two reduces attack surface. (This is true even if the origin uses a different hostname than the externally visible one to avoid having another key that can enable MITM attacks. The links between CDN and origin are highly concentrated targets for surveillance.) It protects the origin from mistakes and vulnerabilities on the part of the CDN that lead to disclosure of the private key– such as the Heartbleed scenario that affected CDNs in April. But there is a clear difference between incompetence and malice. Keyless SSL provides no defense against a CDN colluding with governments to enable surveillance while keeping its customers in the dark.

CP


CloudFlare and keyless SSL: far from NSL-proof (part I)

CloudFlare recently announced the availability of keyless SSL for serving SSL traffic without having direct access to cryptographic keys used to establish those SSL connections. This post takes a closer look at the implications of the architecture for security and compelled-interception by governments.

Content distribution networks

Quick recap: a content distribution network or CDN is a distributed service for making a website available to users with higher availability, reduced latency and lower load on the website itself. This is accomplished by having CDN servers sit in front of the origin site, acting as a proxy fielding requests from users. Since many of these requests involve the same piece of static content, such as an image, the CDN can serve that content without ever having to turn around and interact with the origin site. Also, CDN servers are typically located around the world on optimized network connections, with much faster paths to end-users than the typical service can afford to build out itself. Over time CDNs have expanded their offerings to everything from DDoS protection to image rescaling and optimizing sites for mobile browsers.

SSL problem

There is one hitch to using a CDN with SSL: the CDN infrastructure must terminate the connection. For example MSFT’s Bing search engine uses Akamai. When users type “https://www.bing.com” into their browser, that request is in fact going to Akamai infrastructure rather than MSFT. But SSL uses digital certificates and associated secret keys for authentication. That means either the CDN obtains a new certificate on behalf of the customer (with CDN-generated keys and the customer vouching for the CDN) or the customer provides the CDN with an existing certificate/key.

Getting by without keys

“Keyless SSL” is a misnomer, since it is unavoidable for the SSL/TLS protocol to rely on cryptographic keys for security. The twist is that the CDN no longer has direct control of the private-key. Instead, the specific parts of the SSL protocol that call for using the private-key are forwarded to the origin site, which performs that particular operation (either decryption or signing, depending on whether PFS is enabled). Everything else involved in the request is still handled by the CDN. There is a slight regression in performance: public-key cryptography operations in an SSL handshake are among the more computationally demanding parts of the protocol, and the origin site must now be involved in each handshake again, forfeiting one of the benefits of using a CDN in the first place. What do we get in return?

Security improvement?

CloudFlare goes to great lengths to emphasize that this design guarantees they can not be compelled to reveal customer keys to law enforcement– because they do not have those keys. This is a legitimate concern. CDNs create a centralized, single point of failure for mass surveillance. A CDN might be the best friend of data-hungry intelligence agencies: instead of having to issue multiple requests to tap into traffic for different websites, they can work directly with one CDN serving those customers to get access to all content going through it. To what extent does keyless SSL change that picture? It turns out the answer is: not much.

The first observation is that the ability to use a cryptographic key without restriction can be just as good as having direct access to the raw key-bits. Recall that CloudFlare can make requests to the origin site and ask for arbitrary operations to be performed using the key. In other words the origin presents an “oracle” interface for performing arbitrary operations. In other contexts this is enough to inflict serious damage. Here is a parallel from the Bitcoin world: Bitcoin wallets are represented by cryptographic keys, and moving funds involves digitally signing transactions using those keys. If you do not trust someone with all of your money, you would not give them access to your wallet keys. But would you be comfortable with a system where that same person can submit opaque messages to you for signing? Clearly this would not end well: they could craft a series of Bitcoin transactions to transfer all funds out of your wallet into a new one that they control. You would become an accessory to the theft of your own funds by rubber-stamping these transactions with a cryptographic signature whenever asked. A different low-tech example is withholding your checkbook from an associate who is not trusted with spending authority, but being perfectly happy to hand them a signed blank check whenever asked. Strictly speaking the checkbook itself is “safe” but your associate can still empty out your account.

Law-enforcement perspective

Building on that first observation, we note that possession of private keys is a sufficient but not necessary condition for intercepting communications. Putting ourselves in the position of a government trying to monitor a particular user, let’s consider how we can enlist CloudFlare to achieve our objectives even when keyless SSL is employed.

Simple handshake

For simple RSA-based key exchange, suppose our intelligence agency has collected and stored some SSL traffic in the past. Now we want to go back and decrypt that connection. All we need to do is decrypt the client key-exchange handshake message that appears near the beginning. This message contains the so-called “premaster secret” encrypted under the origin site’s RSA key. So we take that message and enlist the help of our friendly CDN to decrypt it. When keyless SSL is in effect, CloudFlare can not perform that decryption locally. But it can ask the origin site to do so, using the exact same API and interface used to terminate SSL connections for legitimate use-cases. Given the premaster secret, we can then derive the session keys used for the remainder of the connection for bulk data encryption, unraveling all of the contents. Meanwhile the origin site is none-the-wiser about what just went on. There is no indication anywhere that past traffic is being decrypted under coercion, as opposed to a new SSL connection being negotiated with a legitimate user.** The operations are identical.
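
Here is a sketch of that recovery step, assuming the encrypted premaster secret has already been carved out of the recorded ClientKeyExchange message into a file; file names are illustrative, and in keyless SSL the private-key operation of course happens at the origin on the CDN’s behalf:

$ openssl rsautl -decrypt -inkey origin_key.pem -in encrypted_premaster.bin -out premaster.bin

From the 48-byte premaster secret plus the client and server random values, the TLS key-derivation function yields the session keys needed to decrypt the recorded traffic.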

[continue to part II]

CP

** A diligent origin implementation could notice that it is being asked to decrypt a handshake message that has already been observed in the past. Such a collision is extremely unlikely to happen between messages chosen by different users.

