The challenge of cryptographic agility

Getting stuck with retro/vintage cryptography

Developers and system designers are overly attached to their cryptographic algorithms. How else to explain the presence of an MD5 function in PHP? This is not exactly a state-of-the-art hash function. Collisions were discovered in the round function as early as 1996. An actual collision arrived in 2004 with the work of Wang et al. As usual, one paper is never enough for the problem to register broadly. Many certificate authorities continued to use MD5 for issuing certificates, with the predictable outcome: a spectacular break in 2008, when security researchers were able to obtain a fraudulent intermediate CA certificate from RapidSSL.

Yet die-hard MD5 fans continue to lurk in the open-source development community. State-of-the-art “code integrity” for open-source projects often involves publishing an MD5 hash of a tarball on the download page. (That approach does not work.) The persistence of MD5 also explains why the Windows security team at one point had an official “MD5 program manager” role. This unfortunate soul was tasked with identifying and deprecating all MD5 usage across the operating system. It turns out they still missed one: the Terminal Services licensing CA continued to issue certificates using MD5 until 2012, a vulnerability exploited by the highly sophisticated Flame malware, believed to have been authored by nation-states.

How many hash functions?

It would be easy to chalk up the presence of an MD5 function in PHP to, well, it being PHP. But it is not only language designers who make these mistakes. What about the Linux md5sum command-line utility for computing cryptographic hashes of files? Microsoft one-upped it with the “file checksum integrity verification” or FCIV utility, which can do not only MD5 but also the more recent vintage SHA1. Sadly for the author of that piece of software, SHA1 is also on its way out. While no one has yet demonstrated even a single collision, it is widely believed that the attacks targeting MD5 can be extended to SHA1. Chrome is trying to phase out SHA1 usage in SSL certificates.

One person cataloged 10 GUI-based applications for computing file-hashes. Several of these utilities also hard-coded MD5 and SHA1 as their choice of hash. Even the ones that decided to go all-out by adding SHA2 variants are now stuck: NIST has selected Keccak as the final SHA-3 standard.

The mistake is not limited to amateurs. Designers of the SSL/TLS protocols went to great pains to provide a choice of different algorithm combinations called “ciphersuites,” negotiated between the web browser and web server based on their capabilities and preferences. Yet until recent versions, the protocol also hard-coded a choice of hash function into the record layer, as well as into the client-authentication scheme: MD5 and SHA1 concatenated together, just in case one of them turned out to be weak. (There is an extra measure of irony here beyond the hapless choice of MD5: as Joux pointed out in 2004, concatenating hash functions based on the Merkle-Damgard construction– which includes MD5 and SHA1– does not result in a hash function with the naively expected “total” security of both algorithms added together.)
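For concreteness, the old record-layer choice amounts to the construction below, sketched in Python with an invented function name (this illustrates only the concatenation itself, not the full TLS handshake computation):

```python
import hashlib

def md5_sha1_concat(data: bytes) -> bytes:
    """Concatenate MD5 and SHA1 digests, in the spirit of pre-1.2 TLS:
    36 bytes of output, but per Joux (2004) nowhere near the naively
    expected 16+20 bytes' worth of combined collision resistance."""
    return hashlib.md5(data).digest() + hashlib.sha1(data).digest()
```

Joux's multicollision result means finding a collision in this concatenation costs only marginally more than finding one in the stronger of the two hashes alone.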

Seemed like a good choice– at the time

Lest we assume these failures are a relic of the past: the mysterious person or group who designed Bitcoin under the pseudonym Satoshi Nakamoto also fell into the trap of hard-coding the choice of elliptic curve and hash functions. And the bad news is that this choice is going to be much harder to undo than introducing more options into TLS. After all, no one has to go back and “repeat” past SSL connections when an algorithm shows weaknesses. If Bitcoin ever needs to change curves because the discrete-logarithm problem in secp256k1 turns out to be easier than expected, users will have to “convert” their existing funds into new coins protected by different algorithms. Since that involves extending the scripting language used to verify transactions, it will represent a hard-fork to the protocol.

It is unclear why Satoshi picked ECDSA over RSA. Space savings? But this is illusory; RSA signatures permit message recovery, allowing the signer to “reclaim” most of the padding taken up in the signature for storing additional information. ECDSA also has the problem that signature verification is just as expensive as signing; see the Crypto++ benchmarks as one data point. Typically a transaction is signed once but verified thousands of times by other participants in the system. RSA has just the right distribution of work for this: signing is slow, verification is fast. Given that Bitcoin emerged around 2008, the Brainpool family of elliptic curves was relatively new at the time and probably considered a safe bet. Of course we can also imagine someone in 1995 deciding that this brand-new hash function called “MD5” would be a great choice for basing their entire security on.

Cryptographic agility

The common failure in all of these cases is a lack of cryptographic agility. Agility can be described as three properties, in order of priority:

  • Users can choose which cryptographic primitives (such as encryption algorithm, hash-function, key-exchange scheme etc.) are used in a given protocol
  • Users can replace the implementation of an existing cryptographic primitive, with one of their choosing
  • Users can set system-wide policy on choice of default algorithms
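The first property can be illustrated in a few lines of Python: rather than baking MD5 into a file-hashing utility the way md5sum and FCIV did, the algorithm name becomes a runtime parameter. (The function name here is invented for illustration.)

```python
import hashlib

def file_digest(data: bytes, algorithm: str = "sha256") -> str:
    """Hash with an algorithm chosen at runtime instead of a hard-coded MD5.
    Retiring a broken hash becomes a configuration change, not a rewrite."""
    h = hashlib.new(algorithm)  # raises ValueError for unsupported names
    h.update(data)
    return h.hexdigest()

# The same code path serves legacy and current algorithms:
legacy = file_digest(b"release.tar.gz contents", "md5")
current = file_digest(b"release.tar.gz contents", "sha256")
```

This is the minimum bar; the second and third agility properties additionally require that the implementation behind a given name, and the system-wide default, be swappable.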

It is much easier to point out failures of cryptographic agility than success stories. This is all the more surprising because it is a relatively well-defined problem, unlike software extensibility in general. Instead of having to worry about all possible ways that a piece of code may be called on to perform a function not envisioned by its authors, all of the problems above involve swapping out interchangeable components. Yet flawed protocol design, implementation laziness and sometimes plain bad luck have often conspired to frustrate that. Future blog posts will take up the question of why it has proved so difficult to move beyond MD5 and SHA1, and how more subtle types of agility failure continue to plague even modern, greenfield projects such as Bitcoin wallets.


Privacy in NFC payments: mobile wallets and prying eyes

Continuing the exploration of privacy in NFC payments, we next focus on the code that runs on the smart phone and what those applications can learn about user transactions. Unlike an inert plastic smart-card, a smart phone is a general purpose computer connected to a network. Given that mobile devices have been implicated in many privacy intrusions, it is also natural to ask whether applications can snoop on purchase history. Due to the variation in mobile wallet architecture, there is no clear-cut answer to this but we can explore the design space for privacy.

First it is worth distinguishing between the wallet application itself and third-party apps. For example, if a bank provided a mobile application for NFC payments, the privacy question is largely moot– the bank already has visibility into payments by virtue of authorizing them. (Unless of course the application allows the bank to learn additional information, such as line-level item data, above and beyond what the bank is entitled to observe as card issuer.) On the other hand, if the wallet provider is distinct from the issuing bank, as in the case of Apple Pay, it is fair game to ask what the mobile application sees. There is even greater concern with third-party applications unrelated to the wallet provider or financial institution trying to snoop on purchase history.

The wallet application is typically involved in the purchase transaction for user-experience reasons. This holds true even when the actual payment credentials live on a dedicated secure element chip, with all NFC communication taking place between the point-of-sale and that secure element, as is the case when card-emulation is enabled. Recall that the secure element itself has no user interface and can not communicate with the user to walk through various steps of the flow, such as entering a PIN, displaying a dollar amount for confirmation or signaling that a transaction has completed. Since NFC communications from the POS are normally routed directly to the SE, this requires some other mechanism for notifying the host operating system. For example, on Android the NFC controller generates notifications for specific events, such as detecting the presence of an external NFC field or observing a command to SELECT a particular application on the secure element. Such notifications are delivered to relevant applications, such as Google Wallet in the case of Android. If these notifications were more broadly available, third-party applications could also observe a transaction taking place. Typically this is prevented by delivering the notifications only to privileged applications.

At a minimum, then, the wallet application knows that a transaction was attempted. In fact the payment applets on the secure element will typically have even more information that can be queried by the host, including purchase amount and status of the last transaction. This is all part of the specification for SE applets mandated by the payment network for interoperability, such as MasterCard Mobile PayPass. What the host application does not learn is also a direct consequence of what the protocol communicates to the card from the point-of-sale terminal:

  1. Identity of the merchant. Typically the merchant is not authenticated in the transaction.
  2. Line-level item information; in other words, the contents of the shopping cart.

The first one is not too difficult to infer. A mobile device is equipped with a number of radios, including GPS, Wi-Fi and Bluetooth. Approximate GPS location, the identity of a nearby wireless network or the presence of an iBeacon can all be used to identify a particular merchant as the setting for a shopping excursion. (This is easier in the current situation, when few merchants support NFC payments, making it easier to pinpoint the only possibility within an approximate area.) In principle, then, the wallet application supervising NFC transactions can have at least as much visibility into transaction history as the issuing bank. Apple could in principle compile the same information on Apple Pay users that our banker friend was worried about Google getting as the issuer-of-record for virtual cards. That is not to say that Apple is collecting that information currently. The iOS security guide emphasizes that “Payment transactions are between the user, the merchant and the bank,” no doubt to allay any concerns of Apple becoming yet another middleman. But it is technically possible for a mobile wallet application– and by extension its provider– to help itself to that information.

The second one, however, is not as easy to work around. Unless the core EMV payment protocol is somehow augmented with out-of-band data provided by the cash register, there is no way for a mobile device to divine this information using its own sensors. For all the worrying about keeping data away from Google, there is one other leading company that does have visibility into this highly coveted consumer purchase history: Square. By providing the complete point-of-sale terminal software, Square controls the checkout experience. That includes the way items are rung up. Square in that sense is the ultimate intermediary. It acts as the “merchant” as far as the payment network is concerned, while also tapping into a rich stream of line-level item data from every purchase.



PIV provisioning on Linux and Mac: getting certificates (part III)

Continuing to the next step of PIV provisioning with open-source utilities, we need to generate a certificate signing request (CSR) corresponding to the key-pair generated on the card. This is a signed document generated by the person enrolling for a certificate, containing information they would like to have appear on the certificate, including their name, organization affiliation and public-key. The certificate authority verifies this information, makes changes/amendments as necessary and issues a certificate. This ritual for using CSRs is largely a matter of convention. Nothing stops a CA from issuing any certificate to anyone given just the public key. But most CA implementations are designed to accept only valid CSRs as input to the issuance procedure.

Offloading crypto in openssl

That calls for constructing a CSR with the proper fields, including the public-key that exists on the card, and getting that object signed using the PIV card. openssl provides both of these ingredients: the req subcommand for constructing CSRs, and an “engine” interface for offloading cryptographic operations such as signing to external hardware.

Our strategy then is to find a suitable engine implementation that can interop with PIV cards and use it for signing the CSR. Fortunately OpenSC project provides precisely such an engine, building on top of its PKCS #11 library that figured in previous PIV scenarios.

It is possible to script openssl to load an engine first and then perform an operation using keys associated with that engine. Here is an example gist for CMS decryption. It issues two commands to openssl. The first loads the engine by specifying the shared-library path and assigning it a name for future reference. The second performs the CMS decryption using a private-key associated with that engine, in this case key ID 01 corresponding to the PIV authentication key. (Somewhat confusingly, the PKCS11 layer does not use the PIV key-reference identifiers.)

Boot-strapping problem

It would appear straightforward to create a similar script to load the engine and invoke CSR generation while referencing a key associated with the card. But this runs into a subtle problem: OpenSC middleware assumes there is no key present in a particular slot unless there is also an associated certificate. This is a reasonable assumption for PIV cards in steady-state: each key type defined in the standard, such as PIV authentication or digital signature, has both an X509 certificate and associated private-key material. But during provisioning that assumption is temporarily violated: a key-pair must be generated first, before we can have a certificate containing the public piece.

That leads to a boot-strapping problem: OpenSC middleware can not use the private-key on card without an associated certificate also present on the same card. But we can not obtain a certificate unless we first sign a CSR using the private-key.

It turns out OpenSC is not to blame for this either. The Windows PIV minidriver has the same behavior; it reports no credentials present on a card without certificates, even if keys have been generated for all four slots. The root cause is a more subtle problem with the PIV command interface itself: there is no mechanism for retrieving a stand-alone public-key. The assumption is that public-keys are extracted from the certificate. In the absence of the certificate, there is no uniform way to even learn the public-key for the purpose of placing it into the CSR.**

Signing the hard way

The above observation also rules out a kludge that at first sight seems promising: provision a dummy certificate containing an unrelated public-key to make OpenSC happy. That would allow signing to “work”– it turns out the middleware does not check the signature for consistency against public key in the certificate. But the CSR generated this way would still have the wrong public-key drawn from the placeholder certificate.

Boot-strapping requires a more complex sequence involving manual ASN1 manipulation:

  1. Save the output from key generation step
  2. Craft an unsigned CSR containing that public-key
    • This is easiest to accomplish by taking an existing CSR with all of the other fields correct, removing the signature and “grafting” a new public-key in place of the existing field.
  3. Get a signature from the card.
    • This can be done either by using the placeholder-certificate approach described above or by directly issuing a GENERAL AUTHENTICATE command to the PIV application.
  4. Combine the signature with the modified request from step #2 above to create a valid signed CSR.

As a side-note, generating a new CSR once a valid certificate exists is much easier. In that case the key is not changed and the certificate is consistent with the key. It is much easier to go from a card already set up with self-signed certificates (for example, provisioned on Windows via certreq) to getting new certificates issued from a different CA using off-the-shelf open-source utilities.

Completing provisioning

Once the CSR is created and exchanged for a valid certificate, the final step is straightforward: load the certificate onto the card. This is done using piv-tool, authenticating with the card-management key as in the key-generation step.


** For ECDSA the public-key can be derived from one valid signature and the associated hash. For RSA there is no similar quick fix, but most cards will return an error when asked to operate on an input larger than the modulus. This can be used to binary-search for the correct modulus.
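The binary-search trick in that footnote can be sketched as follows, with card_accepts standing in for the actual card query (send a raw RSA operation, check whether the card returns an error):

```python
def recover_modulus(card_accepts, bits):
    """Binary-search an RSA modulus N, where card_accepts(x) returns True
    iff the card processes input x without error, i.e. x < N."""
    lo, hi = 1 << (bits - 1), 1 << bits  # an N of `bits` bits lies in [lo, hi)
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if card_accepts(mid):
            lo = mid   # mid < N: move the lower bound up
        else:
            hi = mid   # mid >= N: move the upper bound down
    return hi          # loop invariant lo < N <= hi pins down N exactly

# Simulated card with the textbook toy modulus N = 61 * 53 (a 12-bit number):
N = 3233
assert recover_modulus(lambda x: x < N, 12) == N
```

For a real 2048-bit modulus this takes roughly 2048 queries to the card, one per bit.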

They know what you bought last summer: privacy in NFC payments

In the wake of Apple Pay launch, one interesting document captured a rare occasion: an insider described as “key developer at major bank” having a moment of honesty, voicing blunt opinions on Apple, ISIS/Softcard, Google Wallet and upcoming chip & PIN transition in the US. According to this anonymous source, one of the key selling points for Apple Pay from banks’ perspective is privacy. Apple provides the wallet software and operates a service, but it does not learn about user transactions. Given how often this presumed advantage is mindlessly repeated in other articles, it is worth drilling into the question: who knows exactly what the card-holder purchased? [Full disclosure: this blogger worked on Google Wallet security]

For the paranoid, cash remains the most privacy-friendly solution— perhaps to be rivaled by Bitcoin one day when it is more widely accepted as a payment scheme. It is peer-to-peer in the old-fashioned way: from customer to merchant. All of the other mainstream payment methods involve some other intermediary, giving that third-party an opportunity to learn about transactions. Some of the participants are visible in plain-sight, others operate behind the scenes. Checks involve the bank where the customer maintains an account. Less obviously they also involve a clearing-house responsible for settling checks among banks.

Credit card networks have more moving pieces. There is the issuing bank underwriting the card used by the consumer. There is the acquiring bank receiving the payment on behalf of the merchant. Typically there is also a payment processor linking the merchant to the acquirer. In the middle, orchestrating this flow of funds on a massive scale, is the card network: Visa, MasterCard, American Express or Discover. These players all have visibility into the total payment amount and the merchant. In most cases they also know the identity of the card-holder. Most banks fall under some type of know-your-customer (KYC) rules designed to deter terrorist financing, which involve validating the identity of customers prior to doing business. This is true even for gift/prepaid cards that are funded ahead of time with no credit risk for the issuer. That means at a minimum the issuing bank knows the true identity of the person involved in a transaction, which is not surprising. Less obvious is the fact that this identity is frequently exposed to other parties along the way as part of the “track data” encoded on the card, since track data travels from the merchant point-of-sale to the payment processor and eventually into the network. (The cardholder name need not be identical to the legal name, in which case there is a slight privacy improvement in the use of a pseudonym.) At the end of the day, it is a safe bet that Visa knows consumer Jane Smith has purchased $38.67 worth of goods from a corner grocery store on Saturday, October 18th in Tribeca. No wonder the NSA is on friendly terms with payment networks, tapping into this vast data-set for intelligence purposes.

But equally important is what none of these participants know: line-level transaction data. They have no visibility into the contents of Jane’s shopping-cart. There are some qualifications to this general principle, edge cases where contents can be inferred. For example, if there is only one item at a coffee-shop that rings up for $3.47, that is exactly what the customer ordered. Fortunately the more general case of trying to come up with a combination of goods tallying up to a given total is a surprisingly intractable computer-science challenge, known as the subset-sum problem. It is further complicated by the fact that many goods have identical prices, a given sum could correspond to millions of different combinations of items and for goods priced by weight– such as produce– there is not even a single assigned price. There is also some information leaked by the pattern of transactions. For example, restaurants will often perform two transactions when settling a bill: one to authorize the original amount and one that collects the final tab including the gratuity. In other words, the bad news is both your bank and Visa can see whether you have been stingy or generous on tips. Yet they still have no idea exactly what you ordered, whether it was in line with FDA dietary recommendations (although there is likely a strong correlation with the establishment involved, which explains why insurers are peeking into patient credit-card records) or how many glasses of wine accompanied that meal.
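A toy example in Python (menu and prices invented for illustration) shows how quickly a single total becomes ambiguous, even before price-by-weight goods and repeated items enter the picture:

```python
from itertools import combinations

# Hypothetical menu prices in cents
prices = [125, 222, 347, 150, 197, 99, 248]
target = 347  # the only figure the bank and the card network see

# Brute-force every possible cart drawn from the menu
carts = [c for r in range(1, len(prices) + 1)
           for c in combinations(prices, r)
           if sum(c) == target]
# Four distinct carts already produce the same $3.47 total:
# (347,), (125, 222), (150, 197), (99, 248)
assert len(carts) == 4
```

With a realistic menu of hundreds of items the number of candidate carts explodes, which is exactly the intractability the subset-sum problem formalizes.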

That information flow is not changed by chip-and-PIN, or for that matter NFC payments using a phone.** There is still a bank responsible for issuing the “card,” even when the physical manifestation of the card on a piece of plastic has been replaced by cryptographic keys residing on a mobile device. There is a more elaborate payment “protocol”: before, the cashier swiped the magnetic stripe on the card, reading static information; now there is an interactive process where the point-of-sale register communicates back-and-forth with the phone over NFC. Does that somehow disclose more information? It turns out the answer is no. The EMV standards prescribing that complicated song-and-dance have no provision for item-level information to be communicated. The phone can not receive this information from the POS, and neither can the POS transmit it upstream when requesting payment authorization.

So why the persistent harping by our banker friend on the contrast between Apple Pay and Google Wallet? Because in the latter design, Google is effectively the “issuer” of the card used for payment. As described in earlier posts, Google Wallet switched to using virtual cards in 2012. These cards are only “virtual” in the sense that the consumer does not have to know about them. But as far as the merchant and payment network are concerned, they are very real. From the merchant point-of-view, the consumer paid with a MasterCard issued by Google. Behind the scenes, Google proxies this transaction in real-time to the actual card the consumer added to their wallet. This is why Google has the same visibility into purchases as an issuing bank such as Bank of America has with Apple Pay or Softcard: transaction amounts and patterns, but not line-level items.

This answer is still unsatisfactory for one reason: we have limited ourselves to information that is exchanged as part of the purchase transaction defined by EMV specifications. Could a mobile wallet application on a smart-phone obtain more information out-of-band?



** There is one privacy improvement in that cardholder names are typically redacted from the simulated “track data” created by NFC transactions.

PIV provisioning on Linux and Mac: key generation (part II)

[continued from part I]

Key generation

Returning to the problem of provisioning a PIV card given the PIV administrator key, we can now enlist the help of open-source solutions. The venerable OpenSC suite includes a piv-tool utility for performing administrative operations on PIV cards. One cautionary warning: this approach is only suitable for very primitive key-management schemes. In an enterprise-grade credential management system, card keys would typically be diversified: each card gets a different administrator key, computed as a cryptographic function of some master secret and an immutable property of the card, such as its serial ID. In high-security applications that master secret is relegated to a hardware security module, leaving the provisioning system handling only individual card keys. (For an additional level of paranoia, one can implement the entire PIV administrator authentication challenge-response in the HSM itself, with no keys ever relinquished outside the secure execution environment.)

piv-tool takes a slightly more cavalier attitude about where administrator keys come from: a local file. Assuming that one is willing to live with this approach to key management, here is how key generation would work:

  • Save the PIV card administrator key to a local file. The format is similar to how the GoldKey client UI accepts keys, with one cosmetic difference: each hex byte is separated by a colon character.
$ cat admin_key
  • Export the name of the file containing the key into a specific environment variable.
$ export PIV_EXT_AUTH_KEY=./admin_key
  • Invoke piv-tool with the “-A” option to authenticate as administrator and “-G” option to generate keys:
$ piv-tool -A A:9B:03 -G 9C:07

In this example, the “-G 9C:07” option indicates generating a new signature key, identified by key reference 0x9C as defined in NIST SP 800-78-3, of type 2048-bit RSA, which is algorithm reference 07 in the same standard. The “-A A:9B:03” option indicates administrator authentication, using key ID 0x9B of type 3DES; this is a function of the card configuration, which users typically have little control over. Note that contrary to what one might optimistically assume about extensibility, PIV does not allow generating keys over arbitrary named curves or curves with user-defined parameters.

The command will trigger key-generation on the token and then output a binary encoding of the public half. It is easier to make sense of that output by having openssl parse the ASN1. Here is another example, this time generating a new ECDSA signature key over the NIST P256 curve (also known as prime256v1), which happens to be algorithm reference 11:

$ piv-tool -A A:9B:03 -G 9C:11 | openssl asn1parse -inform DER -i -dump
Using reader with a card: GoldKey PIV Token 00 00
   0:d=0  hl=2 l=  89 cons: SEQUENCE          
   2:d=1  hl=2 l=  19 cons:  SEQUENCE          
   4:d=2  hl=2 l=   7 prim:   OBJECT            :id-ecPublicKey
  13:d=2  hl=2 l=   8 prim:   OBJECT            :prime256v1
  23:d=1  hl=2 l=  66 prim:  BIT STRING        
     0000 - 00 04 78 95 ac 64 63 7f-9d 4d a8 b5 5d 2f 36 27   ..x..dc..M..]/6'
     0010 - bf 73 6e fc ee bf de 29-6f ca 06 ee 85 a9 c5 42   .sn....)o......B
     0020 - 83 cf 12 3f eb f6 ff eb-0a a8 78 f4 de 68 40 a4   ...?......x..h@.
     0030 - 87 c9 81 2d 06 f0 5b 9b-a5 64 46 b5 12 3e 61 55   ...-..[..dF..>aU
     0040 - 99 09

It is helpful to save the public-key because this is one of the few times it will be available directly. There is no APDU to retrieve a public-key in PIV. In steady state, the public-key exists in a certificate already loaded on the card; except that we have not yet obtained that certificate. (In some cases it is possible to indirectly recover the public-key. For example, given a valid ECDSA signature on a known message, the public key can be derived, as implemented in Bitcoin libraries.)
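For reference, the DER blob parsed above is a standard SubjectPublicKeyInfo structure, and pulling the raw EC point out of one takes only a few lines of Python. This is a simplified sketch that handles only short-form DER lengths, which suffices for P256 keys like the one shown:

```python
def read_tlv(buf: bytes, off: int):
    """Read one DER tag-length-value, assuming short-form length (< 128)."""
    tag, length = buf[off], buf[off + 1]
    assert length < 0x80, "long-form lengths not handled in this sketch"
    start = off + 2
    return tag, buf[start:start + length], start + length

def extract_ec_point(spki: bytes) -> bytes:
    """Extract the uncompressed point (0x04 || X || Y) from a
    SubjectPublicKeyInfo: SEQUENCE { AlgorithmIdentifier, BIT STRING }."""
    tag, body, _ = read_tlv(spki, 0)        # outer SEQUENCE
    assert tag == 0x30
    tag, _, off = read_tlv(body, 0)         # AlgorithmIdentifier SEQUENCE
    assert tag == 0x30
    tag, bits, _ = read_tlv(body, off)      # BIT STRING carrying the point
    assert tag == 0x03 and bits[0] == 0x00  # zero unused bits
    return bits[1:]
```

For the P256 key above this yields 65 bytes: the 0x04 marker for an uncompressed point, followed by the 32-byte X and Y coordinates.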

Next: generating a certificate-signing request, which runs into an interesting circularity with the OpenSC middleware.



PIV provisioning on Linux and Mac (part I)

This is a follow-up to the earlier series on using GoldKey hardware tokens on OS X for cryptographic applications such as SSH. One question left unanswered in the last post was how to provision certificates to the token on OS X and Linux, where there is no official support provided by GoldKey. It turns out the implications of that question reach far beyond the implementation details of a specific hardware vendor such as GoldKey. It forces a closer look at the PIV standard itself and a canvass of the web for open-source utilities available for working with PIV cards.

Provisioning from a high-level

First, a quick recap on exactly what “provisioning” means for smart-cards that support PKI. Starting with a blank card, the desired end state is to have a private-key residing on the card– ideally generated on-card, never having left the secure execution environment– along with an X509 digital certificate containing the corresponding public-key. (Repeat as desired: there can be more than one key/certificate slot. Case in point, PIV defines four types of keys with different use-cases, such as signature and encryption.)

Provisioning, nuts and bolts

That process typically breaks down into four steps:

  1. Request the card to generate a key-pair and output the public key.
  2. Create a certificate signing-request (CSR) which contains that public key, and information about the user, and obtain a signature from the card.
  3. Submit the CSR to a certificate-authority and have the CA issue a certificate signed by the CA.
  4. Load the returned certificate on the card.

This is the most generic version of the flow. In particular it covers the self-signed certificate scenario as a special-case, where instead of signing a CSR we can sign a certificate to collapse steps #2 and #3. (In principle this could even have been implemented on-card but typically the complexity of ASN1 parsing and construction has discouraged smart-card developers from trying to do much with X509 on-card.) It is also worth noting that the CSR generation is a practical necessity because most CA implementations are designed to take a valid CSR as input. “Valid” being the operative adjective: that includes checking that the CSR is signed by a private-key which corresponds to the public key appearing in it. One could imagine a hypothetical CA that simply issued certificates given a public key– the signature on the CSR does not factor into the final X509 certificate issued in any way. In real life CA implementations are a bit more demanding so it is important to build that flexibility into the card provisioning flow.

Returning to these four steps, the first observation is that #3 is the odd man out. It takes place out-of-band, without any interaction with the card. So we focus our attention on the remaining three. Of these, #1 and #4 actually change the state of the card by updating its contents, while #2 only involves using existing key material. This turns out to be an important distinction for many smart-card standards, including PIV, because the authentication model for those two modes is different. For example, in PIV using a key typically requires user authentication, in other words entering a PIN. Card management on the other hand requires authenticating as the card administrator, which involves a more complex challenge-response protocol using cryptographic keys described in NIST 800-73 part #2.**

Card-administrator keys for GoldKey

Accessing PIV settings from GoldKey client

That brings us to the first problem: in order to perform key generation or load certificates on the card, the card management key (also referred to as the PIV card application-administrator key, with key reference 0x9B per NIST SP 800-78-3) must be known for that specific card. PIV allows this key to use any number of the algorithms defined in the standard, including 3DES, AES, even the public-key options RSA or ECDSA. But in practice most vendors, including Gemalto, Oberthur and GoldKey, all appear to have settled on old-fashioned 3DES, in keeping with the retro-chic commitment to DES in smart-cards and financial applications.

In the case of GoldKey, all tokens are provisioned with an unspecified “default key” from the factory. Luckily the proprietary client software allows changing this key to one controlled by the user. This step requires a master token, and that the current token has already been associated with that particular master.

Configuring the PIV card-administration key


3DES keys are entered as 48 hex digits, or 24 bytes total. While 3DES keys are effectively only 168 bits, in keeping with traditional DES key representation only 7 bits of each byte are significant; the least-significant bit is reserved for a parity check, which is ignored by modern implementations. GoldKey appears to follow that model. Somewhat confusingly the PIV standard refers to these as “card management keys,” but they are not related to the Global Platform issuer-security-domain, aka “card manager,” keys. This is strictly a PIV application-level concept, independent of whether the underlying card is Global Platform compliant. (Of course in reality PIV applications are typically implemented on GP-compliant cards.)
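The parity convention is easy to see in code. This stdlib-only sketch normalizes an arbitrary 24-byte key to the traditional odd-parity form; as noted, a modern implementation would simply ignore these bits:

```python
def set_odd_parity(key: bytes) -> bytes:
    """Set the least-significant bit of each byte so the byte has odd
    parity, per traditional DES key representation. Only the top 7 bits
    of each byte carry actual key material."""
    out = bytearray()
    for b in key:
        significant = b & 0xFE                  # the 7 key bits
        ones = bin(significant).count("1")
        out.append(significant | (0 if ones % 2 else 1))
    return bytes(out)


# A 24-byte (48 hex digit) 3DES key, all zeros for illustration
raw = bytes.fromhex("00" * 24)
print(set_odd_parity(raw).hex())   # → "01" repeated 24 times
```

Normalizing is idempotent, and two keys differing only in parity bits are the same 3DES key, which is exactly why the check can be safely ignored.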

Assuming we now have a token provisioned with a card-administration key we control– or perhaps we saved ourselves the effort by using a brand that comes preloaded with well-known default keys from the manufacturer– we can tackle steps #1 and #4 above.

[continue to part II]


** Incidentally this is why provisioning via the GoldKey Windows driver can not possibly be compliant with FIPS 201: it requires users to enter a PIN. There is no such mode in standard PIV; user PIN is neither sufficient nor necessary for administrative actions.

NSA, Panopticon and paradox of surveillance exposed

In No Place to Hide Glenn Greenwald presents a harsh critique of the mass surveillance conducted by the NSA, as revealed in the massive stash of documents from Edward Snowden. In addition to the obligatory George Orwell 1984 references, the author also evokes the 18th-century British philosopher Jeremy Bentham’s Panopticon, a hypothetical prison design optimized for constant observation of inmates. Its unfortunate denizens occupy the outer rings of a circular structure, with an inner one-way transparent tower reserved for the wardens. These wardens can look out and observe the inmates at any time, but the inmates cannot see what is going on inside that tower. They cannot even be certain at any given time whether there is anyone on the other side watching. Yet the suspicion is always there– and that is the point. The mere possibility of being under observation will silently coerce the otherwise unruly denizens into behaving themselves, goes the argument, far more effectively than any random meting out of punishment or violence.

This tried-and-true dystopian construct is recycled yet again by Greenwald to argue that ubiquitous mass-surveillance will have the effect of chilling speech, smothering dissent and turning Americans into obedient conformists, afraid to challenge their government:

… Bentham’s solution was to create “the apparent omnipresence of the inspector” in the minds of the inhabitants. “The persons to be inspected should always feel themselves as if under inspection, at least as standing a great chance of being so.” They would thus act as if they were always being watched, even if they weren’t. The result would be compliance, obedience and conformity with expectations.

A later paragraph quotes Foucault:

Those who believe they are watched will instinctively choose to do that which is wanted of them without even realizing that they are being controlled– the Panopticon induces “in the inmate a state of conscious and permanent visibility that assures the automatic functioning of power.” With the control internalized, the overt evidence of repression disappears because it is no longer necessary: …

There is one problem with the parallel: until Edward Snowden, few people realized that they were living in the Panopticon. There has never been a shortage of tinfoil-hat-wielding conspiracy theorists convinced that the FBI is tuning into our brain patterns from outer space. Occasionally we were given glimpses into the extent of data collection: the New York Times exposé in 2005 on warrantless surveillance, or the AT&T whistle-blower coming forward with the existence of NSA monitoring equipment on AT&T premises. But until the Guardian, New York Times and Washington Post started publishing from the cache of leaked documents, there was no convincing proof, no reason to suspect that every email, phone conversation and text message would be scanned as part of a massive communications dragnet, that every routine credit-card purchase feeds into an operation designed to ferret out terrorist-financing networks. Delusions aside, it is difficult for a society to be intimidated or cowed into submission by something it does not believe exists. Not only was the extent of surveillance unknown, but intelligence agencies very much preferred to keep it that way, using every opportunity to deny the allegations.

This is a key difference from Bentham’s frequently misused Panopticon: in the Panopticon inmates lived every moment fully aware of the constant possibility– if not actual reality of– being under observation. The architecture not only enabled constant surveillance, but it was fully transparent about that objective. Manifesting the presence of surveillance to its targets is an integral part of the design. There is no question about whether privacy exists in this system, no warm-fuzzy concepts of due process, 4th amendment protections and warrant requirements.

Intelligence operations by contrast thrive on secrecy. Extraordinary measures are taken to keep their existence hidden. A surveillance target aware of being watched, the theory goes, will modify his/her behavior, exercise greater caution or attempt to hide their tracks. In the extreme scenario, they might even refrain from communicating the information or carrying out the actions the operation hoped to uncover. If surveillance were perfectly ubiquitous, one could argue that state of affairs is just as good as having collected actionable intelligence: if all terrorists gave up on conspiring to plan new attacks out of fear of being watched, fewer attacks may result and society is safer. But frequent exaggeration and hyperbole aside, no surveillance system yet built is quite that omnipresent. Knowing that communications in one medium are carefully watched motivates targets to conduct their activities over different channels where greater privacy is assumed to exist. Surveillance exposed is surveillance rendered ineffective. Distrust in US technology companies in the aftermath of the PRISM revelations is one example of this effect. Fewer people entrusting their data to Google, MSFT and Yahoo will be counterproductive for a system that relied on those companies to provide a wealth of information.

The Internet became a true Panopticon only after Edward Snowden came forward and his message reached a mass audience. Paradoxically then, to the extent that chilling effects on political speech have resulted in the wake of NSA revelations, Greenwald, Poitras and Gellman were key players in creating the Panopticon. In true ignorance-as-bliss fashion, our previous state of misguided expectations around privacy may have been far more conducive to free expression after all. But in this case we can only be grateful to Snowden for resetting those expectations and starting the debate on intelligence reform.


