How to fail at code-signing, the Sony edition

Another massive corporate security breach? Readers well-versed in the contemporary history of data breaches may not be surprised to learn that the target this time was Sony, a repeat customer for enterprising attackers since 2011. But they might still be surprised to learn both the extent of the damage and the surprisingly weak internal processes used by Sony in managing critical assets. Leaked internal documents and emails are making the rounds, despite best efforts by Sony to put the genie back in the bottle, both by mounting alleged denial-of-service attacks and by enlisting its army of attorneys to mail out nastygrams to journalists as Christmas presents.

Here we will focus on just one aspect of the breach: the theft of code-signing keys. These were the credentials used by Sony for digitally signing applications authored by the company, proving their origin. Granted, “authored by Sony” does not automatically translate into “safe” or “not riddled with vulnerabilities.” Knowledge of authorship attested by the digital signature is at best intended to be a proxy for the reputation of the software publisher. A reputable company would not be in the business of publishing harmful malware, goes the theory. Ironically Sony itself has provided a spectacular counter-example to that assumption: in 2005 the company willfully and deliberately published a malicious rootkit designed to implement gratuitous DRM restrictions against copying CDs on personal computers. Granted, at the time digital signatures were not commonly required for Windows kernel-mode drivers, the specific type of binary this rootkit took the form of. In that sense, the question of code-signing was decoupled from the problem of honesty on the part of the software vendor. In principle the terms of use required by CAs prior to issuing a code-signing certificate include warm-and-fuzzy promises by the publisher that they will not use the certificate to sign malicious code.

Back to Sony: this is the organization which, we have learned, keeps passwords in cleartext files and stores encryption keys alongside the content they protect. It would not be out of character for such an enterprise to also mismanage its code-signing keys. Sure enough, signing keys were among the reported casualties of the breach, with signed malware turning up not long after reports of the incident surfaced. (One report claims the sample discovered was just a proof-of-concept created by a security researcher, as opposed to actual malware deployed in the wild for offensive purposes.) Regardless, the certificate in question was quickly revoked by the certificate authority. At least for the Authenticode scheme, the de facto standard on Windows, revocation checking is enabled by default— unlike the half-baked way it is implemented for SSL by most web browsers— providing at least a modicum of assurance that this particular certificate is unlikely to be used for hiding malware.

What about all of the other Sony certificates? Such a large organization is likely to have multiple certificates used by different applications and different teams. What happened to the rest? For that matter, what about Sony SSL certificates? In a data breach the probability of various assets getting compromised is highly correlated. It would be astonishing if an organization so negligent at managing cryptographic keys somehow exercised utmost caution to protect SSL keys. Why aren’t certificate authorities promptly revoking all Sony certificates and forcing the company to reissue? It is worth pointing out that a CA does not have to wait for Sony to request revocation. Terms of use for most CAs include provisions that allow the CA to unilaterally revoke a certificate when it is believed to have been compromised; here is the example from GlobalSign. So are CAs being negligent in waiting for a “smoking gun” in the form of malware signed by the compromised key, instead of proactively taking action?

This question exposes one of the basic conflicts of interest in code-signing systems where trust is mediated by third-party certificate authorities— Authenticode being an example of such an open model. There are risks associated with revoking as well as risks associated with not taking action:

  • Revoking a certificate could mean applications relying on it break. In the case of code-signing, previously signed binaries will no longer validate. Doomsday predictions range from bricked devices to webpages not loading correctly.
  • Not revoking, on the other hand, could mean that malicious binaries are signed under the Sony name and users are tricked into installing such applications. In that case the CA would be guilty of negligence, aiding and abetting attacks against third parties who relied on the CA to vouch for Sony's identity.

It turns out the first problem was already taken into account by Authenticode very early on via time-stamping. Binaries signed by the software publisher are further counter-signed by a trusted third party vouching for the time at which the signature was observed. Revocation standards in turn allow specifying the time from which point onwards the certificate is to be considered untrusted. All signatures made before that time— based on the third-party timestamp— continue to validate. Signatures bearing a later time-stamp, or none at all, are voided.
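
The time-stamping rule described above reduces to a simple comparison. Here is a minimal sketch of that decision logic; the function name and parameters are illustrative, not from any real Authenticode API:

```python
from datetime import datetime, timezone
from typing import Optional

def signature_still_valid(revoked_from: datetime,
                          timestamp: Optional[datetime]) -> bool:
    """A signature survives revocation only if a trusted third-party
    counter-signature proves it predates the revocation cut-off."""
    if timestamp is None:
        # No counter-signature: no way to prove the signature is old.
        return False
    return timestamp < revoked_from

# Certificate revoked effective December 2014 (illustrative dates).
revoked = datetime(2014, 12, 4, tzinfo=timezone.utc)
old_sig = datetime(2013, 6, 1, tzinfo=timezone.utc)
new_sig = datetime(2014, 12, 10, tzinfo=timezone.utc)

assert signature_still_valid(revoked, old_sig)      # timestamped before cut-off
assert not signature_still_valid(revoked, new_sig)  # timestamped after
assert not signature_still_valid(revoked, None)     # no timestamp at all
```

The asymmetry is the point: old, legitimately signed Sony binaries keep working while anything signed after the compromise is voided.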

So why are certificate authorities still erring on the side of not revoking certificates, when there is every reason to believe that Sony is not in control of its private keys? Put bluntly, CAs get paid by the software publisher and at the end of the day are primarily answerable to the publisher, while occasionally throwing a bone to the CAB Forum, a balkanized consortium of certificate authorities and browser vendors attempting to set baseline standards for issuance. The CA has very little responsibility to anyone else. Those users who ended up installing malware signed by “Sony” thinking it was a legitimate certificate? Tough luck. They are not the ones paying $$$ to the certificate authority.
Granted, this question of liability has never been tested in an actual court case. A CA could plausibly pass the buck by pointing the finger at Sony, arguing that it is the certificate “subject”— in other words the customer paying the CA— that is solely responsible for initiating revocation. (There is no denying that Sony should have been quicker on the draw here.) But the fact that CAs go out of their way to reserve unilateral revocation rights undermines that argument: why reserve that right in the first place if the CA is too conflicted to take action when the customer is clearly negligent?


NFC payments and powered-by-the-field mode (part II)

[continued from part I]

The security problem with running NFC payments in PBTF mode is that it takes user consent out of the picture. This defeats one of the main security improvements mobile wallets have over traditional plastic NFC cards: better control over when and which payments are authorized. In addition to the physical proximity requirement created by the short range of NFC, a number of purely software checks exist in the mobile application to guarantee that any payment taking place is one the user fully intended. For example:

  • Most implementations require that the screen is on
  • The correct PIN must have been entered “recently” into the mobile application, based on a configurable time window. For example the original Google Wallet, designed around secure elements, capped this at a conservative 5 minutes. (Interestingly enough, the more recent host-card emulation release defaults to 24 hours, effectively removing this control.)
  • Some implementations display a confirmation dialog with the requested amount/currency. The consumer must acknowledge this before the transaction is finalized.
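
The PIN-recency check above is easy to picture in code. This is a hedged sketch of the idea only; the class and method names are hypothetical, not from the actual Google Wallet source:

```python
import time

# Conservative 5-minute window, as in the original secure-element design.
PIN_WINDOW_SECONDS = 5 * 60

class WalletState:
    """Tracks when the PIN was last entered and gates payment authorization."""

    def __init__(self):
        self.last_pin_entry = None

    def pin_entered(self, now=None):
        self.last_pin_entry = now if now is not None else time.time()

    def may_authorize(self, now=None):
        now = now if now is not None else time.time()
        if self.last_pin_entry is None:
            return False  # PIN never entered this session
        return (now - self.last_pin_entry) <= PIN_WINDOW_SECONDS

w = WalletState()
assert not w.may_authorize(now=0)   # no PIN yet: tap refused
w.pin_entered(now=0)
assert w.may_authorize(now=200)     # ~3 minutes later: inside the window
assert not w.may_authorize(now=600) # 10 minutes later: PIN required again
```

Note that the timestamp lives on the host, not in the SE, which is exactly the weakness discussed next.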

The key observation is that such controls are implemented partly or completely by the mobile Android/iOS application. For example the PIN time-out in Google Wallet uses Android timers set to kick in after the configured time elapses. Why can’t this logic reside in the SE applet itself? The first reason is a hardware limitation: the SE cannot maintain a true “wall” clock because it has no power source of its own and is only powered up for brief periods to perform specific tasks such as payments or authentication. The second is a usability/complexity trade-off: in theory payment applets could revert to an “unauthenticated” state after every transaction, requiring a new PIN entry. But that would make it difficult to deal with two common scenarios. The first is retrying the same transaction after it did not go through– a common problem, as consumers are not used to interacting with NFC readers. The second is completing several purchases in succession, for example in large shopping centers with multiple merchants. In order to avoid manual PIN re-entry in these situations, the mobile application would have to cache the PIN outside the secure element for the same duration, creating a riskier situation than just relying on a timer to issue some command to the SE.

Even if time-outs and deactivation could be implemented reliably with future hardware changes, there is still one fundamental problem with PBTF: the consumer has no inkling when a transaction is taking place, or for that matter what exactly happened. Recall that PBTF can only generate enough juice to power up the secure element and NFC controller chips; the small burst of electricity supplied by the reader is not enough to power up the far more resource-hungry mobile device itself. (Inductive charging exists for mobile devices such as the Nexus 4 from 2012. But even if point-of-sale terminals supplied that much power, it would take around half a minute for a phone to boot up and reach the point of running the payments application, far too long to be useful for tap payments where deadlines are measured in fractions of a second.) With the main processor powered off, there is no way to inform the user with a beep/vibration that a transaction has taken place, much less ask them for confirmation ahead of time. Given those constraints, the secure element applet responsible for payments must fend for itself when deciding whether to proceed. Imagine the code waking up to find itself powered by the external field and faced with a request for payment. (Incidentally, neither the NXP nor the Broadcom/Oberthur chips at the time provided any means to distinguish that from regular powered-by-host mode, but let’s assume future hardware will solve that.) Is that because the consumer is holding up their phone against a real point-of-sale terminal at a store to check out? Or is it an unauthorized person holding a rogue NFC reader against the pocket of a fellow bus passenger, who remains oblivious to the virtual pick-pocketing? PBTF takes away one of the main advantages of having a dual-interface secure element coupled to a smartphone: a trusted user interface for communicating user intent.

That determination cannot be made without the host OS participating– which it cannot do while powered off. One can imagine various heuristics to permit “low-risk” transactions: authorize a maximum of 10 transactions, approve only if the requested payment amount is small (but don’t forget about currency in addition to the numeric amount, as this paper demonstrates) or approve only if the merchant is recognized. The problem is that only the first of these can be implemented reliably. Merchant authentication is rudimentary at best in EMV. Meanwhile the mag-stripe profile of EMV employed for backwards compatibility does not authenticate purchase amounts at all. In fact there is not even a provision to send the payment amount to the phone. Amusingly MasterCard MMPP added that capability, but this is window dressing: the amount/currency does not factor into the calculation of CVV3. A POS can claim to charge 5¢ while interacting with the phone but then authorize $1000.
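
Of the three heuristics, only the transaction counter survives scrutiny, because it depends on nothing the reader claims. A sketch of that one enforceable check, with invented names (real SE applets are Java Card, not Python):

```python
# Cap on field-powered transactions before the host OS must intervene.
# The limit of 10 matches the hypothetical heuristic discussed above.
MAX_PBTF_TRANSACTIONS = 10

class PaymentApplet:
    """Toy model of an SE applet rationing powered-by-the-field payments."""

    def __init__(self):
        # In a real applet this counter would live in persistent EEPROM,
        # surviving power loss between taps.
        self.pbtf_count = 0

    def approve_pbtf_transaction(self) -> bool:
        if self.pbtf_count >= MAX_PBTF_TRANSACTIONS:
            return False  # budget exhausted: refuse until host resets it
        self.pbtf_count += 1
        return True

applet = PaymentApplet()
results = [applet.approve_pbtf_transaction() for _ in range(12)]
assert results.count(True) == 10   # first ten taps succeed
assert results[-1] is False        # everything after is refused
```

Amount and merchant checks are deliberately absent: as noted above, the mag-stripe profile authenticates neither, so the applet would be trusting attacker-supplied data.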

Finally there is the principle of least surprise: consumers do not expect their phones to be capable of doing anything— much less spending money— when they are powered off. (Contrary to claims that the NSA can continue tracking phones when powered off.) That argues for making PBTF an opt-in choice for users to enable once they are familiar with the technology. This fact was impressed on a colleague of this blogger during the early days of Google Wallet. While trying to demonstrate how the new technology was perfectly safe by tapping a powered-off Nexus S against a Vivotech 4500, the engineer was shocked to hear the familiar beep of a successful transaction emitted from the reader. It turned out that he had powered off the phone by yanking the battery. Because of that abrupt shutdown, the NFC stack never had a chance to disable card-emulation mode. With the NFC controller set to card-emulation and PBTF enabled by default in early NFC controller firmware, that left the secure element “armed” to conduct payments. (Luckily the reader was not connected to any POS, so no actual money changed hands during this instructive, if somewhat unexpected, demonstration.) That experience promptly inspired a firmware update to disable PBTF by default for Android devices.


When wallets run out of juice: NFC with external power

One of the common refrains of criticism about NFC payments on mobile devices concerns what happens when the phone runs out of power. The common response is that users can always fall back on their “regular wallet,” with the implicit assumption that nobody is foolish enough to rely only on their mobile wallet just yet. But unless the payment options are identical between their plastic and mobile incarnations— which is rarely the case, because of tokenization and virtual cards— that strategy has limitations. For example refunds are usually required to go back on the same credit card used in the purchase. The worst-case scenario is public transit, where the same card has to be used on both entry and exit from the system, as in the case of BART turnstiles in the Bay Area. If the consumer taps in with their phone and the phone then runs out of battery before they get a chance to tap out on the other side, they have some explaining to do.

But it turns out that running out of battery power is not necessarily an impediment to making payments. The trick is a feature known as “powered-by-the-field,” or PBTF for short. This is not exactly a new capability: consider how ordinary plastic smart-cards operate. There is no internal battery powering most of these devices, which also explains why they do not have persistent clocks to track time. Instead they draw current from the reader to power their internal circuitry for those few seconds when the card is called on to perform some cryptographic task. Initially that power was conducted via the brass plate which comes into direct contact with corresponding metal leads inside the reader. Later RFID cards were introduced, which can operate without direct metal-on-metal contact with the reader surface– hence the name “contactless.” These cards instead draw power from the induction field generated by the reader, using an antenna that is typically placed around the outside perimeter of the card to maximize its surface area.

Since NFC is effectively a specific type of RFID technology, it is not surprising that the same principle translates to contactless cards used for tap-and-pay purchases. What is less obvious is that this capability carries over to their mobile incarnation, with some caveats. We covered earlier how NFC-equipped phones can be viewed as having the internals of a dual-interface smart card embedded inside, called a “secure element” in these scenarios. (This is an over-simplification: cards usually have a simple analog antenna, while mobile versions are equipped with more complex NFC controller chips that support additional NFC functionality.) That same hardware can draw enough power from the external field to carry out complete payments and similar NFC scenarios such as physical access or public transit. In theory then, running out of battery is not a problem– although consumers still need a battery present, because the antenna is usually incorporated into the battery itself.

So why isn’t this feature advertised more prominently? Because it is optional and typically disabled for good reason. (Google Wallet designers also opted for disabling PBTF mode.) Next post will go into why conducting payments this way can be problematic.


One-time use credit cards and Google Wallet

Another day, another announcement of a payment service being shuttered. This time it is the payments API for digital goods from Google Wallet. (Not to be confused with in-store NFC payments, which are doing well, liberated from restrictions imposed by wireless carriers trying to push their competing ISIS/Softcard offering.) Business reasons for why a service does not achieve critical mass are often complex and subject to debate. But we can at least look at the difference in underlying technology between the discontinued service and a related offering that is being actively developed: the Instant Buy API.

As the documentation describes in passing, the Instant Buy API is an interesting application of one-time use credit cards. This is an ancient idea, proposed many times in the past but rarely implemented on a large scale for consumers. (Bank of America has recently tried its hand.) It has dual objectives: improving privacy and creating resilience against data breaches affecting merchants, such as the recent Target and Home Depot debacles. Unlike a plastic credit card, which will always return the same card number when swiped repeatedly, single-use cards are designed to produce unique card data for each purchase. There is a range of possible definitions for “unique:” from accepting only one transaction with that particular merchant to being truly unlinkable, in the sense that it is not possible for multiple merchants to pool their information and realize that a series of transactions in fact belongs to the same user. (The latter goal is harder to achieve: in addition to using different card numbers, other metadata encoded on the card such as expiration, cardholder name and CVV number must be diversified.)

Naturally achieving that effect with standard plastic cards using magnetic stripes is very difficult. Recently introduced programmable mag-stripe technology is in principle capable of doing this by varying the data encoded on the stripe, although none of the existing deployments have taken that route. NFC payments and chip & PIN protocols also fall short of “one-time use” by that strict definition. While the protocols provide replay protection– data collected by the merchant from the card cannot be reused to make fraudulent transactions at another merchant– it is still the same card number that appears in all cases. EMV tokenization does not fix that problem either: there is a single unique identifier, different from the “real” credit-card number but still linking all transactions conducted with that card.

So how does Instant Buy achieve this objective? The API is aimed at merchants who want to accept payments from Google Wallet users in mobile web and application scenarios. That would have been just another proprietary payment option (or “rails,” to use the common terminology) similar to PayPal. But the key difference is that movement of funds proceeds through existing credit-card networks. When the user completes a purchase at an ecommerce site using this scheme, Google returns what looks like a credit card to the site, containing the usual assortment of fields: card number, expiration, CVV2. That website can now go through its existing card processor for a routine card-not-present transaction, in exactly the same way it would if those same card details had been entered manually into a checkout form.

If that sounds a lot like the virtual cards used for NFC payments in the Google Wallet proxy model, that’s because it is. The same TxVia-derived backend powers both solutions. The difference is that for NFC transactions, a single virtual card is provisioned to the phone each time the wallet is initialized. That card has a generous expiration date on the order of years, because it is used repeatedly over the lifetime of the wallet. Each transaction still includes a unique dynamic CVV or CVV3 code to prevent reuse of card data, but the card number as observed by the merchant is fixed. By contrast Instant Buy generates a unique MasterCard for each transaction, with a relatively short expiration time specifically bound to that transaction.
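
Conceptually, minting a one-time card is just generating a fresh, well-formed number per checkout. This sketch is purely illustrative: the BIN prefix, field names and 16-digit format are assumptions, and only the Luhn check-digit computation is standard:

```python
import secrets

def luhn_check_digit(partial: str) -> str:
    """Compute the final digit so the full number passes the Luhn check."""
    total = 0
    # With the check digit appended, partial's rightmost digit sits second
    # from the right of the full number, so it gets doubled (i == 0 here).
    for i, ch in enumerate(reversed(partial)):
        d = int(ch)
        if i % 2 == 0:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)

def issue_one_time_card(bin_prefix: str = "510000") -> dict:
    """Mint a fresh, Luhn-valid 16-digit card number plus a random CVV2.
    The BIN prefix here is a made-up MasterCard-range placeholder."""
    body = bin_prefix + "".join(secrets.choice("0123456789") for _ in range(9))
    return {
        "number": body + luhn_check_digit(body),
        "cvv2": f"{secrets.randbelow(1000):03d}",
        # expiration would be short-lived and bound to the transaction
    }

first, second = issue_one_time_card(), issue_one_time_card()
assert first["number"] != second["number"]  # each checkout sees a fresh number
```

Because every number is network-valid, the merchant's existing processor handles it like any other card-not-present transaction, which is the whole point of the design.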

That alone is valuable from a security perspective. It provides the same resilience against data breaches for online payments as NFC does for in-person payments at brick & mortar retailers. The potential privacy benefit, on the other hand, is not realized in this scenario. Additional identifying data is returned to the merchant to help complete the transaction, including customer name and billing/shipping addresses.

Another caveat about the transaction model: “one-time” is a slight misnomer, and strictly one-time use would in fact be an undesirable feature. For example when an order is returned, merchants need to refund all or part of the original amount back to the card. Even fulfilling a single order may involve multiple charges, as when multiple shipments are required to handle items temporarily out of stock. It is more accurate to view virtual cards as being associated with a specific transaction, as opposed to only being valid for one “payment” in the strict sense of card networks. As described in the virtual cards FAQ:

  • Transaction limit: order amount + 20% (to account for increased transaction size due to extra shipping charges etc.)
  • One time card allows for multiple authorization […]
  • Authorizations, captures, refunds, chargebacks etc. work as usual
  • The card expires 120 days after the end of the month when the transaction was initiated.
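
The expiration rule in that last bullet is concrete enough to sketch in code. A minimal illustration of the stated policy (the function name is our own):

```python
from datetime import date, timedelta

def card_expiry(transaction_date: date) -> date:
    """Per the FAQ: the one-time card expires 120 days after the end of
    the month in which the transaction was initiated."""
    # Find the last day of the transaction month by jumping to the first
    # of the next month and stepping back one day.
    if transaction_date.month == 12:
        first_of_next = date(transaction_date.year + 1, 1, 1)
    else:
        first_of_next = date(transaction_date.year, transaction_date.month + 1, 1)
    end_of_month = first_of_next - timedelta(days=1)
    return end_of_month + timedelta(days=120)

# A mid-December purchase stays refundable well into spring.
assert card_expiry(date(2014, 12, 15)) == date(2015, 4, 30)
```

That window comfortably covers typical return and chargeback periods, which is presumably why the policy is anchored to month-end rather than the purchase date itself.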

In other words, the system strikes a good balance between maintaining compatibility with the existing credit-card processing practices used by websites and offering improved security for users. By comparison, the deprecated digital goods API defined its own rails for moving funds, requiring greater integration effort as well as tight coupling between Google and merchants.


The challenge of cryptographic agility

Getting stuck with retro/vintage cryptography

Developers and system designers are overly attached to their cryptographic algorithms. How else to explain the presence of an MD5 function in PHP? This is not exactly a state-of-the-art hash function. Collisions were discovered in the round function as early as 1996. An actual collision arrived in 2004 with the work of Wang et al. As usual one paper is never enough for the problem to register broadly. Many certificate authorities continued to use MD5 for issuing certificates, with the predictable outcome: a spectacular break in 2008, when security researchers were able to obtain a fraudulent intermediate CA certificate from RapidSSL.

Yet die-hard MD5 fans continue to lurk in the open-source development community. State-of-the-art “code integrity” for open source projects often involves publishing an MD5 hash of a tarball on the download page. (That approach does not work.) The persistence of MD5 also explains why the Windows security team at one point had an official “MD5 program manager” role. This unfortunate soul was tasked with identifying and deprecating all MD5 usage across the operating system. It turns out he/she still missed one: the Terminal Services licensing CA continued to issue certificates using MD5 until 2012, a vulnerability exploited by the highly sophisticated Flame malware, believed to have been authored by nation-states.

How many hash functions?

It would be easy to chalk up the presence of an MD5 function in PHP to, well, it being PHP. But it is not only language designers who make these mistakes. What about the Linux md5sum command-line utility for computing cryptographic hashes of files? Microsoft one-upped this with the “file checksum integrity verification” or FCIV utility, which can do not only MD5 but also the more recent vintage SHA1. Sadly for the author of that piece of software, SHA1 is also on its way out. While no one has yet demonstrated even a single collision, it is widely believed that attacks targeting MD5 can be extended to SHA1. Chrome is trying to phase out SHA1 usage in SSL certificates.
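
What these utilities do under the hood is a few lines with Python's hashlib. Note how taking the algorithm name as a parameter, rather than baking MD5 into the tool's name, is precisely the agility question this post is circling:

```python
import hashlib
import os
import tempfile

def file_digest(path: str, algorithm: str = "sha256") -> str:
    """Hash a file in chunks, with the algorithm as a swappable choice."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Quick self-check against the well-known SHA-256 value for b"hello".
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"hello")
digest = file_digest(tmp.name)
os.unlink(tmp.name)
assert digest == (
    "2cf24dba5fb0a30e26e83b2ac5b9e29e"
    "1b161e5c1fa7425e73043362938b9824"
)
```

A tool written this way migrates from SHA1 to SHA2 or SHA-3 with a one-line change, instead of a rename and rewrite.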

One person cataloged 10 GUI-based applications for computing file-hashes. Several of these utilities also hard-coded MD5 and SHA1 as their choice of hash. Even the ones that decided to go all-out by adding SHA2 variants are now stuck: NIST has selected Keccak as the final SHA-3 standard.

The mistake is not limited to amateurs. Designers of the SSL/TLS protocols went to great pains to provide a choice of different algorithm combinations called “ciphersuites,” negotiated between the web browser and web server based on their capabilities and preferences. Yet until recent versions, the protocol also hard-coded a choice of hash function into the record layer, as well as into the client-authentication scheme: MD5 and SHA1 concatenated together, just in case one of them turned out to be weak. (There is an extra measure of irony here beyond the hapless choice of MD5: as Joux pointed out in 2004, concatenating hash functions based on the Merkle-Damgard construction– which includes MD5 and SHA1– does not result in a hash function with the naively expected “total” security of both algorithms added together.)
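
For the curious, the hard-coded construction in question amounts to this (a sketch of the pre-TLS-1.2 combined hash, not an excerpt of any TLS implementation):

```python
import hashlib

def tls_legacy_hash(data: bytes) -> bytes:
    """MD5 and SHA1 over the same input, concatenated into 36 bytes,
    as pre-1.2 TLS hard-coded into signatures. Per Joux's 2004 result,
    for Merkle-Damgard functions this is barely stronger than SHA1 alone."""
    return hashlib.md5(data).digest() + hashlib.sha1(data).digest()

combined = tls_legacy_hash(b"handshake transcript")
assert len(combined) == 16 + 20  # 128-bit MD5 digest + 160-bit SHA1 digest
```

Because the pairing was fixed in the protocol rather than negotiated, retiring either half required a new protocol version, not just a new ciphersuite.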

Seemed like a good choice– at the time

Lest we assume these failures are a relic of the past: the mysterious person or group who designed Bitcoin under the pseudonym Satoshi Nakamoto also fell into the trap of hard-coding the choice of elliptic curve and hash functions. And the bad news is that this choice is going to be much harder to undo than introducing more options into TLS. After all, no one has to go back and “repeat” past SSL connections when an algorithm shows weaknesses. If Bitcoin ever needs to change curves because the discrete logarithm problem in secp256k1 turns out to be easier than expected, users will have to “convert” their existing funds into new coins protected by different algorithms. Since that involves extending the scripting language used to verify transactions, it will represent a hard fork of the protocol.

It is unclear why Satoshi picked ECDSA over RSA. Space savings? But these are partly illusory: RSA signatures permit message recovery, allowing the signer to “reclaim” most of the padding taken up in the signature for storing additional information. ECDSA also has the problem that signature verification is roughly as expensive as signing; see the Crypto++ benchmarks as one data point. Typically a transaction is signed once but verified thousands of times by other participants in the system. RSA has just the right distribution of work for this: signing is slow, verification is quick. Given that Bitcoin emerged around 2008, the secp256k1 curve, standardized only a few years earlier, probably looked like a safe bet. Of course we can also imagine someone in 1995 deciding that this brand-new hash function called “MD5” would be a great choice to base their entire security on.

Cryptographic agility

The common failure in all of these cases is a lack of cryptographic agility. Agility can be described as three properties, in order of priority:

  • Users can choose which cryptographic primitives (such as encryption algorithm, hash-function, key-exchange scheme etc.) are used in a given protocol
  • Users can replace the implementation of an existing cryptographic primitive, with one of their choosing
  • Users can set system-wide policy on choice of default algorithms
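
The first property above is what TLS ciphersuite negotiation gets right. A minimal sketch of the idea, with an invented registry and preference lists:

```python
import hashlib

# Registry of named primitives: the protocol carries the *name*,
# so new algorithms slot in without changing the wire format.
HASHES = {
    "md5": hashlib.md5,       # kept only for legacy peers
    "sha1": hashlib.sha1,
    "sha256": hashlib.sha256,
}

def negotiate(our_prefs, their_prefs):
    """Pick the first mutually supported algorithm from our preference list."""
    for name in our_prefs:
        if name in their_prefs and name in HASHES:
            return name
    raise ValueError("no common hash algorithm")

chosen = negotiate(["sha256", "sha1"], ["sha1", "sha256", "md5"])
assert chosen == "sha256"  # strongest mutual option wins by our ordering
digest = HASHES[chosen](b"payload").hexdigest()
```

Deprecating MD5 in this design means deleting one dictionary entry and reordering a list, rather than shipping a new protocol, which is the whole argument for agility.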

It is much easier to point out failures of cryptographic agility than success stories. This is all the more surprising because it is a relatively well-defined problem, unlike general software extensibility. Instead of having to worry about all the possible ways a piece of code may be called on to perform a function not envisioned by its authors, all of the problems above involve swapping out interchangeable components. Yet flawed protocol design, implementation laziness and sometimes plain bad luck have often conspired to frustrate that. Future blog posts will take up this question of why it has proved so difficult to get beyond MD5 and SHA1, and how more subtle types of agility failure continue to plague even modern, greenfield projects such as Bitcoin wallets.


Privacy in NFC payments: mobile wallets and prying eyes

Continuing the exploration of privacy in NFC payments, we next focus on the code that runs on the smartphone and what those applications can learn about user transactions. Unlike an inert plastic smart-card, a smartphone is a general-purpose computer connected to a network. Given that mobile devices have been implicated in many privacy intrusions, it is natural to ask whether applications can snoop on purchase history. Due to variation in mobile wallet architectures there is no clear-cut answer, but we can explore the design space for privacy.

First it is worth distinguishing between the wallet application itself and third-party apps. For example if a bank provided a mobile application for NFC payments, the privacy question is largely moot– the bank already has visibility into payments by virtue of authorizing them. (Unless of course the application allows the bank to learn additional information, such as line-level item data, above and beyond what the bank is entitled to observe as card issuer.) On the other hand, if the wallet provider is distinct from the issuing bank, as in the case of Apple Pay, it is fair game to ask what the mobile application sees. There is an even greater concern with third-party applications unrelated to the wallet provider or financial institution trying to snoop on purchase history.

The wallet application is typically involved in the purchase transaction for user-experience reasons. This holds true even when the actual payment credentials live on a dedicated secure element chip, with all NFC communication taking place between the point-of-sale and that secure element, as is the case when card-emulation is enabled. Recall that the secure element itself has no user interface and cannot communicate with the user to walk through various steps of the flow, such as entering a PIN, displaying a dollar amount for confirmation or signaling that a transaction has completed. Since NFC communications from the POS are normally routed directly to the SE, this typically involves some other mechanism for notifying the host operating system. For example on Android the NFC controller generates notifications for specific events, such as detecting the presence of an external NFC field or observing a command to SELECT a particular application on the secure element. Such notifications are delivered to relevant applications, such as Google Wallet in the case of Android. If these notifications were more broadly available, third-party applications could also observe a transaction taking place. Typically this is prevented by delivering the notifications only to privileged applications.

At a minimum, then, the wallet application knows that a transaction was attempted. In fact the payment applets on the secure element will typically have even more information that can be queried by the host, including the purchase amount and the status of the last transaction. This is all part of the specification for SE applets mandated by the payment network for interoperability, such as MasterCard Mobile PayPass. What the host application does not learn is also a direct consequence of what the protocol communicates to the card from the point-of-sale terminal:

  1. Identity of the merchant. Typically merchant is not authenticated in the transaction.
  2. Line-level item information; in other words, the contents of the shopping cart.

The first one is not too difficult to infer. A mobile device is equipped with a number of radios including GPS, Wi-Fi and Bluetooth. Approximate GPS location, the identity of a nearby wireless network or the presence of an iBeacon can all be used to identify a particular merchant as the setting for a shopping excursion. (This is especially easy today, when so few merchants support NFC payments that there is often only one candidate in a given area.) In principle then the wallet application supervising NFC transactions has at least as much visibility into transaction history as the issuing bank. Apple could in principle compile the same information on Apple Pay users that our banker friend was worried about Google getting as the issuer-of-record for virtual cards. That is not to say Apple is collecting that information today. The iOS security guide emphasizes that “Payment transactions are between the user, the merchant and the bank,” no doubt to allay concerns about Apple becoming yet another middleman. But it is technically possible for a mobile wallet application– and by extension its provider– to help itself to that information.

The second one however is not as easy to work around. Unless the core EMV payment protocol is somehow augmented with out-of-band data provided by the cash register, there is no way for a mobile device to divine this information using its own sensors. For all the worrying about keeping data away from Google, there is one other leading company that does have visibility into this highly coveted consumer purchase history: Square. By providing the complete point-of-sale software, Square controls the checkout experience, including the way items are rung up. Square in that sense is the ultimate intermediary: it acts as the “merchant” as far as the payment network is concerned, while also tapping into a rich stream of line-level item data from every purchase.



PIV provisioning on Linux and Mac: getting certificates (part III)

Continuing to the next step of PIV provisioning with open-source utilities, we need to generate a certificate signing request (CSR) corresponding to the key-pair generated on the card. This is a signed document generated by the person enrolling for a certificate, containing information they would like to have appear on the certificate, including their name, organization affiliation and public-key. The certificate authority verifies this information, makes changes/amendments as necessary and issues a certificate. This ritual for using CSRs is largely a matter of convention. Nothing stops a CA from issuing any certificate to anyone given just the public key. But most CA implementations are designed to accept only valid CSRs as input to the issuance procedure.
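For comparison, this is the conventional ritual with an ordinary software key; the subject fields below are made up for illustration:

```shell
# Generate a fresh RSA key-pair and a CSR over it in one shot
# (subject fields are placeholders, not from any real enrollment):
openssl req -new -newkey rsa:2048 -nodes \
    -keyout key.pem -out request.csr \
    -subj "/O=Example Corp/CN=Alice Example"

# Inspect the fields the CA is expected to verify before issuance:
openssl req -in request.csr -noout -text

# A CSR is self-signed with the enclosed key; check that signature:
openssl req -in request.csr -noout -verify
```

With a PIV card the difference is that the private key never leaves the hardware, so the signing step must be delegated to the card.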

Offloading crypto in openssl

That calls for constructing a CSR with the proper fields, including the public-key that exists on the card, and getting that object signed using the PIV card. openssl provides both ingredients: the req subcommand for constructing CSRs, and an engine interface for delegating private-key operations to external hardware.

Our strategy then is to find a suitable engine implementation that can interoperate with PIV cards and use it for signing the CSR. Fortunately the OpenSC project provides precisely such an engine, built on top of the same PKCS #11 library that figured in previous PIV scenarios.

It is possible to script openssl to load an engine first and then perform an operation using keys associated with that engine. Here is an example gist for CMS decryption. It issues two commands to openssl. The first loads the engine by specifying the shared-library path and assigning it a name for future reference. The second performs the CMS decryption using a private-key associated with that engine, in this case key ID 01 corresponding to the PIV authentication key. (Somewhat confusingly, the PKCS #11 layer does not use the PIV key-reference identifiers.)
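A sketch of that two-command pattern, feeding openssl's interactive prompt from a heredoc; the engine and module paths are typical Linux locations and may differ on a given system:

```shell
# Command 1 loads engine_pkcs11 dynamically, names it "pkcs11" and points
# it at OpenSC's PKCS #11 module. Command 2 decrypts a CMS blob using the
# key with PKCS #11 id 01 (the PIV authentication key). Requires a card
# in the reader; paths below are illustrative.
openssl <<'EOF'
engine dynamic -pre SO_PATH:/usr/lib/engines/engine_pkcs11.so \
    -pre ID:pkcs11 -pre LIST_ADD:1 -pre LOAD \
    -pre MODULE_PATH:/usr/lib/opensc-pkcs11.so
cms -decrypt -engine pkcs11 -keyform engine -inkey 01 \
    -in message.cms -inform DER -out plaintext.txt
EOF
```

Both commands run in the same openssl process, which is what allows the second one to reference the engine loaded by the first.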

Boot-strapping problem

It would appear straightforward to create a similar script to load the engine and invoke CSR generation while referencing a key associated with the card. But this runs into a subtle problem: OpenSC middleware assumes there is no key present in a particular slot unless there is also an associated certificate. This is a reasonable assumption for PIV cards in steady-state: each key type defined in the standard, such as PIV authentication or digital signature, has both an X509 certificate and associated private-key material. But during provisioning that assumption is temporarily violated: a key-pair must be generated first, before we can have a certificate containing the public piece.
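Concretely, the attempt would look something like this (same illustrative paths as before, with a placeholder subject name); on a freshly generated slot with no certificate, OpenSC reports no such key and the request fails:

```shell
# Load the engine, then ask req to sign a new CSR with on-card key id 01.
# This is the naive approach; it only works once a certificate occupies
# the slot, which is exactly what we do not have yet during provisioning.
openssl <<'EOF'
engine dynamic -pre SO_PATH:/usr/lib/engines/engine_pkcs11.so \
    -pre ID:pkcs11 -pre LIST_ADD:1 -pre LOAD \
    -pre MODULE_PATH:/usr/lib/opensc-pkcs11.so
req -new -engine pkcs11 -keyform engine -key 01 \
    -subj "/CN=PIV cardholder" -out request.csr
EOF
```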

That leads to a boot-strapping problem: OpenSC middleware cannot use the private-key on the card without an associated certificate also present on the same card. But we cannot obtain a certificate unless we first sign a CSR using the private-key.

It turns out OpenSC is not to blame for this. The Windows PIV minidriver has the same behavior: it reports no credentials present on a card without certificates, even if keys have been generated for all four slots. The root cause is a subtlety in the PIV command interface itself: there is no mechanism for retrieving a stand-alone public-key. The assumption is that public-keys are extracted from certificates. In the absence of a certificate, there is no uniform way to even learn the public-key for the purpose of placing it in the CSR.**

Signing the hard way

The above observation also rules out a kludge that at first sight seems promising: provision a dummy certificate containing an unrelated public-key to make OpenSC happy. That would allow signing to “work”– it turns out the middleware does not check the signature for consistency against the public key in the certificate. But the CSR generated this way would still contain the wrong public-key, drawn from the placeholder certificate.

Boot-strapping requires a more complex sequence involving manual ASN1 manipulation:

  1. Save the output from key generation step
  2. Craft an unsigned CSR containing that public-key
    • This is easiest to accomplish by taking an existing CSR with all of the other fields correct, removing the signature and “grafting” a new public-key in place of the existing field.
  3. Get a signature from the card.
    • This can be done either by using the placeholder-certificate approach described above or directly issuing a GENERAL AUTHENTICATE command to the PIV application.
  4. Combine the signature with the modified request from step #2 above to create a valid signed CSR.
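The sequence above can be sketched with pkcs11-tool and openssl's ASN.1 utilities. File names and the asn1parse offset are illustrative, the grafting in step 2 requires manual editing or a small script, and the signing step assumes the placeholder-certificate trick so that OpenSC exposes the key at all:

```shell
# 1-2. Extract the to-be-signed body (CertificationRequestInfo) from a
#      template CSR. The offset after -strparse (4 here) is illustrative;
#      find the real one with: openssl asn1parse -in template.der -inform DER
openssl asn1parse -in template.der -inform DER -strparse 4 \
    -out body.der -noout
#      ...then graft the card's public key, saved during key generation,
#      into body.der in place of the template's SubjectPublicKeyInfo.

# 3.   Sign the modified body on the card. The SHA256-RSA-PKCS mechanism
#      hashes internally, so the raw body is the input; id 01 is the PIV
#      authentication key in the PKCS #11 view, -l prompts for the PIN.
pkcs11-tool -l --sign --id 01 -m SHA256-RSA-PKCS \
    -i body.der -o body.sig

# 4.   Wrap body + AlgorithmIdentifier + signature back into a DER CSR
#      (small script or ASN.1 templating), then sanity-check the result:
openssl req -in request.der -inform DER -noout -verify
```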

As a side-note, generating a new CSR once a valid certificate exists is much easier: the key is unchanged and the certificate is consistent with the key. It is much easier to go from a card already set up with self-signed certificates (for example, provisioned on Windows via certreq) to obtaining new certificates from a different CA using off-the-shelf open-source utilities.

Completing provisioning

Once the CSR is created and exchanged for a valid certificate, the final step is straightforward: load the certificate onto the card. This is done using piv-tool, authenticating with the card management key just as for key generation.
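A sketch of that final step; the flag spellings follow OpenSC's piv-tool man page and may vary across versions, and the management-key file name is a placeholder:

```shell
# piv-tool reads the card management key from the file named by this
# environment variable, as during key generation:
export PIV_EXT_AUTH_KEY=mgmt_key.txt

# Authenticate with the 9B card management key (mutual auth, 3DES = alg 03)
# and load the issued certificate into the PIV authentication slot (9A):
piv-tool -A M:9B:03 -C 9A -i cert.der
```

At this point the slot holds a matching key and certificate, and the card behaves normally with both OpenSC and the Windows minidriver.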


** For ECDSA the public-key can be derived from one valid signature and the associated hash. For RSA there is no similar quick fix, but most cards will return an error when asked to operate on an input larger than the modulus; this can be used to binary-search for the correct modulus.

