FakeID, Android NFC stack and Google Wallet (part IV)

[continued from part III]

Wallet substitution attacks

There is a more indirect way of spying on purchase history: replacing the existing wallet with one controlled by the attacker. We noted that payments cannot be relayed remotely because those commands are only honored over the NFC interface. But administrative commands from the Trusted Service Manager (TSM) responsible for managing the SE are delivered via the contact interface. These are used to download/install/uninstall code and to personalize payment applets after installation. Since FakeID-signed malware can send arbitrary commands over that interface, it can also relay legitimate traffic from the TSM.

Consider an adversary who has already set up Google Wallet on her own local device, trying to attack a remote victim using malware that exploits FakeID.

  • The attacker requests the TSM to perform a wallet reset on the local device.
  • Each time the local TSM agent tries to communicate with the local SE, the commands are instead relayed to the remote victim device and forwarded to its SE by the malware running there.
  • In particular, when the TSM queries for the unique SE identifier (CPLC, or Card Production Life Cycle data), the attacker returns the value associated with the victim SE. This substitution is necessary because the card-manager keys are unique to each SE.
  • Given the wrong CPLC, the TSM now proceeds to configure the victim SE with payment applets representing the attacker. The TSM generates an install script: a sequence of commands encrypted and authenticated using Global Platform secure messaging.
  • These commands are sent to the attacker device, which relays them to the victim SE. Existing applets are deleted and replaced by new instances personalized for the attacker.
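To make the CPLC substitution concrete, here is a minimal sketch in Java. The `Channel` interface and the hard-coded response are hypothetical stand-ins for the malware's link to the victim device; the only grounded detail is the Global Platform GET DATA command for the CPLC (tag 9F7F) that the TSM uses to identify an SE.

```java
import java.util.Arrays;

public class CplcRelay {
    // Global Platform GET DATA for tag 9F7F (CPLC): CLA=80 INS=CA P1=9F P2=7F Le=00
    static byte[] getCplcApdu() {
        return new byte[] { (byte) 0x80, (byte) 0xCA, (byte) 0x9F, 0x7F, 0x00 };
    }

    // Hypothetical channel abstraction for exchanging APDUs with a secure element
    interface Channel { byte[] transceive(byte[] apdu); }

    // Every APDU from the local TSM agent is forwarded to the remote SE instead,
    // so the TSM ends up personalizing the victim SE with the attacker's applets
    static byte[] relay(Channel victim, byte[] apduFromTsm) {
        return victim.transceive(apduFromTsm);
    }

    public static void main(String[] args) {
        // Placeholder response standing in for the victim's 42-byte CPLC record
        byte[] fakeVictimCplc = new byte[] { (byte) 0x9F, 0x7F, 0x2A };
        Channel victim = apdu -> fakeVictimCplc; // stand-in for malware on the victim device
        byte[] response = relay(victim, getCplcApdu());
        System.out.println(Arrays.equals(response, fakeVictimCplc)); // TSM sees the victim CPLC
    }
}
```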

The attacker effectively replaces the victim wallet with her own. Each time the victim makes a transaction, it will be billed to the attacker, who can observe the spending from the transaction history in the cloud.

Such an attack is possible but comes with several caveats, not the least of which is that it stretches the definition of an “attack”: the victim is effectively handed a blank check to spend attacker funds. Of course the adversary can try to minimize costs by ensuring that the backing instruments do not have sufficient funds. In that case each purchase will still show up as a declined transaction, complete with merchant and amount. But that brings us to the second problem: the attack is not very stealthy. At best the attacker can honor that blank check, authorizing every transaction and hoping the victim will not notice that his/her own credit card is never getting charged.

Finally there is another problem confronting the attacker, glossed over above. After the new applets are installed, they still need to be configured with a PIN. Given access to the SE, the attacker can set any initial PIN desired by sending the appropriate incantation over the contact interface. But the existing user PIN is not known. (Recall that FakeID only permits malware to send its own commands to the SE; it does not allow spying on traffic sent by other applications.) Each time Google Wallet collects a PIN from the user and attempts to verify it, that value will be sent to the SE applets. If the PIN is wrong, the applet will respond with an error and also disable payments. Working around that requires additional assumptions about the malware, such as social-engineering the user into entering the PIN into a fake prompt, or masquerading as the legitimate wallet application instead of coexisting alongside it.


Denial of service is one easy category of attacks available when malware can exchange arbitrary traffic with the secure element. For example:

  • Locking the payment application by making too many failed PIN attempts. Resetting Google Wallet recovers from this state by deleting that instance and installing a new one.
  • Locking the SE itself by issuing a reset command to the controller applet. In later iterations of Google Wallet, the controller applet is installed with card-locking privileges. Following Global Platform, when the card is placed into the “locked” state, no application other than the card manager can be selected. This too is a fully reversible state. In fact it happens by design during a wallet reset, with over-the-air commands from the TSM relayed to the SE to unlock it the next time the wallet is initialized.
  • “Bricking” the secure element.** Global Platform-compliant cards typically have a feature designed to defend against brute-forcing card-manager keys: too many failed authentication attempts will permanently lock up the chip. (Note this is authenticating to the card itself to manage all of its contents, as opposed to authenticating to one particular applet such as NFC payments, which can be deleted and reinstalled.) Since a failed authentication is defined as an INITIALIZE UPDATE command not followed by a successful EXTERNAL AUTHENTICATE, malware can simply loop on the first to run up the tally.
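The counter-exhaustion in the last bullet needs nothing more than repeating one command. The sketch below only constructs the Global Platform INITIALIZE UPDATE APDU and counts iterations; the channel to the SE is omitted, and the failure ceiling of 10 is a hypothetical placeholder for the chip's actual limit.

```java
import java.security.SecureRandom;

public class SeBrick {
    // Global Platform INITIALIZE UPDATE: CLA=80 INS=50 P1=key-version P2=00,
    // followed by an 8-byte host challenge
    static byte[] initializeUpdate(byte[] hostChallenge) {
        byte[] apdu = new byte[5 + 8];
        apdu[0] = (byte) 0x80; apdu[1] = 0x50; apdu[2] = 0x00; apdu[3] = 0x00; apdu[4] = 0x08;
        System.arraycopy(hostChallenge, 0, apdu, 5, 8);
        return apdu;
    }

    public static void main(String[] args) {
        SecureRandom rng = new SecureRandom();
        int failedAttempts = 0;
        int limit = 10; // hypothetical failure ceiling before the chip locks up
        // Each INITIALIZE UPDATE not followed by a successful EXTERNAL AUTHENTICATE
        // counts as a failed authentication; looping on it alone runs up the tally
        while (failedAttempts < limit) {
            byte[] challenge = new byte[8];
            rng.nextBytes(challenge);
            byte[] apdu = initializeUpdate(challenge); // would be sent over the contact interface
            failedAttempts++;                          // EXTERNAL AUTHENTICATE is never sent
        }
        System.out.println("failed attempts: " + failedAttempts);
    }
}
```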


To summarize then: contrary to speculation about FakeID affecting NFC payments, there is no easy remote attack for getting sensitive financial data out of the embedded secure element, much less for conducting unauthorized transactions. (And that is assuming there was something present in the SE worth stealing: Google Wallet no longer relies on secure element hardware for payments. Unless Sprint, Samsung or another enterprising OEM launches new NFC wallets, the hardware will lie dormant.) That said, there are legitimate denial-of-service concerns, including permanently and irreversibly locking up the secure element in a way that most consumers can not easily recover from.


** Bricking appears in quotes above because the chip does not quite get TERMINATED as defined by Global Platform. For example the NXP SmartMX goes into a “fused” state when the failure count exceeds the limit. Existing applets can be selected and will continue to function; however, no new applications can be installed or deleted going forward. This is much better than a true “terminated” state, although the chip is still of greatly diminished utility for the user.

FakeID, Android NFC stack and Google Wallet (part III)

[continued from part II]

3. Secure element access

This is probably the most interesting scenario in mind when NFC payments are mentioned as a potential casualty of the Android FakeID vulnerability:

“This can result in a wide spectrum of consequences. For example, the vulnerability can be used by malware to escape the normal application sandbox and take one or more malicious actions: [...] gain access to NFC financial and payment data by impersonating Google Wallet; [...]“

Before exploring this particular threat vector, let’s reiterate that all Google Wallet instances have been using host-card emulation for NFC payments since Q4 2013. That includes both new installations and earlier ones migrated from the secure element architecture. With the SE completely out of the picture, the following discussion is largely hypothetical. Still, for the sake of this exercise, we will consider a concrete version of Google Wallet that leveraged the secure element from two years back, circa August 2012. Could a malicious Android app access any financial information from that version using FakeID?

Interface detection, revisited

The first observation is that the secure element has two interfaces to the outside world: the “contact” (aka wired) interface attached to the host Android operating system, and the “contactless” route directly attached to the NFC antenna. All communication from host applications arrives via the first path, while external NFC readers communicate over the second one. Relevant to the current problem, applications running in the SE can distinguish between these cases and behave differently. Informally this is known as interface detection or interface discrimination. It is one of the cornerstones of the security model against remote-relay attacks.

Payments happen over NFC only

Google Wallet applets are configured to only execute payments over the NFC interface. Barring implementation bugs in the applets’ use of interface detection, it is not possible for an Android application to simulate an external NFC reader to initiate a fraudulent transaction. This holds true even if malware attains root privileges, bypasses the NFC stack and interfaces directly with the raw NFC hardware. Incidentally, the FakeID vulnerability does not permit that level of access; it only enables going through the legitimate NFC stack to exchange traffic with the SE.
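For reference, this is roughly what interface detection looks like from the applet's point of view. The sketch models the media-type check a JavaCard applet can perform via APDU.getProtocol(); the mask and media constants mirror the javacard.framework.APDU values, reproduced locally so the example is self-contained plain Java.

```java
public class InterfaceDetection {
    // Constants modeled after javacard.framework.APDU (JavaCard 2.2.x)
    static final byte PROTOCOL_MEDIA_MASK = (byte) 0xF0;
    static final byte PROTOCOL_MEDIA_DEFAULT = (byte) 0x00;             // contact ("wired") interface
    static final byte PROTOCOL_MEDIA_CONTACTLESS_TYPE_A = (byte) 0x80;  // NFC, ISO 14443 Type A
    static final byte PROTOCOL_MEDIA_CONTACTLESS_TYPE_B = (byte) 0x90;  // NFC, ISO 14443 Type B

    // An applet enforcing "payments over NFC only" would gate its payment
    // commands on a media check like this one
    static boolean isContactless(byte protocol) {
        byte media = (byte) (protocol & PROTOCOL_MEDIA_MASK);
        return media == PROTOCOL_MEDIA_CONTACTLESS_TYPE_A
            || media == PROTOCOL_MEDIA_CONTACTLESS_TYPE_B;
    }

    public static void main(String[] args) {
        // Low nibble carries the ISO protocol, high nibble the media type
        System.out.println(isContactless((byte) 0x01)); // wired interface: refuse payment
        System.out.println(isContactless((byte) 0x81)); // external NFC reader: allow payment
    }
}
```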

Information disclosure

While executing a payment or reading out track data containing the credit card number, expiration etc. will not work, an attacker can still attempt to interact with the SE to get information. As a starting point, it can survey SE contents by trying to select applets using their well-known application IDs, or AIDs. (It is not possible to directly enumerate installed applets; doing that requires authenticating to the SE with unique card-manager keys.) Each flavor of EMV payments for Visa, MasterCard, American Express and Discover has a unique AID that can be used to fingerprint whether a payment instrument of that brand exists. In the case of Google Wallet, only MasterCard is supported. The ability to use other payment networks is achieved via cloud-based indirection; the SE only contains a single “virtual” MasterCard, so there is no concept of backing instruments at that level. As such, the only information gleaned will be whether the device is set up for tap & pay at all. Depending on the SE model, there is also an optional controller applet designed to provide additional constraints on the behavior of the payment applet, and its existence can be used to distinguish between versions of Google Wallet.
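The applet survey boils down to issuing ISO 7816-4 SELECT-by-AID commands and checking the status word. A self-contained sketch follows; the actual channel to the SE is omitted, and the PayPass AID is the published EMV value.

```java
public class AidSurvey {
    // ISO 7816-4 SELECT by name (AID): CLA=00 INS=A4 P1=04 P2=00
    static byte[] selectApdu(byte[] aid) {
        byte[] apdu = new byte[5 + aid.length];
        apdu[0] = 0x00; apdu[1] = (byte) 0xA4; apdu[2] = 0x04; apdu[3] = 0x00;
        apdu[4] = (byte) aid.length;
        System.arraycopy(aid, 0, apdu, 5, aid.length);
        return apdu;
    }

    // MasterCard PayPass AID; other networks (Visa, Amex, Discover) have their own
    static final byte[] PAYPASS_AID = {
        (byte) 0xA0, 0x00, 0x00, 0x00, 0x04, 0x10, 0x10
    };

    // Status word 0x9000 = applet present and selected; 0x6A82 = applet not found
    static boolean appletPresent(byte sw1, byte sw2) {
        return sw1 == (byte) 0x90 && sw2 == 0x00;
    }

    public static void main(String[] args) {
        byte[] apdu = selectApdu(PAYPASS_AID);  // would be sent over the contact interface
        System.out.println(apdu.length);        // 5-byte header plus 7-byte AID
        System.out.println(appletPresent((byte) 0x90, (byte) 0x00)); // set up for tap & pay
        System.out.println(appletPresent((byte) 0x6A, (byte) 0x82)); // no payment applet
    }
}
```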

After locating applets, malware could try interacting with them to gain additional information. But there is very little interesting data to retrieve over the contact interface.** There is a “log” of recent transactions, but it only contains success/failure state. No information about the amount paid or the merchant, let alone line-level item data, is stored. (None of that information was even sent to the SE in the original PayPass mag-stripe profile. While the mobile variant MMPP added support for the terminal to relay amount and currency code, that field is not reliable since it is not authenticated by the CVC3.) In the earliest iterations of Wallet, two different types of MasterCard were supported and the controller applet kept track of the currently active card. In a situation with multiple cards present, malware can observe which card is selected. More interestingly, it can modify the selection and cause the next transaction to be charged to a different card– still belonging to the current user– than the one originally picked.

Active attacks

The ability to change the actively selected card is not particularly useful, but it points in the direction of exploring other ways that SE access can be leveraged to tamper with the state of the payment applications, or for that matter the SE hardware itself. The final post in this series will look at denial-of-service risks as well as more subtle substitution attacks around replacing the user wallet with one controlled by the adversary.



** Over NFC one can read out the full credit-card number– that is how the mag-stripe profile of EMV payments works. But that capability is only available to external NFC readers and not local malware, because of interface detection in SE applets.

SSH with GoldKey tokens on OS X: fine-print (part III)

The second post in this series reviewed the basics of getting SSH working with GoldKey tokens. Here we focus on edge cases and Mac-specific problems.

Fine-print: compatibility with other applications

Invoking ssh from the command line is not the only use-case for SSH authentication. Many applications leverage SSH tunnels to protect otherwise unauthenticated or weakly-authenticated connections. One example already covered is the source-control system git, which invokes ssh in a straightforward manner, with some customization possible via an environment variable. Because all of this happens in a terminal, the PIN prompt also appears inline in the same terminal session, and everything works as expected after the user enters the correct PIN.

Such automatic compatibility does not hold true for more complex use cases, and depends to some extent on the way SSH authentication is invoked. Interestingly, GUI applications can also work with cryptographic hardware if they are designed properly. For example Sequel Pro is a popular SQL client for OS X. It supports accessing databases over SSH tunnels through bastion hosts. When using a GoldKey for the SSH credential, the PIN prompt will appear in a modal dialog:

PIN prompt from Sequel Pro on OS X


This is courtesy of the custom ssh-agent built into OS X, which includes changes from Apple to interface with the system keychain for collecting the PIN/passphrase to unlock keys. Unfortunately this “value-added” version from Apple has a downside: it does not work correctly with hardware keys.

Macintrash strikes back: key-agent troubles

One problem quickly discovered by Airbnb engineers during this pilot concerned the SSH key-agent: adding credentials from the token to the SSH agent does not work. In fact, it is worse than simply failing: it fork-bombs the machine with copies of the ssh-pkcs11-helper process until no additional processes can be created. (pkill to the rescue for terminating all of them by name.) No one can accuse Apple of having tested the enterprise scenario.

This creates two problems:

  1. Usability: GoldKey PIN must be entered each time. Arguably this is beneficial for security; guidelines for PIV clients even discourage PIN caching, since it can lead to problems when a computer is left unattended with the token attached. But there is also an inconvenience element to frequent PIN entry. This is more of a concern when the token is used for scenarios such as accessing a git repo, when frequent server interactions are common. (SSH to multiple targets in quick succession is rare although there are applications such as massively parallel SSH designed to do exactly that.)
  2. Functional limitation: Putting aside the security vs usability debate for PIN caching, there is a more fundamental problem with key-agent not supporting hardware tokens: it breaks scenarios relying on SSH agent-forwarding. For example, when the user connects to server A and from there tries connecting to server B, they need an agent running on their local machine and that agent to be forwarded to server A in order to authenticate to B using credentials on the GoldKey token. A less obvious example of “nested” connection involves logging into a remote machine and then executing a git pull there to sync a repository, which involves SSH authentication under the covers.

Luckily the limitation turns out to be a function of the openssh binaries included out of the box with Mavericks. Upgrading to 6.6p1 and making sure that all of the binaries executed– including ssh-agent and ssh-pkcs11-helper– point to this latest version solves the problem:

$ ssh-add -s /usr/lib/opensc-pkcs11.so
Enter passphrase for PKCS#11: 
Card added: /usr/lib/opensc-pkcs11.so
$ ssh-add -e /usr/lib/opensc-pkcs11.so
Card removed: /usr/lib/opensc-pkcs11.so

The collateral damage, however, is losing the GUI PIN prompt for applications such as the Sequel Pro example above. That functionality depends on the custom OS X keychain integration, which the vanilla open-source ssh-agent lacks. When the application is invoked manually from the terminal, we can see the PIN prompt appear on stdout instead. Not good.

One work-around is to manually add the key to the agent before invoking GUI applications. A better solution is provided by the Homebrew version of openssh, which combines the keychain patch with the latest openssh sources to satisfy both use cases.

The final post in this series will conclude with a discussion of setup, key-generation/credential provisioning and maintenance operations.



FakeID, Android NFC stack and Google Wallet (part II)

[continued from part I]

2. Notifications

Next up in the discussion of what attackers can achieve by gaining the NFC admin privilege through the FakeID vulnerability, we turn to information disclosure from Android notifications. Certain notifications are only delivered to these privileged applications, as defined in the NfcExecutionEnvironment and NfcAdapterExtras classes:

  • AID selected. Generated when an external NFC reader activates a particular application on the secure element, by issuing a SELECT command. The full application ID or AID for short is included in the notification.
  • APDU received. Note that while the source code defines an extra parameter for delivering the APDU contents, that payload is always empty. In general the NFC controller does not stash aside all incoming APDUs addressed to the secure element and deliver another copy to the host operating system; that would amount to spying on traffic directed at the SE. (That said, the host could always switch the controller into host-card emulation mode and receive the traffic directly– but then the SE would not be receiving that traffic over NFC itself.) The only exception is the SELECT command as noted earlier, where the AID from the command payload is surfaced to Android apps. This would have been useful if different applets co-existed on the eSE for different use-cases (payments, physical access, identity…), each with an associated Android app that may need to take action when that specific applet has been invoked.
  • EMV card removed. This is not generated by the NFC controller in Android.
  • Mifare access detected. Applicable when the SE has Mifare emulation enabled, to also emulate a Mifare Classic tag in addition to an ISO 7816-style card. Early incarnations of Google Wallet on the NXP PN65N/O used this feature for storing coupons. The intended use-case was redeeming a coupon and making a payment in one tap. Later, offers were moved out of Mifare sectors into ordinary Android land. Future iterations of the NFC hardware– specifically the Oberthur SE coupled to the Broadcom controller– had Mifare emulation disabled, even when the underlying hardware supported it.
  • RF field on/off. These notifications indicate when the phone comes into the presence of an NFC field or is removed from the field. This is useful to gauge the beginning and end of a transaction. Since external NFC readers communicate directly with the eSE without the bits flowing through Android itself, there is no other way of reliably identifying those points. (Polling the SE over the contact interface is not a good solution, because doing so disables card emulation and could interrupt a transaction in progress.) For example Google Wallet acquires a wake-lock when the NFC field is initially detected, to prevent the screen from going dark in the middle of a transaction.

An application with NFC admin privileges could receive all of these notifications, subject to the caveats noted above. In principle this would allow an application to infer when payments happen. For example the presence of an RF field alone is a strong indication, since the only applications present on most Android SEs are payment applications. But there is an even more unambiguous signal in the AID-selected notification: the PayPass and PPSE applets invoked during an NFC payment transaction have unique AIDs that are easily recognized.
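Recognizing a payment from the AID-selected notification amounts to a table lookup. The sketch below uses the published AIDs for the contactless PPSE directory and the MasterCard PayPass applet; how the notification itself is delivered (a broadcast from the NFC stack) is not modeled here.

```java
import java.util.Map;

public class AidClassifier {
    // Published EMV AIDs: the contactless PPSE directory and the PayPass applet
    static final String PPSE_AID = "325041592E5359532E4444463031"; // "2PAY.SYS.DDF01" in ASCII
    static final String PAYPASS_AID = "A0000000041010";            // MasterCard PayPass

    // A privileged app receiving the AID-selected notification could flag
    // the start of a payment transaction with a lookup like this
    static String classify(String aidHex) {
        return Map.of(
            PPSE_AID, "payment: PPSE directory selected",
            PAYPASS_AID, "payment: MasterCard PayPass selected"
        ).getOrDefault(aidHex.toUpperCase(), "unknown applet");
    }

    public static void main(String[] args) {
        System.out.println(classify("325041592E5359532E4444463031"));
        System.out.println(classify("A0000000041010"));
    }
}
```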

In short, receiving these notifications discloses when a user conducts a payment transaction and which type of card is used, based on the AID chosen by the terminal. (In the case of Google Wallet it is always a MasterCard, so there is no new information revealed.) But it reveals no information about the contents of the transaction, such as the merchant or amount. In fact it does not even allow differentiating between successful and failed transactions, because the traffic after the initial SELECT is not visible.

The remainder of this series will review risks associated with rogue Android applications accessing the embedded secure element via the NFC extras APIs.



FakeID, Android NFC stack and Google Wallet (part I)

Google Wallet and the Android NFC stack are frequently mentioned as casualties of the Android FakeID vulnerability. This post looks at what an attacker can accomplish with forged NFC admin permissions.

Code-signing reinvented

“Those who do not understand UNIX are condemned to reinvent it, poorly” — attributed to Henry Spencer.

Spencer’s observation has a parallel for digital signature standards, which can be formulated as:

“Those who do not understand PKI, X509 and CMS are condemned to reinvent it, poorly.”

FakeID is not the first serious implementation error found in Android’s code-signing mechanism. Unlike the earlier incarnation, which involved a subtle mismatch between the contents of the ZIP archive making up an application and what is covered by the signature, the new vulnerability affects a more specific code path: signature-based permissions. This is one of the options for defining an Android permission, to be granted only to applications signed with the exact same certificate as the one defining the permission.

For instance the original NFC administrator (aka NFCEE_ADMIN) permission in Android prior to ICS operated this way. Google Wallet was signed with the same key as the NFC stack on official builds of Android, obtaining this permission on installation. (Of course an OEM or carrier always has the option to recompile and sign the NFC stack using a different key, in order to hamper Google from offering a mobile payment application on their devices.) Starting in ICS, this model was generalized to read a list of acceptable signing certificates from an XML file which cannot be modified by end users on non-rooted devices. The underlying vulnerability still applies to this version, because the same code path is used by the NFC stack when verifying whether a caller requesting special NFC privileges is any one of the approved signers.
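For reference, the whitelist read by the NFC stack lives in a file such as /etc/nfcee_access.xml on the device. The shape below is a sketch based on AOSP sources; the signature value is a truncated placeholder for the hex-encoded DER certificate of an approved signer:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Each signer entry lists a certificate; packages signed with a matching
     key are granted access to the NFC execution environment APIs -->
<resources xmlns:android="http://schemas.android.com/apk/res/android">
  <signer android:signature="308204a830820390..." />
</resources>
```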

Meaning of NFC administrator privilege

What has been obscured in this discussion so far is what happens if a malicious app attains NFC administrator privileges. What is the worst-case scenario assuming Google Wallet is installed on the device? Can this attack result in fraudulent transactions or access to financial information? The short answer is no.

One important caveat: many of the risks discussed here assume the presence of an embedded secure element being used for NFC payments. While this was the original architecture for Google Wallet from 2011-2013, more recent iterations are built entirely on host-card emulation, leaving the SE unused regardless of hardware capabilities. In fact many flagship devices, including the Nexus 5, do not even feature an SE any longer. Yet for the purposes of this discussion, we will roll back the clock to 2011 and assume the worst case: namely that an eSE exists and has been provisioned with active payment applets.

There are three kinds of additional privileges granted by the NFCEE_ADMIN permission:

  1. Ability to control card-emulation state
  2. Ability to receive notifications
  3. Access to the secure element. Note that this specifically applies to the embedded secure element. On ISIS devices, the NFC controller is connected to the UICC as the secure element; host communication to that hardware goes through the RIL (Radio Interface Layer) as with all SIM access, not the NFC execution environment defined by the NFC stack.

Let’s examine each of these in turn.

1. Card-emulation state

Android only defines two possible states for controlling NFC card-emulation status:

1. Off: external NFC input is not delivered to the eSE.
2. On when screen is on: NFC communication to the eSE is tied to screen state.

(Not to be confused with controlling the card-emulation route– that is fixed at boot time. It used to always point at the embedded secure element on earlier versions, and later defaults to host-card emulation.)

#2 is the default setting when Google Wallet is installed and tap-to-pay is configured. In that sense the device is already operating in the most “permissive” state, and there is nothing malware can do to move the user into a weaker configuration. Of course malware can interfere with payments by keeping card emulation disabled, but this is at best a low-severity denial of service. No access to financial data results from doing that.

Note that the underlying NFC controller hardware supports other options, including always-on as long as the phone is on, and in the extreme case powered-by-the-field, where the eSE can draw power from an external NFC reader even when the device itself is completely powered off. (In fact there is no concept of “screen is on” at the NFC controller level; synchronizing card emulation to screen state is orchestrated manually by the NFC stack in response to display events.) But there is no way of activating these options by making calls to the NFC extras interface, even when the caller gains access to the API by subverting the signature permission.
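The two exposed states, and the fact that screen gating is enforced by the NFC stack rather than the controller, can be summarized in a few lines. This is a plain-Java model of the behavior described above, not actual Android code:

```java
public class CardEmulationState {
    // The only two states Android exposes through the NFC extras interface
    enum Route { OFF, ON_WHEN_SCREEN_ON }

    // The controller itself has no notion of "screen on"; the NFC stack
    // gates card emulation manually in response to display events
    static boolean emulationActive(Route route, boolean screenOn) {
        switch (route) {
            case OFF: return false;
            case ON_WHEN_SCREEN_ON: return screenOn;
            default: return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(emulationActive(Route.ON_WHEN_SCREEN_ON, true));  // default Wallet setup
        System.out.println(emulationActive(Route.ON_WHEN_SCREEN_ON, false)); // screen off: no payments
        System.out.println(emulationActive(Route.OFF, true));                // malware DoS: disabled
    }
}
```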



SSH with GoldKey tokens on OS X (part II)

[continued from part I]

The life-cycle of an authentication token can be divided into roughly three stages:

  1. Initial setup. This includes steps such as acquiring the hardware, generating keys and provisioning credentials on the token and setting an initial PIN before handing it over to the user.
  2. Normal usage, namely SSH authentication with private keys on the token. This also includes changing the PIN as necessary.
  3. “Administrative” maintenance such as resetting a forgotten PIN, clearing out all credentials on the token for transfer to another user or issuing new keys.

This post will focus on the second part, namely using the token for SSH authentication on OS X. But it is worth pointing out that the lines between provisioning and steady-state usage are somewhat arbitrary. For example, users could receive completely uninitialized tokens and handle key generation on their own, just as SSH keys are typically generated unassisted on end-user machines. Since SSH only cares about the raw cryptographic key, it does not matter whether users load a self-signed certificate or one obtained from their own CA. It would be a different story if the use-case called for certificates chaining up to a trusted authority. That said, there is some benefit to performing the key generation in a trusted environment and noting the public key directly. For example it can be used to enforce a policy that employees may only use SSH keys that were generated in hardware. (The PIV standard does not provide a way to generate such an attestation after the fact, to prove that a particular public key was generated in hardware.)

OS X setup

These instructions assume that the token has been personalized with default PIN and secret question/answer and PIV authentication certificate was provisioned. Subsequent posts will take up some of the subtleties around performing key-generation on the token as well as wiping out and reinitializing tokens that have credentials present.

1. Check openssh version and upgrade if necessary

While openssh supports the PKCS #11 interface, it is dependent on a compile-time macro which can be configured to include or exclude that functionality. Earlier versions may have been more inclined to opt out, possibly because PKCS #11 support was not stable. Of course Apple is notorious for shipping ancient versions of open-source software, so it comes as no surprise that the version of openssh built into OS X 10.8 does not have smart-card support.

OS X 10.9 ships with a more recent vintage, 6.2 from March 2013, which has been compiled with PKCS #11 support. (That said, this version as built by Apple still has a serious bug that breaks agent-forwarding and PIN caching, which we will describe later.) For reference, the latest stable release as of this writing is 6.6p1 from March 2014.

2. Install required software

Install opensc 0.14 from precompiled binaries for OS X or build from source tarballs.

Also install the GoldKey client for OS X, which is required for changing the PIN. Note that the GoldKey PIV application does not honor the CHANGE AUTHENTICATION DATA command that is normally used on PIV cards to set a new PIN. Instead the entire token, including any optional encrypted flash-drive, has a single unified PIN controlled via the GoldKey management interface, which also prompts for a secret question. That process takes place via the GoldKey utility (not to be confused with “GoldKey Vault”, also installed by the same package), by selecting “GoldKey information” from the menu and clicking “Personalize.”

GoldKey information

GoldKey personalization dialog


3. Modify SSH configuration

In principle this step is optional, since the PKCS #11 module can be specified on the ssh command line using the -I option during invocation. But doing that every time is inconvenient, and some utilities such as git expect to invoke “ssh” with no additional parameters by default, leaving out the hardware-token support.

Specify the path to the PKCS #11 module in the SSH configuration, such as:

PKCS11Provider /usr/lib/opensc-pkcs11.so

There is one pitfall here, as cautioned in the openssh documentation: if the configuration includes an IdentitiesOnly or IdentityFile directive, it can interfere with the ability to leverage additional credentials present on the token.

4. Export SSH public-key from token

ssh-keygen has support for downloading an existing public key from the token:

ssh-keygen -D /usr/lib/opensc-pkcs11.so

(Somewhat confusingly, this does not in fact perform key-generation on the token; it only retrieves existing credentials already present.)

A more roundabout way of accomplishing the same task is to read out the PIV authentication certificate and convert the public key in the certificate into the format required by openssh. For example the OpenSC project includes pkcs11-tool for retrieving arbitrary data objects from the card application by label, including certificates. openssl can be used to extract the public-key field out of that certificate, which is then passed to ssh-keygen to convert from PKCS #8 into the native ssh format:

pkcs11-tool --module /usr/lib/opensc-pkcs11.so\
 -r -a "Certificate for PIV Authentication" --type cert |\
openssl x509 -inform DER -pubkey -noout |\
ssh-keygen -i -m PKCS8 -f /dev/stdin

Verifying the setup

At this point the configuration can be verified against any ssh server. Here is an example involving GitHub, which supports multiple SSH keys associated with an account– very convenient for testing before switching 100% to hardware tokens:

$ git pull
Enter PIN for 'PIV_II (PIV Card Holder pin)': 
remote: Counting objects: 1234, done.

The second line above is the prompt for the PIN associated with the PKCS #11 module.

The next post will discuss some edge-cases around using the token with other applications, as well as pitfalls around SSH agent-forwarding, which does not function correctly with the version of openssh utilities shipped in OS X 10.9.



SSH with GoldKey tokens on OS X (part I)

Earlier posts discussed how GoldKey tokens are compatible with the PIV standard and can be used on Windows for traditional smart-card scenarios such as BitLocker disk encryption or logon with Active Directory. This update considers a different use case with the exact same hardware, one this blogger has been piloting with the Airbnb engineering team: SSH from Apple OS X. In theory this should be easier than the use-cases already covered. The SSH authentication protocol uses raw public keys and does not care about X.509 certificates or other data objects (such as the CHUID or security object) defined in such exacting detail by NIST SP 800-73. If the Windows use-cases only touched on a small subset of PIV capabilities (for example, only the PIV authentication certificate and keys are involved in domain logon or TLS client authentication), SSH calls for even less functionality.

Middleware all the way down

The complication lies in the middleware. Applications such as SSH or a web browser are rarely written with one particular model or brand of cryptographic hardware in mind. Instead they pursue broad compatibility by accessing such hardware through an interface that abstracts away implementation quirks, communication interfaces and the specific command syntax required to perform high-level operations such as signing. As long as software conforming to this interface exists– typically authored by the hardware vendor– the card/token/gadget in question can be used transparently along all of the same code paths.

One problem is that this magical interface is not standardized across platforms, or even across applications on the same platform. For example on Windows, the preferred way of exposing cryptographic hardware to the platform is via smart card mini-drivers, a proprietary standard defined by MSFT. Not to be left behind, Apple has its own proprietary interface called “tokend” for the same purpose. Just in case one suspected this reinvention of the wheel was confined to closed platforms, openssl also defines its own abstraction called engines.

The closest any interface can be said to have at least aspired towards “standard” status is PKCS #11. PKCS stands for Public-Key Cryptography Standards, a series of specifications developed by RSA (now owned by EMC) in the early 1990s. While some specifications were later published as RFCs in the spirit of a true modern standard, others remained under RSA control or were turned over to other private industry consortia. Among these standards, PKCS #11 is titled “cryptographic token interface.” It defines a fixed API to be implemented in a “module” for each type of cryptographic hardware, providing a uniform way for developers to access the underlying functionality. (Somewhat confusingly there is also PKCS #15, which defines low-level functionality and a communication protocol for an on-card application implementing cryptographic functionality. PKCS #11 does not assume that the card application complies with PKCS #15; it is a higher-level interface designed to hide away those details.)

Luckily openssh– the popular implementation of the SSH protocol shipping with OS X and many Linux flavors– has settled on PKCS #11 as its choice of hardware abstraction. Returning to the problem of using a GoldKey for SSH: as long as we have a suitable PKCS #11 module, this scenario is theoretically supported. Even better, this module need not be specific to the GoldKey or depend on GoldKey corporation to write a single line of code. Since the token closely (but not exactly) follows the widely used US government PIV standard, generic middleware developed for traditional PIV scenarios– government, defense, enterprise– is sufficient.
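Once a module is in place, openssh can be pointed at it directly. The commands below are a sketch only– they require the token to be plugged in, and the module path is an assumption that varies by installation:

```
# Enumerate the public keys exposed through the PKCS #11 module
# (requires the token to be present).
ssh-keygen -D /usr/lib/opensc-pkcs11.so

# One-off authentication via the same module, without any config changes:
ssh -I /usr/lib/opensc-pkcs11.so -T git@github.com
```

Both -D for ssh-keygen and -I for ssh take the path to a PKCS #11 shared library, making this the quickest way to confirm that openssh can drive the token.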

“Macintrash” problem for smart-cards

Windows users might be forgiven for assuming such middleware must already exist in the OS. After all, PIV middleware (albeit in the form of mini-drivers rather than PKCS #11) has shipped out-of-the-box since Windows 7 in 2009. No additional software is necessary for the operating system or any application to use PIV cards, effectively plug-and-play for cryptographic hardware. Credit for that level of convenience goes to commercial pressure from US federal contracts requiring PIV support, a key constituency that MSFT could not afford to ignore. OS X on the other hand has never aspired to be an enterprise-grade platform. To the extent that it has found acceptance in enterprise and government sectors, that success appears to have been in spite of Apple. Not surprisingly then, OS X has the worst smart-card support among the triumvirate of major desktop OS platforms. Apple even went so far as to remove the built-in tokend in OS X 10.7 Lion that once enabled CAC/PIV card support, ceding that ground to third-party providers such as Thursby.

OpenSC to the rescue

Since there is no built-in PIV support in contemporary OS X, the first order of business is locating alternative middleware. While there are commercial third-party offerings for PIV on OS X, there is also a popular open-source option with an extended track record: OpenSC. This cross-platform project provides a PKCS #11 module compatible with the PIV standard, as well as a tokend and an openssl engine for additional use cases.

The remaining posts in this series will describe the end-to-end process for getting this scenario working with OpenSC, including provisioning of PIV credentials– tricky because standard PIV does not grant card-holders that privilege.



