There is a recurring theme in the previous series on replacing passwords with cryptographic hardware: all the scenarios were under the unilateral control of the user. If you plan to use SSH private keys for connecting to a remote service such as GitHub, it is transparent to that service whether those keys are stored locally on disk protected by a passphrase or stored on external tamper-resistant hardware. The same goes for disk encryption and PGP email signing; the changes required to move from passwords to hardware tokens are localized to the user's own machine. In this post we look at a more difficult use case: authentication on the web.
The fundamental problem is that one exercises very little control over the authentication choices offered by a website. That decision is made by the people controlling the web server configuration. Until about 5 years ago, in fact, there were no options at all: passwords were the only game in town. Only a handful of websites offered two-factor authentication of any type. It was not until Google first deployed the feature that going beyond passwords became an aspirational goal for other services hoping to compete on security. But the majority of 2FA designs did not in fact dispense with passwords; they were more accurately “password-plus” concepts, incremental in nature, trying to augment a user-selected password with an additional factor such as an SMS message or a one-time passcode generated by a mobile app.
Authentication integrated into SSL/TLS
Given the dearth of options, it may come as a surprise that the fundamental security protocol that secures web traffic— SSL or TLS, Transport Layer Security in modern terminology— supports a better alternative. Even more surprisingly, that feature has existed nearly since the inception of the protocol in the late 1990s. The official name for the feature is client-authentication. That terminology is derived from the roles defined in the SSL/TLS protocols: a user with a web browser (the “client”) establishes a secure connection to a website (the “server”). 99.999% of the time, only one side of that connection is authenticated as far as TLS is concerned— the server. It is the server that must present a valid digital certificate and manage the corresponding cryptographic keys. Clients have no credentials at the TLS layer. User authentication, to the extent it happens, occurs at a higher application layer; for example, with a password typed into a web page and submitted for verification. While that is very much a legitimate form of authentication, it takes place outside the purview of the TLS protocol, without leveraging any functionality built into TLS.
The functionality built into TLS for authenticating users is the exact mirror image of how it authenticates servers: public-key cryptography and X.509 certificates. Just as websites must obtain server certificates from a trusted certificate authority, or CA for short, users obtain client certificates from a trusted CA. (It could even be the same CA; while the commercial prospects for issuing client certificates are dim compared to server certificates, a few intrepid CAs have ventured into personal certificate issuance.) More importantly, users must now manage their own private keys, the cryptographic secrets associated with those certificates.
Therein lies the first usability obstacle for TLS client-authentication: these credentials are difficult to manage. Consider that passwords are chosen by humans, with the full expectation that they will be typed by humans. They can be complex, random-looking strings of characters, but more likely they are simple, memorable phrases out of a dictionary. To “roam” a password from one computer to another, a person needs only her own memory and fingers. By comparison, cryptographic keys are not memorable. There is too much entropy, too many symbols to juggle for all but the few blessed with a photographic memory. (Meanwhile, attempts to derive cryptographic keys from memorable phrases rarely have good outcomes; consider the BrainWallet debacle for Bitcoin.) It is relatively easy to generate secret keys on one device and provision a certificate there. The real challenge lies in moving that secret to another device, or creating a backup such that one can recover from the loss or failure of the original. Imagine being locked out of every online account because your computer crashed.
Compounding that is a historical failure to converge on standards: while every piece of software can agree on using passwords the same way, there is great variety (read: inconsistency) in how applications handle cryptographic secrets. If you can log in to a website using Firefox with a given password, you can count on being able to type the exact same password into Chrome, Internet Explorer or, for that matter, a mobile app to achieve the same result. The same is not true of cryptographic keys. To pick a simple example: OpenSSH, PGP and OpenSSL all perform an identical operation with private keys (signing a message) in the abstract, from a mathematical perspective. But they differ in the formats they expect those keys to be stored in, such that it is non-trivial to translate keys between applications. Even different web browsers on the same machine can have their own notions of what certificates and keys are available. Firefox uses its own key-store regardless of operating system; Chrome defers to the operating system (for example, leveraging CAPI on Windows and tokend on OSX); and Internet Explorer only recognizes keys defined through the Windows cryptography API. It is entirely possible for a digital certificate to exist in one location while being completely invisible to applications that look for their credentials in another.
It is no wonder then that TLS client authentication has been relegated to niche applications, typically government/defense scenarios, with UI that qualifies for the label “enterprise-quality.” Browser vendors rightfully assume that only hapless employees of large organizations, required to use the option by decree of their IT department, will ever encounter that UI. On the server side, enabling client authentication comes down to a few configuration decisions:
- Specifying the set of certificate authorities (CAs) trusted to issue client certificates. This is the counterpart of clients maintaining their own collection of trust-anchors to validate server certificates presented by websites. Getting this right is important because it influences client behavior: during the TLS handshake, the server actually sends this list to the client as a hint for picking the correct certificate. The user-agent can show different UI depending on whether there are zero, one or multiple options for the user to choose from.
- Unlike web browsers, which implicitly trust the hundreds of public CAs in existence, trust-anchors for the server are bound to be very limited. Most public CAs do not issue client certificates to individuals, so there is no point in including them in the list. Typically TLS client authentication is deployed in closed-loop systems: members of an organization accessing internal resources, with exactly one CA, operated by that same organization, issuing everyone their credentials.
- There is also the degenerate case of trusting any issuer, which includes self-signed certificates. That is not useful for authentication by itself: it only establishes a pseudonym, by proving continuity of identity (“this is the same person who authenticated last time with the same public key”). But it can be coupled with other, one-time verification mechanisms to implement strong authentication.
- Deciding whether client-authentication is mandatory or optional. In the first case, the server will simply drop the connection when the client lacks valid credentials. There is no opportunity for the application layer to present a user-friendly error message, or even redirect the user to an error page: that would require exchanging HTTP traffic, which is not possible when the lower-layer TLS channel has not been set up. For that reason it is wiser to use the optional setting, and have application code handle errors by checking whether a certificate was presented.
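The decisions above map directly onto web-server configuration. As a sketch, here is a hypothetical nginx server block taking the “optional” approach; the directives and variables are real nginx ones, but the file paths and upstream address are placeholders:

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/tls/server.pem;
    ssl_certificate_key /etc/nginx/tls/server.key;

    # Trust anchor for client certificates: a single internal CA,
    # not the hundreds of public CAs that browsers trust.
    ssl_client_certificate /etc/nginx/tls/client-ca.pem;

    # "optional" lets the handshake complete even without a client
    # certificate, so the application can render a friendly error.
    ssl_verify_client optional;

    location / {
        # Pass the verification result ("SUCCESS", "FAILED:...", "NONE")
        # and the certificate subject upstream, so application code
        # can decide what to do with unauthenticated visitors.
        proxy_set_header X-SSL-Client-Verify $ssl_client_verify;
        proxy_set_header X-SSL-Client-DN     $ssl_client_s_dn;
        proxy_pass http://127.0.0.1:8080;
    }
}
```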
Fine-grained control & renegotiation
One crucial difference between server implementations: Apache and IIS can apply client-authentication to specific paths, such as only the pages under a /foo directory. By comparison, nginx can only enable the feature for all URLs fielded by a given server; it cannot be configured on a per-path basis.
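For illustration, a minimal sketch of that per-path enforcement in Apache with mod_ssl; the directives are real, while the file paths are placeholders for a deployment-specific layout:

```apache
# Server-wide: TLS with server authentication only.
SSLEngine on
SSLCertificateFile    /etc/apache2/tls/server.pem
SSLCertificateKeyFile /etc/apache2/tls/server.key
SSLCACertificateFile  /etc/apache2/tls/client-ca.pem

# Only requests under /foo demand a client certificate
# chaining up to the internal CA above.
<Location "/foo">
    SSLVerifyClient require
    SSLVerifyDepth  1
</Location>
```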
At first blush, it is somewhat puzzling that Apache and IIS can distinguish between different pages and know when to prompt the user. Recall that the TLS handshake takes place before any application-level data— in this case HTTP— has been exchanged, while the URL path requested is conveyed as part of the HTTP request. So how is it possible for a web server to decide when to request a client certificate? Let’s rule out one trick: this behavior is not implemented by always prompting for a client certificate and then ignoring authentication failures based on path. Requesting a client certificate can modify the user experience: some user-agents simply panic and drop the connection; others, such as early versions of IE, might display a dialog box with an empty list of certificates that confuses users.
The answer is a little-known and occasionally problematic feature of TLS: renegotiation. Most TLS connections involve a single handshake completed at the beginning, followed by exchanging application-level data. But the protocol permits the server to request a new handshake from scratch, even after data has been exchanged. This is what allows Apache and IIS to provide fine-grained control: initially a TLS connection is negotiated with server authentication only, which allows the client to start sending HTTP traffic. Once a request is received, the server can inspect the URL and decide whether that particular page requires additional authentication from the client. In that case the exchange of application-level HTTP traffic is temporarily paused, and a lower-level TLS protocol message (namely HelloRequest) is sent to trigger a second handshake. This time the server includes an additional message (namely CertificateRequest) signaling the client to present its certificate.
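Renegotiation itself is hard to reproduce outside a full web server (modern TLS stacks have largely disabled it after CVE-2009-3555, and Python's ssl module does not expose it), but the underlying certificate-request flow in the single-handshake case can be sketched with Python's standard library. Everything below (the throwaway CA, the file names, the CN values) is invented for the demo, and the openssl command-line tool is assumed to be available:

```python
import os, socket, ssl, subprocess, tempfile, threading

work = tempfile.mkdtemp()

def run(*args):
    subprocess.run(args, check=True, capture_output=True, cwd=work)

# A throwaway CA plus server and client certificates for the demo.
run("openssl", "req", "-x509", "-newkey", "rsa:2048", "-nodes", "-days", "1",
    "-keyout", "ca.key", "-out", "ca.pem", "-subj", "/CN=Demo CA")
for name, cn in (("server", "localhost"), ("client", "alice")):
    run("openssl", "req", "-newkey", "rsa:2048", "-nodes",
        "-keyout", name + ".key", "-out", name + ".csr", "-subj", "/CN=" + cn)
    run("openssl", "x509", "-req", "-in", name + ".csr", "-CA", "ca.pem",
        "-CAkey", "ca.key", "-CAcreateserial", "-days", "1",
        "-out", name + ".pem")

p = lambda f: os.path.join(work, f)

# Server side: CERT_REQUIRED makes the server send CertificateRequest
# during the handshake, with the demo CA as the only trust anchor.
srv = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
srv.load_cert_chain(p("server.pem"), p("server.key"))
srv.verify_mode = ssl.CERT_REQUIRED
srv.load_verify_locations(p("ca.pem"))

listener = socket.create_server(("127.0.0.1", 0))
port = listener.getsockname()[1]
result = {}

def serve():
    conn, _ = listener.accept()
    with srv.wrap_socket(conn, server_side=True) as tls:
        # getpeercert() returns the validated client certificate.
        subject = dict(rdn[0] for rdn in tls.getpeercert()["subject"])
        result["client_cn"] = subject["commonName"]

t = threading.Thread(target=serve)
t.start()

# Client side: trust the demo CA and present the client certificate.
cli = ssl.create_default_context(cafile=p("ca.pem"))
cli.check_hostname = False  # demo server cert has no subjectAltName
cli.load_cert_chain(p("client.pem"), p("client.key"))
with socket.create_connection(("127.0.0.1", port)) as raw:
    with cli.wrap_socket(raw) as tls:
        pass  # completing the mutual handshake is the whole point

t.join()
listener.close()
print("server saw client:", result["client_cn"])
```

Note that if the client omitted load_cert_chain, the handshake would simply fail; that is the "mandatory" behavior described above, with no opportunity for an application-level error page.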
Popular web browsers have also supported client-authentication since the late 1990s, but the user experience often leaves much to be desired. As an additional challenge, our objective is not just using public-key authentication but also using external cryptographic hardware to manage those keys. That introduces the added complexity of making sure the web browser can interface with such devices.
Here is Chrome on OSX displaying a modal dialog that asks the user to choose from a list of three certificates:
This dialog appears when accessing a simple demo web-server that requested client authentication. (It is an expanded version of the original abridged dialog, which starts out with only the list at the top, without the detailed certificate information below.) All available certificates are displayed: both those stored locally on the machine and those associated with a currently connected hardware token, in this case a PIV token. Note that the UI does not visually distinguish between the two, but hardware credentials only appear in the listing while the token is connected.
Choosing one of the certificates on the PIV token leads to a PIN prompt:
Chrome turns out to be something of a best-case scenario:
- It is integrated with the OSX key-chain and the tokend layer for interfacing with cryptographic hardware. (In theory tokend has been deprecated since 10.7, but in practice it remains fully functional and supported via third-party stacks.)
- Credentials do not need to be explicitly “registered” with Chrome, as long as there is a tokend module that supports them. This example uses the same OpenSC tokend that made possible previous scenarios such as smart-card login on OSX; connecting the PIV token is enough.
- The dialogs themselves are part of OSX, which creates uniform visuals. For example, here is the same UI from Safari; note that the icon overlaid on the padlock is now the Safari logo:
For an example of a trickier setup, we turn next to Firefox and getting the same scenario working there.