[continued from part I]
Features & cryptographic agility
A clear advantage of using an SGX enclave over a smart-card or TPM is the ability to support a much larger collection of algorithms. For example the Intel crypto-api-toolkit is effectively a wrapper over openssl or SoftHSMv2, which supports a wide spectrum of cryptographic primitives. By comparison most smart-card and TPM implementations support a handful of algorithms. Looking at generic signature algorithms, the US government PIV standard only defines RSA and ECDSA, with the latter limited to two curves: NIST P256 and P384. TPM2 specifications define RSA and ECDSA, with ECDSA again limited to the same two curves, with no guarantee that a given TPM model will support them all.
That may not sound too bad for the specific case of managing SSH client keys. OpenSSH did not even have the ability to use ECDSA keys from hardware tokens until recently, making RSA the only game in town. But it does raise a question about how far one can get with the vendor-defined feature set or what happens in scenarios where more modern cryptographic techniques— such as pairing-based signatures or anonymous attestation— are required capabilities, instead of being merely preferred among a host of acceptable algorithms.
More importantly, end-users have a greater degree of control over extending the algorithms supported by a virtual token implemented in SGX. Since SGX enclaves run ordinary x86 code, adding one more signature algorithm such as Ed25519 comes down to adapting an existing C-language implementation to run inside the enclave. By contrast, end-users usually have no ability to customize code running inside the execution environment of a smart-card. It is often part of the security design for this class of hardware that end-users are not allowed to execute arbitrary code. They are limited to exercising functionality already present, effectively stuck with feature decisions made by the vendor.
Granted, smart-cards and TPMs are not made out of magic; they have an underlying programming model, and at some point someone had the requisite privileges for authoring and loading code there. In principle one could start from the same blank slate, such as a plain smart-card OS with JavaCard support, and develop custom applets with all the desired functionality. While that is certainly possible, programming such embedded environments is unlikely to be as straightforward as porting ordinary C code.
It gets trickier when considering an upgrade of already-deployed cryptographic modules. Being able to upgrade code while keeping secret-key material intact is intrinsically dangerous: it allows replacing a legitimate application with a backdoored “upgrade” that simply exfiltrates keys or otherwise violates the security policy enforced by the original version. This is why in the common GlobalPlatform model for smart-cards there is no such thing as an in-place upgrade. An application can be deleted and a new one installed under the exact same identity. But this does not help an attacker, because the deletion will have removed all persistent data associated with the original. Simulating upgradeability in this environment requires a multiple-applet design, where one permanent applet holds all secrets while a second “replaceable” applet with the logic for using them communicates over IPC.
With SGX, it is possible to upgrade enclave logic depending on how secrets are sealed. If secrets are bound to a specific implementation, known as the MRENCLAVE measurement in Intel terminology, any change to the code will render them unusable. If they are only bound to the identity of the enclave author established by code signature— so-called MRSIGNER measurement— then implementation can be updated trivially, without losing access to secrets. But that flexibility comes with the risk that the same author can sign a malicious enclave designed to leak all secrets.
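The trade-off between the two sealing policies can be sketched with a toy model of sealing-key derivation. This is illustrative only: real SGX derives sealing keys in hardware via the EGETKEY instruction, and all names and values below are made up.

```python
import hashlib

def sealing_key(policy, mrenclave, mrsigner, device_secret=b"fused-device-key"):
    """Toy model: the sealing key is derived from a device secret plus
    whichever identity the chosen policy binds to."""
    identity = mrenclave if policy == "MRENCLAVE" else mrsigner
    return hashlib.sha256(device_secret + identity).hexdigest()

# Two versions of the same enclave: the code (hence MRENCLAVE) differs,
# but both are signed by the same vendor key (same MRSIGNER).
v1 = {"mrenclave": b"hash-of-enclave-v1", "mrsigner": b"hash-of-vendor-key"}
v2 = {"mrenclave": b"hash-of-enclave-v2", "mrsigner": b"hash-of-vendor-key"}

# Binding to MRENCLAVE: any code change yields a different sealing key,
# so secrets sealed by v1 are unrecoverable from v2.
assert sealing_key("MRENCLAVE", **v1) != sealing_key("MRENCLAVE", **v2)

# Binding to MRSIGNER: every enclave signed by the same vendor derives
# the same key. Upgrades keep working, but so would a malicious enclave
# signed with that vendor key.
assert sealing_key("MRSIGNER", **v1) == sealing_key("MRSIGNER", **v2)
```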
When it comes to speed, SGX enclaves have a massive advantage over commonly available cryptographic hardware. Even with specialized hardware for accelerating cryptographic operations, the modest resources in an embedded smart-card controller are dwarfed by the computing power & memory available to an SGX enclave.
As an example: a 2048-bit RSA signature operation on a recent-generation Infineon TPM takes several hundred milliseconds, a noticeable delay during an SSH connection. (Meanwhile RSA key generation at that length can take half a minute.)
That slowdown may not matter for the specific use case we looked at, namely SSH client authentication or even other client-side scenarios such as connecting to a VPN or TLS authentication in a web browser to access websites. In client scenarios, private key operations are infrequent. When they occur, they are often accompanied by user interaction such as a PIN prompt or certificate selection/confirmation dialog. Shaving milliseconds off an RSA computation is hardly useful when overall completion time is dominated by human response times.
That calculus changes if we flip the scenario and look at the server side. That machine could be dealing with hundreds of clients every second, each necessitating use of the server private-key. Overall performance becomes far more dependent on the speed of cryptography under these conditions. The difference between having that operation take place in an SGX enclave ticking along at the full speed of the main CPU versus offloaded to a slow embedded controller would be very noticeable. (This is why one would typically use a hardware-security module in PCIe card form factor for server scenarios, as they combine the security and tamper-resistance aspects with beefier hardware that can keep up with the load just fine. But an HSM hardly qualifies as “commonly available cryptographic hardware” given the cost and complex integration requirements.)
State of limitations
One limitation of SGX enclaves is their stateless nature. Recall that for the virtual PKCS#11 token implemented in SGX, the implementation creates the illusion of persistence by returning sealed secrets to the “untrusted” Linux application, which stores them on the local filesystem. When those secrets need to be used again, they are temporarily imported into the enclave. This has some advantages. In principle the token never runs out of space. By contrast a smart-card has limited EEPROM or flash for nonvolatile storage on-board. Standards for card applications may introduce their own limitations beyond that: for example the PIV standard defines 4 primary key slots, plus a fixed number of slots for “retired” keys, regardless of how much free space the card has.
TPMs present an interesting in-between case. The TPM2 standard takes a similar approach, allowing an unbounded number of keys by offloading responsibility for storage to the calling application. When keys are generated, they are exported in an opaque format for storage outside the TPM, and can be reanimated from that opaque representation when necessary. (For performance reasons, there is a provision for a handful of “persistent” objects kept in nonvolatile storage on the TPM, optimizing away the requirement to reload every time.)
But there is a crucial difference: TPMs do have local storage for state, which makes it possible to implement useful features that are not possible with pure SGX enclaves. Consider the simple example of PIN enforcement. Here is a typical policy:
- Users must supply a valid PIN before they use private keys
- Incorrect PIN attempts are tracked by incrementing a counter
- To discourage guessing attacks, keys become “unusable” (for some definition of unusable) after 10 consecutive incorrect entries
- Successful PIN entry resets the failure counter back to zero
This is a very common feature for smart-card applications, typically implemented at the global level of the card. TPMs have a similar feature called “dictionary-attack protection” or anti-hammering, with configurable parameters for failure count and lockout period during which all keys on that TPM become unusable when the threshold is hit. (For more advanced scenarios, it is possible to have per-application or per-key PINs. In the TPM2 specification, these are defined as a special type of NVRAM index.)
It is not possible to implement that policy in an SGX enclave. The enclave has no persistent storage of its own to maintain the failure count. While it can certainly seal and export the current count, the untrusted application is free to “roll back” state by replaying an earlier version where the count stands at a more favorable number.
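The rollback problem can be demonstrated with a toy sealed counter, where an HMAC stands in for the hardware-derived sealing key. This is an illustration of the concept, not the actual enclave API.

```python
import hashlib, hmac, json

ENCLAVE_KEY = b"toy-sealing-key"  # stand-in for the hardware-derived sealing key

def seal(state):
    """Enclave 'seals' state: the host can store it but cannot forge it."""
    blob = json.dumps(state, sort_keys=True).encode()
    tag = hmac.new(ENCLAVE_KEY, blob, hashlib.sha256).digest()
    return tag + blob

def unseal(sealed):
    tag, blob = sealed[:32], sealed[32:]
    if not hmac.compare_digest(tag, hmac.new(ENCLAVE_KEY, blob, hashlib.sha256).digest()):
        raise ValueError("tampered blob")
    return json.loads(blob)

# Enclave tracks PIN failures by handing an updated sealed blob to the host.
sealed = seal({"failures": 0})
backup = sealed                      # a malicious host keeps the old blob around

for _ in range(3):                   # three incorrect PIN attempts
    state = unseal(sealed)
    state["failures"] += 1
    sealed = seal(state)

assert unseal(sealed)["failures"] == 3

# Rollback: the host simply feeds the enclave the stale blob. The MAC
# verifies, because it IS a genuine sealed state, just an out-of-date one.
assert unseal(backup)["failures"] == 0
```

Nothing in the sealed blob itself distinguishes the current state from a stale one; only persistent storage (or a monotonic counter) inside the trusted boundary could.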
In fact even implementing the plain PIN requirement— without fancy lockout semantics— is tricky in SGX. In the example we considered, this is how it works:
- Enclave code “bundles” the PIN along with key bits, in a sealed object exported at time of key generation.
- When it is time to use that key, the caller must provide a valid PIN along with the exported object.
- After unsealing the object, the supplied PIN can be compared against the one previously set.
So far, so good. Now what happens when the user wants to change the PIN? One could build an API to unseal/reseal all objects with an updated PIN. Adding one level of indirection simplifies this process: instead of bundling the actual PIN, place a commitment to a different sealed object that holds the PIN. This reduces the problem to resealing a single object for all keys associated with the virtual token. But it does not solve the core problem: there is no way to invalidate objects previously sealed under the old PIN. In that sense, the PIN was never really changed. An attacker who learned the previous PIN and made off with the sealed representation of a key can use that key indefinitely.
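A toy model shows both the indirection and its limits. As before, an HMAC stands in for the hardware sealing key, and the object layout is hypothetical, not the actual enclave's format.

```python
import hashlib, hmac, json

ENCLAVE_KEY = b"toy-sealing-key"  # stand-in for the hardware-derived sealing key

def seal(obj):
    blob = json.dumps(obj, sort_keys=True).encode()
    return hmac.new(ENCLAVE_KEY, blob, hashlib.sha256).digest() + blob

def unseal(sealed):
    tag, blob = sealed[:32], sealed[32:]
    if not hmac.compare_digest(tag, hmac.new(ENCLAVE_KEY, blob, hashlib.sha256).digest()):
        raise ValueError("tampered blob")
    return json.loads(blob)

# Each key references the PIN object by a stable identifier instead of
# embedding the PIN itself.
sealed_key = seal({"key": "rsa-2048-private-bits", "pin_object": "token-pin"})
sealed_pin_v1 = seal({"id": "token-pin", "pin": "1234"})

def use_key(sealed_key, sealed_pin, entered_pin):
    key, pin_obj = unseal(sealed_key), unseal(sealed_pin)
    return key["pin_object"] == pin_obj["id"] and pin_obj["pin"] == entered_pin

# Changing the PIN reseals exactly one object; no key objects are touched.
sealed_pin_v2 = seal({"id": "token-pin", "pin": "9999"})
assert use_key(sealed_key, sealed_pin_v2, "9999")

# But the v1 PIN object still unseals. An attacker who saved a copy and
# knows the old PIN can keep using the key.
assert use_key(sealed_key, sealed_pin_v1, "1234")
```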
(You may be wondering how TPMs deal with this, considering they also rely on exporting what are effectively “sealed objects” by another name. The answer is that the TPM2 specification allows setting passwords on keys indirectly, by reference to an NVRAM index. The password set on that NVRAM index then becomes the password for the key. As the “Non-Volatile” part of the name implies, the NVRAM index itself is a persistent TPM object. Changing the passphrase on that index collectively changes the passphrase on all keys referencing it, without having to re-import/re-export anything.)
One could try to compensate for this by requiring that users pick high entropy secrets, such as long alphanumeric passphrases. This effectively shifts the burden from machine to human. With an effective rate-limiting policy on the PIN as implemented in smart-cards or TPMs, end-users can get away with low-entropy but more usable secrets. The tamper-resistance of the platform guarantees that after 10 tries, the keys will become unusable. Without such rate limiting, it becomes the users’ responsibility to ensure that an adversary free to make millions of guesses is still unlikely to hit on the correct one.
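Back-of-the-envelope arithmetic makes the entropy gap concrete, assuming uniformly random secrets and round numbers for attacker effort:

```python
# With a 10-try lockout, a 4-digit PIN gives an attacker at most a
# 10 / 10,000 = 0.1% chance of success before the token locks.
rate_limited_success = 10 / 10_000

# Without rate limiting, an attacker holding the sealed objects can guess
# offline; assume a billion guesses is cheap on commodity hardware.
guesses = 10**9
pin_space = 10**4                    # 4-digit PIN: exhausted instantly
passphrase_space = 62**12            # 12 random alphanumeric characters

assert rate_limited_success == 0.001
assert guesses >= pin_space                   # entire PIN space falls
assert guesses / passphrase_space < 1e-12     # long passphrase holds up
```

The numbers for attacker effort are illustrative; the point is that once guessing moves offline, only the size of the secret space protects the keys.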
PIN enforcement is not the only area where statelessness poses challenges. For example, there is no easy way to guarantee permanent deletion of secrets from the enclave. As long as there is a copy of the signed enclave code and exported objects stashed away somewhere in the untrusted world, they can be reanimated by running the enclave and supplying the same objects again.
There is a global SGX version state that applies at the level of the CPU. Incrementing it changes the inputs to sealing-key derivation, rendering secrets sealed under the previous version unrecoverable. But this is a drastic measure that affects every SGX application on that unit.
Smart-cards and TPMs are much better at both selective and global deletion, since they have state. For example TPM2 can be cleared via firmware or by invoking the take-ownership command. Both options render all previous keys unusable. Similarly smart-card applications typically offer a way to explicitly delete keys or regenerate the key in a particular slot, overwriting its predecessor. (Of course there is also the nuclear option: fry the card in the microwave, which is still nowhere near as wasteful as physically destroying an entire Intel CPU.)
Unknown unknowns: security assurance
There is no easy comparison on the question of security, arguably the most important criterion for deciding on a key-management platform. While Intel SGX is effectively a single product line (although microcode updates can result in material differences between versions), the market for secure embedded ICs is far more fragmented, with a variety of vendors supplying products at different levels of security assurance. Most of those products ship as a complete, integrated solution encompassing everything from the hardware to the high-level application (such as chip & PIN payments or identity management) selected by the vendor. SGX, on the other hand, serves as a foundation for developers to build their own applications, leveraging core functionality such as sealed storage and attestation provided by the platform.
When it comes to smart-cards, there is little discretion left to the end-user in the way of software; most products do not allow users to install any new code of their choosing. That is not a bug but a feature: it reduces the attack surface of the platform. In fact, the inability to properly segment hostile applications was an acknowledged limitation in some smart-card platforms. Until version 3, JavaCard required the use of an “off-card verifier” before installing applets, to guard against malicious bytecode. The unstated assumption is that the card OS could not be relied on to perform these checks at runtime and stop malicious applets from exceeding their privileges.
By contrast SGX is predicated on the idea that malicious or buggy code supplied by the end-user can peacefully coexist alongside a trusted application, with the isolation guarantees provided by the platform keeping the latter safe. In the comparatively short span SGX has been commercially available, a number of critical vulnerabilities have been discovered in the x86 micro-architecture resulting in catastrophic failure of that isolation, Foreshadow (L1TF) and Plundervolt among the examples current as of this writing.
These attacks could be executed purely in software, in some cases by running unprivileged user code. In each case Intel responded with microcode updates, and in some instances with hardware improvements in later generations, to address the vulnerabilities. By contrast most attacks against dedicated cryptographic hardware, such as side-channel observations or fault injection, require physical access. Often they involve invasive techniques such as decapping the chip, which destroys the original unit and makes it difficult to conceal that an attack occurred.
While it is too early to extrapolate from the existing pattern of SGX vulnerabilities, the track record so far confirms the expectation that the ability to run code on the same platform as the enclave translates into a significant advantage for attackers.