Smart-cards vs USB tokens: when form factor matters (part I)

Early on in the development of cryptography, it became clear that off-the-shelf hardware intended for general-purpose computing was not well suited to safely managing secrets. Smart-cards were the answer: a miniature computer with its own modest CPU, modest amount of RAM and persistent storage, packaged into the shape of an ordinary identity card. Unlike ordinary PCs, these devices were not meant to be generic computers with end-user choice of applications. Instead they were preprogrammed for a specific security application, such as credit-card payments in the case of chip & PIN or identity verification in the case of US government identification programs.

Smart-cards solved a vexing problem in securing sensitive information such as credentials and secret keys: bits are inherently copyable. Smart-cards created an “oracle” abstraction for using secrets while making it difficult for attackers to extract the secret from the card. The point-of-sale terminal at the checkout counter can ask this oracle to authorize a payment, or the badge-reader mounted next to a door can ask for proof that the card is in possession of a specific credential. But neither can ask the card to cough up the underlying secret used in those protocols. Not even the legitimate owner of the card can do that, at least not by design. Tamper-resistance features are built into the hardware to frustrate attempts to extract any information from the card beyond what the intended interface exposes.
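The oracle abstraction can be sketched in a few lines of Python. This is a toy model standing in for the card, with an HMAC in place of a real payment or identity protocol; the class name and challenge format are invented for illustration:

```python
import hashlib
import hmac
import os

class CardOracle:
    """Toy model of a smart-card: it answers challenges but never reveals the key."""

    def __init__(self):
        self._secret = os.urandom(32)   # exists only inside the "card"

    def authorize(self, challenge: bytes) -> bytes:
        # The oracle interface: prove possession of the secret...
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

    # ...but there is deliberately no export_secret() method.

card = CardOracle()
proof = card.authorize(b"authorize payment to merchant 4217")
assert len(proof) == 32
```

The host can request as many proofs as it likes, but the interface offers no path to the key bits; on real hardware, tamper-resistance backs up what is here only a software convention.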

“Insert or swipe card”

The original card form-factor proved convenient in many scenarios where credentials were already carried on passive media in the shape of cards. Consider the ubiquitous credit card. Digits of the card number, expiration date and identity of the cardholder are embossed on the card in raised lettering for the ultimate low-tech payment solution, where a mechanical reader presses the card imprint through carbon paper on a receipt. Magnetic stripes were added later for automated reading and a slightly more discreet encoding of the same information. We can view these as two different “interfaces” to the card. Both involve retrieving information from a passive storage medium on the card. Both are easy targets for perpetrating fraud at scale. So when it came time for card networks to switch to a better cryptographic protocol to authorize payments, it made sense to keep the form factor constant while introducing a third interface to the card: the chip. By standardizing protocols for chip & PIN and incentivizing/strong-arming participants in the ecosystem to adopt newer smart-cards, the payment industry opened up a new market for secure IC manufacturers.

In fact EMV ended up introducing two additional interfaces, contact and contactless. As the name implies, the first uses direct contact with a brass plate mounted on the card to communicate with the embedded chip. The latter uses a wireless communication protocol called NFC, or Near Field Communication, for shuttling the same bits back and forth. (The important point however is that in both cases those bits typically arrive at the same piece of hardware: while there exist some stacked designs where the card has two different chips operating independently, in most cases a single “dual-interface” IC handles traffic from both interfaces.)

Identity & access management

Identity badges followed a similar evolution. Federal employees in the US always had to wear and display their badges for access to controlled areas. When presidential directive HSPD-12 called for raising the bar on authentication in the early 2000s, the logical next step was upgrading the vanilla plastic cards to smart-cards that supported public-key authentication protocols. This is what the Common Access Card (CAC) and its later incarnation, the Personal Identity Verification (PIV) system, achieved.

With NFC, smart-cards can open doors at the swipe of a badge. But they can do a lot more when it comes to access control online. In fact the most common use of smart-cards prior to the CAC/PIV programs was in the enterprise space. Companies operating in high-security environments issued smart-cards to employees for accessing information resources. Cards could be used to log in to a PC, access websites through a browser and even encrypt email messages. In many ways the US government was behind the curve in adoption of smart-cards for logical access: several European countries had already deployed large-scale electronic ID or eID systems for all citizens, with government services offered online and accessed using those cards.

Can you hear me now?

Another early application of smart-card technology for authentication appeared in wireless communication, with the ubiquitous SIM card. These cards hold the cryptographic secrets for authenticating the “handset”— in other words, a cell phone— to the wireless carrier providing service. Early SIM cards had the same dimensions as identity cards, called ID1. As cell-phones miniaturized to the point where some flip-phones were smaller than the card itself, SIM cards followed suit. The second generation of “mini-SIM” looked very much like a smart-card with the plastic surrounding the brass contact-plate trimmed. In fact many SIM cards are still delivered this way, as a full-size card with a SIM-sized section that can be removed. Over time standards were developed for even smaller form-factors designated “micro” and “nano” SIM, chipping away at the empty space surrounding the contact plate.

This underscores the point that most of the space taken up by the card is wasted; it is inert plastic. All of the functionality is concentrated in a tiny area where the contact plate and secure IC are located. The rest of the card can be etched, punched or otherwise cut with no effect on functionality. (Incidentally this statement is not true of contactless cards, because the NFC antenna runs along the circumference of the card. The antenna covers as much surface area as possible to maximize NFC performance: recall that these cards have no battery or other internal source of power. When operating in contactless mode, the chip relies on induction to draw power from the electromagnetic field generated by the reader. Antenna size is a major limiting factor, which is why most NFC tags and cards have a loop antenna that closely follows the external contours of their physical shape.)



Using Intel SGX for SSH keys (part II)

[continued from part I]

Features & cryptographic agility

A clear advantage of using an SGX enclave over a smart-card or TPM is the ability to support a much larger collection of algorithms. For example the Intel crypto-api-toolkit is effectively a wrapper over openssl or SoftHSMv2, which supports a wide spectrum of cryptographic primitives. By comparison most smart-card and TPM implementations support a handful of algorithms. Looking at generic signature algorithms, the US government PIV standard only defines RSA and ECDSA, with the latter limited to two curves: NIST P256 and P384. TPM2 specifications define RSA and ECDSA, with ECDSA again limited to the same two curves, with no guarantee that a given TPM model will support them all.

That may not sound too bad for the specific case of managing SSH client keys. OpenSSH did not even have the ability to use ECDSA keys from hardware tokens until recently, making RSA the only game in town. But it does raise a question about how far one can get with the vendor-defined feature set, or what happens in scenarios where more modern cryptographic techniques— such as pairing-based signatures or anonymous attestation— are required capabilities, instead of being merely preferred among a host of acceptable algorithms.


More importantly, end-users have a greater degree of control over extending the algorithms supported by a virtual token implemented in SGX. Since SGX enclaves run ordinary x86 code, adding one more signature algorithm such as Ed25519 comes down to adapting an existing C-language implementation to run inside the enclave. By contrast, end-users usually have no ability to customize code running inside the execution environment of a smart-card. It is often part of the security design for this class of hardware that end-users are not allowed to execute arbitrary code. They are limited to exercising functionality already present, effectively stuck with feature decisions made by the vendor.

Granted, smart-cards and TPMs are not made out of magic; they have an underlying programming model. At some point someone had the requisite privileges for authoring and loading code there. In principle one could start with the same blank-slate, such as a plain smart-card OS with JavaCard support and develop custom applets with all the desired functionality. While that is certainly possible, programming such embedded environments is unlikely to be as straightforward as porting ordinary C code.

It gets trickier when considering an upgrade of already-deployed cryptographic modules. Being able to upgrade code while keeping secret-key material intact is intrinsically dangerous— it allows replacing a legitimate application with a backdoored “upgrade” that simply exfiltrates keys or otherwise violates the security policy enforced by the original version. This is why in the common Global Platform model for smart-cards, there is no such thing as an in-place upgrade. An application can be deleted and a new one can be installed under the exact same identity. But this does not help an attacker, because the deletion will have removed all persistent data associated with the original. Simulating upgradeability in this environment requires a multiple-applet design, where one permanent applet holds all secrets while a second “replaceable” applet with the logic for using them communicates over IPC.

With SGX, it is possible to upgrade enclave logic depending on how secrets are sealed. If secrets are bound to a specific implementation, known as the MRENCLAVE measurement in Intel terminology, any change to the code will render them unusable. If they are only bound to the identity of the enclave author established by code signature— so-called MRSIGNER measurement— then implementation can be updated trivially, without losing access to secrets. But that flexibility comes with the risk that the same author can sign a malicious enclave designed to leak all secrets.
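The two sealing policies can be modeled with a toy key-derivation sketch. Real SGX derives sealing keys inside the CPU from the measurement registers; here plain hashes stand in for that derivation, purely for illustration:

```python
import hashlib

def sealing_key(policy: str, enclave_code: bytes, signer_id: bytes) -> bytes:
    # Toy derivation: MRENCLAVE binds to a hash of the exact code,
    # MRSIGNER only to the author's signing identity.
    if policy == "MRENCLAVE":
        measurement = hashlib.sha256(enclave_code).digest()
    else:  # MRSIGNER
        measurement = signer_id
    return hashlib.sha256(b"seal|" + measurement).digest()

v1, v2 = b"enclave code v1", b"enclave code v2"
signer = b"vendor-signing-identity"

# A code update changes the MRENCLAVE-derived key: old sealed data is lost...
assert sealing_key("MRENCLAVE", v1, signer) != sealing_key("MRENCLAVE", v2, signer)
# ...while the MRSIGNER-derived key survives the upgrade. The flip side: any
# other enclave signed with the same key can also derive it.
assert sealing_key("MRSIGNER", v1, signer) == sealing_key("MRSIGNER", v2, signer)
```

The two asserts mirror the trade-off described above: upgradeability and exposure to a malicious sibling enclave come as a package.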


When it comes to speed, SGX enclaves have a massive advantage over commonly available cryptographic hardware. Even with specialized hardware for accelerating cryptographic operations, the modest resources in an embedded smart-card controller are dwarfed by the computing power & memory available to an SGX enclave.

As an example: a 2048-bit RSA signature operation on a recent-generation Infineon TPM takes several hundred milliseconds, which is a noticeable delay during an SSH connection. (Meanwhile RSA key generation at that length can take half a minute.)

That slowdown may not matter for the specific use case we looked at, namely SSH client authentication, or even other client-side scenarios such as connecting to a VPN or TLS authentication in a web browser to access websites. In client scenarios, private key operations are infrequent. When they occur, they are often accompanied by user interaction such as a PIN prompt or certificate selection/confirmation dialog. Shaving milliseconds off an RSA computation is hardly useful when overall completion time is dominated by human response times.

That calculus changes if we flip the scenario and look at the server side. That machine could be dealing with hundreds of clients every second, each necessitating use of the server private-key. Overall performance becomes far more dependent on the speed of cryptography under these conditions. The difference between having that operation take place in an SGX enclave ticking along at the full speed of the main CPU versus offloaded to a slow embedded controller would be very noticeable. (This is why one would typically use a hardware-security module in PCIe card form factor for server scenarios, as they combine the security and tamper-resistance aspects with beefier hardware that can keep up with the load just fine. But an HSM hardly qualifies as “commonly available cryptographic hardware” given their cost and complex integration requirements.)

State of limitations

One limitation of SGX enclaves is their stateless nature. Recall that for the virtual PKCS#11 token implemented in SGX, the implementation creates the illusion of persistence by returning sealed secrets to the “untrusted” Linux application, which stores them on the local filesystem. When those secrets need to be used again, they are temporarily imported into the enclave. This has some advantages. In principle the token never runs out of space. By contrast a smart-card has limited EEPROM or flash for nonvolatile storage on-board. Standards for card applications may introduce their own limitations beyond that: for example the PIV standard defines 4 primary key slots, and some number of slots for “retired” keys, regardless of how much free space the card has.

TPMs present an interesting in-between case. The TPM2 standard uses a similar approach, allowing an unbounded number of keys by offloading responsibility for storage to the calling application. When keys are generated, they are exported in an opaque format for storage outside the TPM. These keys can be reanimated from that opaque representation when necessary. (For performance reasons, there is a provision for a handful of “persistent” objects that are kept in nonvolatile storage on the TPM, optimizing away the requirement to reload every time.)

PIN enforcement

But there is a crucial difference: TPMs do have local storage for state, which makes it possible to implement useful features that are not possible with pure SGX enclaves. Consider the simple example of PIN enforcement. Here is a typical policy:

  • Users must supply a valid PIN before they use private keys
  • Incorrect PIN attempts are tracked by incrementing a counter
  • To discourage guessing attacks, keys become “unusable” (for some definition of unusable) after 10 consecutive incorrect entries
  • Successful PIN entry resets the failure counter back to zero

This is a very common feature for smart-card applications, typically implemented at the global level of the card. TPMs have a similar feature called “dictionary-attack protection” or anti-hammering, with configurable parameters for failure count and lockout period during which all keys on that TPM become unusable when the threshold is hit. (For more advanced scenarios, it is possible to have per-application or per-key PINs. In the TPM2 specification, these are defined as a special type of NVRAM index.)
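The policy above is straightforward to express when the failure counter lives in storage the host cannot tamper with. Here is a minimal Python model of such a stateful token; the class and the threshold of 10 are illustrative, not any vendor's actual firmware:

```python
class StatefulToken:
    """Toy model of PIN enforcement on a card or TPM; the failure counter
    lives in nonvolatile storage the host cannot reach or reset."""

    MAX_FAILURES = 10

    def __init__(self, pin: str):
        self._pin = pin
        self._failures = 0

    def use_key(self, pin: str) -> bool:
        if self._failures >= self.MAX_FAILURES:
            raise RuntimeError("token locked")
        if pin != self._pin:
            self._failures += 1          # track the incorrect attempt
            return False
        self._failures = 0               # success resets the counter
        return True

token = StatefulToken("1234")
for _ in range(10):
    token.use_key("0000")                # ten consecutive wrong guesses...
try:
    token.use_key("1234")                # ...then even the right PIN is refused
    locked = False
except RuntimeError:
    locked = True
assert locked
```

The entire scheme hinges on `_failures` being unreachable from outside, which is exactly the guarantee tamper-resistant hardware provides and a stateless enclave cannot.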

It is not possible to implement that policy in an SGX enclave. The enclave has no persistent storage of its own to maintain the failure count. While it can certainly seal and export the current count, the untrusted application is free to “roll-back” state by using an earlier version where the count stands at a more favorable number.
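A toy sketch makes the rollback concrete. The sealed blob is integrity-protected, so the host cannot forge a lower count; but nothing stops it from replaying an older, perfectly valid blob. (The hash-based “sealing” here is a stand-in for the real CPU-derived sealing key.)

```python
import hashlib
import json

SEAL_KEY = b"enclave-sealing-key"      # stands in for the CPU-derived sealing key

def seal(state: dict) -> bytes:
    blob = json.dumps(state, sort_keys=True).encode()
    tag = hashlib.sha256(SEAL_KEY + blob).digest()
    return tag + blob                  # authenticated, but carries no freshness

def unseal(sealed: bytes) -> dict:
    tag, blob = sealed[:32], sealed[32:]
    assert tag == hashlib.sha256(SEAL_KEY + blob).digest(), "tampered"
    return json.loads(blob)

# The enclave seals the counter after each failed attempt...
older = seal({"failures": 0})
newer = seal({"failures": 2})

# ...but the untrusted host simply keeps the older blob and replays it.
assert unseal(older)["failures"] == 0   # rollback: the count is reset
```

Integrity protection alone cannot distinguish “current state” from “stale state”; that requires a monotonic counter or other persistent storage the host does not control.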

In fact even implementing the plain PIN requirement— without fancy lockout semantics— is tricky in SGX. In the example we considered, this is how it works:

  1. Enclave code “bundles” the PIN along with key bits, in a sealed object exported at time of key generation.
  2. When it is time to use that key, the caller must provide a valid PIN along with the exported object.
  3. After unsealing the object, the supplied PIN can be compared against the one previously set.
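The three steps can be sketched as follows, with hashes standing in for both the sealing operation and the signature; the names and the key-bits placeholder are invented for illustration:

```python
import hashlib
import json

SEAL_KEY = b"enclave-sealing-key"   # stands in for the CPU-derived sealing key

def seal(obj):
    # integrity-protected export to the untrusted host (toy construction)
    blob = json.dumps(obj, sort_keys=True).encode()
    return hashlib.sha256(SEAL_KEY + blob).digest() + blob

def unseal(sealed):
    tag, blob = sealed[:32], sealed[32:]
    assert tag == hashlib.sha256(SEAL_KEY + blob).digest(), "tampered"
    return json.loads(blob)

# Step 1: key generation bundles the PIN with the (placeholder) key bits.
exported = seal({"key": "key-bits", "pin": "1234"})

# Steps 2-3: to use the key, the caller supplies the PIN alongside the
# exported object; the enclave unseals and compares before "signing".
def sign(sealed, pin, message):
    obj = unseal(sealed)
    if pin != obj["pin"]:
        raise PermissionError("bad PIN")
    return hashlib.sha256((obj["key"] + message).encode()).hexdigest()

assert sign(exported, "1234", "hello")
try:
    sign(exported, "0000", "hello")
    assert False, "wrong PIN should be rejected"
except PermissionError:
    pass
```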

So far, so good. Now what happens when the user wants to change the PIN? One could build an API to unseal/reseal all objects with an updated PIN. Adding one level of indirection simplifies this process: instead of bundling the actual PIN, place a commitment to a different sealed object that holds the PIN. This reduces the problem to resealing one object for all keys associated with the virtual token. But it does not solve the core problem: there is no way to invalidate objects sealed under the previous PIN. In that sense, the PIN was not really changed. An attacker who learned the previous PIN and made off with the sealed representation of a key can use that key indefinitely.
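A short sketch shows why the change is cosmetic. With a toy sealing scheme (a hash standing in for the CPU-derived sealing key), a “PIN change” just produces a second sealed object; the first one, if previously copied by an attacker, remains as valid as ever:

```python
import hashlib
import json

SEAL_KEY = b"enclave-sealing-key"   # stands in for the CPU-derived sealing key

def seal(obj):
    blob = json.dumps(obj, sort_keys=True).encode()
    return hashlib.sha256(SEAL_KEY + blob).digest() + blob

def unseal(sealed):
    tag, blob = sealed[:32], sealed[32:]
    assert tag == hashlib.sha256(SEAL_KEY + blob).digest(), "tampered"
    return json.loads(blob)

def usable(sealed, pin):
    return unseal(sealed)["pin"] == pin

key_v1 = seal({"key": "key-bits", "pin": "1234"})   # copied by the attacker
key_v2 = seal({"key": "key-bits", "pin": "9999"})   # result of the "PIN change"

assert usable(key_v2, "9999")       # legitimate user with the new PIN
assert usable(key_v1, "1234")       # attacker's stale copy works indefinitely
```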

(You may be wondering how TPMs deal with this, considering they also rely on exporting what are effectively “sealed objects” by another name. The answer is that the TPM2 specification allows setting passwords on keys indirectly, by reference to an NVRAM index. The password set on that NVRAM index then becomes the password for the key. As the “Non-Volatile” part of that name implies, the NVRAM index itself is a persistent TPM object. Changing the passphrase on that index collectively changes the passphrase on all keys, without having to re-import/re-export anything.)

One could try to compensate for this by requiring that users pick high-entropy secrets, such as long alphanumeric passphrases. This effectively shifts the burden from machine to human. With an effective rate-limiting policy on the PIN, as implemented in smart-cards or TPMs, end-users can get away with low-entropy but more usable secrets. The tamper-resistance of the platform guarantees that after 10 tries, the keys will become unusable. Without such rate limiting, it becomes the users’ responsibility to ensure that an adversary free to make millions of guesses is still unlikely to hit on the correct one.
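Back-of-the-envelope arithmetic illustrates the trade-off, assuming a 4-digit PIN on one hand and a 12-character random alphanumeric passphrase on the other:

```python
# Probability an attacker recovers the secret, given the guesses allowed.
four_digit_pins = 10**4
passphrase_space = 62**12                # 12 random alphanumeric characters

rate_limited = 10 / four_digit_pins      # hardware locks after 10 attempts
unlimited = 10**6 / four_digit_pins      # a million guesses, no rate limit

assert rate_limited == 0.001             # 0.1% chance before lockout
assert unlimited >= 1                    # the PIN certainly falls
assert 10**6 / passphrase_space < 1e-15  # the passphrase still holds up
```

With rate limiting, even a 4-digit PIN leaves the attacker a one-in-a-thousand chance; without it, only the passphrase's raw entropy stands between the attacker and the key.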

Zombie keys

PIN enforcement is not the only area where statelessness poses challenges. For example, there is no easy way to guarantee permanent deletion of secrets from the enclave. As long as there is a copy of the signed enclave code and exported objects stashed away somewhere in the untrusted world, they can be reanimated by running the enclave and supplying the same objects again.

There is a global SGX version state that applies at the level of the CPU. Incrementing that will invalidate enclaves signed for the previous version. But this is a drastic measure that renders all SGX applications on that unit unusable.

Smart-cards and TPMs are much better at both selective and global deletion, since they have state. For example TPM2 can be cleared via firmware or by invoking the take-ownership command. Both options render all previous keys unusable. Similarly smart-card applications typically offer a way to explicitly delete keys or regenerate the key in a particular slot, overwriting its predecessor. (Of course there is also the nuclear option: fry the card in the microwave, which is still nowhere as wasteful as physically destroying an entire Intel CPU.)

Unknown unknowns: security assurance

There is no easy comparison on the question of security— arguably the most important criterion for deciding on a key-management platform. While Intel SGX is effectively a single product line (although microcode updates can result in material differences between versions), the market for secure embedded ICs is far more fragmented, with a variety of vendors supplying products at different levels of security assurance. Most products ship in the form of a complete, integrated solution encompassing everything from the hardware to the high-level application (such as for chip & PIN payments or identity-management) selected by the vendor. SGX on the other hand serves as a foundation for developers to build their own applications that leverage core functionality provided by Intel, such as sealed storage and attestation provided by the platform.

When it comes to smart-cards, there is little discretion left to the end-user in the way of software; in fact most products do not allow users to install any new code of their choosing. That is not a bug, it is a feature: it reduces the attack surface of the platform. In fact the inability to properly segment hostile applications was an acknowledged limitation in some smart-card platforms. Until version 3, JavaCard required the use of an “off-card verifier” before installing applets to guard against malicious bytecode. The unstated assumption is that the card OS could not be relied on to perform these checks at runtime and stop malicious applets from exceeding their privileges.

By contrast SGX is predicated on the idea that malicious or buggy code supplied by the end-user can peacefully coexist alongside a trusted application, with the isolation guarantees provided by the platform keeping the latter safe. In the comparatively short span SGX has been commercially available, a number of critical vulnerabilities were discovered in the x86 micro-architecture resulting in catastrophic failure of that isolation. To pick a few examples current as of this writing:

  • Foreshadow
  • SgxPectre
  • RIDL
  • PlunderVolt
  • CacheOut
  • CopyCAT

These attacks could be executed purely in software, in some cases by running unprivileged user code. Each time, Intel responded with microcode updates and in some cases future hardware improvements to address the vulnerabilities. By contrast most attacks against cryptographic hardware— such as side-channel observations or fault-injection— require physical access. Often they involve invasive techniques such as decapping the chip, which destroys the original unit and makes it difficult to conceal that an attack occurred.

While it is too early to extrapolate from the existing pattern of SGX vulnerabilities, the track-record confirms the expectation that being able to run code on the same platform as SGX does indeed translate into a significant advantage for attackers.


Using Intel SGX for SSH keys (part I)

Previous posts looked at using dedicated cryptographic hardware— smart-cards, USB tokens or the TPM— for managing key material related to common scenarios such as SSH, full-disk-encryption or PGP. Here we consider doing the same using a built-in feature of recent-generation Intel CPUs: Software Guard Extensions or SGX for short. The first part will focus on the mechanics of achieving that result using existing open-source software, while a follow-up post will compare SGX to alternative versions that leverage discrete hardware.

First let’s clarify the objective by drawing a parallel with smart-cards. The point of using a smart-card for storing keys is to isolate the secret material from code running on the untrusted host machine. While host applications can instruct the card to use those keys, for example to sign or decrypt a message, they remain at arm’s length from the key itself. In a properly implemented design, raw key-bits are only accessible to the card and cannot be extracted out of the card’s secure execution environment. In addition to simplifying key management by guaranteeing that only one copy of the key exists at all times, this reduces exposure of the secret to malicious host applications, which are prevented from making additional copies for future exploitation.

Most of this translates directly to the SGX context, except for how the boundary is drawn. SGX is not a separate piece of hardware but a different execution mode of the CPU itself. The corresponding requirement can be rephrased as: manage keys such that raw keys are only accessible to a specific enclave, while presenting an “oracle” abstraction to other applications running on the untrusted commodity OS.

The idea of using a secure execution environment as “virtual” cryptographic hardware is so common that one may expect to find an existing solution for this. Sure enough, a quick search for “PKCS11 SGX” turns up two open-source projects on Github. The first appears to be a work-in-progress that is not quite functional at this time. The second is more promising: called crypto-api-toolkit, the project is under the official Intel umbrella at Github and features a full-fledged implementation of a cryptographic token as an enclave, addressable through a PKCS#11 interface. This property is crucial for interoperability, since most applications on Linux are designed to access cryptographic hardware through a PKCS#11 interface. That long list includes OpenSSH (client and server), browsers (Firefox and Chrome) and VPN clients (the reference OpenVPN client as well as openconnect, which is compatible with Cisco VPN appliances). This crypto-api-toolkit project turns out to check all the necessary boxes.


This PoC is based on an earlier version of the code-base which runs openssl inside the enclave. The latest version on Github has switched to SoftHSMv2 as the underlying engine. (In many ways that is a more natural choice, considering SoftHSM itself aims to be a pure, portable simulation of a cryptographic token intended for execution on commodity CPUs.)

Looking closer at the code, there are a number of minor issues that prevent direct use of the module with common applications for manipulating tokens such as the OpenSC suite.

  • crypto-api-toolkit has some unusual requirements around object attributes, which are above and beyond what the PKCS#11 specification demands
  • While the enclave is running a full-featured version of openssl, the implementation restricts the available algorithms and parameters. For example it arbitrarily restricts elliptic-curve keys to a handful of curves, even though openssl recognizes a large collection of curves by OID.
  • A more significant compatibility issue lies in the management of object attributes. The stock implementation does not support the expected way of exporting EC public-keys, namely by querying for a specific attribute on the public-key object.

After a few minor tweaks [minimal patch] to address these issues, the SSH use-case works end-to-end, even if it does not satisfy every PKCS#11 conformance subtlety.

Kicking the tires

The first step is building and installing the project. This creates all of the necessary shared libraries, including the signed enclave, and installs them in the right location, but does not yet create a virtual token. The easiest way to do that is to run the sample PKCS#11 application included with the project.


Initialize a virtual PKCS#11 token implemented in SGX


Now we can observe the existence of a new token and interrogate it. pkcs11-tool utility from the OpenSC suite comes in handy for this. For example we can query for supported algorithms, also known as “mechanisms” in PKCS#11 terminology:


New virtual token visible and advertising different algorithms.

(Note that algorithms recognized by OpenSC are listed by their symbolic name such as “SHA256-RSA-PKCS” while newer algorithms such as EdDSA are only shown by numeric ID.)

This token however is not yet in a usable state. Initializing a token defines the security-officer (SO) role, which is the PKCS#11 equivalent of the administrator. But the standard “user” role must first be initialized by the SO with a separate call. A quick search shows that the sample application uses the default SO PIN of 12345678:


Initializing the PKCS#11 user role.

With the user role initialized, it is time for generating some keys:


RSA key generation using an SGX enclave as PKCS#11 token


The newly created keypair is reflected in the appearance of corresponding files on the local file system. Each token is associated with a subdirectory under “/opt/intel/crypto-api-toolkit/tokens” where metadata and objects associated with the token are persisted. This is necessary because unlike a smart-card or USB token, an enclave does not have its own dedicated storage. Instead any secret material that needs to be persisted must be exported in sealed state and saved by the untrusted OS. Otherwise any newly generated object would cease to exist once the machine is shut down.

Next step is enumerating the objects created and verifying that they are visible to the OpenSSH client:


Enumerating PKCS#11 objects (with & without login) and retrieving the public-key for SSH usage

In keeping with common convention, the RSA private-key has the CKA_PRIVATE attribute set. It will not be visible when enumerating objects unless the user first logs in to the virtual token. This is why the private key object is only visible in the second invocation.

OpenSSH can also see the public-key and deems this RSA key usable for authentication. Somewhat confusingly, ssh-keygen with the “-D” argument does not generate a new key as implied by the command name. It enumerates existing keys available on all available tokens associated with the given PKCS#11 module.

We can add this public-key to a remote server and attempt a connection to check whether the OpenSSH client is able to sign with the key. While Github does not provide interactive shells, it is arguably the easiest way to check the usability of SSH keys:


SSH using private-key managed in SGX

Beyond RSA

Elliptic curve keys also work:


Elliptic-curve key generation using NIST P-256 curve

Starting with release 8.0, OpenSSH can use elliptic curve keys on hardware tokens. This is why the patch adds support for querying the CKA_EC_POINT attribute on the public-key object, by defining a new enclave call to retrieve that attribute. (As an aside: while that follows the existing pattern for querying the CKA_EC_PARAMS attribute, this is an inefficient design. These attributes are neither sensitive nor variable over the lifetime of the object. In fact there is nothing sensitive about a public-key object that requires calling into the enclave. It would have been much more straightforward to export this object once and for all in the clear, for storage on the untrusted side.)
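For reference, PKCS#11 defines CKA_EC_POINT as the DER encoding of the EC point, which for the uncompressed form is an OCTET STRING wrapping 0x04 || X || Y. A short sketch with made-up coordinates (note the coincidence that both the uncompressed-point prefix and the DER OCTET STRING tag happen to be the byte 0x04):

```python
# A P-256 public key as PKCS#11 represents it: CKA_EC_POINT is the DER
# encoding of the point, i.e. an OCTET STRING wrapping 0x04 || X || Y.
x = (17).to_bytes(32, "big")   # toy coordinates, not a real key
y = (42).to_bytes(32, "big")

point = b"\x04" + x + y        # uncompressed point: 65 bytes for P-256
# DER: tag 0x04 (OCTET STRING), then a one-byte length for payloads < 128
ec_point_attr = b"\x04" + bytes([len(point)]) + point

assert len(point) == 65
assert len(ec_point_attr) == 67
```

None of these bytes are secret, which is the point of the aside above: fetching them requires no enclave round-trip in principle.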

These ECDSA keys are also usable for SSH with more recent versions of OpenSSH:


More recent versions of OpenSSH support using ECDSA keys via PKCS#11

Going outside the SSH scenario for a second, we can also generate elliptic-curve keys over a different curve such as secp256k1. While that key will not be suitable for SSH, it can be used for signing cryptocurrency transactions:


Generating and using an ECDSA key over secp256k1, commonly used for cryptocurrency applications such as Bitcoin

While this proof-of-concept suggests that it is possible to use an SGX enclave as a virtual cryptographic token, it is a different question how that compares to using real dedicated hardware. The next post will take up that comparison.




Helpful deceptions: location privacy on mobile devices


Over at the New York Times, an insightful series of articles on privacy continues to give consumers disturbing peeks at how the sausage gets made in the surveillance capitalism business. The installment on mobile location tracking is arguably one of the more visceral, demonstrating the ability to isolate individuals as they travel from sensitive locations— the Pentagon, White House, CIA parking-lot— back to their own residence. This type of surveillance capability is not in the hands of a hostile nation state (at least not directly; there is no telling where the data ends up downstream after it is repeatedly repurposed and sold). It is masterminded by run-of-the-mill technology companies foisting their surveillance operation on unsuspecting users in the guise of helpful mobile applications.

But the NYT misses the mark on how users can protect themselves. Its self-help guide dutifully points out that users can selectively disable location permission for apps:

The most important thing you can do now is to disable location sharing for apps already on your phone.

Many apps that request your location, like weather, coupon or local news apps, often work just fine without it. There’s no reason a weather app, for instance, needs your precise, second-by-second location to provide forecasts for your city.

This is correct in principle. For example Android makes it possible to view which apps have access to location and retract that permission anytime. The only problem is many apps will demand that permission right back or refuse to function. This is a form of institutionalized extortion, normalized by the expectation that most applications are “free,” which is to say they are subsidized by advertising that in turn draws on pervasive data collection. App developers withhold useful functionality from customers unless the customer agrees to give up their privacy and capitulates to this implicit bargain.

Interestingly there is a more effective defense available to consumers on Android, but it is currently hampered by a half-baked implementation. Almost accidentally, Android allows designating an application to provide alternative location information to the system. This feature is intended primarily for those developing Android apps, buried deep under developer options.


It is helpful for an engineer developing an Android app in San Francisco to be able to simulate how her app will behave for a customer located in Paris or Zanzibar, without so much as getting out of her chair. Not surprisingly there are multiple options in the Play Store that help set artificial locations and even simulate movement. Here is Location Changer configured to provide a static location:


(There would be a certain irony in this app being advertising supported, if it were not common for privacy-enhancing technologies to be subsidized by business models not all that different from the ones they are allegedly protecting against.)

At first this looks like a promising defense against pervasive mobile tracking. Data-hungry tracking apps are happy, still operating under the impression that they retain their entitlement to location data and can track users at will. (There is no indication that the data is coming not from the GPS but from another mobile app.) Because that data no longer reflects the actual position of the device, its disclosure is harmless.

That picture breaks down quickly on closer look. The first problem is that the simulated location is indiscriminately provided to all apps. That means not only invasive surveillance apps but also legitimate apps with perfectly good justification for location data will receive bogus information. For example here is Google Maps also placing the user in Zanzibar, somewhat complicating driving directions:


The second problem is that common applications providing simulated location only have rudimentary capabilities, such as reporting a fixed location or simulating motion along a simple linear path— one that goes straight through buildings, tunnels under natural obstacles and crosses rivers. It would be trivial for apps to detect such anomalies and reject the location data or respond with additional prompts to shame the device owner into providing true location. (Most apps do not appear to be making that effort today, probably because few users have resorted to this particular subterfuge. But under an adversarial model, we have to assume that once such tactics are widespread, surveillance apps will respond by adding such detection capabilities.)
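For illustration, here is a minimal sketch of the kind of sanity check a surveillance app could run against its location feed, flagging traces that are pinned to one spot or that imply impossible travel speeds. The thresholds and trace format are arbitrary assumptions, not taken from any real app:

```python
import math

# Great-circle distance in kilometers between two (lat, lon) points.
def haversine_km(p, q):
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(a))

def looks_simulated(fixes, max_kmh=1000.0):
    """Flag a trace of (timestamp_sec, lat, lon) fixes as suspicious."""
    legs = list(zip(fixes, fixes[1:]))
    # Anomaly 1: device pinned to the same spot for the entire trace.
    if all(haversine_km(a[1:], b[1:]) < 0.001 for a, b in legs):
        return True
    # Anomaly 2: a leg implying implausible speed ("teleportation").
    for (t1, *p), (t2, *q) in legs:
        hours = (t2 - t1) / 3600
        if hours > 0 and haversine_km(p, q) / hours > max_kmh:
            return True
    return False
```

A trace that jumps from Paris to Zanzibar in a minute, or one that never moves at all, trips these checks; an ordinary walking trace does not.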

What is required is a way to provide realistic location information that is free of anomalies, such as a device stuck at the same location for hours or suddenly “teleported” across hundreds of miles. Individual consumers have access to a relatively modest-sized corpus of such data— their own past history. In theory we can all synthesize realistic-looking location data for the present by sampling and remixing past location history. This solution is still unsatisfactory, since it is built on data sampled from a uniquely identifiable individual. That holds even if the simulator app is replaying an ordinary day in the life over and over again in a Groundhog Day loop. The replay may hold no new information about the person's current whereabouts, but it still reveals information about them. For example, the simulated day will likely start and end at their home residence. What is needed is a way to synthesize realistic location information based on actual data from other people.

Of course a massive repository of such information exists in the hands of the one company that arguably bears most responsibility for creating this problem in the first place: Google. Because Google collects location information from hundreds of millions of iPhone and Android users, the company could craft realistic location data to help users renegotiate the standard extortion terms with apps by feeding them simulated data.

Paradoxically, Google as a platform provider is highly motivated to not provide such assistance. That is a consequence of the virtuous cycle that sustains platforms such as Android: more users make the platform attractive to developers, who are incentivized to write new apps, and the resulting ecosystem of apps in turn makes the platform appealing to users. In a parallel to what has been called the original sin of the web— reliance on free content subsidized by advertising— that ecosystem of mobile apps is largely built around advertising, which is in turn fueled by surveillance of users. Location data is a crucial part of that surveillance operation. Currently developers face a simple, binary model for access to location. Either location data is available, as explicitly requested by the application manifest, or it has been denied, in the unlikely scenario of a privacy-conscious user who has read one too many troubling articles on privacy. There is no middle ground where convincing but bogus location data is substituted to fool the application at the user’s behest. Enabling that option would clearly improve privacy for end-users. But it would also rain on the surveillance business model driving the majority of mobile apps.

This is a situation where the interests of end-users and application developers are in direct conflict. Neither group has a direct business relationship with Google— no one buys a software license for their copy of Android, and only a fraction of users have paid subscriptions from Google. Past history here is not encouraging. Unless a major PR crisis or regulatory intervention forces their hand, platform owners side with app developers, for good reason. Compared to the sporadic hand-wringing about privacy among consumers, professional developers are keenly aware of their bottom line at all times. They will walk away from a platform if it becomes too consumer-friendly and interferes with the cavalier tracking and data-collection practices that keep advertising-supported business models afloat.


A clear view into AI risks: watching the watchers

A recent NYT expose on ClearView only scratches the surface of the problems with outsourcing critical law-enforcement functions to private companies. To recap: ClearView AI is possibly the first startup to have commercialized face-recognition-as-a-service (FRaaS?) and is riding high on a recent string of successes with police departments in the US. The usage model could not be any easier: upload an image of a person of interest, and ClearView locates other pictures of the same person from its massive database of images scraped from public sources such as social media. Imagine going from a grainy surveillance image taken from a security camera to the LinkedIn profile of the suspect. It is worth pointing out that the services hosting the original images, including Facebook, were none too happy about the unauthorized scraping. Nor was there any consent from users to participate in this AI experiment; as with all things social media, privacy is just an afterthought.

Aside from the blatant disregard for privacy, what could go wrong here?
The NYT article already hints at one troubling dimension of the problem. While investigating ClearView, the NYT journalist asked various members of police departments with authorized access to the system to search for himself. This experiment initially turned up several hits as expected, demonstrating the coverage of the system. But halfway through the experiment, something strange happened: suddenly the author “disappeared” from the system, with no information returned on subsequent searches, even when using the same image successfully matched before. No satisfactory explanation for this came forward. At first it was chalked up to a deliberate “security feature” where the system detects and blocks unusual patterns of queries— presumably the same image being searched repeatedly? Later the founder claimed it was a bug, and it was eventually resolved. (Reading between the lines suggests a more conspiratorial interpretation: ClearView got wind of a journalist writing an expose about the company and decided to remove some evidence that demonstrates the uncanny coverage of its database.)

Going with Hanlon’s razor and attributing this case of the “disappearing” person to an ordinary bug, the episode highlights two troubling issues:

  • ClearView learns which individuals are being searched
  • ClearView controls the results returned

Why is this problematic? Let’s start with the visibility issue, which is practically unavoidable. It means that a private company effectively knows who is under investigation by law enforcement and in which jurisdiction. Imagine if every police department CCed Facebook every time it sent an email announcing that it was opening an investigation into citizen John Smith. That is a massive amount of trust placed in a private entity that is neither accountable to public oversight nor constrained in what it can do with that information.

Granted there are other situations when private companies are necessarily privy to ongoing investigations. Telcos have been servicing wiretaps and pen-registers for decades and more recently ISPs have been tapped as a treasure trove of information on the web-browsing history of their subscribers. But as the NYT article makes clear, ClearView is no Facebook or AT&T. Large companies like Facebook, Google and Microsoft receive thousands of subpoenas every year for customer information, and have developed procedures over time for compartmentalizing the existence of these requests. (For the most sensitive category of requests such as National Security Letters and FISA warrants, there are even more restrictive procedures.) Are there comparable internal controls at ClearView? Does every employee have access to this information stream? What happens when one of those employees or one of their friends becomes the subject of an investigation?

For that matter, what prevents ClearView from capitalizing on its visibility into law-enforcement requests and trying to monetize both sides of the equation? What prevents the company from offering an “advance warning” service— for a fee of course— to alert individuals whenever they are being investigated?

Even if one posits that ClearView will act in an aboveboard manner and refrain from abusing its visibility into ongoing investigations for commercial gain, there is the question of operational security. Real-time knowledge of law-enforcement actions is too tempting a target for criminals and nation states alike to pass up. What happens when ClearView is breached by the Russian mob or an APT group working on behalf of China? One can imagine face-recognition systems also being applied in counter-intelligence scenarios to track foreign agents operating on US soil. If you are the nation sponsoring those agents, you want to know when their names come under scrutiny. More importantly, you care whether it is the Poughkeepsie police department or the FBI asking the questions.

Being able to modify search results has equally troubling implications. It is a small leap from alerting someone that they are under investigation to withholding results or better yet, deliberately returning bogus information to throw off an investigation or frame an innocent person. The statistical nature of face-recognition and incompleteness of a database cobbled together from public sources makes it much easier to hide such deception. According to the Times, ClearView returns a match only about 75% of the time. (The article did not cite a figure for the false-positive rate, where the system returns results which are later proven to be incorrect.) Results withheld on purpose to protect designated individuals can easily blend in with legitimate failures to identify a face. Similarly ClearView could offer “immunity from face recognition” under the guise of Right To Be Forgotten requests, offering to delete all information about a person from their database— again for a fee presumably.
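The statistics make such deception even easier to hide than the raw hit rate suggests. As a hypothetical illustration of how much weight a returned match deserves: only the 75% hit rate below comes from the article; the false-positive rate and the prior are made-up assumptions for the sake of the arithmetic.

```python
def posterior_match_is_correct(hit_rate, false_positive_rate, prior_in_db):
    """P(queried person really is in the database | system returned a match),
    by Bayes' rule over the two ways a match can be produced."""
    true_pos = hit_rate * prior_in_db
    false_pos = false_positive_rate * (1 - prior_in_db)
    return true_pos / (true_pos + false_pos)

# Suppose only 1 in 100 queried faces is actually in the database and the
# system falsely "matches" 5% of strangers. Then most matches are wrong:
p = posterior_match_is_correct(0.75, 0.05, 0.01)   # roughly 0.13
```

Under these (illustrative) numbers, fewer than one in seven returned matches identifies the right person, and any deliberately suppressed or planted result is statistically invisible in the noise.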

As before, even if ClearView avoids such dubious business models and remains dedicated to maintaining the integrity of its database, attackers who breach ClearView infrastructure can not be expected to have similar qualms. A few tweaks to metadata in the database could be enough to skew results. Not to mention that a successful breach is not even necessary to poison the database: Facebook and LinkedIn are full of fake accounts with bogus information. Criminals have almost certainly been building such fake online personae by mashing together bits of “true” information from different individuals.

This is a situation where ClearView spouting bromides about the importance of privacy and security will not cut it. Private enterprises afforded this much visibility into active police investigations and with this much influence over the outcome of those investigations need oversight. At a minimum companies like ClearView must be prevented from exploiting their privileged role for anything other than the stated purpose— aiding US law enforcement agencies. They need periodic independent audits to verify that sufficient security controls exist to prevent unauthorized parties from tapping into the sensitive information they are sitting on or subverting the integrity of results returned.



Filecoin, StorJ and the problem with decentralized storage (part II)

[continued from part I]

Quantifying reliability

Smart-contracts can not magically prevent hardware failures or compel a service provider at gunpoint to perform the advertised services. At best blockchains can facilitate contractual arrangements with a fairness criterion: the service provider gets paid if and only if they deliver the goods. Proofs-of-storage verified by a decentralized storage chain are an example of that model. It keeps service providers honest by making their revenue contingent on living up to the stated promise of storing customer data. But as the saying goes, past performance is no guarantee of future results. A service provider can produce the requisite proofs 99 times and then report all data lost when it is time for the next one. This can happen because of an “honest” mistake or, more troubling, because it is more profitable for decentralized providers to break existing contracts.

When it comes to honest mistakes—random hardware failures resulting in unrecoverable data loss— the guarantees that can be provided by decentralized storage alone are slightly weaker. This follows from a limitation of existing decentralized designs: their inability to express the reliability of storage systems, except in the most rudimentary ways. All storage systems are subject to risks of hardware failure and data loss. That goes for AWS and Google too. For all the sophistication of their custom-designed hardware, they are still subject to the laws of physics. There is still a mean-time-to-failure associated with every component. It follows there must be a design in place to cope with those failures across the board, ranging from making regular backups to having diesel generators ready to kick in when grid power fails. We take for granted the existence of this massive infrastructure behind the scenes when dealing with the likes of Amazon. There is no such guarantee for a random counterparty on the blockchain.

Filecoin uses a proof-of-replication intended to show that not only does the storage provider have the data but that they have multiple copies. (Ironically that imposes even more work on the storage provider to format data for storage— otherwise they could fool the test by re-encrypting one copy into multiple replicas on demand— further pushing the economics away from the allegedly zero marginal cost.) That may seem comparable to the redundancy of AWS but it is not. Five disks sitting in the same basement hooked up to the same PC can claim “5-way replication.” But that is not meaningful redundancy, because all five copies are subject to correlated risk: one lightning-strike or ransomware infection away from total data loss. By comparison Google operates data-centers around the world and can afford to put each of those five copies in a different facility. Each one of those facilities still has a non-zero chance of burning to the ground or losing power during a natural disaster. But as long as the locations are far enough from each other, those risks are largely uncorrelated. That key distinction is lost in the primitive notion of “replication” expressed by smart-contracts.
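The difference between correlated and independent replicas is easy to quantify. A back-of-the-envelope sketch, assuming a purely illustrative 1% annual chance of catastrophic loss per site:

```python
def p_all_replicas_lost_independent(p_site_loss, replicas):
    # With independent failure domains, data is lost only if every
    # replica's site fails in the same year.
    return p_site_loss ** replicas

# Five copies spread across independent facilities, each with a 1% annual
# chance of catastrophic loss:
spread_out = p_all_replicas_lost_independent(0.01, 5)   # about 1e-10

# Five copies in the same basement share one failure domain: a single
# lightning strike or ransomware infection takes out all of them, so the
# effective annual loss probability stays at the single-site rate.
same_basement = 0.01

ratio = same_basement / spread_out   # on the order of 100 million times worse
```

The point is not the specific numbers but the shape of the math: "5-way replication" in one basement buys essentially nothing, while the same replica count across independent sites multiplies the exponent.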

Unreliable by design

Reliability questions aside, there is a more troubling problem with the economics of decentralized storage. It may well be the most rational— read: profitable— strategy to operate an unreliable service deliberately designed to lose customer data. Here are two hypothetical examples to demonstrate the notion that on a blockchain, there is no success like failure.

Consider a storage system designed to store data and publish regular proofs of storage as promised, but with one catch: it would never return that data if the customer actually requested it. (From the customer perspective: you have backups but, unbeknownst to you, they are unrecoverable.) Why would this design be more profitable? Because streaming a terabyte back to the customer is dominated by an entirely different type of operational expense than storing that terabyte in the first place: network bandwidth. It may well be profitable to set up a data-storage operation in the middle of nowhere with cheap real-estate and abundant power but expensive bandwidth. Keeping data in storage while publishing the occasional proof involves very little bandwidth, because proof-of-storage protocols are very space-efficient. The only problem comes up if the customer actually wants their entire data streamed back. At that point a different cost structure involving network bandwidth comes into play, and it may well be more profitable to walk away.

To make this more concrete: at the time of writing AWS charges ~1¢ per gigabyte per month for “infrequently accessed” data but 9¢ per gigabyte of data sent outbound over the network. Conveniently, inbound traffic is free; uploading data to AWS costs nothing. As long as the prevailing Filecoin market price is higher than S3 prices, one can operate a Filecoin storage miner on AWS to arbitrage the difference— reselling cloud storage much as Dropbox did before figuring out how to operate its own datacenter. The only problem with this model is if the customer comes calling for their data too early or too often. In that case the bandwidth costs may well disrupt the profitability equation. If streaming the data back would lose money overall on that contract, the rational choice is to walk away.
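A rough sketch of that profitability equation, using the AWS prices quoted above and a hypothetical market rate of 2¢ per gigabyte-month for the storage contract:

```python
def contract_profit_usd(gb, months, price_per_gb_month,
                        storage_cost=0.01,    # $/GB-month, quoted above
                        egress_cost=0.09,     # $/GB outbound, quoted above
                        customer_retrieves=False):
    revenue = gb * months * price_per_gb_month
    cost = gb * months * storage_cost
    if customer_retrieves:
        cost += gb * egress_cost   # streaming the whole dataset back once
    return revenue - cost

# 1 TB stored at a (hypothetical) market price of 2¢/GB-month:
quiet = contract_profit_usd(1000, 12, 0.02)                           # +$120
noisy = contract_profit_usd(1000, 12, 0.02, customer_retrieves=True)  # +$30
early = contract_profit_usd(1000, 1, 0.02, customer_retrieves=True)   # -$80
```

The same contract flips from profitable to loss-making depending only on whether, and how soon, the customer asks for their data back, which is exactly the incentive to walk away.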

Walking away from the contract for profit

Recall that storage providers are paid in funny money, namely the utility token associated with the blockchain. That currency is unlikely to work for purchasing anything in the real world and must be converted into dollars, euros or some other currency accepted by the utility company to keep the datacenter lights on. That conversion in turn hinges on a volatile exchange rate. While there are reasonably mature markets in major cryptocurrencies such as Bitcoin and Ethereum, the tail-end of the ICO landscape is characterized by thin order-books and highly speculative trading. Against the backdrop of what amounts to an extreme version of FX risk, the service provider enters into a contract to store data for an extended period of time, effectively betting that the economics will work out. It need not be profitable today, but perhaps it is projected to become profitable in the near future based on rosy forecasts of prices going to the moon. What happens if that bet proves incorrect? Again the rational choice is to walk away from the contract and drop all customer data.

For that matter, what happens when a better opportunity comes along? Suppose the exchange rate is stable, or those risks are managed using a stablecoin, while the market value of storage increases. Buyers are willing to pay more of the native currency per byte of data stashed away. Or another blockchain comes along, promising more profitable utilization of spare disk capacity. That may seem like great news for the storage provider except for one problem: they are stuck with existing customers paying the lower rates negotiated earlier. The optimal choice is to renege on those commitments: delete existing customer data and reallocate the scarce space to higher-paying customers.

It is not clear if blockchain incentives can be tweaked to discourage this without creating unfavorable dynamics for honest service providers. Suppose we impose penalties on providers for abandoning storage contracts midway. These penalties can not be clawback provisions for past payments: the provider may well have already spent that money to cover operational expenses. For the same reason, it is not feasible to withhold payment until the very end of the contract period without creating the risk that the buyer walks away. Another option is requiring service providers to put up a surety bond. Before they are allowed to participate in the ecosystem, they must set aside a lump sum on the blockchain, held in escrow. These funds would be used to compensate any customers harmed by failure to honor storage contracts. But this has the effect of creating additional barriers to entry and locking away capital in a very unproductive way. Similarly, the idea of taking monetary damages out of future earnings does not work. It seems plausible that if a service provider screws over Alice because Bob offered a better price, recurring fees paid by Bob should be diverted to compensate Alice. But the service provider can trivially circumvent that penalty while still doing business with Bob: just start over with a new identity completely unlinkable to that “other” provider who screwed over Alice. To paraphrase the New Yorker cartoon on identity: on the blockchain, nobody knows you are a crook.

Reputation revisited

Readers may object: surely such an operation will go out of business once the market recognizes their modus operandi and no one is willing to entrust them with storing data? Aside from the fact that the lack of identity on a blockchain renders “going out of business” meaningless, this posits there is such a thing as “reputation” that buyers take into account when making decisions. The whole point of operating a storage market on chain is to allow customers to select the lowest bidder while relying on the smart-contract logic to guarantee that both sides hold up their end of the bargain. But if we are to invoke some fuzzy notion of reputation as the selection criterion for service providers, why bother with a blockchain? Amazon, MSFT and Google have stellar reputations for delivering high-reliability, low-cost storage, with no horror stories of customers randomly getting ripped off because Google decided one day it would be more profitable to drop all of their files. Not to mention, legions of plaintiffs’ attorneys would have a field day with any US company that reneged on contracts in such cavalier fashion, assuming a newly awakened FTC did not get in on the action first. There is no reason to accept the inefficiencies of a blockchain or invoke elaborate cryptographic proofs if reputation is a good enough proxy.



Filecoin, StorJ and the problem with decentralized storage (part I)

Blockchains for everything

Decentralized storage services such as Filecoin and StorJ seek to disrupt the data-storage industry, using blockchain tokens to create a competitive marketplace that can offer more space at lower cost. They also promise to bring a veneer of legitimacy to the Initial Coin Offering (ICO) space. At a time when ICOs were being mass-produced as thinly-veiled, speculative investment vehicles likely to run afoul of the Howey test as unregistered securities, file-storage looks like a shining example of an actual utility token, one with some actual utility. Instead of betting on the “greater fool” theory of offloading the token on the next person willing to pay a higher price, these tokens are good for a useful service: paying someone else to store your backups. This blog post looks at some caveats and overlooked problems in the design.

Red-herring: privacy

A good starting point is to dispel the alleged privacy advantage. Decentralized storage systems often tout their privacy advantage: data is stored encrypted by its owner, such that the storage provider can not read it even if they wanted to. That may seem like an improvement over the current low bar, which relies on service providers swearing on a stack of pre-IPO shares that, pinky-promise, they never dip into customer data for business advantage— a promise more often honored in the breach, as the examples of Facebook and Google repeatedly demonstrate. But there is no reason to fundamentally alter the data-storage model to achieve E2E security against rogue providers. While far from being the path of least resistance, there is a long history of alternative remote backup services such as tarsnap for privacy-conscious users. (All 17 of them.) Previous blog posts here have demonstrated that it is possible to implement bring-your-own-encryption with vanilla cloud storage services such as AWS, such that the cloud service is a glorified remote drive storing random noise it can not make sense of. These models are far more flexible than the arbitrary, one-size-fits-all encryption model hard-coded into protocols such as StorJ. Users are free to adopt their preferred scheme, compatible with their existing key-management model. For example, with AWS Storage Gateway, Linux users can treat cloud storage as an iSCSI volume with LUKS encryption, while those on Windows can apply BitLocker-To-Go to protect that volume exactly as they would encrypt a USB thumb-drive. Backing up rarely accessed data in an enterprise is even easier: nothing fancier than scripts to GPG-sign and encrypt backups before uploading them to AWS/Azure/GCP is necessary.

Facing the competition

Once we accept the premise that privacy alone can not be a differentiator for backup services—users can already solve that problem without depending on the service provider—the competitive landscape reverts to that of a commodity service. Roughly speaking, providers compete on three dimensions: reliability, cost and speed.

  • Cost is the price paid for storing each gigabyte of data for a given period of time.
  • Speed refers to how quickly that data can be downloaded when necessary and to a lesser extent, how quickly it can be uploaded during the backup process.
  • Reliability is the probability of being able to get all of your data back whenever you need it. A company that retains 99.999% of customer data while irreversibly losing the remaining 0.001% will not stay in business long. Even a 100% retention rate is not great if the service only operates from 9AM to 4PM.
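Reliability compounds quickly at scale, which is why the number of nines matters so much. A back-of-the-envelope sketch, using the 99.999% retention figure above and an assumed, purely illustrative count of 100,000 files per customer:

```python
def p_keep_everything(per_object_durability, n_objects):
    # Probability that every one of n independent objects survives the year.
    return per_object_durability ** n_objects

# With "five nines" of per-file retention, a customer with 100,000 backed-up
# files loses at least one file in a year more often than not:
p = p_keep_everything(0.99999, 100_000)   # roughly 0.37, close to 1/e
```

In other words, a durability figure that sounds impressive per object still translates into a near-two-in-three chance of losing something once the object count is large enough.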

The economic argument against decentralized storage can be stated this way: it is very unlikely that a decentralized storage market can offer an alternative that can compete against centralized providers— AWS, Google, Azure— when measured on any of these dimensions. (Of course nothing prevents Amazon or MSFT from participating in the decentralized marketplace to sell storage, but this would be another example of doing with increased friction something on a blockchain that can be done much more efficiently via existing channels.)

Among the three criteria, cost is the easiest to forecast. Here is the pitch from the StorJ website:

“Have unused hard drive capacity and bandwidth?
Storj pays you for your unused hard drive capacity and bandwidth in STORJ tokens!”

Cloud services are ruled by ruthless economies of scale. This is where Amazon, Google, MSFT and a host of other cloud providers shine, reaping the benefits of investment in data-centers and petabytes of storage capacity. Even if we ignore the question of reliability, it is very unlikely that a hobbyist with a few spare drives sitting in their basement can achieve a lower per-gigabyte cost.

The standard response to this criticism is to point out that decentralized storage can unlock spare, unused capacity at zero marginal cost. Returning to our hypothetical hobbyist: he need not add new capacity to compete with AWS. Let us assume he owns excess storage, already paid for, that sits underutilized; there is only so much space you can take up with vacation pictures. Disks consume about the same energy whether they are 99% or 1% full. Since the user is currently getting paid exactly $0 for that spare capacity, any value above zero is a good deal, according to this logic. In that case, any price point is achievable, including one that undercuts even the most cost-effective cloud provider. Our hobbyist can temporarily boot up those ancient PCs, stash away data someone on the other side of the world is willing to pay to safeguard and shut down the computer once the backups are written. The equipment remains unplugged from the wall until such time as the buyer comes calling for their data.

Proof-of-storage and cost of storage

The problem with this model is that decentralized storage demands much more than mere inert storage of bits. It must achieve reliability in the absence of the usual contractual relationship, namely someone you can sue for damages if the data disappears. Instead the blockchain itself must enforce fairness in the transaction: the service provider gets paid only if they are actually storing the data entrusted for safeguarding. Otherwise the provider could pocket the payment, discard the uploaded data and put that precious disk space to some other use. Solving this problem requires a cryptographic technique called proofs-of-data-possession (PDP), also known as proofs-of-storage. Providers periodically run a specific computation over the data they promised to store— a computation that is only possible if they still have 100% of that data— and publish the results on the blockchain, which in turn facilitates payment conditional on periodic proofs. Because the data-owner can observe these proofs, they are assured that their precious data is still around. The key property is that the owner does not need access to the original file to check correctness: only a small “fingerprint” of the uploaded data is retained. That in a nutshell is the point of proof-of-storage; if the owner needed the entire dataset to verify the calculation, it would defeat the point of outsourcing storage.
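The idea can be illustrated with a toy challenge-response scheme. This is only a sketch of the concept, not Filecoin's actual proof system: real PDP constructions support an unbounded number of challenges and compact public verification, whereas this toy version limits the owner to a fixed list of precomputed answers.

```python
import hashlib
import secrets

def make_challenges(data, count=10):
    """Run by the owner before uploading: precompute answers to a handful
    of random challenges, then discard the data and keep only these."""
    challenges = []
    for _ in range(count):
        nonce = secrets.token_bytes(16)
        expected = hashlib.sha256(nonce + data).hexdigest()
        challenges.append((nonce, expected))
    return challenges

def prove(data, nonce):
    """Run by the storage provider: only possible with the full data,
    since the nonce is prepended before hashing."""
    return hashlib.sha256(nonce + data).hexdigest()

def verify(challenge, proof):
    """Run by the owner (or encoded in smart-contract logic): compare the
    provider's answer against the precomputed fingerprint."""
    nonce, expected = challenge
    return proof == expected
```

Each fresh nonce forces the provider to hash the entire file again, so a single cached digest will not do, while the owner retains only a few small fingerprints instead of the dataset itself.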

While proofs of storage may keep service providers honest, they break one of the assumptions underlying the claimed economic advantage: leveraging idle capacity. Once we demand periodically going over the bits and running cryptographic calculations, the storage architecture can not be an ancient PC unplugged from the wall. There is a non-zero marginal cost to implementing proof-of-storage. In fact there is an inverse relationship between latency and price. Tape archives sitting on a shelf have a much lower cost per gigabyte than spinning disks attached to a server. These tradeoffs are even reflected in Amazon's pricing model: AWS offers a storage tier called Glacier which is considerably cheaper than S3 but comes with significant latency— on the order of hours— for accessing data. Requiring periodic proof-of-storage undermines precisely the one model— offline media gathering dust in a vault— that has the best chance of undercutting large-scale centralized providers.

Beyond the economics, there is a more subtle problem with proof-of-storage: knowing your data is there does not mean that you can get it back when needed. This is the subject of the next blog post.