Using Intel SGX for SSH keys (part I)

Previous posts looked at using dedicated cryptographic hardware— smart-cards, USB tokens or the TPM— for managing key material in common scenarios such as SSH, full-disk encryption or PGP. Here we consider doing the same using a built-in feature of recent-generation Intel CPUs: Software Guard Extensions, or SGX for short. This first part focuses on the mechanics of achieving that result using existing open-source software, while a follow-up post will compare SGX against alternatives based on discrete hardware.

First let’s clarify the objective by drawing a parallel with smart-cards. The point of using a smart-card for storing keys is to isolate the secret material from code running on the untrusted host machine. While host applications can instruct the card to use those keys, for example to sign or decrypt a message, they remain at arm’s length from the key itself. In a properly implemented design, raw key-bits are only accessible to the card and can not be extracted out of the card’s secure execution environment. In addition to simplifying key management by guaranteeing that only one copy of the key exists at all times, this reduces exposure of the secret to malicious host applications, which are prevented from making additional copies for future exploitation.

Most of this translates directly to the SGX context, except for how the boundary is drawn. SGX is not a separate piece of hardware but a different execution mode of the CPU itself. The corresponding requirement can be rephrased as: manage keys such that raw keys are only accessible to a specific enclave, while presenting an “oracle” abstraction to other applications running on the untrusted commodity OS.

The idea of using a secure execution environment as “virtual” cryptographic hardware is so common that one may expect to find an existing solution. Sure enough, a quick search for “PKCS11 SGX” turns up two open-source projects on GitHub. The first appears to be a work-in-progress that is not quite functional at this time. The second is more promising: called crypto-api-toolkit, the project lives under the official Intel umbrella on GitHub and features a full-fledged implementation of a cryptographic token as an enclave, addressable through a PKCS#11 interface. This property is crucial for interoperability, since most applications on Linux are designed to access cryptographic hardware through a PKCS#11 interface. That long list includes OpenSSH (client and server), browsers (Firefox and Chrome) and VPN clients (the reference OpenVPN client as well as openconnect, which is compatible with Cisco VPN appliances). The crypto-api-toolkit project turns out to check all the necessary boxes.


This PoC is based on an earlier version of the code-base which runs openssl inside the enclave. The latest version on GitHub has switched to SoftHSMv2 as the underlying engine. (In many ways that is a more natural choice, considering SoftHSM itself aims to be a pure, portable simulation of a cryptographic token intended for execution on commodity CPUs.)

Looking closer at the code, there are a number of minor issues that prevent direct use of the module with common applications for manipulating tokens such as the OpenSC suite.

  • crypto-api-toolkit has some unusual requirements around object attributes, which are above and beyond what the PKCS#11 specification demands
  • While the enclave is running a full-featured version of openssl, the implementation restricts the available algorithms and parameters. For example it arbitrarily restricts elliptic-curve keys to a handful of curves, even though openssl recognizes a large collection of curves by OID.
  • A more significant compatibility issue lies in the management of object attributes. The stock implementation does not support the expected way of exporting EC public-keys, namely by querying for a specific attribute on the public-key object.

After a few minor tweaks [minimal patch] to address these issues, the SSH use-case works end-to-end, even if the module does not satisfy every PKCS#11 conformance subtlety.

Kicking the tires

The first step is building and installing the project. This creates all of the necessary shared libraries, including the signed enclave, and installs them in the right location, but does not yet create a virtual token. The easiest way to do that is to run the sample PKCS#11 application included with the project.


Initialize a virtual PKCS#11 token implemented in SGX


Now we can observe the existence of a new token and interrogate it. The pkcs11-tool utility from the OpenSC suite comes in handy for this. For example, we can query for supported algorithms, also known as “mechanisms” in PKCS#11 terminology:


New virtual token visible and advertising different algorithms.

(Note that algorithms recognized by OpenSC are listed by their symbolic name, such as “SHA256-RSA-PKCS”, while newer algorithms such as EdDSA are only shown by numeric ID.)

This token, however, is not yet in a usable state. Initializing a token defines the security-officer (SO) role, which is the PKCS#11 equivalent of the administrator. But the standard “user” role must first be initialized by the SO with a separate call. A quick search shows that the sample application uses the default SO PIN of 12345678:


Initializing the PKCS#11 user role.

With the user role initialized, it is time for generating some keys:


RSA key generation using an SGX enclave as PKCS#11 token


The newly created keypair is reflected in the appearance of corresponding files on the local file system. Each token is associated with a subdirectory under “/opt/intel/crypto-api-toolkit/tokens” where metadata and objects associated with the token are persisted. This is necessary because, unlike a smart-card or USB token, an enclave does not have its own dedicated storage. Instead any secret material that needs to be persisted must be exported in sealed state and saved by the untrusted OS. Otherwise any newly generated object would cease to exist once the machine is shut down.
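
To illustrate the idea of sealing, here is a conceptual sketch in Python. This is not the actual SGX sealing API (which derives keys inside the CPU from fused secrets); it only shows the property that matters here: the sealed blob is encrypted and integrity-protected under a key bound to both a device secret and the enclave’s identity, so the untrusted OS can store it but never read it.

```python
import hashlib
import hmac

def _keystream(key: bytes, n: int) -> bytes:
    # Derive a pseudo-random keystream by hashing key || counter blocks.
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def seal(device_secret: bytes, mrenclave: bytes, data: bytes) -> bytes:
    # The sealing key is bound to both the device and the enclave
    # measurement, so only the same enclave on the same CPU can recover
    # the plaintext from the blob handed to the untrusted OS.
    key = hmac.new(device_secret, b"seal" + mrenclave, hashlib.sha256).digest()
    ct = bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))
    tag = hmac.new(key, ct, hashlib.sha256).digest()
    return ct + tag

def unseal(device_secret: bytes, mrenclave: bytes, blob: bytes) -> bytes:
    key = hmac.new(device_secret, b"seal" + mrenclave, hashlib.sha256).digest()
    ct, tag = blob[:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, ct, hashlib.sha256).digest()):
        raise ValueError("blob does not verify under this enclave identity")
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, len(ct))))
```

A different enclave (different measurement) derives a different key, so the integrity check fails and the plaintext stays out of reach.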

The next step is enumerating the objects created and verifying that they are visible to the OpenSSH client:


Enumerating PKCS#11 objects (with & without login) and retrieving the public-key for SSH usage

In keeping with common convention, the RSA private-key has the CKA_PRIVATE attribute set. It will not be visible when enumerating objects unless the user first logs in to the virtual token. This is why the private key object is only visible in the second invocation.

OpenSSH can also see the public-key and deems this RSA key usable for authentication. Somewhat confusingly, ssh-keygen with the “-D” argument does not generate a new key as the command name implies. It enumerates existing keys available on all tokens associated with the given PKCS#11 module.

We can add this public-key to a remote server and attempt a connection to check whether the OpenSSH client is able to sign with the key. While GitHub does not provide interactive shells, it is arguably the easiest way to check that an SSH key is usable:


SSH using private-key managed in SGX

Beyond RSA

Elliptic curve keys also work:


Elliptic-curve key generation using NIST P-256 curve

Starting with release 8.0, OpenSSH can use elliptic curve keys on hardware tokens. This is why the patch adds support for querying the CKA_EC_POINT attribute on the public-key object, by defining a new enclave call to retrieve that attribute. (As an aside: while that follows the existing pattern for querying the CKA_EC_PARAMS attribute, it is an inefficient design. These attributes are neither sensitive nor variable over the lifetime of the object. In fact there is nothing sensitive about a public-key object that requires calling into the enclave. It would have been much more straightforward to export this object once and for all in the clear, for storage on the untrusted side.)
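
For illustration, here is roughly what a client does with that attribute once retrieved (a minimal sketch assuming a P-256 key; the helper names are made up for this example). CKA_EC_POINT is a DER OCTET STRING wrapping the uncompressed point, which gets repackaged into the SSH wire format of length-prefixed strings:

```python
import base64
import struct

def ssh_string(b: bytes) -> bytes:
    # SSH wire format: 4-byte big-endian length prefix, then the bytes.
    return struct.pack(">I", len(b)) + b

def ec_point_to_ssh_key(cka_ec_point: bytes) -> str:
    # CKA_EC_POINT is a DER OCTET STRING (tag 0x04) wrapping the
    # uncompressed point 0x04 || X || Y (65 bytes for P-256).
    assert cka_ec_point[0] == 0x04          # OCTET STRING tag
    length = cka_ec_point[1]
    if length & 0x80:                       # long-form DER length
        n = length & 0x7F
        length = int.from_bytes(cka_ec_point[2:2 + n], "big")
        point = cka_ec_point[2 + n:2 + n + length]
    else:
        point = cka_ec_point[2:2 + length]
    assert point[0] == 0x04                 # uncompressed-point marker
    blob = (ssh_string(b"ecdsa-sha2-nistp256")
            + ssh_string(b"nistp256")
            + ssh_string(point))
    return "ecdsa-sha2-nistp256 " + base64.b64encode(blob).decode()
```

The output is the familiar one-line format suitable for an authorized_keys file.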

These ECDSA keys are also usable for SSH with more recent versions of OpenSSH:


More recent versions of OpenSSH support using ECDSA keys via PKCS#11

Going outside the SSH scenario for a second, we can also generate elliptic-curve keys over a different curve such as secp256k1. While that key will not be suitable for SSH, it can be used for signing cryptocurrency transactions:


Generating and using an ECDSA key over secp256k1, commonly used for cryptocurrency applications such as Bitcoin

While this proof-of-concept suggests that it is possible to use an SGX enclave as a virtual cryptographic token, it is a different question how that compares to using real dedicated hardware. The next post will take up that comparison.




Helpful deceptions: location privacy on mobile devices


Over at the New York Times, an insightful series of articles on privacy continues to give consumers disturbing peeks at how the sausage gets made in the surveillance capitalism business. The episode on mobile location tracking is arguably one of the more visceral ones, demonstrating the ability to isolate individuals as they travel from sensitive locations— the Pentagon, White House, CIA parking-lot— back to their own residence. This surveillance capability is not in the hands of a hostile nation state (at least not directly; there is no telling where the data ends up downstream after it is repeatedly repurposed and sold). It is masterminded by run-of-the-mill technology companies foisting their surveillance operation on unsuspecting users in the guise of helpful mobile applications.

But the NYT misses the mark on how users can protect themselves. Its self-help guide dutifully points out that users can selectively disable location permission for apps:

The most important thing you can do now is to disable location sharing for apps already on your phone.

Many apps that request your location, like weather, coupon or local news apps, often work just fine without it. There’s no reason a weather app, for instance, needs your precise, second-by-second location to provide forecasts for your city.

This is correct in principle. For example Android makes it possible to view which apps have access to location and retract that permission anytime. The only problem is many apps will demand that permission right back or refuse to function. This is a form of institutionalized extortion, normalized by the expectation that most applications are “free,” which is to say they are subsidized by advertising that in turn draws on pervasive data collection. App developers withhold useful functionality from customers unless the customer agrees to give up their privacy and capitulates to this implicit bargain.

Interestingly there is a more effective defense available to consumers on Android, but it is currently hampered by a half-baked implementation. Almost accidentally, Android allows designating an application to provide alternative location information to the system. This feature is intended primarily for those developing Android apps, buried deep under developer options.


It is helpful for an engineer developing an Android app in San Francisco to be able to simulate how her app will behave for a customer located in Paris or Zanzibar, without so much as getting out of her chair. Not surprisingly there are multiple options in PlayStore that help set artificial locations and even simulate movement. Here is Location Changer configured to provide a static location:


(There would be a certain irony in this app being advertising supported, if it were not common for privacy-enhancing technologies to be subsidized by business models not all that different from the ones they are allegedly protecting against.)

At first this looks like a promising defense against pervasive mobile tracking. Data-hungry tracking apps are happy, still operating under the impression that they retain their entitlement to location data and can track users at will. (There is no indication that the data is coming from another mobile app rather than the GPS.) Because that data no longer reflects the actual position of the device, its disclosure is harmless.

That picture breaks down quickly on closer look. The first problem is that the simulated location is indiscriminately provided to all apps. That means not only invasive surveillance apps but also legitimate apps with perfectly good justification for location data will receive bogus information. For example here is Google Maps also placing the user in Zanzibar, somewhat complicating driving directions:


The second problem is that common applications providing simulated location only have rudimentary capabilities, such as reporting a fixed location or simulating motion along a simple linear path— one that goes straight through buildings, tunnels under natural obstacles and crosses rivers. It would be trivial for apps to detect such anomalies and reject the location data or respond with additional prompts to shame the device owner into providing true location. (Most apps do not appear to be making that effort today, probably because few users have resorted to this particular subterfuge. But under an adversarial model, we have to assume that once such tactics are widespread, surveillance apps will respond by adding such detection capabilities.)
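
The checks required to catch such crude simulations are trivial. A sketch of the anomaly detection described above, assuming timestamped (latitude, longitude) samples (the thresholds are made-up values for illustration):

```python
import math

def haversine_km(p, q):
    # Great-circle distance between two (lat, lon) points, in km.
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(a))

def flag_anomalies(samples, max_kmh=150, min_km_per_day=0.5):
    # samples: list of (timestamp_seconds, lat, lon), in time order.
    flags = []
    for (t1, *p1), (t2, *p2) in zip(samples, samples[1:]):
        d = haversine_km(p1, p2)
        hours = (t2 - t1) / 3600
        if hours > 0 and d / hours > max_kmh:
            flags.append(("teleport", t2))   # impossibly fast jump
        if hours >= 24 and d < min_km_per_day:
            flags.append(("stuck", t2))      # parked at one spot for a day
    return flags
```

A fixed-location simulator trips the “stuck” check; a naive replay that jumps between cities trips the “teleport” check.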

What is required is a way to provide realistic location information that is free of anomalies, such as a device stuck at the same location for hours or suddenly “teleported” across hundreds of miles. Individual consumers have access to a relatively modest corpus of such data: their own past history. In theory we can all synthesize realistic-looking location data for the present by sampling and remixing past location history. This solution is still unsatisfactory, since it is built on data sampled from a uniquely identifiable individual. That holds even if the simulator app is replaying an ordinary day in the life over and over again in a Groundhog Day loop. It may hold no new information about the person’s current whereabouts, but it still reveals information about them. For example, the simulated day will likely start and end at their home residence. What is needed is a way to synthesize realistic location information based on actual data from other people.

Of course a massive repository of such information exists in the hands of the one company that arguably bears most responsibility for creating this problem in the first place: Google. Because Google collects location information from hundreds of millions of iPhone and Android users, the company could craft realistic location data that helps users renegotiate the standard extortion terms with apps by feeding them simulated data.

Paradoxically, Google as a platform provider is highly motivated not to provide such assistance. That is a consequence of the virtuous cycle that sustains platforms such as Android: more users make the platform attractive to developers, who are incentivized to write new apps, and the resulting ecosystem of apps in turn makes the platform appealing to users. In a parallel to what has been called the original sin of the web— reliance on free content subsidized by advertising— that ecosystem of mobile apps is largely built around advertising, which is in turn fueled by surveillance of users. Location data is a crucial part of that surveillance operation. Currently developers face a simple, binary model for access to location. Either location data is available, as explicitly requested by the application manifest, or it has been denied, in the unlikely scenario of a privacy-conscious user who has read one too many troubling articles on privacy. There is no middle ground where convincing but bogus location data has been substituted to fool the application at the user’s behest. Enabling that option would clearly improve privacy for end-users. But it would also rain on the surveillance business model driving the majority of mobile apps.

This is a situation where the interests of end-users and application developers are in direct conflict. Neither group has a direct business relationship with Google— no one buys a software license for their copy of Android, and only a fraction of users have paying subscriptions from Google. Past history here is not encouraging. Unless a major PR crisis or regulatory intervention forces their hand, platform owners side with app developers, for good reason. Compared to the sporadic hand-wringing about privacy among consumers, professional developers are keenly aware of their bottom line at all times. They will walk away from a platform if it becomes too consumer-friendly and interferes with the cavalier tracking and data-collection practices that keep advertising-supported business models afloat.


A clear view into AI risks: watching the watchers

A recent NYT exposé on ClearView only scratches the surface of the problems with outsourcing critical law-enforcement functions to private companies. To recap: ClearView AI is possibly the first startup to have commercialized face-recognition-as-a-service (FRaaS?) and is riding high on a recent string of successes with police departments in the US. The usage model could not be any easier: upload an image of a person of interest, and ClearView locates other pictures of the same person from its massive database of images scraped from public sources such as social media. Imagine going from a grainy image taken by a security camera to the LinkedIn profile of the suspect. It is worth pointing out that the services hosting the original images, including Facebook, were none too happy about the unauthorized scraping. Nor was there any consent from users to participate in this AI experiment; as with all things social-media, privacy is just an afterthought.

Aside from the blatant disregard for privacy, what could go wrong here?
The NYT article already hints at one troubling dimension of the problem. While investigating ClearView, the journalist asked various members of police departments with authorized access to the system to search for himself. This experiment initially turned up several hits as expected, demonstrating the coverage of the system. But halfway through the experiment something strange happened: suddenly the author “disappeared” from the system, with no information returned on subsequent searches, even when using the same image successfully matched before. No satisfactory explanation came forward. At first it was chalked up to a deliberate “security feature” where the system detects and blocks unusual patterns of queries— presumably the same image being searched repeatedly? Later the founder claimed it was a bug, and it was eventually resolved. (Reading between the lines suggests a more conspiratorial interpretation: ClearView got wind of a journalist writing an exposé about the company and decided to remove some evidence demonstrating the uncanny coverage of its database.)

Going with Hanlon’s razor and attributing this case of the “disappearing” person to an ordinary bug, the episode highlights two troubling issues:

  • ClearView learns which individuals are being searched
  • ClearView controls the results returned

Why is this problematic? Let’s start with the visibility issue, which is practically unavoidable. It means that a private company effectively knows who is under investigation by law enforcement and in which jurisdiction. Imagine if every police department CCed Facebook every time it sent an email announcing an investigation into citizen John Smith. That is a massive amount of trust to place in a private entity that is neither accountable to public oversight nor constrained in what it can do with that information.

Granted, there are other situations where private companies are necessarily privy to ongoing investigations. Telcos have been servicing wiretaps and pen-registers for decades, and more recently ISPs have been tapped as a treasure trove of information on the web-browsing history of their subscribers. But as the NYT article makes clear, ClearView is no Facebook or AT&T. Large companies like Facebook, Google and Microsoft receive thousands of subpoenas every year for customer information, and have developed procedures over time for compartmentalizing the existence of these requests. (For the most sensitive categories of requests, such as National Security Letters and FISA warrants, there are even more restrictive procedures.) Are there comparable internal controls at ClearView? Does every employee have access to this information stream? What happens when one of those employees or one of their friends becomes the subject of an investigation?

For that matter, what prevents ClearView from capitalizing on its visibility into law-enforcement requests and trying to monetize both sides of the equation? What prevents the company from offering an “advance warning” service— for a fee of course— to alert individuals whenever they are being investigated?

Even if one posits that ClearView will act in an aboveboard manner and refrain from abusing its visibility into ongoing investigations for commercial gain, there is the question of operational security. Real-time knowledge of law enforcement actions is too tempting a target for criminals and nation states alike to pass up. What happens when ClearView is breached by the Russian mob or an APT group working on behalf of China? One can imagine face-recognition systems also being applied to counter-intelligence scenarios, to track foreign agents operating on US soil. If you are the nation sponsoring those agents, you want to know when their names come under scrutiny. More importantly, you care whether it is the Poughkeepsie police department or the FBI asking the questions.

Being able to modify search results has equally troubling implications. It is a small leap from alerting someone that they are under investigation to withholding results or, better yet, deliberately returning bogus information to throw off an investigation or frame an innocent person. The statistical nature of face-recognition and the incompleteness of a database cobbled together from public sources make it much easier to hide such deception. According to the Times, ClearView returns a match only about 75% of the time. (The article did not cite a figure for the false-positive rate, where the system returns results which are later proven incorrect.) Results withheld on purpose to protect designated individuals can easily blend in with legitimate failures to identify a face. Similarly, ClearView could offer “immunity from face recognition” under the guise of Right To Be Forgotten requests, offering to delete all information about a person from its database— again for a fee, presumably.
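
A quick back-of-the-envelope calculation shows why selective suppression hides so well against a 75% baseline match rate (the function and its parameters are hypothetical, chosen for this illustration):

```python
import math

def detection_z(p_hit=0.75, n_queries=1000, n_suppressed=10):
    # If n_suppressed searches that would have matched are silently
    # dropped, the observed hit rate falls from p_hit to p_obs. The
    # z-score measures how far that dip sits outside ordinary sampling
    # noise; below ~2 it is statistically indistinguishable from the
    # system's legitimate failures to match.
    p_obs = (p_hit * n_queries - n_suppressed) / n_queries
    stderr = math.sqrt(p_hit * (1 - p_hit) / n_queries)
    return (p_hit - p_obs) / stderr
```

Suppressing a handful of protected individuals among a thousand queries barely moves the needle; only mass suppression would produce a statistically visible dip.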

As before, even if ClearView avoids such dubious business models and remains dedicated to maintaining the integrity of its database, attackers who breach ClearView infrastructure can not be expected to have similar qualms. A few tweaks to metadata in the database could be enough to skew results. Not to mention that a successful breach is not necessary to poison the database to begin with: Facebook and LinkedIn are full of fake accounts with bogus information. Criminals almost certainly have been building such fake online personae by mashing bits of “true” information from different individuals.

This is a situation where ClearView spouting bromides about the importance of privacy and security will not cut it. Private enterprises afforded this much visibility into active police investigations and with this much influence over the outcome of those investigations need oversight. At a minimum companies like ClearView must be prevented from exploiting their privileged role for anything other than the stated purpose— aiding US law enforcement agencies. They need periodic independent audits to verify that sufficient security controls exist to prevent unauthorized parties from tapping into the sensitive information they are sitting on or subverting the integrity of results returned.



Filecoin, StorJ and the problem with decentralized storage (part II)

[continued from part I]

Quantifying reliability

Smart-contracts can not magically prevent hardware failures or compel a service provider at gunpoint to perform the advertised services. At best blockchains can facilitate contractual arrangements with a fairness criterion: the service provider gets paid if and only if they deliver the goods. Proofs-of-storage verified by a decentralized storage chain are an example of that model. It keeps service providers honest by making their revenue contingent on living up to the stated promise of storing customer data. But as the saying goes, past performance is no guarantee of future results. A service provider can produce the requisite proofs 99 times and then report all data lost when it is time for the hundredth. This can happen because of an “honest” mistake or, more troubling, because it is more profitable for the provider to break existing contracts.

When it comes to honest mistakes— random hardware failures resulting in unrecoverable data loss— the guarantees that can be provided by decentralized storage alone are weaker. This follows from a limitation of existing decentralized designs: their inability to express the reliability of storage systems except in the most rudimentary ways. All storage systems are subject to risks of hardware failure and data loss. That goes for AWS and Google: for all the sophistication of their custom-designed hardware, they are still subject to the laws of physics. There is still a mean-time-to-failure associated with every component. It follows there must be a design in place to cope with those failures across the board, ranging from making regular backups to having diesel generators ready to kick in when grid power fails. We take for granted the existence of this massive infrastructure behind the scenes when dealing with the likes of Amazon. There is no such guarantee for a random counterparty on the blockchain.

Filecoin uses a proof-of-replication intended to show not only that the storage provider has the data but that they have multiple copies. (Ironically that introduces even more work for the storage provider to format data for storage— otherwise they could fool the test by re-encrypting one copy into multiple replicas on demand— further pushing the economics away from the allegedly zero marginal cost.) That may seem comparable to the redundancy of AWS, but it is not. Five disks sitting in the same basement hooked up to the same PC can claim “5-way replication.” But that is not meaningful redundancy, because all five copies are subject to correlated risk: one lightning-strike or ransomware infection away from total data loss. By comparison Google operates data-centers around the world and can afford to put each of those five copies in a different facility. Each one of those facilities still has a non-zero chance of burning to the ground or losing power during a natural disaster. As long as the locations are far enough from each other, those risks are largely uncorrelated. That key distinction is lost in the primitive notion of “replication” expressed by smart-contracts.
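
The difference is easy to quantify. A sketch of the probability of total data loss with and without a shared-fate risk (the input probabilities are made-up numbers for illustration):

```python
def p_loss(p_disk: float, copies: int, p_shared: float = 0.0) -> float:
    # Data is lost if the shared-fate event (fire, ransomware, flooded
    # basement) hits, or if every copy fails independently. With copies
    # in separate facilities p_shared is ~0 and loss probability shrinks
    # exponentially in the number of copies; in one basement, p_shared
    # puts a floor on the loss probability that replication cannot fix.
    independent_loss = p_disk ** copies
    return p_shared + (1 - p_shared) * independent_loss
```

With a 5% annual disk-failure chance, five independent copies push annual loss below one in a million, while a mere 1% shared-fate risk dominates everything: the five "replicas" add essentially nothing.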

Unreliable by design

Reliability questions aside, there is a more troubling problem with the economics of decentralized storage. It may well be the most rational— read: profitable— strategy to operate an unreliable service deliberately designed to lose customer data. Here are two hypothetical examples to demonstrate the notion that on a blockchain, there is no success like failure.

Consider a storage system designed to store data and publish regular proofs of storage as promised, but with one catch: it would never return that data if the customer actually requested it. (From the customer perspective: you have backups but unbeknownst to you, they are unrecoverable.) Why would this design be more profitable? Because streaming a terabyte back to the customer is dominated by an entirely different type of operational expense than storing that terabyte in the first place: network bandwidth. It may well be profitable to set up a data storage operation in the middle-of-nowhere with cheap real-estate, abundant power but expensive bandwidth. Keeping data in storage while publishing the occasional proof involves very little bandwidth, because proof-of-storage protocols are very efficient in space. The only problem comes up if the customer actually wants their entire data streamed back. At that point a different cost structure involving network bandwidth comes into play and it may well be more profitable to walk away.

To make this more concrete: at the time of writing AWS charges ~1¢ per gigabyte per month for “infrequently accessed” data but 9¢ per gigabyte of outbound network traffic. Conveniently, inbound traffic is free; uploading data to AWS costs nothing. As long as the prevailing Filecoin market price is higher than S3 prices, one can operate a Filecoin storage miner on AWS to arbitrage the difference— this is essentially how Dropbox operated before figuring out how to run its own datacenter. The only problem with this model is if the customer comes calling for their data too early or too often. In that case the bandwidth costs may well disrupt the profitability equation. If streaming the data back would lose money overall on that contract, the rational choice is to walk away.
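
Plugging in those published rates shows how the incentive flips the moment the customer asks for their data back (a back-of-the-envelope sketch; the Filecoin rate is a hypothetical input):

```python
def miner_margin(gb: float, months: int, filecoin_rate: float,
                 s3_storage: float = 0.01, s3_egress: float = 0.09,
                 retrieved: bool = False) -> float:
    # Margin for a "miner" reselling S3 infrequent-access storage on a
    # decentralized market, using the ~1 cent/GB-month storage and
    # 9 cent/GB egress figures quoted above. Egress is only paid if the
    # customer actually retrieves their data.
    revenue = filecoin_rate * gb * months
    cost = s3_storage * gb * months + (s3_egress * gb if retrieved else 0.0)
    return revenue - cost
```

Storing a terabyte for six months at a hypothetical 2¢/GB-month nets $60 of profit as long as the data sits untouched; a single full retrieval adds $90 of egress and turns the same contract into a $30 loss, at which point walking away is the rational move.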

Walking away from the contract for profit

Recall that storage providers are paid in funny money, namely the utility token associated with the blockchain. That currency is unlikely to work for purchasing anything in the real world and must be converted into dollars, euros or some other unit accepted by the utility company to keep the datacenter lights on. That conversion in turn hinges on a volatile exchange rate. While there are reasonably mature markets in major cryptocurrencies such as Bitcoin and Ethereum, the tail-end of the ICO landscape is characterized by thin order-books and highly speculative trading. Against the backdrop of what amounts to an extreme version of FX risk, the service provider enters into a contract to store data for an extended period, effectively betting that the economics will work out. It need not be profitable today, but perhaps it is projected to become profitable in the near future based on rosy forecasts of prices going to the moon. What happens if that bet proves incorrect? Again the rational choice is to walk away from the contract and drop all customer data.

For that matter, what happens when a better opportunity comes along? Suppose the exchange rate is stable or those risks are managed using a stablecoin, while the market value of storage increases. Buyers are willing to pay more of the native currency per byte of data stashed away. Or another blockchain comes along, promising more profitable utilization of spare disk capacity. That may seem like great news for the storage provider, except for one problem: they are stuck with existing customers paying the lower rates negotiated earlier. The optimal choice is to renege on those commitments: delete existing customer data and reallocate the scarce space to higher-paying customers.

It is not clear if blockchain incentives can be tweaked to discourage this without creating unfavorable dynamics for honest service providers. Suppose we impose penalties on providers for abandoning storage contracts midway. These penalties can not be clawback provisions for past payments; the provider may well have already spent that money covering operational expenses. For the same reason, it is not feasible to withhold payment until the very end of the contract period without creating the risk that the buyer walks away instead. Another option is requiring service providers to put up a surety bond: before they are allowed to participate in the ecosystem, they must set aside a lump sum on the blockchain, held in escrow. These funds would be used to compensate any customers harmed by a failure to honor storage contracts. But this has the effect of creating additional barriers to entry and locking away capital in a very unproductive way. Similarly, the idea of taking monetary damages out of future earnings does not work. It seems plausible that if a service provider screws over Alice because Bob offered a better price, recurring fees paid by Bob should be diverted to compensate Alice. But the service provider can trivially circumvent that penalty while still doing business with Bob: just start over with a new identity completely unlinkable to that “other” provider who screwed over Alice. To paraphrase the New Yorker cartoon on identity: on the blockchain, nobody knows you are a crook.

Reputation revisited

Readers may object: surely such an operation will go out of business once the market recognizes their modus operandi and no one is willing to entrust them with storing data? Aside from the fact that lack of identity on a blockchain renders it meaningless to go out of business, this posits there is such a thing as “reputation” that buyers take into account when making decisions. The whole point of operating a storage market on chain is to allow customers to select the lowest bidder while relying on the smart-contract logic to guarantee that both sides hold up their side of the bargain. But if we are to invoke some fuzzy notion of reputation as a selection criterion for service providers, why bother with a blockchain? Amazon, MSFT and Google have stellar reputations in delivering high-reliability, low-cost storage with no horror stories of customers randomly getting ripped-off because Google decided one day it would be more profitable to drop all of their files. Not to mention, legions of plaintiffs’ attorneys would be having a field day with any US company that reneged on contracts in such cavalier fashion, assuming a newly awakened FTC did not get in on the action first. There is no reason to accept the inefficiencies of a blockchain or invoke elaborate cryptographic proofs if reputation is a good enough proxy.



Filecoin, StorJ and the problem with decentralized storage (part I)

Blockchains for everything

Decentralized storage services such as Filecoin and StorJ seek to disrupt the data-storage industry, using blockchain tokens to create a competitive marketplace that can offer more space at lower cost. They also promise to bring a veneer of legitimacy to the Initial Coin Offering (ICO) space. At a time when ICOs were being mass-produced as thinly-veiled, speculative investment vehicles likely to run afoul of the Howey test as unregistered securities, file-storage looks like a shining example of an actual utility token, one with genuine utility. Instead of betting on the “greater fool” theory of offloading the token on the next person willing to pay a higher price, these tokens are good for a useful service: paying someone else to store your backups. This blog post looks at some caveats and overlooked problems in the design.

Red-herring: privacy

A good starting point is to dispel the alleged privacy advantage. Decentralized storage systems often tout their privacy advantage: data is stored encrypted by its owner, such that the storage provider can not read it even if they wanted to. That may seem like an improvement over the current low bar, which relies on service providers swearing on a stack of pre-IPO shares that, pinky-promise, they will never dip into customer data for business advantage, a promise more often honored in the breach as the examples of Facebook and Google repeatedly demonstrate. But there is no reason to fundamentally alter the data-storage model to achieve E2E security against rogue providers. While far from being the path of least resistance, there is a long history of alternative remote backup services such as tarsnap for privacy-conscious users. (All 17 of them.) Previous blog posts here have demonstrated that it is possible to implement bring-your-own-encryption with vanilla cloud storage services such as AWS, such that the cloud service is a glorified remote drive storing random noise it can not make sense of. These models are far more flexible than the arbitrary, one-size-fits-all encryption model hard-coded into protocols such as StorJ. Users are free to adopt their preferred scheme, compatible with their existing key management model. For example with AWS Storage Gateway, Linux users can treat cloud storage as an iSCSI volume with LUKS encryption, while those on Windows can apply Bitlocker-To-Go to protect that volume exactly as they would encrypt a USB thumb-drive. Backing up rarely accessed data in an enterprise is even easier: nothing more fancy than scripts to GPG-sign and encrypt backups before uploading them to AWS/Azure/GCP is necessary.

Facing the competition

Once we accept the premise that privacy alone can not be a differentiator for backup services—users can already solve that problem without depending on the service provider—the competitive landscape reverts to that of a commodity service. Roughly speaking, providers compete on three dimensions: reliability, cost and speed.

  • Cost is the price paid for storing each gigabyte of data for a given period of time.
  • Speed refers to how quickly that data can be downloaded when necessary and to a lesser extent, how quickly it can be uploaded during the backup process.
  • Reliability is the probability of being able to get all of your data back whenever you need it. A company that retains 99.999% of customer data while irreversibly losing the remaining 0.001% will not stay in business long. Even a 100% retention rate is not great if the service only operates from 9AM to 4PM.

The economic argument against decentralized storage can be stated this way: it is very unlikely that a decentralized storage market can offer an alternative that can compete against centralized providers— AWS, Google, Azure— when measured on any of these dimensions. (Of course nothing prevents Amazon or MSFT from participating in the decentralized marketplace to sell storage, but this would be another example of doing with increased friction something on a blockchain that can be done much more efficiently via existing channels.)

Among the three criteria, cost is the easiest one to forecast. Here is the pitch from the StorJ website:

“Have unused hard drive capacity and bandwidth?
Storj pays you for your unused hard drive capacity and bandwidth in STORJ tokens!”

Cloud services are ruled by ruthless economies of scale. This is where Amazon, Google, MSFT and a host of other cloud providers shine, reaping the benefits of investment in data-centers and petabytes of storage capacity. Even if we ignore the question of reliability, it is very unlikely that a hobbyist with a few spare drives sitting in their basement can achieve a lower per-gigabyte cost.

The standard response to this criticism is pointing out that decentralized storage can unlock spare, unused capacity at zero marginal cost. Returning to our hypothetical hobbyist, he need not add new capacity to compete with AWS. Let us assume he owns excess storage, already paid for, that sits underutilized; there is only so much space you can take up with vacation pictures. Disks consume about the same energy whether they are 99% or 1% full. Since the user is currently getting paid exactly $0 for that spare capacity, any value above zero is a good deal, according to this logic. Any non-zero price point is achievable, including one that undercuts even the most cost-effective cloud provider. Our hobbyist can temporarily boot up those ancient PCs, stash away data someone on the other side of the world is willing to pay to safeguard and shut down the computer once the backups are written. The equipment remains unplugged from the wall until such time as the buyer comes calling for their data.

Proof-of-storage and cost of storage

The problem with this model is that decentralized storage networks demand much more than mere inert storage of bits. They must achieve reliability in the absence of the usual contractual relationship, namely, someone you can sue for damages if the data disappears. Instead the blockchain itself must enforce fairness in the transaction: the service provider gets paid only if they are actually storing the data entrusted for safeguarding. Otherwise the provider could pocket the payment, discard uploaded data and put that precious disk space to some other use. Solving this problem requires a cryptographic technique called proofs-of-data-possession (PDP), also known as proofs-of-storage. Providers periodically run a specific computation over the data they promised to store— a computation that is only possible if they still have 100% of that data— and publish the results on the blockchain, which in turn facilitates payment conditional on periodic proofs. Because the data-owner can observe these proofs, they are assured that their precious data is still around. The key property is that the owner does not need access to the original file to check correctness: only a small “fingerprint” of the uploaded data is retained. That in a nutshell is the point of proof-of-storage; if the owner needed access to the entire dataset to verify the calculation, it would defeat the point of outsourcing storage.
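To make the idea concrete, here is a toy proof-of-storage sketch in Python; the function names are invented for illustration. The owner precomputes a handful of nonce/hash pairs as the fingerprint, and each pair supports exactly one audit. Real systems use more sophisticated constructions, such as Merkle proofs or homomorphic authenticators, that support an unbounded number of challenges. The core idea is the same: answering a challenge requires reading every byte of the stored data.

```python
import hashlib
import os

def fingerprint(data: bytes, n_challenges: int = 10) -> list:
    # Owner precomputes a short list of (nonce, expected-hash) pairs,
    # then discards the data. Each pair supports exactly one audit.
    table = []
    for _ in range(n_challenges):
        nonce = os.urandom(16)
        table.append((nonce, hashlib.sha256(nonce + data).digest()))
    return table

def prove(nonce: bytes, stored: bytes) -> bytes:
    # Provider must touch every byte of the file to answer the challenge;
    # the fresh nonce prevents caching a single precomputed answer.
    return hashlib.sha256(nonce + stored).digest()

data = os.urandom(1 << 20)          # 1 MiB of "backups"
audits = fingerprint(data)
nonce, expected = audits.pop()
assert prove(nonce, data) == expected        # honest provider passes
assert prove(nonce, data[:-1]) != expected   # dropping even one byte fails
```

Note the asymmetry: the owner keeps only a few hundred bytes of fingerprints regardless of how large the file is, which is exactly what makes outsourcing worthwhile.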

While proofs of storage may keep service providers honest, they break one of the assumptions underlying the claimed economic advantage: leveraging idle capacity. Once we demand periodically going over the bits and running cryptographic calculations, the storage architecture can not be an ancient PC unplugged from the wall. There is a non-zero marginal cost to implementing proof-of-storage. In fact there is an inverse relationship between latency and price: tape archives sitting on a shelf have a much lower cost per gigabyte than spinning disks attached to a server. These tradeoffs are even reflected in Amazon’s pricing model: AWS offers a storage tier called Glacier which is considerably cheaper than S3 but comes with significant latency— on the order of hours— for accessing data. Requiring periodic proof-of-storage undermines precisely the one model— offline media gathering dust in a vault— that has the best chance of undercutting large-scale centralized providers.

Beyond the economics, there is a more subtle problem with proof-of-storage: knowing your data is there does not mean that you can get it back when needed. This is the subject of the next blog post.



Off-by-one: the curious case of 2047-bit RSA keys

This is the story of an implementation bug discovered while operating an enterprise public-key infrastructure system. It is common in high-security scenarios for private keys to be stored on dedicated cryptographic hardware rather than managed as ordinary files on the commodity operating system. Smart-cards, hardware tokens and TPMs are examples of popular form factors. In this deployment, every employee was issued a USB token designed to connect to old-fashioned USB-A ports. USB tokens have a usability advantage over smart-cards in situations when most employees are using laptops: there is no separate card reader required, eliminating one more piece of equipment to lug around. The gadget presents itself to the operating system as a combined card reader with a card always present. Cards on the other hand have an edge for “converged access” scenarios involving both logical and physical access: dual-interface cards with NFC can also be tapped against badge readers to open doors. (While it is possible to shoe-horn NFC into compact gadgets and this has been done, physical constraints on antenna size all but guarantee poor RF performance. Not to mention one decidedly low-tech but crucial aspect of an identity badge is having enough real-estate for the obligatory photograph and the name of the employee spelled out in a legible font.)

The first indication of something awry with the type of token used came from a simple utility rejecting the RSA public-key for a team member. That public-key had been part of a pair generated on the token, in keeping with the usual provisioning process that guarantees keys live on the token throughout their entire lifecycle. To recap that sequence:

  • Generate a key-pair of the desired characteristics, in this case 2048-bit RSA. This can be surprisingly slow with RSA, on the order of 30 seconds, considering the hardware in question is powered by relatively modest SoCs under the hood.
  • Sign a certificate signing-request (CSR) containing the public-key. This is commonly done as a single operation at the time of key-generation, due to an implementation quirk: most card standards such as PIV require a certificate present before clients can use the card because they do not have a way to identify private-keys in isolation.
  • Submit that CSR to the enterprise certificate authority to obtain a certificate. In principle certificates can be issued out of thin air. In reality most CA software can only accept a CSR containing the public-key of the subject, signed with the corresponding private key— and they will verify that criterion.
  • Load the issued certificate onto the token. At this point the token is ready for use in any scenario demanding PKI credentials, be it VPN, TLS client authentication in a web-browser, login to the operating system or disk-encryption.

On this particular token, that sequence resulted in a 2047-bit RSA key, one bit off the mark and falling short of the NIST recommendations to boot. A quick glance showed the provisioning process was not at fault. Key generation was executed on Windows using the tried-and-true certreq utility. (Provisioning new credentials is commonly under-specified compared to steady-state usage of existing credentials, and vendors often deign to publish software for Windows only.) That utility takes an INF file as configuration specifying the key type to generate. A look at the INF file confirmed the number 2048 had not bit-rotted into 2047.

Something else lower in the stack was ignoring those instructions or failing to generate keys according to the specifications. Looking through other public-keys in the system showed that this was not in fact an isolated case. The culprit appeared to be the key-generation logic on the card itself.

Recall that when we speak of a “2048-bit RSA key,” the count refers to the size of the modulus. An RSA modulus is the product of two large primes of comparable size. Generating a 2048-bit RSA key then comes down to generating two random primes of half that size, 1024 bits each, and multiplying those two values together.

There is one catch with this logic: the product of two numbers N bits in size each is not guaranteed to be 2·N bits. That intuitive-sounding 2·N result is an upper-bound: the actual product can be 2·N or 2·N−1 bits. Here is an example involving tractable numbers and the more familiar decimal notation. It takes two digits to express the prime numbers 19 and 29. But multiplying them we get 19 * 29 = 551, a number spanning three digits instead of four. By contrast the product of the two-digit primes 37 and 59 is 2183, which is four digits as expected.
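The same arithmetic is easy to check in binary with small stand-in numbers; Python's arbitrary-precision integers make this a one-liner per case:

```python
# The product of two N-bit numbers has either 2N or 2N-1 bits.
# "Small" 4-bit numbers, close to the lower bound 2^3 = 8:
assert (9 * 11).bit_length() == 7    # 99 falls one bit short of 2N = 8
# "Large" 4-bit numbers, close to the upper bound 2^4 - 1 = 15:
assert (13 * 15).bit_length() == 8   # 195 reaches the full 2N bits

# The decimal examples from above, same phenomenon in a different base:
assert len(str(19 * 29)) == 3        # 551: three digits, not four
assert len(str(37 * 59)) == 4        # 2183: four digits as expected
```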

Informally, we can say not all N-bit numbers are alike. Some are “small,” meaning they are close to the lower bound of 2^(N−1), the smallest possible N-bit number. At the other end of the spectrum are “large” N-bit numbers, closer to the high end of the permissible range at 2^N. Multiplying large N-bit numbers produces the expected 2N-bit product, while multiplying small ones can fall short of the goal.

RSA implementations commonly correct for this by setting some of the leading bits of the prime to 1, forcing each generated prime to be “large.” In other words, the random primes are not selected from the full interval [2^(N−1), 2^N − 1] but from a tighter interval that excludes “small” primes. (Why not roll the dice on the full interval and check the product after the fact? See the earlier point about the time-consuming nature of RSA key generation. Starting over from scratch is expensive.) This is effectively an extension of logic already present for prime generation, namely setting the most significant bit to one. Otherwise the naive way to “choose a random N-bit prime” by considering the entire interval [0, 2^N − 1] can result in a much shorter prime, one that begins with an unfortunate run of leading zeroes. That guarantees failure: if one of the factors is strictly less than N bits, the final modulus can never hit the target of 2N bits.
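A minimal sketch of that fix, assuming the simple strategy of forcing the top two bits. Primality testing is omitted because only the magnitude of the factors matters for the size of the product:

```python
import secrets

N = 1024  # bits per prime factor

def candidate(force_second_bit: bool) -> int:
    # Random N-bit odd number; a real implementation would loop until a
    # Miller-Rabin primality test passes, which does not change the size.
    x = secrets.randbits(N)
    x |= 1 << (N - 1)         # set MSB: guarantees the factor has N bits
    if force_second_bit:
        x |= 1 << (N - 2)     # set next bit too: x >= 1.5 * 2^(N-1)
    return x | 1              # make it odd

# With both leading bits forced, the product is at least
# (1.5 * 2^(N-1))^2 = 2.25 * 2^(2N-2) > 2^(2N-1), i.e. always 2N bits.
for _ in range(1000):
    assert (candidate(True) * candidate(True)).bit_length() == 2 * N
```

With `force_second_bit=False` the same loop will occasionally hit a 2047-bit product, which is precisely the behavior observed in the faulty token.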

So we know this token has a design flaw in prime generation that occasionally outputs a 2047-bit modulus when asked for 2048. How occasional? If it were only setting the MSB to one and otherwise picking primes uniformly in the permissible interval, the error rate can be approximated by the probability that two random variables X and Y, selected independently and uniformly from the range [1, 2], have a product less than 2. Visualized geometrically, this is the area under the curve xy < 2 inside the square region with sides spanning that interval. That is a standard calculus problem that can be solved by integration. It predicts about 40% of RSA moduli falling short by one bit. That fraction is not consistent with the observed frequency, which is closer to 1 in 10, an unlikely outcome from a Bayesian perspective if that were an accurate model for what the token is doing. (Note that if the two leading bits were forced set on both primes, the error case is completely eliminated. From that perspective, the manufacturer was “off-by-one” according to more than one meaning of the phrase.)
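The calculation itself is short: the region xy < 2 inside the square [1, 2] x [1, 2] has area equal to the integral of (2/x − 1) from 1 to 2, which works out to 2·ln(2) − 1 ≈ 0.386. A few lines of Python confirm both the closed form and, via simulation, the roughly 40% figure:

```python
import math
import random

# Closed form: area of {(x, y) in [1,2]^2 : xy < 2}
# = integral from 1 to 2 of (2/x - 1) dx = 2*ln(2) - 1
exact = 2 * math.log(2) - 1
assert abs(exact - 0.3863) < 1e-4

# Monte Carlo sanity check of the same probability
rng = random.Random(2048)
trials = 200_000
short = sum(rng.uniform(1, 2) * rng.uniform(1, 2) < 2 for _ in range(trials))
assert abs(short / trials - exact) < 0.01
```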

So how damaging is this particular quirk to the security of RSA keys? It is certainly embarrassing by virtue of how easy it should have been to catch during testing; that does not reflect well on the quality assurance. Yet the difficulty of factoring 2047-bit keys is only marginally lower than that for full 2048-bits— which is to say far outside the range of currently known algorithms and computing power available to anyone outside the NSA. (Looked at another way, forcing another bit to 1 when generating the primes also reduces the entropy.) Assuming this is the only bug in RSA key generation, there is no reason to throw away these tokens or lash out at the vendor. Also to be clear: this particular token was not susceptible to the ROCA vulnerability that affected all hardware using Infineon chips. In contrast to missing one bit from the generated key, the Infineon RSA library produced full-size keys that were in fact much weaker than they appeared, due to the special structure of the primes. That wreaked havoc on many large-scale systems, including latest generation Yubicrap (after a dubious switch from NXP to Infineon hardware) and the Estonian government electronic ID card system. In fact the irony of ROCA is proving that key length is far from being the only criterion for security. Due to the variable strategies used by Infineon to generate primes at different lengths, customers were better off using shorter RSA keys on the vulnerable hardware:


(Figure 1, taken from the paper “The Return of Coppersmith’s Attack: Practical Factorization of Widely Used RSA Moduli”)

The counter-intuitive nature of ROCA is that the estimated worst-case factorization time (marked by the blue crosses above) does not increase in an orderly manner with key length. Instead there are sharp drops around 1000 and 2000 bits, creating a sweet spot for the attack where the cost of recovering keys is drastically lower. Meanwhile the regions shaded yellow and orange correspond to key lengths where the attack is not feasible. To pick one example from the above graph, 1920-bit keys would not have been vulnerable to the factorization scheme described in the paper. Even 1800-bit keys would have been a better choice than the NIST-anointed 2048. While 1800-bit keys were still susceptible to the attack, it would have required too much computing power— note the Y-axis has logarithmic scale— while 2048-bit keys were well within range of factoring with commodity hardware that can be leased from AWS.

It turns out that sometimes it is better missing the mark by one bit than hitting the target dead-on with the wrong algorithm.


Airbnb reviews and the prisoner’s dilemma

Reputation in the sharing economy

[Full disclosure: this blogger was head of information security for Airbnb 2013-2014]

In a recently published Vice article titled “I Accidentally Uncovered a Nationwide Scam on Airbnb,” a journalist goes down the rabbit-hole of tracking down instances of fraud at scale on the popular sharing economy platform. The scam hinges on misrepresentation: unsuspecting guests sign up for one listing based on the photographs, only to be informed minutes before their check-in time about an unforeseen problem with the unit that precludes staying there. Instead the crooks running the scam direct the guests to an allegedly better or more spacious unit also owned by the host. As expected, this bait-and-switch does not turn out very well for the guests, who discover upon arrival that their new lodgings are less than stellar: run-down, unsanitary and in some cases outright dangerous.

First, there is no excuse for the failure to crack down on these crooks. As the headline makes clear, this is not an isolated incident. Multiple guests were tricked by the same crook in exactly the same manner. In an impressive bit of sleuthing, the Vice journalist proceeds to identify multiple listings on the website using staged pictures with the same furniture and reach out to other guests conned by the same perpetrator. (She even succeeds in digging up property records for the building where the guests are routed after their original listing mysteriously becomes unavailable, identifying the owner and his company on LinkedIn.) Airbnb is not a struggling early-stage startup. It has ample financial resources to implement basic quality assurance: every listing must be inspected in person to confirm that its online depiction does not contain materially significant misrepresentations. The funds used to fight against housing ordinances or insult public libraries in San Francisco are better off redirected to combating fraud or compensating affected customers. Ironically the company exhibited such a high-touch approach in its early days, while it was far more constrained in workforce and cash: employees would personally visit hosts around the country to take professional photographs of their listings.

Second, even if one accepts the premise that 100% prevention is not possible— point-in-time inspection does not guarantee the host will continue to maintain the same standards— there is no excuse for the appalling response from customer support. One would expect that guests are fully refunded for the cost of their stay or, better yet, that Airbnb customer support can locate alternative lodgings in the same location in real time once guests discover the bait-and-switch. These guests were not staying on some remote island with few options; at least some of the recurring fraud took place in large metropolitan areas such as Chicago, where the platform boasts thousands of listings to choose from. Failing all else, Airbnb can always swallow its pride and book the guest into a hotel. Instead affected guests are asked to navigate a Kafkaesque dispute resolution process to get their money back for even one night of their stay. In one case the company informed the guest that the “host”— in other words, the crooks running this large-scale fraudulent enterprise— had a right to respond before customer support could take action.

Third, the article points to troubling failures of identity verification on the platform, or at least identity misrepresentation. It is one thing for users of social networks to get by with pseudonyms and nicknames. A sharing platform premised on the idea that strangers will be invited into each others’ place of residence is the one place where verified, real-world identity is crucial for deterring misconduct. If there is a listing hosted by “Becky” and “Andrew,” customers have every reason to believe that there are individuals named Becky and Andrew involved with that listing in some capacity. The smiling faces in the picture need not necessarily be the property owners or current lease-holder living there. They could be agents of the owner helping manage the listing or even professional employees at a company that specializes in brokering short-term rentals on Airbnb. But there is every expectation that such individuals exist, along with a phone number where they can be reached— otherwise, what is the point of collecting this information? Instead as the article shows, they appear to be fictitious couples with profile pictures scraped from a stock-photography website. The deception was in plain sight: an Airbnb review from 2012 referred to the individual behind the profile by his true name, not the fabricated couple identity. While there is an argument for using shortened versions, diminutives, middle-names or Anglicized names instead of the “legal” first name printed on official government ID, participants should not be allowed to make arbitrary changes to an existing verified profile.

To be clear: identity verification can not necessarily stop bad actors from joining the platform any more than the receptionist’s perfunctory request for a driver’s license stops criminals from staying at hotels. People can and do commit crimes under their true identity. One could argue that Airbnb ought to run a background check on customers and reject those with prior convictions for violent offenses. Aside from being obviously detrimental to the company bottom line and possibly even running afoul of laws against discrimination (not that violating laws has been much of a deterrent for Airbnb), such an approach is difficult to apply globally. It is only for US residents that a wealth of information can be purchased on individuals, conveniently indexed by their social security number. More to the point, there is no “precrime unit” a la The Minority Report for predicting whether an individual with an otherwise spotless record will misbehave in the future once admitted onto the platform.

Far more important is to respond swiftly and decisively once misbehavior is identified, in order to guarantee the miscreants will never be able to join the platform again under some other disguise. At the risk of sounding like the nightmarish social-credit system being imposed in China as an instrument of autocratic control, one could envision a common rating system for the sharing economy: if you are kicked out of Airbnb for defrauding guests, you are also prevented from signing up for Lyft. (Fear not, Uber will likely accept you anyway.) In this case a single perpetrator brazenly operated multiple accounts on the platform, repeatedly bait-and-switching guests over to units in the same building he owned, leaving behind an unmistakable trail of disgruntled guest reviews. Airbnb still could not connect the dots.

The problem with peer-reviews

Finally, and this is the most troubling aspect, the article suggests the incentive system for reviews is not working as intended. In a functioning market, peer reviews elicit honest feedback and accurately represent the reputation of participants. The article points to several instances where guests inconvenienced by fraudulent listings were reluctant to leave negative feedback. Even worse, there were situations where the perpetrators of the scams left scathing reviews full of fabrications for the guests, in an effort to cast doubt on the credibility of the understandably negative reviews those guests were expected to leave.

Incidentally, Airbnb did change its review system around 2014 to better incentivize both parties to provide honest feedback without worrying about what their counterparty will say. Prior to 2014, reviews were made publicly visible as soon as the guest or host provided them in the system. This created a dilemma: both sides were incentivized to wait for the other to complete their review first, so they could adjust their feedback accordingly. For example, if guests are willing to overlook minor issues with the listing, the host may be willing to forgive some of their minor transgressions. But if the guest review consisted of nitpicking about every problem with the listing (“too few coffee mugs— what is wrong with this place?”), the host will be inclined to view guest conduct through an equally harsh lens (“they did not separate the recycling— irresponsible people”). That creates an incentive for providing mostly anodyne, meaningless feedback and avoiding confrontation at all costs. After all, the side that writes a negative review first is at a distinct disadvantage: their counterparty can write an even harsher response, not only responding to the original criticism but also piling on far more serious accusations against the author. It also means that reviews may take longer to arrive. When neither side wants to go first, the result is a game-of-chicken between guest and host played against the review deadline.

In the new review model, feedback is hidden until both sides complete their reviews. After that point, it is revealed simultaneously. That means both sides are required to provide feedback independently, without visibility into what their counterparty wrote ahead of time. In theory this elicits more honest reviews— there is no incentive to suppress negative feedback out of a concern that the other side will modify their review in response. (There is still a 30-day deadline to make sure feedback is provided in a timely manner; otherwise either side could permanently hold the reviews hostage.) The situation is similar to the prisoner’s dilemma from game theory: imagine both guest and host having grievances about a particular stay. The optimal outcome from a reputation perspective is one where both sides suppress the negative feedback (“cooperate”) leaving positive reviews, which looks great for everyone— and Airbnb. But if one side defects and leaves a negative review featuring their grievance, the other side will look even worse. Imagine a scenario where the guests say everything was great about the listing and host, while the host claims the guests were terrible people and demands payment from Airbnb for the damage. Even if these charges were fabricated, the guests have lost much of their credibility to counter the false accusations by going on the record with a glowing review about the host. So the stable strategy is to “defect:” include negative feedback in the review, expecting that the counterparty will likewise include their own version of the same grievance.
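The structure can be captured with a toy payoff matrix. The numbers below are invented purely to illustrate the incentives described above, not drawn from any real rating data:

```python
# Payoffs as (guest, host) reputation scores for each pair of strategies.
payoff = {
    ("cooperate", "cooperate"): (3, 3),  # both suppress grievances: all smiles
    ("cooperate", "defect"):    (0, 4),  # glowing review answered by a hit piece
    ("defect",    "cooperate"): (4, 0),
    ("defect",    "defect"):    (1, 1),  # mutual airing of grievances
}

# Defecting is the dominant strategy: whatever the host does, the guest
# scores strictly higher by putting the negative feedback on the record.
for host_move in ("cooperate", "defect"):
    assert payoff[("defect", host_move)][0] > payoff[("cooperate", host_move)][0]
```

By symmetry the same dominance argument applies to the host, which is why mutual defection is the predicted equilibrium even though mutual cooperation scores better for both.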

But game-theoretical outcomes are only observed in the real world when participants follow the optimal strategies expected of rational agents. Decades of behavioral economics research suggest that actual choices made by humans can deviate significantly from that ideal. The Vice article quotes guests who were reluctant to leave negative reviews about the fraudulent hosts even after their decidedly unhappy experiences. This is not surprising either. There are other considerations that go into providing feedback beyond fear of retaliation. For example there are social norms against harshly criticizing other people; recall that all reviews are visible on Airbnb. Other users can look up a prospective guest and observe that he/she has been providing 1-star reviews to all of their hosts. In the absence of such constraints, the game-theoretical conclusion would be taken to an extreme where both sides write the most negative review possible, constrained only by another social norm against making false statements.

Either way, the incentive structure for reviews clearly needs some tweaks to elicit accurate feedback.


Update: Airbnb announced that the company will be manually verifying all 7 million of their listings.