We can decrypt it for you: smart-contracts for outsourced encryption


A small step for cryptographic agility

Among the features introduced by the Byzantium hard-fork of Ethereum, one of the more under-appreciated ones may be support for RSA operations. Most virtual currencies have a very weak story around cryptographic agility, defined as the ability to easily change the cryptographic primitives used, such as signature algorithms, block ciphers and hash functions. For example Bitcoin settled on ECDSA over one elliptic curve— secp256k1— as the signature algorithm and SHA256 for hashing. (There is a long-standing proposal to introduce Schnorr signatures that would add some variety, but it is again limited to that one curve.) Ethereum also uses ECDSA over the same curve, but inexplicably hard-codes a pre-standardization version of SHA3 for hashing.

Arguably these decisions are short-sighted given that large-scale blockchains are notoriously difficult to evolve once deployed. Convergence of the system depends on all participants agreeing on a set of ground rules for interpreting transactions appearing on the chain. If one faction invents some new-fangled feature that the rest of the network does not recognize, they will disagree on which transactions are valid and what the true state of the world is. Bitcoin is culturally allergic to using hard-forks to introduce novel functionality— witness the acrimonious debates over increasing block size— and has historically relied on incremental soft-forks for improvement. The Ethereum Foundation has shown no such restraint, shepherding the currency through multiple hard-forks. Some were controversial and did not exactly go as planned; the 2016 hard-fork to bail out the DAO instead gave rise to Ethereum Classic. More recent forks have been less contentious, likely because they introduced features without obviously being designed for the benefit of a particular contract or stakeholder.

In one sense, RSA support is pure syntactic sugar: EVM is Turing complete. Modular exponentiation, the operation underlying RSA, could have been implemented in Solidity or any other language that compiles to EVM byte-code. Granted, the same could be said of ECDSA support provided by the built-in ecrecover contract invocation: that could have been left up to everyone to implement on their own. Instead EVM designers made a pragmatic decision to provide a native operation for a feature expected to see widespread usage. This is the same judgment call API designers face constantly: whether to provide a single convenient API that packages commonly used functionality or leave it up to the user to stitch that together from existing APIs. In this blog post we evaluate one scenario that is now made possible with RSA support, namely outsourcing key-management and encryption to trusted third-parties.
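
For concreteness, here is a minimal sketch of how a contract can tap into that native support. It assumes the EIP-198 modular-exponentiation precompile at address 0x05 introduced in Byzantium and Solidity 0.8; the contract and function names are illustrative only.

    pragma solidity ^0.8.0;

    // Minimal wrapper around the EIP-198 modexp precompile at address 0x05,
    // which is what makes on-chain RSA operations affordable.
    contract ModExpDemo {
        // Computes base ** exponent % modulus over big-endian byte strings.
        // The result has the same length as the modulus.
        function modexp(bytes memory base, bytes memory exponent, bytes memory modulus)
            public view returns (bytes memory)
        {
            // EIP-198 input layout: len(base) | len(exp) | len(mod) | base | exp | mod,
            // with each length encoded as a 32-byte big-endian integer.
            bytes memory input = abi.encodePacked(
                uint256(base.length), uint256(exponent.length), uint256(modulus.length),
                base, exponent, modulus
            );
            (bool ok, bytes memory output) = address(0x05).staticcall(input);
            require(ok, "modexp precompile failed");
            return output;
        }
    }

With the public exponent and modulus passed as byte strings, this single call performs the RSA public-key operation (encryption or signature verification) that would otherwise require a hand-rolled big-number library in EVM byte-code.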

We can decrypt that for you

Common disk-encryption solutions including Microsoft BitLocker and Apple FileVault have an option to back up keys to the enterprise. This is achieved by encrypting contents to two or more different keys: one chosen by the user and one chosen by the IT department, usually an RSA public key. This allows the company to access storage on those devices even if the assigned user is unable or unwilling to assist. Taking this scenario one step further: suppose the IT department does not want to manage that additional key that can decrypt all employee devices. Can they outsource this to a trusted third-party in a verifiable way? The third-party is retained to provide on-demand decryption when the IT department needs to access one of the devices using the secondary key. Alternatively the third-party could act as a back-up: even if the IT department itself has a key, it may want additional redundancy.

There is an important question around whether such draconian arrangements are justifiable or consistent with precedent. For now we will put aside these policy questions, which are highly dependent on jurisdiction. US companies typically adopt the stance that employees have no expectation of privacy on company-issued devices. In centrally managed IT environments, it is very common for the company to have root/administrator privileges over all machines. Those privileges allow IT personnel to remotely execute arbitrary commands on user devices and access any stored information. In other words, being able to decrypt storage offline does not grant a new capability that is not already present in a traditional, all-Windows environment with machines joined to an Active Directory domain.

One important tangent on policy is warranted here before moving on to the implementation. On the surface, outsourcing encryption looks awfully similar to the dangerous and discredited notion of mandatory key-escrow. Starting in the 1990s, the US government has requested backdoors in cryptographic protocols, asking that all encryption products effectively use two keys: one for the intended recipient and one for law-enforcement in case future access is required. Following strong backlash from the cryptography and civil-liberties communities, these ideas seemingly fell out of favor as access to strong encryption increasingly became taken for granted. But the uneasy truce came to an abrupt end in the aftermath of the 2015 San Bernardino shooting, which resurrected calls for key-escrow. Compelled key-escrow is problematic because unwitting users are forced to rely on third-parties they have no control over, merely by virtue of purchasing gadgets or software from vendors subject to that jurisdiction. This is different from voluntary escrow taking place in a closed system such as an enterprise IT environment, where the company is making a deliberate risk-benefit decision to outsource key-management to selected third-parties. In addition to the freedom of choosing these providers—or not choosing, for that matter, in favor of keeping that function in-house— they can also reduce risk by splitting keys. For example, three providers can be selected and the original secret split in a 2-of-3 threshold scheme with each share encrypted to a different provider. That arrangement removes single points of failure, requiring the cooperation, malfeasance or compromise of multiple providers to recover the secret.

Design outline

A smart contract running on Ethereum can provide a number of useful properties:

  • The backup service gets paid per decryption in a verifiable manner— payment is made if and only if a successful decryption is provided
  • The backup service can post a fidelity-bond attesting to the security of its key management. These funds are under control of the contract and designed to be released to an attacker upon submission of evidence that the key has been compromised. In effect, this is a self-enforcing bug bounty as well as a canary designed to alert users of a key compromise. As long as the economic value of the key for an attacker is less than the value of the bounty, a rational attacker would choose to claim the bounty and, in the process, alert the entire world that the key has been compromised. If no such evidence emerges after some time, the funds are released to the operator of the service. (Alternatively funds can be returned gradually over time, with some fraction returned every few months of successful operation.)

How would such a system operate? Since a contract itself cannot hold secrets, the actual decryption keys must be held by the service provider. But the provider can publish the public key, which is sufficient for encrypting data. The customer is free to encrypt arbitrary plaintext using that key. But when it is time for decryption, they have to go through the smart contract to reach the service provider. This contract has three methods (a rough Solidity sketch follows the list):

  1. Request decryption: Customer submits a request to decrypt a ciphertext encrypted in the RSA key of the backup service. This call is also required to send the previously agreed-upon amount of ether to the contract as pre-payment. In practice it would also be subject to an authorization check based on the message sender address; only the legitimate customer is allowed to request decryption. This prevents random interlopers who stumble on the ciphertext from using the contract as a decryption oracle.
  2. Submit decryption result: Invoked by the operator of the backup service, providing the plaintext for a ciphertext previously uploaded by the customer.
  3. Request refund: Invoked by the user to get their money back if the service was unable to respond in a timely manner. (Note there is a race-condition here in the event of blockchain congestion or miner censorship. The decryption result becomes visible to the customer as soon as the service provider broadcasts it to the network. But it will not count towards meeting the deadline until it is mined.)
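
To make the flow concrete, here is a minimal Solidity sketch of such a contract. All names, the flat fee and the 24-hour refund window are placeholder assumptions; it handles a single customer and glosses over details such as padding and gas costs. It is a sketch of the idea, not a production implementation.

    pragma solidity ^0.8.0;

    contract OutsourcedDecryption {
        address public customer;      // only this address may request decryptions
        address public provider;      // operator of the backup service
        bytes public rsaModulus;      // provider's RSA modulus N, big-endian
        bytes public rsaExponent;     // provider's public exponent e, big-endian

        uint256 public constant PRICE = 0.1 ether;   // assumed per-decryption fee
        uint256 public constant TIMEOUT = 1 days;    // refund becomes available after this

        struct Request {
            bytes ciphertext;
            uint256 paidAt;
            bool fulfilled;
        }
        mapping(uint256 => Request) public requests;
        uint256 public nextId;

        constructor(address _customer, address _provider, bytes memory _modulus, bytes memory _exponent) {
            customer = _customer;
            provider = _provider;
            rsaModulus = _modulus;
            rsaExponent = _exponent;
        }

        // 1. Request decryption: the customer escrows the fee along with a ciphertext.
        function requestDecryption(bytes calldata ciphertext) external payable returns (uint256 id) {
            require(msg.sender == customer, "not authorized");
            require(msg.value == PRICE, "wrong fee");
            id = nextId++;
            requests[id] = Request(ciphertext, block.timestamp, false);
        }

        // 2. Submit decryption result: the provider supplies the (fully padded) plaintext;
        //    the contract re-encrypts it with the public key and compares to the ciphertext.
        function submitDecryption(uint256 id, bytes calldata plaintext) external {
            Request storage r = requests[id];
            require(msg.sender == provider, "not provider");
            require(r.paidAt != 0 && !r.fulfilled, "no open request");
            require(keccak256(modexp(plaintext, rsaExponent, rsaModulus)) == keccak256(r.ciphertext),
                    "incorrect decryption");
            r.fulfilled = true;
            payable(provider).transfer(PRICE);   // capture the escrowed payment
        }

        // 3. Request refund: the customer reclaims the fee if no valid decryption arrived in time.
        function requestRefund(uint256 id) external {
            Request storage r = requests[id];
            require(msg.sender == customer, "not authorized");
            require(r.paidAt != 0 && !r.fulfilled, "no open request");
            require(block.timestamp > r.paidAt + TIMEOUT, "too early");
            delete requests[id];
            payable(customer).transfer(PRICE);
        }

        // EIP-198 modexp precompile wrapper, as in the earlier snippet.
        function modexp(bytes memory b, bytes memory e, bytes memory m) internal view returns (bytes memory) {
            (bool ok, bytes memory out) = address(0x05).staticcall(
                abi.encodePacked(uint256(b.length), uint256(e.length), uint256(m.length), b, e, m));
            require(ok, "modexp precompile failed");
            return out;
        }
    }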

Step #2 is where support for RSA operations comes in handy. This is the point where the smart-contract must verify whether a correct decryption was provided, by applying the encryption step to the alleged plaintext. (Note we are assuming raw RSA decryption— this is also required to support “blinding” as explained below. The plaintext itself may have padding such as OAEP but its fully-padded version must be returned by the backup service.) If the provided plaintext is correct, the payment originally sent in step #1 is “captured” for good. Otherwise the funds remain in a provisional state. If no decryption materializes after some time period such as 24 hours, the user can invoke the refund function. This function checks the internal contract state and verifies that an outstanding decryption request initiated by the same caller has not been successfully resolved. If that is the case, the initial payment is sent back to the caller.

This construction sounds counter-intuitive in one aspect: everything taking place on public blockchains including Ethereum is visible to the entire world. Anyone else can observe the above transactions getting posted, including the decryption result. That may look like a serious problem— everyone else is learning the secret— but there is an easy workaround: the RSA cryptosystem supports blinding, thanks to its multiplicative homomorphism. Instead of decrypting the primary ciphertext, the customer submits a modified version that is masked by a random factor. The decryption of that masked ciphertext still allows recovering the original plaintext, but only for the original submitter with knowledge of the masking factor. Other people observing the blockchain transactions are effectively witnessing the decryption of a random value that reveals no information about the original.
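
Concretely, the blinding arithmetic relies on nothing more than modular multiplication. With N the provider's modulus, e and d the public and private exponents, c = m^e mod N the ciphertext of interest, and r a random value chosen by the customer (invertible mod N):

    c' = c * r^e mod N                  (blinded ciphertext submitted to the contract)
    m' = (c')^d mod N = m * r mod N     (what the provider's decryption returns)
    m  = m' * r^(-1) mod N              (unblinding, performed locally by the customer)

Since r^(ed) = r mod N, the masking factor survives decryption intact and divides out cleanly; observers of the chain only ever see the random-looking pair (c', m').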

Improving accountability

What about providing accountability for the service provider? The idea here is to augment the contract with additional methods that an attacker who compromised the key can invoke to prove that fact and receive the value of the fidelity-bond as their reward. There are different ways to do this, varying in complexity and level of evidence required. For example, an attacker can submit the factorization of the modulus and the contract verifies that the product of the factors is equal to the modulus. This is sufficient but not necessary to prove key compromise. For example, in many situations the private key material is kept on tamper-resistant hardware such as an HSM, which is designed to perform operations without surrendering the raw key bits. If an adversary obtains possession of the HSM and has the necessary credentials to operate it, the key is effectively compromised regardless of whether the raw bits can be exfiltrated out of the hardware.
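
As a toy illustration, a bond-claiming function along those lines might look like the sketch below. For readability it assumes the modulus fits in a single 256-bit word; a real 2048-bit modulus would need a big-integer multiplication routine, and all names and parameters are hypothetical.

    pragma solidity ^0.8.0;

    // Toy sketch: release the fidelity bond to anyone exhibiting a factorization of N.
    contract FidelityBond {
        uint256 public modulus;       // provider's RSA modulus (toy-sized here)
        address public provider;
        uint256 public releaseTime;   // bond returns to the provider after this time

        constructor(uint256 _modulus, uint256 _releaseTime) payable {
            modulus = _modulus;       // the ether sent here is the fidelity bond
            provider = msg.sender;
            releaseTime = _releaseTime;
        }

        // A non-trivial factorization proves key compromise and collects the bond.
        // Solidity 0.8 checked arithmetic makes p * q revert on overflow, so the
        // check cannot be gamed with wrap-around tricks.
        function claimWithFactors(uint256 p, uint256 q) external {
            require(p > 1 && q > 1, "trivial factor");
            require(p * q == modulus, "not a factorization");
            payable(msg.sender).transfer(address(this).balance);
        }

        // If no compromise is demonstrated by the deadline, the provider recovers the bond.
        function reclaim() external {
            require(msg.sender == provider && block.timestamp > releaseTime, "not yet");
            payable(provider).transfer(address(this).balance);
        }
    }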

We can lower the burden of proof and require attackers to only prove they can decrypt arbitrary ciphertexts. This is tricky because the contract itself is used to arrange for decryption of arbitrary ciphertexts and those results are available on the blockchain for all to see. Not to mention, when the RSA cryptosystem is used without padding, existential forgery of what appears to be “successful” decryptions is easy. Anyone can fabricate plaintext/ciphertext pairs for which they claim to have magic decryption capability. Even in a design where that choice is taken away, the customer of the encryption service can subvert the process: they can request a decryption and leverage that result as “evidence” to prove they have control over the private key. (RSA blinding makes it impossible to tell whether that was the same ciphertext, since a randomly masked value is decrypted each time.)

A simple if not entirely satisfactory approach is to implement a challenge/response protocol during which ordinary decryption requests are paused. The “prover” who claims to have control over the private key initiates the process by invoking a contract function. To prevent spam, we can require that such challenges are accompanied by some amount of ETH as commitment, to weed out provers with no hope of successfully completing the process. That function returns a challenge, which is based on a recent block hash of the chain, a message sent by the prover and internal contract state that incorporates all ciphertexts it has been asked to decrypt to date. (For example we can maintain a running hash that is updated with each request received.) Note that contracts cannot provide “random numbers” in the usual sense; incorporating past ciphertexts prevents using information disclosed during past decryption operations to answer the current challenge. The prover is given a deadline, measured in block-height or time, to provide the decryption of that challenge. If a valid RSA plaintext emerges during that window, the funds maintained as fidelity bond by the contract are sent to the prover. Otherwise the challenge is considered a failure and the contract resumes decryption services. By creating unavoidable penalties in the event of key compromise, this approach provides financial incentives for the service provider to take adequate measures. (But it does not prevent a corrupt provider from voluntarily decrypting ciphertexts when requested by parties other than the legitimate customer.)
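
One possible shape for that challenge/response extension is sketched below. The deposit amount, the response window and all names are assumptions, and the running hash is presumed to be updated by the decryption-request path described earlier; consider it a sketch rather than a finished design.

    pragma solidity ^0.8.0;

    contract KeyCompromiseChallenge {
        bytes public rsaModulus;      // provider's RSA modulus N, big-endian (e.g. 256 bytes)
        bytes public rsaExponent;     // provider's public exponent e
        bytes32 public runningHash;   // folds in every ciphertext the service has been asked to decrypt
        bool public paused;           // ordinary decryption requests pause during a challenge

        address public prover;
        bytes public challenge;       // value the prover must take an e-th root of, mod N
        uint256 public deadline;      // block height by which the answer must be mined

        uint256 public constant DEPOSIT = 1 ether;   // assumed anti-spam commitment
        uint256 public constant WINDOW = 240;        // assumed response window, in blocks

        constructor(bytes memory _modulus, bytes memory _exponent) payable {
            rsaModulus = _modulus;    // the ether sent here is the fidelity bond
            rsaExponent = _exponent;
        }

        // Called by the decryption-request path (see earlier sketch) for every ciphertext received.
        function recordCiphertext(bytes memory ciphertext) internal {
            runningHash = keccak256(abi.encodePacked(runningHash, ciphertext));
        }

        // Anyone claiming control of the private key opens a challenge, staking a deposit.
        function openChallenge(bytes32 seed) external payable {
            require(!paused, "challenge already in progress");
            require(msg.value == DEPOSIT, "missing deposit");
            // Derive the challenge from a recent block hash, the prover's message and the
            // running hash of past ciphertexts, so earlier decryptions are of no help.
            bytes32 h = keccak256(abi.encodePacked(blockhash(block.number - 1), msg.sender, seed, runningHash));
            bytes memory c = new bytes(rsaModulus.length);
            for (uint256 i = 0; i < 32; i++) {
                c[c.length - 32 + i] = h[i];   // challenge = h, left-padded with zeros to |N|
            }
            challenge = c;
            prover = msg.sender;
            deadline = block.number + WINDOW;
            paused = true;
        }

        // The prover wins the bond by exhibiting m with m^e mod N == challenge, a raw RSA
        // "decryption" only the holder of the private exponent can compute.
        function answerChallenge(bytes calldata plaintext) external {
            require(paused && msg.sender == prover && block.number <= deadline, "no open challenge");
            require(keccak256(modexp(plaintext, rsaExponent, rsaModulus)) == keccak256(challenge), "wrong answer");
            paused = false;
            payable(prover).transfer(address(this).balance);   // bond plus the prover's own deposit
        }

        // After the deadline passes without a valid answer, decryption service resumes.
        function closeChallenge() external {
            require(paused && block.number > deadline, "challenge still open");
            paused = false;   // the forfeited deposit simply stays with the contract
        }

        // EIP-198 modexp precompile wrapper, as in the earlier snippets.
        function modexp(bytes memory b, bytes memory e, bytes memory m) internal view returns (bytes memory) {
            (bool ok, bytes memory out) = address(0x05).staticcall(
                abi.encodePacked(uint256(b.length), uint256(e.length), uint256(m.length), b, e, m));
            require(ok, "modexp precompile failed");
            return out;
        }
    }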

CP

 
