Cloud storage with end-to-end encryption: AWS Storage Gateway (part I)

Cloud storage sans surveillance-capitalism

This post picks up on a series of experiments from 2013, originally written in the aftermath of the Snowden disclosures. These experiments started with one question: how feasible is it to use cloud storage services as a glorified remote drive, without giving the service provider any visibility into the data stored? This approach stands in stark contrast to how most cloud providers would prefer their services to be used. Call it the Surveillance Capitalism approach to storage: Google Drive, Dropbox and Microsoft OneDrive all operate in terms of individual files. While each provider may proudly tout their encryption-in-transit and encryption-at-rest to protect those files as they bounce around the internet, they all gloss over one inconvenient detail: the provider has access to the contents of those files. In fact the whole business model is predicated on being able to “add value” to the contents. For example if it is a PDF, index the text and allow searching by keywords across your entire collection of documents. If it is a photograph, analyze and automatically tag the image with names using facial recognition. For the most part, all of these applications require access to the cleartext content. While there is a nascent research field for working with encrypted data— where the service provider only has access to encrypted contents but can not recover the original plaintext— these applications are largely confined to a research setting. “Trust us,” the standard Silicon Valley bargain goes: “we need access to your data so we can provide valuable services at zero (perceived, upfront) cost to you.”

This approach underserves a vocal segment of consumers who are uncomfortable with that trade-off, who would gladly dispense with these so-called value adds or pay a premium in exchange for better privacy. Specialized services cropped up in the aftermath of the Snowden revelations catering to that segment, promising to provide encryption-at-rest with keys held only by the customer— the Bring Your Own Keys or BYOK model. Yet each offering was accompanied by the baggage of home-brew design and proprietary clients required to access data protected according to that model. This made integration tricky, because protecting remote data looked nothing like protecting local data. Each platform already has a de facto standard for encrypting local disk drives: Bitlocker for Windows, LUKS for Linux and FileVault on OSX. Their prevalence led many individuals and organizations to adopt key management strategies tailored to that specific standard, designed to achieve the desired security and reliability level. For example an organization may want encryption keys rooted in hardware such as a TPM while also requiring some recovery option in case that TPM gets bricked. Proprietary designs for encrypting remote storage are unlikely to fit into that framework or achieve the same level of security assurance.

AWS Storage Gateway

The AWS Storage Gateway product is hardly new. Part of the expanding family of Amazon Web Services features, it was first introduced in 2012, and very little has changed in the way of high-level functionality— this blog post could have been published seven years ago. While AWS also provides file-oriented storage options such as S3 and Glacier, ASG operates on a different model: it provides an iSCSI interface, presenting the abstraction of a block device. An iSCSI volume is accessed the same way a local solid-state or spinning drive would be: addressed in terms of chunks of storage (“fetch the contents of block #2” or “write these bits to block #5”). One corollary is that existing operating system features that work on local disks also work on remote iSCSI volumes— modulo minor caveats. In particular full disk encryption schemes can be used to encrypt them in the same way they can encrypt local drives— an earlier blog post walked through the example of Windows Bitlocker-To-Go encrypting an iSCSI volume using smart-cards.
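To make the block abstraction concrete, here is a minimal Python sketch of what addressing a drive in terms of blocks looks like from the operating system's point of view. The device path and block size are placeholder assumptions; a volume attached through an iSCSI initiator on Linux shows up as just another device node like this:

    import os

    # Placeholder values: the device node a Linux iSCSI initiator might
    # assign to the volume, and a 512-byte logical block size.
    DEVICE = "/dev/sdb"
    BLOCK_SIZE = 512

    def read_block(fd, block_number):
        # "fetch the contents of block #2"
        return os.pread(fd, BLOCK_SIZE, block_number * BLOCK_SIZE)

    def write_block(fd, block_number, data):
        # "write these bits to block #5"
        assert len(data) == BLOCK_SIZE
        os.pwrite(fd, data, block_number * BLOCK_SIZE)

    fd = os.open(DEVICE, os.O_RDWR)
    block_two = read_block(fd, 2)
    os.close(fd)

Nothing in this view distinguishes a local SSD from a remote iSCSI volume, which is exactly why higher-level features such as full disk encryption carry over unchanged.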

The “gateway” itself is either a dedicated hardware appliance available for sale or a virtual machine that can be hosted on any common virtualization platform. Management of the appliance is split between the AWS Management Console and a restricted shell running on the appliance itself.

Gateway VM on VMware Fusion, logged into the restricted shell for configuration

iSCSI beyond the local neighborhood

ASG represents more than a shift from one protocol to another, more convenient one. After all, no one needed help from Amazon to leverage iSCSI; it is commodity technology dating back two decades and does not even require specialized storage hardware. Windows Server 2012 can act as a virtual iSCSI target, providing any number of volumes of a specified size that can be accessed remotely. So why not launch a few Windows boxes in the cloud— perhaps at AWS even— create iSCSI volumes and call it a day?

The short answer is iSCSI is not designed to operate over untrusted networks. It provides relatively weak, password-based initial authentication and, more importantly, provides no security on the communication link. The lack of confidentiality is not necessarily a problem when one assumes the data itself is already encrypted, but the lack of integrity is a deal breaker: it means an adversary can modify bits on the wire, resulting in data corruption during reads or writes. Granted, sound full-disk encryption (FDE) schemes seek to prevent attackers from making controlled changes to data. Corrupted blocks will likely decrypt to junk instead of a malicious payload. But this is hardly consolation for customers who lose valuable data. For this reason iSCSI is a better fit inside trusted local networks, such as one spanning a datacenter. On the other hand, if the servers providing those iSCSI targets are inside the datacenter, one has not achieved true “cloud storage” in the sense of having remote backups— now those servers have to be backed up some place else in the cloud, outside the datacenter.

ASG provides a way out of that conundrum. On the front end, it presents a traditional iSCSI target that other devices on the network can mount and access. While that part is not novel, the crucial piece happens behind the scenes:

  • ASG synchronizes contents with AWS S3. That channel does not use iSCSI; otherwise it would be turtles all the way down. Instead Amazon has authored a custom Java application that communicates with AWS using an HTTP-based transport protected by TLS.
  • ASG also has intelligent buffering to synchronize write operations in the background, based on available bandwidth. To be clear, ASG maintains a full copy of the entire disk. It is not a caching optimization designed to keep a small slice of contents around based on frequency of access. All data is locally present for read operations. But writes must propagate to the cloud, and this is where local, persistent buffering provides a performance boost by not having to block on slow and unreliable network connections. If the gateway crashes before incoming writes are synchronized to the cloud, synchronization picks up where it left off after the VM restarts.

Encrypted personal cloud storage

Local VM for AWS storage

Here is a simple model for deploying AWS Storage Gateway to provide end-to-end encrypted personal storage on one device. This example assumes Windows with a virtualization platform such as Hyper-V or VMware Workstation:

  • AWS Storage Gateway will run as a guest VM. While ASG is very much an enterprise technology focused on high-end servers, its resource requirements are manageable for moderate desktops and high-end laptops. iSCSI is not particularly CPU intensive but AWS calls for ~8GB of memory allocated to the VM, although the service will run with a little less. It is however more demanding of storage: buffers alone require half a terabyte of disk space even when the underlying iSCSI volume itself is only a handful of GB. AWS software will complain and refuse to start until this space is allocated. Luckily most virtualization platforms support dynamically resized virtual disk images. The resulting image has an upper bound on storage but only takes up as much space as required to hold current contents. This makes it possible to appease the gateway software without dedicating massive amounts of space upfront.
  • Configure networking to allow inbound connections to the VM over the internal network shared between host & guests. The VM must still have a virtual adapter connected to an external network, since it needs to communicate with AWS. But it will not be allowed to accept inbound iSCSI connections from that interface.
  • Use the Windows iSCSI initiator to access the iSCSI target over the local virtual network shared between host & guests.
  • After the disk is mounted, create an NTFS-formatted volume and configure Bitlocker disk encryption as usual. The Windows Disk Management utility treats the ASG volume as an ordinary local disk. In fact the only hint that it is not a vanilla hard drive is contained in the self-reported device name from the gateway software. (A command-line version of these last two steps is sketched after the screenshot below.)

Windows Disk Management view of an iSCSI volume hosted by AWS Storage Gateway
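For those who prefer scripting the last two steps over clicking through the GUI, here is a hedged sketch driving the built-in Windows command-line tools (iscsicli and manage-bde) from Python. The portal address, target IQN and drive letter are placeholders specific to any one setup, and the exact manage-bde options should be double-checked against your Windows version:

    import subprocess

    # Placeholder values: the gateway's address on the host-only virtual
    # network, the IQN it advertises, and the drive letter assigned after
    # the new disk is brought online and formatted as NTFS.
    PORTAL = "192.168.55.2"
    TARGET = "iqn.1997-05.com.amazon:myvolume"
    VOLUME = "E:"

    def run(*args):
        print(">", " ".join(args))
        subprocess.run(args, check=True)

    # Point the Windows iSCSI initiator at the gateway and log in.
    run("iscsicli", "QAddTargetPortal", PORTAL)
    run("iscsicli", "QLoginTarget", TARGET)

    # Turn on Bitlocker for the mounted volume, keeping a numerical
    # recovery password as a fallback protector.
    run("manage-bde", "-on", VOLUME, "-recoverypassword")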

This model works for personal storage but poses some usability problems. In particular it requires a local VM on every device requiring access to cloud storage, and it does not allow concurrent access from multiple devices. (Mounting the same iSCSI target from multiple initiators in read/write mode is an invitation to data corruption.) The next post will consider a slightly more flexible architecture for accessing cloud data from multiple devices. More importantly, we will circle back to the original question around privacy: does this design achieve the objective of using cloud storage as a glorified drive, without giving Amazon any ability to read customer data? Considering that the AWS Storage Gateway is effectively blackbox software provided by Amazon and accepts remote updates from Amazon, we need to carefully evaluate the threat model and ask what could happen in the event that AWS goes rogue or is compelled by law enforcement to target a specific customer.

[continued]

CP

 

Ethereum mixing with RSA: getting by without zero-knowledge proofs

Old-fashioned and unpopular as RSA may have become, it is a versatile public-key crypto-system. Starting with Bitcoin, cryptocurrencies have shunned RSA in favor of signatures based on elliptic curves, initially ECDSA and later moving towards pairing-based cryptography. Ethereum is the lone exception, having added native RSA support with EIP-198. “Native” being the operative keyword. In principle the smart-contract language used by Ethereum is Turing-complete and can implement any computation. In reality computations are bounded by the amount of gas charged, which creates two types of limitations. First is a hard limit set by the maximum amount of gas that can be consumed by all transactions in one block, no matter how motivated a user may be to get a complex transaction mined. The second is a soft, economic incentive to favor cheaper computations when possible. ECDSA signature verification is artificially “cheap” because it is not implemented as ordinary EVM bytecode. Instead it is a special external contract that can be invoked by anyone, at a deeply discounted price compared to what it would have cost to implement the same complex operation from scratch. EIP-198 brings RSA into this model, although the discount is not quite as deep: on any reasonable hardware architecture RSA signature verification is much faster than ECDSA verification, yet the arbitrary gas pricing set by the EVM inexplicably charges more for the former.

Strictly speaking EIP-198 adds support for modular exponentiation, a useful primitive that enables more than just RSA signatures. For example ElGamal over the integers and other crypto-systems based on discrete logarithms are now in the realm of possibility. More importantly, RSA has useful mathematical properties that are notably absent from ECDSA and enable new scenarios. This post covers one example: trustless mixing of Ethereum.
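To see why modular exponentiation is the only expensive primitive needed, consider a sketch of textbook RSA verification in Python. The single pow() call is the operation EIP-198 exposes as a precompile (at address 0x05); the rest is cheap byte shuffling that ordinary EVM bytecode handles easily. Padding is deliberately omitted here for brevity, which a real deployment must not do:

    import hashlib

    def rsa_verify(signature: int, message: bytes, e: int, n: int) -> bool:
        # The one costly step: a single modular exponentiation.
        # This is the primitive EIP-198 adds to the EVM.
        recovered = pow(signature, e, n)
        # Toy comparison against a bare hash; production code would check
        # a full PKCS#1 v1.5 or PSS encoding instead.
        expected = int.from_bytes(hashlib.sha256(message).digest(), "big")
        return recovered == expected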

Background on mixers

Mixers are designed to improve privacy by shuffling funds on the blockchain in such a way that sources and destinations are not connected. To take a concrete example: suppose Alice and Bob have 1 ETH each, stored at known addresses. These are the inputs. They would like to shuffle these funds such that each one ends up with 1 ETH again— perhaps minus some transaction fees— but at new addresses which are not linkable to the original ones by anyone else with a full view of the blockchain. These are the outputs. The choice of 1 ETH is arbitrary, but it is important that everyone contributes the same amount, with exactly as many inputs as outputs. Otherwise the quantity itself becomes a signal, deanonymizing the link between inputs and outputs: if Alice contributed a larger input, the larger output also belongs to her.

With two people, the privacy improvement is marginal: since there are only two addresses to begin with, any given output from this process could only have come from exactly one of two original inputs. The best case one could achieve in this setting is that any outside observer has no better than a 50/50 chance of guessing which way the inputs got shuffled. Also Alice and Bob will always know exactly which input corresponds to which output, since they control the other one. As the saying goes, privacy loves the company of large numbers. Uncertainty in the origin of funds increases with more participants.

But an even more important criterion is soundness. Informally stated, participants must not lose money under any circumstance during the execution of the protocol. Here is a straw-man example that fails on that dimension:

  1. Alice informs Bob of her preferred destination Ethereum address
  2. Bob responds with his choice of address
  3. Alice and Bob flip a coin to decide how to map inputs to outputs
  4. If the coin lands heads, Bob sends 1 ETH to Alice’s address. Otherwise he sends the same amount to his own address
  5. After observing Bob’s transaction confirm on the blockchain, Alice in turn sends 1 ETH either to Bob or herself

This protocol has a glaring weakness. If Alice receives 1 ETH from Bob after step #4, she can diverge from the script and keep the extra ether for herself. For that matter, Alice may be unable to complete the protocol: suppose her hardware wallet failed and she no longer has access to any of her private keys. There is nothing to guarantee that steps #4 and #5 are done atomically: either both happen or neither happens. Once started, the protocol must either run to completion or be rolled back by refunding the money Bob contributed.

While it is possible to repair that flaw using fair-exchange protocols, there is a more fundamental problem with this approach: it assumes that Alice and Bob somehow found each other ahead of time and have a private channel for coordinating their activities off-chain. That does not scale, and it only gets worse when designing a mixer that can accept hundreds of inputs. All of those actors must coordinate and agree on one permutation of inputs to outputs while minimizing what each person learns— otherwise the mixer is easily defeated by a single rogue participant who infiltrated the group— while guaranteeing that no one will lose funds even if everyone else in the group has conspired against them.

Trusted third-parties as deus ex machina

If we posit the existence of a trusted third-party Trent, trivial solutions emerge:

  1. Alice, Bob, Carol and anyone else interested in mixing funds sends 1 ETH to Trent via the Ethereum blockchain
  2. Every participant also sends their preferred output address to Trent. This part must be done privately and can not be additional metadata sent along with the ETH contribution in step #1; otherwise the entire world knows the intended shuffle.
  3. Trent sends every participant approximately 1 ETH to their preferred address, minus some commission for facilitating this transaction

This protocol loses both soundness and privacy if Trent goes rogue. After collecting all the funds in step #1, he can abscond with them. Similarly Trent has full knowledge of the mapping between inputs and outputs. He can disclose this information at any point in the future, voiding any privacy advantage afforded by a service the participants have already paid for.

Contracts for holding trusted third-parties accountable

Ideally Trent is replaced with an immutable smart-contract that is guaranteed to stick to the script. Once Ethereum has support for zero-knowledge proofs via SNARKs, this can be done in a fully autonomous way. But old-fashioned RSA alone is sufficient to enable a middle-ground solution: there is still a trusted third-party involved to facilitate the protocol, but they are prevented from stealing funds or deanonymizing the permutation.

Protocol sketch

Trent publishes a new contract with known EVM bytecode and deposits some funds as “earnest money.” This is effectively a way to incentivize Trent into behaving properly. If Trent executes the protocol faithfully, he will recover the deposit along with any commissions taken for the service. If he diverges from the protocol, the funds will be distributed to participants. Trent generates a unique RSA key-pair and embeds the corresponding public-key in the contract. Once the contract is initialized, execution operates in three stages: collection, opening and distribution, each with specific deadlines that are hard-coded into the contract. In the first stage, the contract collects funds along with blinded messages from participants.

  1. Every participant interested in using the mixer sends 1 ETH to the contract, along with a message to be signed by Trent. The contract records the deposit along with the sending address, but does not consider it finalized until Trent has acknowledged it.
  2. Trent must call a different contract method and provide an RSA signature over the requested message. The contract verifies this RSA signature and only then considers the contribution final.
  3. If Trent fails to complete that step by some deadline, the participant can request a refund. (The contract could even be designed to penalize Trent for discriminating against participants, by sending extra funds taken from the initial deposit.)

In the second stage, users reveal their choice of destination address along with the signature obtained from Trent in step #2 above, by invoking another method on the contract. One subtlety related to blockchains: participants must use a different Ethereum address when interacting with the contract in each stage. Otherwise the origin of the transaction itself allows linking the revealed address to the original request for signing.

Blinded by randomness

This is where the unique properties of RSA shine. If users were submitting the destination address verbatim in step #1, it would defeat the point of using a mixer— everyone could observe these calls and learn which address each person picked. But RSA enables blind signatures: Trent can sign a message without knowing what message he signed. The crucial property is that the message submitted for signing is related to the real message the participant wants signed, namely the destination address. But that relationship is only known to the participant: as far as everyone watching the blockchain is concerned, it appears indistinguishable from a random message. Not even the party in possession of the signing key (Trent, in this case) learns the “true” message being signed or can correlate a signed message presented in the future with the original signing request. In effect, users mask their address before submitting it to Trent for signature and then recover the intended signature by using properties of RSA. (There is some additional complexity being glossed over: signatures are not computed over the address directly. Instead the address is hashed and suitably padded with a scheme such as PSS. Otherwise RSA allows existential forgery, where anyone can find unlimited message-signature pairs, although these messages will not have any particular meaning as far as corresponding to an Ethereum address.)
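Here is a minimal sketch of that blinding arithmetic in Python, using the cryptography package only to generate a key-pair. The hashing shown is a stand-in for the full-domain hashing and padding caveats mentioned above, and the address is a placeholder:

    import hashlib
    import math
    import secrets
    from cryptography.hazmat.primitives.asymmetric import rsa

    # Trent's key-pair; participants only ever see (n, e).
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    n = key.public_key().public_numbers().n
    e = key.public_key().public_numbers().e
    d = key.private_numbers().d

    # Participant: hash the destination address and blind it with a random
    # factor r. The blinded value looks like noise to everyone else.
    address = b"placeholder-destination-address"
    m = int.from_bytes(hashlib.sha256(address).digest(), "big")
    while True:
        r = secrets.randbelow(n)
        if math.gcd(r, n) == 1:
            break
    blinded = (m * pow(r, e, n)) % n

    # Trent: signs the blinded message without learning m.
    blind_sig = pow(blinded, d, n)

    # Participant: divides out r to recover an ordinary signature on m.
    sig = (blind_sig * pow(r, -1, n)) % n
    assert pow(sig, e, n) == m  # verifies like any RSA signature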

To avoid waiting on stragglers indefinitely, a deadline is imposed on the second stage. After this deadline is reached, the contract can start actually disbursing funds to the addresses revealed in the second stage. There is no concept of a “scheduled task” on the Ethereum blockchain, so the final stage is initiated when any participant— including potentially Trent— calls into the contract to request distribution after the deadline has elapsed. At this point the contract can confirm that the timestamp or block height is past the deadline and start sending ether to previously verified output addresses.

Detecting and punishing dishonest operators

There is one flaw in the protocol as described: Trent can cheat by issuing extra signed messages. Recall that the presence of a valid RSA signature on a message authorizes the disbursal of 1 ETH to the address contained in that message. That turns every valid signature into a check worth 1 ETH. While every participant is supposed to receive one signature for every ETH contributed, nothing prevents Trent from issuing signatures over his own addresses and attempting to cash those in.

This is where the initial “earnest money” contributed by Trent comes in, combined with the deliberate delay in releasing funds. Recall that funds are not disbursed immediately when a participant calls into the contract with a valid signature. Instead the contract waits until a predetermined deadline, giving everyone a chance to chime in with their address. As long as every honest participant supplies their address, cheating will become obvious. The contract has a record of exactly how many participants have received valid signatures in response to their contribution. If more signatures turn up than participants, it is clear Trent has diverged from the protocol. It could be a deliberate attempt to steal funds, or perhaps Trent’s RSA signing key was compromised. Either way, the contract will respond by “refunding” all ether to the original input addresses, along with a share of the earnest money put up by Trent. In this case the protocol execution has failed: no mixing has occurred. But all participants receive 100% of their original contribution back, along with some compensation for Trent’s incompetence or dishonesty.

Participants do not have to do anything special to catch and punish Trent for cheating, or monitor the blockchain. As long as they are following the protocol and revealing their own signed address before the deadline, any attempt by Trent to fabricate additional unsolicited signatures will backfire and result in “gifting” money. The only way Trent can get away with cheating is if some participants have failed to reveal their signed address in a timely manner, effectively abandoning their funds. Even in that scenario, Trent would be disincentivized from claiming those funds with forged signatures: he would be taking the risk that missing participants may turn up at the last minute and trigger the retaliation logic.

CP

 

DIY VPN (part II)

[continued from part I]

Economics of a self-hosted VPN

This section considers the overall cost of operating a VPN server. First, we can rule out software acquisition costs. On the server side, the community edition of OpenVPN is freely available as open-source software. (The commercial version, dubbed OpenVPN Access Server, comes with a per-user licensing model. While it adds useful fit-and-finish including a configuration GUI, that version does not change the fundamental scaling or security properties of the system.) On the client side, there are freeware options on all major platforms. One exception is when ADCS is leveraged for the certificate authority. Even there, one can lease hourly access to a private Windows server from major cloud providers, rolling that into operational expenses instead of paying outright for the software. As far as operational expenses are concerned, what follows is a back-of-the-envelope calculation using AWS as the infrastructure provider. While AWS has a free tier for new customers, this analysis assumes that discount has already been exhausted, in the interest of arriving at a sustainable cost basis.

Computation

OpenVPN runs on Linux and is not particularly CPU intensive at the scale envisioned here— supporting a handful of users with 2-3 simultaneous connections at peak capacity. A single t3.nano instance, featuring one virtual CPU and half a GB of RAM, is perfectly up to the task for that load. Based on current AWS pricing, these cost roughly half a cent per hour when purchased on demand in US regions such as Virginia or Oregon. The operational cost can be reduced by committing to reserved instances. Paying upfront for one year takes the price down to 0.30¢/hour, while a three-year term yields 0.20¢/hour. Even greater savings and flexibility may be achieved with what AWS terms spot instances: current spot prices for nano instances hover around 0.16-0.20¢ per hour depending on availability zone. However spot instances come with the risk that the server may be preempted if demand spikes to the point where prices exceed the bid. That level of reliability may still be acceptable for a personal setup, since one can always bid on another spot instance at a higher price. Conservatively using the 0.20¢ estimate, we arrive at EC2 instance costs just shy of $1.50 per month.

Storage

It turns out that the cost of running an instance is not the primary consideration, because of the modest computational requirements of openvpn. When using minimal instance types such as t3.nano, disk space becomes a significant contributor to cost. At the time of writing, AWS charges 10¢ per GB-month for entry-level SSD volumes attached to instances. Costs only go up from there for more demanding loads and guaranteed IOPS. (While there are cheaper options such as spinning disks, those can only be used in addition to the existing boot volume, not as a replacement.) That means simply keeping a 15GB disk around for a month exceeds the cost of running the instance for that same period.

Fortunately Linux distributions are also modest in their disk space requirements. Minimum requirements for Ubuntu 18 server— not the desktop edition, which is considerably more resource hungry— weigh in at a svelte 256MB of memory and 1.5GB of disk space. It is not possible to translate that directly into an AWS image: all popular AMIs offered by Amazon come with a minimum of 8GB disk space, more than five times the stated baseline. AWS will not allow launching an EC2 instance based on one of these AMIs with a smaller boot volume.

Bring-your-own-image

There is a workaround: AWS also allows customers to import their own virtual machine images from common virtualization platforms. That permits crafting a properly slimmed-down Linux image locally and converting it into an AMI.

Fair warning: this process is clunky and error-prone. After uploading an image to S3 and initiating the conversion as a batch process, several minutes can elapse before AWS reports a problem (typically having to do with the image format) that takes users back to square one. It turns out only OVF-format images are accepted. VMware distributions, including Fusion on OSX, can export to that format using the included ovftool.
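Once the exported image is sitting in S3, the conversion can at least be scripted. A rough sketch using boto3, with the bucket and key names as assumptions; the import_image call kicks off the same batch process described above, and the polling loop is where those several-minute failure reports eventually surface:

    import time
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Assumed S3 location of the package exported by ovftool.
    task = ec2.import_image(
        Description="slimmed-down Ubuntu 18 server",
        DiskContainers=[{
            "Format": "OVA",
            "UserBucket": {"S3Bucket": "my-image-bucket", "S3Key": "ubuntu18.ova"},
        }],
    )

    # Poll the batch job; failures (often format related) show up here.
    while True:
        status = ec2.describe_import_image_tasks(
            ImportTaskIds=[task["ImportTaskId"]])["ImportImageTasks"][0]
        print(status["Status"], status.get("StatusMessage", ""))
        if status["Status"] in ("completed", "deleted"):
            break
        time.sleep(30)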

With some buffer to accommodate additional software inserted by AWS into the image, log files and future updates, we end up with an Ubuntu 18 server AMI that can squeeze into a 3GB SSD volume. That comes out to a recurring storage expense of 30¢ per month per EC2 instance. There is also the storage associated with the AMI itself, one copy per region. That carries the slightly lower price-tag charged for EBS snapshots and can be amortized across multiple instances launched in the same region. Assuming the worst case of a single server, we arrive at an upper bound of 50¢ per month.

Bandwidth

While computation and storage expenses are highly competitive with typical prices charged by commercial VPN services, bandwidth is one area where Amazon is much less favorable for operating a 24/7 VPN. Amazon charges 9¢ per GB for outbound traffic, defined as bits heading out of AWS infrastructure in that region. A VPN server has an unusual bandwidth symmetry. Most servers receive a small amount of data inbound— such as a request for a web page— and respond with a large amount of data outbound, for example a high-resolution image or video stream. But a VPN server is effectively acting as a proxy that shuttles traffic in both directions. Every incoming request from the client is forwarded to its true destination, becoming outbound traffic. Likewise every inbound response from that destination is sent back to the client as outbound traffic. So the total traffic volume for purposes of metering is closely approximated by the upstream plus downstream traffic generated by users connected to the VPN, plus a small overhead introduced by the VPN protocol itself.

The average US household consumes 190GB of bandwidth per month according to the latest iGR report from 2017. While that report does not distinguish upstream from downstream, it is reasonable to assume downloading is responsible for the bulk of this figure. Adjusting by 50% for projected growth and including another 50GB for mobile devices with their own data plan yields a number around 300GB per month of traffic. If 100% of that traffic is routed over the VPN service hosted at AWS, bandwidth costs alone would exceed all other operational expenses by a factor of ten.
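Putting the figures quoted so far together gives a quick sanity check on where the money goes each month:

    HOURS_PER_MONTH = 730

    compute = 0.0020 * HOURS_PER_MONTH   # spot t3.nano at ~0.20 cents/hour
    storage = 3 * 0.10 + 0.20            # 3GB boot volume plus AMI snapshot share
    egress = 300 * 0.09                  # 300GB/month at 9 cents per GB

    print(f"compute: ${compute:.2f}")    # ~$1.46
    print(f"storage: ${storage:.2f}")    # ~$0.50
    print(f"egress:  ${egress:.2f}")     # $27.00, dwarfing everything else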

This cost scales linearly with the amount of VPN usage, or equivalently the number of users sharing the VPN. At large enough scale, the same also holds true for EC2 costs, since a single nano instance can not service an unbounded number of subscribers. But those limits are unlikely to be reached for a system intended for a handful of people, while the impact of additional bandwidth usage will be directly reflected in the monthly bill.

Verdict on self-hosting

AWS turns out to be highly competitive for hosting a VPN solution, with the caveat that usage must remain moderate. Considering that popular consumer VPN services charge in the neighborhood of $3-$10 per month, a single nano instance can offer a better solution at lower cost, especially when the server is shared by multiple people. Advantages to self-hosting include:

  • Eliminating trust in a third-party service, including random client apps. A VPN provider is in a position to observe traffic metadata, such as websites visited and frequency of visits. For unenlightened websites not using HTTPS, the VPN service can even observe the full traffic exchange. Unscrupulous providers have taken advantage of this, most recently Facebook with its Onavo spyware product masquerading as a VPN. Hosting your own VPN avoids that dependency, although one must still implicitly trust the infrastructure provider. Amazon has visibility into EC2 traffic which can be used to identify activity: “user coming from IP address 1.2.3.4 connected to the VPN server, and that resulted in the VPN server reaching out to websites at address 6.7.8.9.” While one can be confident AWS (unlike Facebook) respects privacy and will not randomly mine those logs to spy on their own customers, they can still be compelled to disclose records by law enforcement.
  • High-availability with the backing of Amazon infrastructure. Hardware, storage and networking failures are low probability events.
  • Geodistributed points of presence, with ability to host VPN servers in the US, Europe, South America and Asia.
  • Large selection of IPv4 and IPv6 addresses from the AWS range, compared to hosting from a residential connection.
  • More importantly, freedom to change IP addresses at will. A new elastic IP can be requested from AWS and assigned to an instance anytime the service operator wants to discard their association with the existing address.
  • Unlimited number of devices at no additional cost (within the capacity limits of the EC2 instance)

The self-hosting route also comes with two notable caveats:

  • Overhead of maintaining the service, particularly the PKI necessary for issuing/revoking certificates. OpenVPN does not help matters here: it does not directly implement CRL or OCSP lookups for retrieving revocation information, and instead requires the presence of a CRL file deployed locally. (But one can get the same effect by crafting a custom script, executed server-side for each connection, that invokes openssl for proper chain validation paying attention to CDP or OCSP responder links in the certificate. Alternatively a cron job can periodically retrieve CRLs and update the local file where openvpn expects to find this information; see the sketch after this list.)
  • Steep pricing for bandwidth. This is less an argument against the DIY approach and more a cautionary note about using AWS for hosting, which is the obvious choice along with Azure and GCE. It turns out Azure and GCE are not that different when it comes to egress traffic. More cost-effective plans are available from alternative providers such as Linode and Digital Ocean, featuring upwards of 1 terabyte of egress traffic included in the fixed monthly price of server hosting.
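The cron-job approach mentioned in the first caveat is only a few lines of scripting. A sketch in Python, with the CRL distribution point URL and the file path as placeholders for whatever the CA and the openvpn configuration actually use:

    import subprocess
    import urllib.request

    # Placeholders: the CDP URL published in issued certificates, and the
    # crl-verify path from the openvpn server configuration.
    CRL_URL = "http://ca.example.com/pki/vpn.crl"
    CRL_PATH = "/etc/openvpn/crl.pem"

    # Fetch the current CRL (DER-encoded, as ADCS typically publishes it).
    der = urllib.request.urlopen(CRL_URL).read()

    # Convert to the PEM format openvpn expects and drop it in place.
    pem = subprocess.run(
        ["openssl", "crl", "-inform", "DER", "-outform", "PEM"],
        input=der, capture_output=True, check=True).stdout
    with open(CRL_PATH, "wb") as f:
        f.write(pem)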

CP

 

DIY VPN (part I)

Introduction

Net neutrality is not the only casualty of an FCC under the sway of a new administration. In a little-noticed development, ISPs now have free rein to spy on customers and sell information gleaned from their Internet traffic. Google has been on a quest to encrypt all web traffic using the TLS protocol— engaging in subtle games of extortion by tweaking the default web browser UI to throw shade at sites still using the default unencrypted HTTP protocol. But there is one significant privacy limitation with TLS: it does not hide the identity of the website the user is visiting. Such metadata is still visible, both in the TLS protocol and in the addressing scheme used for the underlying IP protocol.

This is where virtual private networks come in handy. By encapsulating all traffic in an encrypted tunnel to another connection point, they can also protect metadata from the prying eyes of surveillance-happy ISPs. But that is only the beginning: VPNs also reduce privacy risks from arguably the apex predator of online privacy— advertising-based revenue models. In the grand tradition of surveillance capitalism, every crumb of information about a customer is enlisted into building comprehensive profiles. Even if consumers never log in to Facebook or Google, these networks can still build predictive profiles using alternative identifiers that help track consumers over time. Of all the stable identifiers that users carry around like unwanted, mandatory name-tags forced on them— HTTP cookies, their counterparts compliments of Flash and creative browser fingerprinting techniques— IP addresses are one of the hardest to strip away. Incognito/private browsing modes allow shedding cookies and similar trackers at the end of the session, but IP addresses are assigned by internet service providers. In many cases they are surprisingly sticky: most residential broadband IPs rarely change.

From clunky enterprise technology to consumer space

In the post-Snowden era, the default meaning of “VPN” also changed. Once an obscure enterprise technology that allowed employees to connect to internal corporate networks from home, it shifted into the consumer space, pitched to privacy-conscious mainstream end-users as a way to protect themselves online. In an ironic twist, even Facebook, the apex predator of online privacy infringement, got into the act: the company acquired the analytics service Onavo and offered the existing Onavo VPN app to teenagers free of charge, in exchange for spying on their traffic. (When TechCrunch broke the news, Apple quickly put the kibosh on that particular privacy invasion by revoking the enterprise certificate Facebook had misused to create an app for general distribution with VPN privileges.) The result has been a competitive landscape for consumer VPN services, complete with the obligatory reviews and rankings from pundits.

For power users there is another option: hosting your own VPN service. This post chronicles experiences doing that, organized into two sections. The first section is an overview of what goes into building a functioning VPN server using common, off-the-shelf open source components. The second half looks at the economics of the DIY approach, comparing the costs against commercial offerings and, more importantly, looking at how different usage patterns— number of presence points, simultaneous users, monthly bandwidth consumed— affect those numbers.

Implementation considerations

“The nice thing about standards is that you have so many to choose from”

Andrew Tanenbaum

Maze of protocols

One of the first challenges in trying to create a VPN solution is the fragmented nature of the standards. Unlike the web, where all web browsers and web servers have converged on TLS for protecting communications, there is no “universal” VPN standard with comparable ubiquity. This is either surprising or utterly predictable depending on perspective. It is surprising considering VPNs are 20+ years old. Attempts to standardize VPN protocols are the same vintage: IPSec RFCs were published in 1995, and L2TP followed a few years later. With the benefit of two decades of protocol tinkering— proprietary, standards-based or somewhere in between— you might expect the ecosystem to converge on some least common denominator by now. Instead VPNs remain highly fragmented, with incompatible options pushed by every vendor. The roots of the technology in enterprise computing may have something to do with that outcome. If Cisco, Palo Alto and Fortinet compete on selling pricey VPN gear to large companies with the assumption that every employee will be required to install a VPN client from the same vendor who manufactures the hardware, there is little incentive to interoperate with each other.

In this morass of proprietary blackbox protocols, OpenVPN stands out as one of the few options with some semblance of transparency. It is not tied to specific hardware, which makes for convenient DIY setups using cloud hosting: it is much easier— not to mention more cost effective— to install software on a virtual server than to rack a physical appliance in a datacenter cage. It also has solid cross-platform support client-side, with a good mixture of freeware and commercial applications on Windows, OSX and Linux flavors— the last in particular being the black-sheep platform frequently neglected by commercial VPN offerings. (WireGuard is a more recent entry distinguished by modern cryptography, but it does not quite have the same level of implementation maturity.)


OpenVPN on iPad

OpenVPN server-side

In the interest of reach and portability, this example uses Ubuntu Server 18 LTS, since many commercial hosting services such as AWS and Linode offer virtual hosting for this operating system. There are extensive tutorials online on setting up the server side, so this blog post will only summarize the steps:

  1. Enable IPv4 & IPv6 forwarding via sysctl configuration
  2. Set up iptables rules (again, for IPv4 & IPv6) and make them persistent using the iptables-persistent package.
  3. Tweak the openvpn server configuration file
  4. Provision credentials
    • Generate Diffie-Hellman parameters
    • Use openssl to generate an RSA key-pair for the server & create a certificate signing request (CSR) based on that pair (see the sketch after this list for an alternative)
    • Submit the certificate request to a certificate authority to obtain a server certificate and install it on the server— more on this in the next section
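As an alternative to the openssl command line in the key-generation step, the same key-pair and CSR can be produced with the Python cryptography package. A sketch, with the subject name as a placeholder:

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    # Generate the server's RSA key-pair.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # Build a CSR over it; the common name is a placeholder.
    csr = x509.CertificateSigningRequestBuilder().subject_name(
        x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "vpn.example.com")])
    ).sign(key, hashes.SHA256())

    # Write both out; the .csr file is what goes to the certificate authority.
    with open("server.key", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.TraditionalOpenSSL,
            serialization.NoEncryption()))
    with open("server.csr", "wb") as f:
        f.write(csr.public_bytes(serialization.Encoding.PEM))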

PKI tar-pit

OpenVPN supports authentication based on different models, but the most commonly used design involves digital certificates. Every user of the VPN service, as well as every server they connect to, has a private key and an associated credential uniquely identifying that entity. There are no shared secrets such as a single password known to everyone, unlike for example L2TP VPNs where all users have the same “PSK” or preshared key. One crucial advantage of using public-key credentials is easy revocation: if one user loses their device or stops using the VPN service, their certificate can be revoked without affecting anyone else. By contrast, if an entire company is sharing the same PSK for every employee, it will be painful to rotate that secret for every departure. From an operational perspective, distributing credentials is simpler too. Passwords have to be synchronized on both sides in order for the server to verify them. With digital certificates, the secret private key only exists on the user’s device and the server does not need to be informed ahead of time whenever a new certificate is issued. (It does, however, need access to the certificate revocation list, or CRL, in order to check for invalidated credentials.)

Public-key credentials avoid many of the security headaches of passwords but come with the operational burden of managing a PKI. That is arguably the most heavy-weight aspect of spinning up standalone VPN infrastructure. Large enterprises have an easier time because the overhead is effectively amortized across other use-cases of PKI in their existing IT infrastructure. Chances are any Windows shop at scale is already running a certificate authority to support some enterprise use case. For example employee badges can be smart-cards provisioned with a digital certificate. Or every machine can be automatically issued a certificate when it joins the domain, using a feature called auto-enrollment, and that certificate can then be leveraged for VPN. That seamless provisioning experience is difficult to recreate cross-platform once we accept the requirement to support OSX, Linux, Android and iOS. Instead credentials must be provisioned and distributed manually— and that includes both VPN users and VPN servers.

There are open-source solutions for running a certificate authority, notably EJBCA. OpenSSL itself can function as a primitive CA. OpenVPN introduced its own solution called easy-rsa, a set of glorified wrapper scripts around OpenSSL to simplify common certificate management tasks. But arguably the most user-friendly solution here is Windows Active Directory Certificate Services, or ADCS. ADCS features a GUI for simpler use-cases and automation through the command-line via old-school certutil and more modern powershell scriptlets.


Active Directory Certificate Services GUI on Windows 2012

ADCS is not free software, but it is widely available as a cost-effective, hosted solution from Amazon AWS and MSFT Azure among other providers. Considering the CA need only be powered on for occasional provisioning and revocation, operational expenses will be dominated by the storage cost for the virtual disk image as opposed to the runtime of the virtual machine. One might expect a CA to stay up 24/7 to serve revocation lists or operate an OCSP responder— and ADCS can do both of those when the server also has the IIS role installed. But OpenVPN can only parse revocation information from a local copy of the CRL. That rules out OCSP and implies CRLs must be manually synchronized to the server, defeating the point of having a server running around the clock to serve up revocation information. In effect OpenVPN only requires an offline CA.

[continued]

CP

Voting with other people’s wallets: plutocracy, blockchain style

“One person, one vote” may be the rallying cry for electoral reform, but governance for cryptocurrency— to the extent one can speak of governance with a straight face in this ecosystem— often operates on a different and decidedly plutocratic principle. Were it not for the extreme volatility of cryptocurrencies against the US dollar, the corresponding slogan would be “one dollar, one vote.” From proof-of-stake algorithms that grant mining shares based on how much capital the miner has committed to the system, to polling mechanisms that attempt to gauge community opinion on various proposals, influence is unabashedly measured as a function of account balance.

One example popularizing this notion as something of a Gallup poll for the Ethereum community is Coinvote. Anyone can create a poll and participants vote yea or nay on the proposal, with their vote weighted by the amount of funds they hold. Voting itself does not involve any exchange of funds. Instead participants cast their vote by sending a specially crafted message, digitally signed using the same private-key associated with their blockchain address. The signature establishes cryptographically that the voter is indeed the same person who controls the purse strings for the associated blockchain address. This system has been used for gauging community opinion on contentious questions, ranging from the EIP-999 proposal to rescue funds trapped in the now-defunct Parity multi-signature wallet to more speculative questions around changing the Ethereum proof-of-work function.
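Mechanically, casting such a vote amounts to signing a short message with the key that controls the funds. Here is a sketch using the eth_account package; the ballot text is invented for illustration, since this post does not document Coinvote's exact message format:

    from eth_account import Account
    from eth_account.messages import encode_defunct

    # A throwaway key-pair standing in for the address holding the funds.
    acct = Account.create()

    # Hypothetical ballot text; whatever format the poll operator expects.
    ballot = encode_defunct(text="EIP-999: NO")
    signed = acct.sign_message(ballot)

    # Anyone can recover the signing address from the signature alone,
    # then look up that address's balance to weight the vote.
    voter = Account.recover_message(ballot, signature=signed.signature)
    assert voter == acct.address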

Publications such as CoinTelegraph blithely cite these poll results as if they were representative of the vaguely-defined “community.” But the design is fundamentally unsound. Even if one buys into the dubious plutocratic ideal that votes count in direct proportion to the voter’s bank account, Coinvote and similar polling models are broken because they fail to account for the quirks of how funds are managed at scale.

Vote early, vote often?

Let’s start with one design subtlety that Coinvote gets right: preventing multiple votes using the same pool of funds. Votes are not counted when they are initially cast but at the end of the polling period, based on blockchain balances at a specific point-in-time. This is important because funds can move around. Imagine that Alice has a pool of ether. She can cast a vote using this pool of funds, then immediately “loan” them out to Bob— or even another blockchain address she controls herself— who casts another vote from the new address. By looking at a snapshot of the blockchain at one specific moment in time, Coinvote avoids double-counting votes from ether that gets shuffled around.

Contractually disenfranchised

The challenges with Coinvote are more complex. The first problem is that by virtue of insisting on signed messages, it can not accommodate funds associated with smart contracts. Recall that there are two types of Ethereum accounts: externally-owned accounts and smart contracts. The former are what we traditionally associate with cryptocurrency: there is a private-key, carefully guarded by the owner of those funds, who wields control over that pool of money by cryptographically signing messages using that secret. The latter is the notion of “programmable money” pioneered by Ethereum. There is no secret associated with such an address per se, but there is a set of conditions expressed in a programming language— the “contract”— associated with the address, which determines under what conditions the money can be spent.

Contracts allow expressing detailed logical restrictions around how money can be spent: for example a contract can be designed to only permit sending funds to a small number of “whitelisted” addresses, only if 2 out of 5 shareholders agree, but not before March 2021 and no more than 1000 ether per day. It is precisely this expressiveness that makes contracts ideal for storing large pools of funds, with strict controls around spending conditions to prevent theft and misuse. Three out of the ten largest Ethereum balances today are held in smart-contracts, including the account with the highest balance, storing more than 2% of all ether in existence. But that also means the holders of those funds have effectively become disenfranchised: they are ineligible to cast votes by signed messages, since there is no key to sign with.

One workaround is for owners of those funds to temporarily transfer funds to a plain address, cast the vote and transfer them back. But that is an unworkable solution: moving out of the contract forfeits the security controls imposed by that contract, jeopardizing the safety of those funds while they are controlled by a single key. This is no longer voting with the wallet— it’s voting by waving wads of cash in the air.

Contracts can be designed to interact with other contracts on the blockchain. In principle the problem can be solved with a better design where votes are cast by sending messages to a “ballot-box” contract on the blockchain. Since contract invocations require paying mining fees or “gas” in Ethereum terminology, this would have to be in addition to, and not a replacement for, off-chain voting with signed messages. Otherwise it amounts to a poll tax. But that only shifts the burden to the sender side. Recall that contracts are immutable. Once a contract is published, it can not be modified, upgraded or even patched for bugs. (The illusion of upgradeability, as in the Gemini dollar stable-coin contract, is achieved by using a level of indirection: an immutable “proxy” contract at a fixed address maintains a pointer to a second contract that contains the real implementation.) Suppose the Ethereum Politburo decrees a new standard for vote casting/receiving by contracts. All existing smart-contracts in use would still be incompatible with the standard since they predate its introduction, and all associated funds would still have no way to voice their opinion on polls.

Integrity of votes

Speaking of using a smart contract to count votes, there is another benefit to that approach over Coinvote: at least everyone can verify that votes were recorded accurately. When signed messages are submitted to a website, there is no guarantee that they will be counted fairly. The organizer could discard votes if they are trying to tip the outcome one way or another. While every vote recorded can be publicly verified— everyone can check the signature on the signed message and inspect the blockchain for the amount of money that vote speaks for— there is no easy way to prove that a vote was suppressed. Suppose a participant objects by producing a signed message they allege to have used for voting. The organizer can fire back, arguing that it was not delivered before the deadline. Or the organizer may not allow the voter to change their mind: a signed message is valid forever, so if a voter casts two conflicting votes, the organizer is free to retain whichever one they prefer. We could try to work around suppression by requiring the organizer to issue a time-stamped and signed receipt for each vote cast. But that only shifts the shenanigans to an earlier stage: Alice voting against a proposal receives her receipt, while Bob keeps running into mysterious problems with the website every time he casts a vote in favor.

Moving the vote recording mechanism onto the blockchain at least introduces some semblance of transparency. To the extent that vote suppression is occurring, it is now controlled by miners instead of a centralized organizer. As long as there is competition in mining, every vote cast has a fighting chance of making it into the permanent record of the blockchain— unless, that is, the proposal is one miners are unanimously against. Consider voting on a proposal to reduce mining fees: the rational decision for all miners could be to only accept “no” votes and forgo the short-term revenue from recording “yes” votes, in favor of a strategic play to create the appearance of widespread opposition to the measure.

Voting with other wallets

There is a different problem with using blockchain transactions to cast votes: the assumption that one address speaks for one person is wrong. Custodial services— such as cryptocurrency exchanges— commonly pool customer funds into a shared, omnibus account. Typically every customer still has unique deposit addresses, since that is necessary to distinguish incoming funds and decide which customer to credit. But when customers want to withdraw funds, the transaction can originate from any address controlled by the exchange. Wallet software tries to solve an optimization problem called unspent-output selection, which involves choosing the right combination of available funds to satisfy a particular request. If Alice deposits 1 BTC into an exchange and later Bob wants to withdraw 1 BTC, it is entirely possible for Alice’s deposit to be used for that request. This does not mean Alice’s funds are gone— it only means that UTXO was among those selected for the transaction initiated by Bob. What goes on under the hood is that the exchange maintains an internal ledger recording the cryptocurrency balances of every customer. Alice still has 1 BTC, while Bob’s balance got debited the amount withdrawn. These ledger updates are not reflected on the blockchain: it would be too slow and wildly inefficient in transaction fees if every time a customer bought or sold bitcoin, the corresponding amount of funds had to be shuffled around on the blockchain. (At one point Bitfinex attempted to perform daily reconciliation among customer accounts in response to a CFTC consent decree, but gave up on that design after a 2016 security breach.)
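A toy version of unspent-output selection illustrates why the originating address means so little. This greedy sketch is a simplification; real wallets use more sophisticated strategies, but the point stands that the inputs chosen bear no relation to whose balance is being debited:

    # Each UTXO: (customer who deposited it, amount). The exchange controls
    # all of them regardless of which customer originally sent the funds.
    utxos = [("alice", 1.0), ("carol", 0.4), ("dave", 0.8)]

    def select_inputs(utxos, amount):
        # Greedily pick unspent outputs until the request is covered.
        selected, total = [], 0.0
        for utxo in sorted(utxos, key=lambda u: -u[1]):
            if total >= amount:
                break
            selected.append(utxo)
            total += utxo[1]
        return selected, total

    # Bob withdraws 1 BTC: Alice's deposit may well fund the transaction,
    # while only the exchange's internal ledger debits Bob's balance.
    inputs, total = select_inputs(utxos, 1.0)
    print(inputs)  # e.g. [('alice', 1.0)]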

The bottom line is that customers of custodial services can initiate withdrawals from addresses they do not have full control over. This has implications for voting strategies built around sending funds. For example, CarbonVote conducted a poll to gauge community opinion on the DAO bailout. Participants voted by sending a 0-ether transaction to one of two addresses, indicating a preference in favor of or against the proposed fork. A transaction with zero value is allowed in Ethereum and does not result in a decrease in the balance of the originating account. (However the sender must still pay the ethereum transaction fee in “gas” so the poll-tax criticism applies.) As before, the votes are weighted by the balance of funds in the originating address. Knowing how omnibus wallets work, it is trivial to game this poll: withdraw from your exchange account directly to one of the yes/no addresses. If the customer happens to get lucky, the transaction will originate from a very high-value address that holds funds for thousands of other customers. The resulting vote will have outsized influence, far out of proportion to the actual ether held by the person casting that vote.

It turns out those favorable circumstances do happen for many exchanges in the case of Ethereum. For several exchanges, the majority of ether in hot wallets is concentrated in a small number of blockchain addresses. The initial polls misinterpreted “votes” from those addresses, incorrectly assuming that one transaction represented the preferences of a single person/entity who controlled those funds. CarbonVote later attempted to correct this problem by disregarding some addresses known to be associated with popular exchanges.

Of course ignoring votes does not solve the underlying problem of accurately capturing the preferences of customers using custodial accounts. They can always temporarily withdraw their funds to a personal wallet to cast a vote but this is complicated by the requirement to hold the ether at that address until the poll is closed. (Presumably the customers opted for a custodial solution because they did not want to manage the risk of storing cryptocurrency directly.) Nor does it prevent a determined custodian from voting on behalf of customers: even if one well-known address is blacklisted for the purposes of counting votes, the custodian can always move funds around to another address such as their cold-storage and cast votes from there.

For a demonstration of how easily a single custodian can tip the scales, consider the EIP-999 poll. By virtue of its controversial nature, this poll received more than four times as many votes as the next most popular poll on Coinvote. Votes against the EIP edged out those in favor by a margin of approximately 630K ether. Revisiting the list of Ethereum accounts with the highest balances, there are a dozen individual addresses with funds exceeding that difference, almost all of them associated with exchanges. A single custodian using assets under management to vote could have tipped the scales from “no” to “yes.”

There is an analog to this problem in traditional finance: proxy voting through mutual funds. Nominally shareholders attending an annual meeting can cast their own votes on questions put before the owners, such as significant changes to the business or compensation plans. But the majority of individual investors in the US do not own shares of companies directly. They own them through qualified investment managers such as mutual funds and ETFs. That means the mutual fund gets to vote on behalf of its own members— a vote that may not be representative of how the investors owning shares in that fund actually feel about the issue. In fact critics have pointed out mutual funds typically prefer not to rock the boat, routinely siding with company management on shareholder proposals.

Voting with borrowed money

Weighting votes in proportion to the amount of money they speak for creates additional incentives that can distort the system. Most democratic election systems seek to discourage vote selling and buying; in the US it is a criminal offense. Vote-by-ether is an open invitation to do exactly that.

It need not even involve any complicity on the part of the seller. Suppose an investor feels strongly about shaping public opinion on a particular issue that has a running poll. They can temporarily accumulate a large ether position on exchanges, cast a vote and hold the funds until the poll is closed, liquidating the position afterwards. This is less “vote buying” and more a case of “influence buying” since the counter-parties selling ether to the investor are not deliberately trying to give away their vote. Clearly this strategy is expensive and high-risk. If ether prices move in the wrong direction, the investor can end up losing money. Even if prices remain relatively flat— a rarity in the volatile world of cryptocurrency— there are transaction fees associated with moving into/out of ether. Not to mention that the act of accumulating a large ether position alone can end up moving the market, making it increasingly expensive to accumulate additional votes.

There are better strategies for gaming the system. Suppose there is a large cryptocurrency holder with no opinion one way or another on a given issue. That person can always auction off their vote to the highest bidder. Alternatively if they want to remain at arm’s length from direct vote selling, they can temporarily “loan” out the funds in a risk-free manner. Smart contracts make this easy. The seller transfers the ether to a smart-contract permitting only two actions: withdrawals back to the seller after a deadline, or 0-ether votes cast by the buyer. The buyer can vote on an unlimited number of polls while the seller is guaranteed to regain control of the funds eventually.
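
The natural home for this logic would be an Ethereum smart contract written in Solidity. Purely for illustration, here is the access-control logic sketched as a Python class, with blockchain primitives (time, message sender) reduced to ordinary arguments.

```python
# Sketch of the vote-loan escrow logic. A real implementation would be
# an on-chain contract; this class only models its two permitted actions.

class VoteLoan:
    """Escrow that lends voting weight without risking the principal."""

    def __init__(self, seller, buyer, deadline, balance):
        self.seller, self.buyer = seller, buyer
        self.deadline, self.balance = deadline, balance

    def withdraw(self, caller, now):
        # Only the seller can reclaim funds, and only after the deadline.
        if caller != self.seller or now < self.deadline:
            raise PermissionError("withdrawal refused")
        amount, self.balance = self.balance, 0
        return amount  # principal returned intact

    def vote(self, caller, poll_address):
        # Only the buyer may trigger 0-ether votes; the funds never move,
        # so the contract's full balance keeps weighting each vote.
        if caller != self.buyer:
            raise PermissionError("only the buyer can vote")
        return {"to": poll_address, "value": 0}
```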

Final tally on blockchain polls

In conclusion then, the current crop of blockchain polls has serious design flaws, ranging from disenfranchising large segments of cryptocurrency users to allowing others to easily manipulate the outcome using funds they do not even control— voting with other people’s wallets. Until these design issues are properly addressed, such polls can not be considered anything more than unscientific straw polls, unreliable as an input for important decisions affecting the cryptocurrency ecosystem.

CP

 

QuadrigaCX and the case for regulation

[Full disclosure: this blogger was formerly CSO for a regulated cryptocurrency exchange. All opinions expressed are personal.]

After QuadrigaCX

Two lines of argument predictably follow in the aftermath of any cryptocurrency business failure. First comes second-guessing by pundits about their favored silver-bullet technology that could have saved the failed venture— if only they had implemented multisig/hardware wallets/Lightning or reformatted all webpages in Comic Sans, this unfortunate situation would not have occurred. Second come increased calls for regulating cryptocurrency. The aftermath of the Canadian exchange QuadrigaCX’s collapse is following that script so far. Quadriga has entered bankruptcy protection, with $190M USD in customer assets allegedly stuck in their cold wallet, because the only person with access to that system unexpectedly passed away.

Before taking up the question of what type of regulation would have helped, there is a more fundamental question about objectives. What outcomes are we trying to achieve with regulatory oversight? Or stated in the negative, what are we trying to avoid? There are at least three compelling lines of argument in favor of regulating some— not necessarily all— institutions handling digital assets:

  1. Consumer protection: Simply put, customers do not want to lose money to security breaches or fraud occurring at cryptocurrency businesses they depend on.
  2. Stemming negative externalities: Cryptocurrency businesses must not create new avenues for criminal activity such as money laundering while the rest of the financial industry is working to eliminate the same activity in other contexts.
  3. Market integrity: Pricing of digital assets must reflect the fair value assigned by the market as closely as possible, and must not be artificially manipulated by participants with privileged access.

I. Consumer protection

This one is a no-brainer: when people entrust their money to a custodian, they expect to be able to get those funds back. The Quadriga debacle represents only one possible story arc culminating in grief for customers. If their court filings are taken at face value, this incident stemmed from an “honest mistake:” the company lacked redundancy around mission-critical operations, relying entirely on exactly one person to fulfill an essential role. To put it more bluntly, it was incompetence. But that is not the only way to lose customer money. Far more common and better publicized in the cryptocurrency world have been losses due to security breaches, where systems holding digital assets were compromised by outside actors and funds taken by force. Meanwhile initial coin offerings (ICOs) have been notorious for dishonest behavior by insiders, culminating in so-called exit scams where ICO organizers abscond with the funds raised, without delivering on the project.

There is no bright line between these categories. If a wallet provider is “0wned” because they failed to implement basic security standards, is that routine incompetence or does it rise to the level of gross negligence? In any case, such distinctions do not matter from a consumer perspective. Customer Bob will not feel any better about losing his savings after learning that a sophisticated North Korean group was behind the theft. (Not entirely hypothetical, since state-sponsored North Korean attackers have been targeting bitcoin businesses, particularly in South Korea. Nor is it surprising that a rogue country would embrace bitcoin to subvert embargoes when it is cut off from the global financial system, in the same way that criminals & dark-markets embraced bitcoin as early adopters.)

That said, we need to spell out the limits of what counts as “unreasonable loss.” All investing comes with risk. Notwithstanding the tulip-bubble perception created by the remarkable run-up in prices in 2017, there is no law of nature that bitcoin prices can only increase. As painful a lesson as it may have been for investors who jumped on the HODL bandwagon at the wrong time, it is possible to lose money by investing in cryptocurrencies. For the most part, that type of loss can not be pinned on the cryptocurrency brokerage where the buying/selling occurred. Individual agency for investment decisions is a core assumption of free markets. But even then, there are edge-cases. Suppose an exchange decides to list an obscure altcoin whose value later drops to zero because miners abandon it. Does the exchange share in the responsibility for losses? Even if the exchange did not make patently false representations— “buy this asset, it is the next bitcoin”— customers could argue that supporting trading in that asset constitutes an implicit endorsement. For the purposes of this blog post, we will punt these questions to courts for resolution and focus only on losses due to systemic failures in security and reliability.

II. Negative externalities

A very different line of argument in favor of regulatory intervention involves society’s interest in limiting negative externalities. That interest is compelling precisely because the market participants are not directly harmed and may have no incentive otherwise to correct the behavior. Consider a cryptocurrency exchange favored by criminals to launder profits from illegal activity— BTCe would have been a good example until its 2017 seizure. Arguably other customers of the exchange were not adversely affected by this behavior, at least not in their role as customers per se. After all, criminals in a hurry are not price sensitive: they may be willing to dump their bitcoin at much lower prices or pay a premium to buy bitcoin at a higher price than at comparable venues where they would not be welcome. That means in a sense everyone involved in that particular trade is better off: the crooks are happy because they exchanged their ill-gotten gains into a more usable currency, customers who happened to be counter-parties on the other side of that trade are happy because they got a great deal on their bitcoin, and the exchange is happy because it collected commissions on the trade.

Society overall, however, is not better off. Easy avenues for converting criminal proceeds into usable cash help incentivize further criminal activity. Economic activity that looks like a win-win for everyone within the confines of a single cryptocurrency exchange can be harmful in the bigger picture when negative externalities are taken into account. To libertarian sensibilities this may represent an overreach by the state to intervene in free-market transactions between fully informed participants: the criminal trying to dump bitcoin and some willing buyer interested in acquiring those assets. But pragmatically speaking, there is significant precedent for regulating financial services on this premise, and the argument that cryptocurrency is somehow exempt from similar considerations does not pass a sanity check.

III. Market integrity

If price discovery is the central function of an exchange, it is important for those price signals to represent true supply and demand. There are many ways the signal can get out of sync with underlying market conditions. For example, in the early days when Chinese exchanges dominated bitcoin trading— before the government got wise to the use of cryptocurrency for evading capital controls— there were frequent allegations that self-reported volume on those exchanges was inaccurate. While the subsequent crackdown by the PBOC put a damper on volume, even as late as 2018 observers continued to find compelling evidence that leading exchanges were misrepresenting trade volume.

Even in the absence of complicity by exchange operators, financial markets are subject to manipulation attempts by unscrupulous participants. Deliberately employed strategies such as wash-trading and order-spoofing can distort the price signal that emerges. That distorted view poses a challenge both to participants and third-parties. On the one hand, other customers trading on the same venue are getting a bad deal: they could end up paying too much for their cryptocurrency due to artificially induced scarcity or receive less than fair-market value when selling because of inflated supply. On the other hand, when market data is used for pricing other assets— such as derivatives in the underlying asset or shares in a cryptocurrency fund— the noise added to the signal can have far-reaching consequences, affecting people who had no relationship with the exchange where the manipulation occurs.

One of the more interesting cases of alleged market manipulation in bitcoin involves neither exchanges nor customers. By some accounts, the stablecoin Tether had an outsized influence on price movements for much of 2017 and 2018. Some cryptocurrency businesses can not handle fiat money— their lack of compliance programs has turned these entities into pariahs of the financial system, with no correspondent bank willing to take their business. This is where Tether comes in. A digital asset designed to maintain a one-to-one relationship to the US dollar, tether exists on a blockchain and can be freely moved around, bypassing traditional channels such as wire transfers. Customers who could not directly deposit US dollars into their Bitfinex account could instead purchase tethers in an equivalent amount, move those tethers over and place orders on the BTC/USDT order book. In isolation the use of tether is not evidence of an attempt to fabricate market demand— although it is arguably a lifeline for exchanges with weak or nonexistent compliance programs, since they are precisely the companies that can not establish the banking relationships required to receive fiat currency. But suppose tethers could be printed out of thin air, without receiving the requisite US dollars to back the digital assets? Then Tether (the issuing company) could fabricate “demand” for bitcoin, since individuals buying bitcoin with tethers (the currency) are effectively working with free money. The ballooning amount of tethers in circulation, combined with the opacity of the issuing company— Bloomberg referred to it as the $814 Million Mystery— and the lack of independent audits have fueled speculation about whether it is operating a fractional reserve. At least one research paper found that most of the meteoric rise in bitcoin prices could be attributed to tether issuance. That is an outsized amount of influence for a single company: by issuing less than three billion dollars worth of a digital asset— whether or not those tethers were backed by USD deposits as promised— Tether helped propel bitcoin to a peak market capitalization over 100 times that amount. While the Tether story is not over and the presumption of innocence applies, it underlines another argument for regulation: left unchecked, rogue actors can have an outsized influence in distorting the operation of digital currency markets.

CP

 

Proof of funds for cryptocurrency custody: getting by with limited trust (part VI)

[continued from part V]

Design sketch

Here is an overview of an approach that combines private proof-of-assets to a trusted examiner with crowd-sourced verification of the ledger. The custodian publishes the ledger as a series of entries, one per customer:

<Pseudonym, balance commitment, range proof>

At a high level:

  • Pseudonyms are unique customer IDs generated for this proof and never reused in the future.
  • Balances are represented as cryptographic commitments, which hide the actual value from public eyes but allow selective disclosure.
  • Range proofs are non-interactive zero-knowledge proofs demonstrating that the committed balance lies in a sane interval, such as zero to 21 million bitcoins.

Pseudonyms must be generated deterministically such that each customer can verify an entry refers to their account and theirs alone. Generating a random value and emailing that to the customer will not work: if Alice and Bob have the same balance, the custodian could fool both into believing a single entry represents their balance. Likewise a pseudonym can not be derived from an opaque internal identifier only meaningful to the custodian, such as the internal database ID assigned to each customer. While database keys must be unique, customers have no visibility into that mapping and can not detect if the custodian is cheating by assigning the same ID to multiple accounts. One option that avoids these pitfalls is to compute the identifier as a cryptographic commitment to the email address. This protects the email address from public visibility while allowing selective disclosure to that customer when desired.
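
As a concrete illustration, here is one possible construction (an assumption for this sketch, not a prescribed standard): a salted hash commitment over the email address, where the salt is disclosed privately to the customer.

```python
# Sketch of a pseudonym as a hash commitment to the email address.
# The salt hides the email from the public; disclosing it privately
# lets the customer recompute and recognize their own entry.

import hashlib, os

def make_pseudonym(email: str) -> tuple[bytes, bytes]:
    salt = os.urandom(32)  # fresh per proof, never reused
    digest = hashlib.sha256(salt + email.lower().encode()).digest()
    return digest, salt  # publish digest; send the salt to the customer

def check_pseudonym(digest: bytes, salt: bytes, email: str) -> bool:
    return hashlib.sha256(salt + email.lower().encode()).digest() == digest

pseudonym, salt = make_pseudonym("alice@example.com")
assert check_pseudonym(pseudonym, salt, "alice@example.com")
```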

Speaking of commitments, a similar construction is used for representing account balances. Here the ideal commitment scheme allows doing basic arithmetic on committed values. Specifically: given commitments to two unknown numbers, we want to compute a commitment representing their— still unknown— sum. That would be very handy in an accounting context: given commitments to individual customer balances, the custodian would be able to produce a new commitment and show that it represents the total balance across all customers. (This is similar to the notion of homomorphic encryption. For example the Paillier public-key encryption algorithm allows working with encrypted data: given Paillier ciphertexts of two unknown numbers, anyone can craft the Paillier encryption of their sum.) Multiple options from the literature fit the bill here, going back at least two decades, including the Pedersen and Fujisaki-Okamoto commitment schemes.
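
To make the additive property concrete, here is a toy Pedersen commitment in Python. The parameters are laughably small and for illustration only; a real deployment would use a group of cryptographic size and derive h so that nobody knows its discrete logarithm with respect to g.

```python
# Toy Pedersen commitments over a small group, purely for illustration.
# Production code would use a large prime-order group (or an elliptic
# curve) with h chosen so that log_g(h) is unknown to everyone.

import secrets

p = 2039          # safe prime: p = 2q + 1
q = 1019          # order of the subgroup of squares mod p
g, h = 4, 9       # two generators of that subgroup (toy choice)

def commit(value, r=None):
    r = secrets.randbelow(q) if r is None else r
    return (pow(g, value, p) * pow(h, r, p)) % p, r

# Commit to two hidden balances...
c1, r1 = commit(300)
c2, r2 = commit(450)

# ...and anyone can multiply the commitments to obtain a commitment
# to the sum of the hidden values, without learning either one:
c_total = (c1 * c2) % p
c_check, _ = commit(300 + 450, (r1 + r2) % q)
assert c_total == c_check  # homomorphic addition works
```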

Avoiding integer overflows, the cryptography edition

There is still a catch: regardless of the commitment scheme chosen, they all operate in modular arithmetic. There is an upper bound on the values that can be represented, even if that limit happens to be a very large number with hundreds of digits. If we try to use the additive property in a situation where the sum exceeds that limit, the result will wrap around and return incorrect results— the cryptographic equivalent of an integer overflow vulnerability. Worse, this wrap-around makes it possible to craft commitments that act like negative numbers. When combined with valid commitments of real accounts, they will end up subtracting from the total balance.

Left unchecked, this allows custodians to cheat. It is no consolation that such numbers will not occur naturally for real account balances: a dishonest prover can fabricate bogus ledger entries with negative balances. Since the full list of customers is not publicly known, no one will notice the spurious accounts; the imaginary customers will not show up to challenge their misrepresented balance. Meanwhile the negative values reduce the total perceived liability of the custodian, because they subtract from the total when the commitments are summed up.
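
Continuing the toy Pedersen parameters from the earlier sketch, a few lines of Python demonstrate the attack: a commitment to q - x behaves exactly like a commitment to -x when summed with honest entries.

```python
# Toy Pedersen setup from the earlier sketch (p = 2q + 1, subgroup of
# order q). Exponents live in arithmetic modulo q, so committing to
# q - x is indistinguishable from committing to -x. A dishonest
# custodian can plant such an entry to shrink the apparent total.

p, q, g, h = 2039, 1019, 4, 9

def commit(value, r):
    return (pow(g, value % q, p) * pow(h, r, p)) % p

honest = commit(500, 7)        # real customer balance
bogus = commit(q - 200, 11)    # fake account "holding" -200
combined = (honest * bogus) % p

# The combined commitment opens to 300, not 700:
assert combined == commit(300, 7 + 11)
print("ledger total understated by 200")
```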

This is where the last element of the entry comes in. A range proof (such as Boudot’s result from 2000 using FO commitments) demonstrates that the committed value belongs in a sane interval, without revealing anything more about it. Such range proofs are public: they do not require any secret material to verify. Requiring positive balances removes the incentive for the custodian to invent bogus customers: doing so can only inflate the liabilities side of the ledger and require more cryptocurrency on the assets side to pass the solvency test. Incidentally, there is a low-tech alternative to range proofs that relaxes the privacy constraint: the custodian can open every commitment for the third-party doing the examination. While the examiner still can not tell if alice@example.com behind the pseudonym is a real customer, they can at least confirm her alleged balance is positive.
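
A sketch of that low-tech check, again with the toy parameters used earlier: the examiner receives the opening (value and randomness) for each entry and verifies both that the commitment is well-formed and that the balance falls in a sane range.

```python
# Examiner-side check given a full opening of each commitment.
# Toy Pedersen parameters from the earlier sketches.

p, q, g, h = 2039, 1019, 4, 9
MAX_SUPPLY = 21_000_000 * 100_000_000  # 21M BTC expressed in satoshis

def examiner_check(commitment, value, r):
    # With production-sized parameters q has hundreds of digits, so
    # legitimate balances occupy a tiny positive sliver of the range.
    well_formed = commitment == (pow(g, value, p) * pow(h, r, p)) % p
    return well_formed and 0 <= value <= MAX_SUPPLY
```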

Verification

To prove the integrity of the ledger, commitments in each entry are opened privately for the specific customer associated with that entry. This involves making available to that customer all the random values used in the construction of the commitment. That could be communicated via email or displayed on a web page after the customer logs into the custodian website. Armed with this information, every customer can:

  1. Verify their own balance is represented accurately in the ledger.
  2. Rest assured that the ledger entry containing their balance is exclusive to their account. It can not be reused for other customers, because the pseudonym is uniquely tied to identity.
  3. Confirm that all entries in the ledger represent positive balances. While other customer balances are hidden by commitments, the associated range proofs are publicly verifiable.
  4. Calculate a commitment to total balances across the customer base.

That last property yields a single committed value for total liabilities that everyone agrees on— the custodian, all customers and any independent examiner hired by the custodian. Next the custodian verifiably opens that single commitment for the benefit of the examiner, revealing total liabilities. (Alternatively, the custodian can open it publicly if there is no privacy concern about disclosing total assets under management.)

Next the custodian executes the usual proof of assets on the blockchain, by demonstrating knowledge of private keys corresponding to claimed blockchain addresses. This demonstration is not public; only the trusted examiner gets visibility into addresses. This is where trust in the independent examiner enters the picture. The proof is only convincing to the extent that the examiner is honest and competent. Honest, in that they will not make false assertions if the accounting demonstrates a discrepancy. Competent, in that they are familiar with sound methodologies for proving control over keys. (For example, they will insist on influencing the challenge messages to be signed, to avoid being fooled by recycled signatures on ancient messages.) Assuming the proof is carried out to the satisfaction of the examiner, they can produce an attestation to the effect that at a specific point in time, custodian assets were approximately equal to the liabilities implied in the ledger. Crucially the examiner can look beyond numbers alone and assess the design of the cryptocurrency system. Does it have appropriate physical and logical access controls? Is there enough redundancy in backups? Are there key-person risks where only one person can execute critical tasks— looking at you, QuadrigaCX?
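
Here is a minimal sketch of that challenge-response ceremony in Python, using the third-party ecdsa package (an assumed dependency; any signature library supporting the relevant curve would do). The key generated below stands in for the custodian's actual wallet key; the essential point is that the examiner contributes fresh entropy to the message being signed.

```python
# Sketch of a fresh-challenge signing ceremony. Assumes the third-party
# `ecdsa` package; the generated key stands in for a real wallet key.

import os, time
from ecdsa import SigningKey, SECP256k1

# Examiner picks a challenge the custodian could not have predicted,
# so a recycled signature on an old message proves nothing.
challenge = b"proof-of-assets:" + str(int(time.time())).encode() + os.urandom(16)

# Custodian signs the challenge with the key behind each claimed address.
custodian_key = SigningKey.generate(curve=SECP256k1)
signature = custodian_key.sign(challenge)

# Examiner verifies against the public key derived from the address.
assert custodian_key.get_verifying_key().verify(signature, challenge)
```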

Summary

To recap: liabilities are verified in a distributed, public fashion, with every customer able to check their own balance. This requires no external trust assumptions. Assets on the other hand are verified privately by a trusted third-party, who is given full visibility into the distribution of those assets on the blockchain. Unlike BIP127, this approach covers both assets and liabilities. It does not require public disclosure of addresses and, by implication, total assets under custody. Also unlike the Coinfloor approach, individual customer balances are not revealed, not even pseudonymously or to an independent examiner. It is not limited to P2PKH addresses; arbitrary spend scripts can be accommodated. Finally it permits going beyond simple control of addresses and demonstrating higher redundancy. For example with M-of-N multisig, instead of proving control over the minimum quorum of M keys, the custodian can be held to a higher standard and required to prove possession of all N. There is still an element of trust in the independent examiner, but less trust is required in the custodian performing the proof. Unlike opaque audits where the examiner can not independently verify ledger integrity, publishing the ledger turns every customer into a potential examiner.

It is easy to accommodate additional requirements with changes to the protocol. For example, if we are willing to place additional trust in the independent examiner, they can also be tasked with reviewing bank statements to check for the presence of fiat assets. They can review internal policies and procedures used by the custodian, looking for red-flags such as the key-person risk that appears to have plagued QuadrigaCX. We can also move in the other direction, reducing trust in the examiner while giving up some privacy for the custodian. Suppose we insist that the exchange publicly open the commitment to its total liabilities and publish the proof of control over keys. That would disclose all blockchain addresses used by the custodian, but in return take the examiner out of the trust equation for digital assets.

CP