Bitcoin’s meta problem: governance (part I)


Layer 9: you are here

Bitcoin has room for improvement. Putting aside regulatory uncertainty, there is the unsustainable waste of electricity consumed by mining operations, unclear profitability for miners as block rewards decrease and, last but not least, difficulty scaling beyond its Lilliputian capacity of handling only a few transactions per second globally. (You want to pay for something using Bitcoin? Better hope not many other people have that same idea in the next 10 minutes or so.) In theory all of these problems can be solved. What stands in the way of a solution is not the hard reality of mathematics; this is not a case of trying to square the circle or solve the halting problem. Nor are these insurmountable engineering problems. Unlike calls for designing “secure” systems with built-in backdoors accessible only to good guys, there is plenty of academic research and some real-world experience building trusted, distributed systems to show the way. Instead Bitcoin the protocol is running into problems squarely at “layer 9:” politics and governance.

This last problem of scaling has occupied the public agenda recently and festered into a full-fledged PR crisis last year, complete with predictions of the end of Bitcoin. Much of the conflict focuses on the so-called “block size”: the maximum size of each virtual page added to the global ledger of all transactions maintained by the system. The more space in that page, the more transactions can be squeezed in. That matters for throughput because the protocol also fixes the rate at which pages can be added, to roughly one every 10 minutes. But TANSTAAFL still holds: there are side-effects to increasing this limit, which was first put in place by Satoshi himself/herself/themselves to mitigate denial-of-service attacks against the protocol.
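As a rough back-of-the-envelope illustration (in Python, assuming an average transaction size of about 250 bytes, which is an assumption rather than a protocol constant), those two limits together cap throughput at a handful of transactions per second:

    # Back-of-the-envelope throughput estimate under current limits.
    # The 250-byte average transaction size is an assumption; real sizes vary.
    MAX_BLOCK_SIZE_BYTES = 1_000_000   # the 1MB block-size cap
    BLOCK_INTERVAL_SECONDS = 600       # target of one block every 10 minutes
    AVG_TX_SIZE_BYTES = 250            # assumed average transaction size

    txs_per_block = MAX_BLOCK_SIZE_BYTES // AVG_TX_SIZE_BYTES
    txs_per_second = txs_per_block / BLOCK_INTERVAL_SECONDS

    print(f"~{txs_per_block} transactions per block")        # ~4000
    print(f"~{txs_per_second:.1f} transactions per second")  # ~6.7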

Game of chicken

Two former Bitcoin Core developers found this out the hard way last summer when they tried to force the issue. They created a fork of the popular open-source implementation of Bitcoin (Bitcoin Core) called BitcoinXT, with support for expanded block size. The backlash was swift and loud. XT did not go anywhere, its supporters were banned from Reddit forums and the main developer rage-quit Bitcoin entirely with a scathing farewell. But that was not the end of the scaling experiment. Take #2 followed shortly afterwards in the form of a new fork dubbed Bitcoin Classic, with more modest, incremental changes to block size meant to address the criticisms leveled at XT. As of this writing, Classic has more traction than XT ever managed but remains far from reaching the 75% threshold required to trigger a permanent change in protocol dynamics.
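As I understand the activation scheme used by these forks, support is gauged by counting how many recent blocks signal readiness for the new rules in their version field. A simplified sketch of that miner-voting check follows; the 1000-block window is used here for illustration, and other details are elided:

    # Simplified sketch of miner signaling for a protocol fork. Support is
    # measured as the fraction of recent blocks whose version field flags
    # the new rules; the 1000-block window here is illustrative.
    WINDOW = 1000
    THRESHOLD = 0.75   # the 75% activation threshold mentioned above

    def fork_activates(recent_block_versions, signal_bit):
        """Return True once enough recent blocks signal support."""
        window = recent_block_versions[-WINDOW:]
        signaling = sum(1 for version in window if version & signal_bit)
        return signaling / len(window) >= THRESHOLD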

Magic numbers and arbitrary decisions

This is a good time to step back and ask the obvious question: why is it so difficult to change the Bitcoin protocol? There are many arbitrary “magic numbers” and design choices hard-coded into the protocol (collected into a single sketch after this list):

  • Money supply is fixed at 21 million bitcoins.
  • Each block initially rewarded the miner 50 bitcoins; that reward halves every 210,000 blocks, with the next decrease expected around June of this year.
  • Mining uses a proof-of-work algorithm based on the SHA-256 hash function.
  • The proof-of-work construction encourages the creation of special-purpose ASIC chips, because they have a significant efficiency advantage over the ordinary CPUs or GPUs that ship with off-the-shelf PCs and servers.
  • That same design is “pool-friendly:” it permits the creation of mining pools, where a centralized pool operator coordinates work by thousands of independent contributors and distributes rewards based on each contributor's share of the work.
  • The difficulty level for that proof-of-work is adjusted every 2016 blocks (roughly every two weeks), with the goal of keeping the average interval between blocks at 10 minutes.
  • Transactions are signed using the ECDSA algorithm over one specific elliptic curve, secp256k1.
  • And of course, blocks are limited to 1MB in size.
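For concreteness, here is a minimal sketch collecting those parameters in one place. The names are invented for readability rather than taken from Bitcoin Core, and the 21 million figure is shown falling out of the halving schedule rather than being stored anywhere directly:

    # Illustrative summary of Bitcoin's hard-coded "magic numbers".
    # Names are invented for readability; they are not Bitcoin Core identifiers.
    HALVING_INTERVAL_BLOCKS = 210_000       # block reward halves this often
    INITIAL_BLOCK_REWARD_BTC = 50           # reward for the earliest blocks
    RETARGET_INTERVAL_BLOCKS = 2016         # difficulty adjusted this often
    TARGET_BLOCK_SPACING_SECONDS = 600      # aim for one block per 10 minutes
    MAX_BLOCK_SIZE_BYTES = 1_000_000        # the contested 1MB cap
    POW_HASH = "SHA-256 (applied twice)"
    SIGNATURE_SCHEME = "ECDSA over curve secp256k1"

    # The 21 million cap is the sum of the geometric series of rewards;
    # the real protocol rounds in integer satoshis, so this is approximate.
    total_supply_btc = sum(
        HALVING_INTERVAL_BLOCKS * INITIAL_BLOCK_REWARD_BTC / 2 ** halving
        for halving in range(33)   # reward reaches zero after ~33 halvings
    )
    print(f"Total supply approaches {total_supply_btc:,.0f} BTC")  # ~21,000,000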

Where did all of these decisions come from? To what extent are they fundamental aspects of Bitcoin (it would not be “Bitcoin” as we understand it without that property), as opposed to arbitrary decisions made by Satoshi that could have gone a different way? What is sacred about the number 21 million? (Is it because 21 is half of 42, the answer to the meaning of life?) Each of these decisions can be questioned, and in fact many have been challenged. For example, proof-of-stake has been offered as an alternative to proof-of-work to halt the runaway costs and CO2 emissions of electricity wasted on mining. Meanwhile later designs such as Ethereum tailor their proof-of-work system explicitly to discourage ASIC mining, by reducing the advantage such custom hardware would have over vanilla hardware. Other researchers have proposed discouraging mining pools by making it possible for the individual participant who solves the PoW puzzle to keep the reward, instead of having it automatically handed over to the pool operator for distribution. One core developer even proposed (and later withdrew) a special-case adjustment to block difficulty ahead of the upcoming change to block rewards. It was motivated by the observation that many mining operations will become unprofitable when rewards are cut in half; powering off their rigs would cause a significant drop in total mining power that would remain uncorrected for a considerable time, as the remaining miners produce blocks at a slower rate.
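To see why that lag is a concern, here is a rough sketch of the arithmetic, under the deliberately simplified assumption that half of all hash power goes offline immediately after a retarget and nothing else changes:

    # Illustration of the difficulty-adjustment lag described above.
    # Assumption: hash power drops by 50% right after a retarget and stays there.
    RETARGET_INTERVAL_BLOCKS = 2016
    TARGET_SPACING_MINUTES = 10

    hash_power_remaining = 0.5   # assumed: half the rigs are powered off
    actual_spacing = TARGET_SPACING_MINUTES / hash_power_remaining  # ~20 minutes

    days_until_retarget = RETARGET_INTERVAL_BLOCKS * actual_spacing / (60 * 24)
    print(f"Blocks now arrive every ~{actual_spacing:.0f} minutes")
    print(f"Difficulty will not adjust for ~{days_until_retarget:.0f} days")
    # With full hash power the 2016-block window takes about two weeks;
    # at half power it stretches to roughly four weeks.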

Some of these numbers reflect limitations or trade-offs necessitated by current infrastructure. For example, one can imagine a version of Bitcoin that runs twice as fast, generating blocks every 5 minutes instead of 10. But that version would require the nodes running the software to exchange data among themselves twice as fast, because Bitcoin relies on a peer-to-peer network for distributing transactions and mined blocks. This goes back to the same objection levied against large-block proposals such as XT and Classic. Many miners are based in countries with high-latency, low-bandwidth connections such as China, a situation not helped by economics that drive mining operations to locate in the middle of nowhere, close to cheap sources of power such as dams but far from fiber. There is a legitimate concern that if bandwidth requirements escalate, either because block sizes go up or because blocks are minted more frequently, these miners will not be able to keep up. But what happens when those limitations go away, when multi-gigabit pipes are available to even the most remote locations and the majority of mining power is no longer constrained by networking?
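To make the scaling relationship concrete, here is a small sketch of the sustained data rate implied by a few combinations of block size and block interval. The absolute numbers understate the real burden, since they ignore relay overhead and the bursty nature of block propagation, but the doubling pattern is the point:

    # Sustained data rate implied by different block parameters.
    # Illustrative only: ignores relay overhead and bursty propagation.
    scenarios = [
        ("today: 1MB blocks every 10 minutes", 1, 10),
        ("same 1MB blocks, every 5 minutes",   1,  5),
        ("8MB blocks every 10 minutes",        8, 10),
    ]
    for label, block_mb, interval_min in scenarios:
        kbit_per_second = block_mb * 8 * 1000 / (interval_min * 60)
        print(f"{label}: ~{kbit_per_second:.0f} kbit/s sustained")
    # Halving the interval or doubling the block size each doubles the load.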

Planning for change

Once we acknowledge that change is necessary, the question becomes how such changes are made. This is as much a question of governance as it is of technology. Who gets to make the decision? Who gets veto power? Does everyone have to agree? What happens to participants who are not on board with the new plan?

Systems can be limited because of a failure in either domain. Some protocols were designed with insufficient versioning and forwards-compatibility, which makes it very difficult for them to operate in a heterogeneous environment where “old” and “new” versions exist side-by-side. Upgrades then become painful, because everyone must coordinate on a “flag day” to upgrade everything at once. In other cases, the design is flexible enough to allow small, local improvements, but the incentives for upgrading are absent. Perhaps the benefits of upgrading are not compelling enough, or there is no single entity in charge of the system capable of forcing all participants to go along.
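As a toy illustration of that first failure mode (not Bitcoin's actual wire format), compare a parser that tolerates fields it does not recognize with one that rejects them; only the former lets old and new software coexist without a flag day:

    # Toy wire-format example, not Bitcoin's actual serialization.
    # A tolerant parser lets old and new versions coexist; a strict one
    # forces every participant to upgrade on the same flag day.
    KNOWN_FIELDS = {"version", "amount", "recipient"}

    def parse_tolerant(message: dict) -> dict:
        # Keep the fields we understand and silently ignore the rest.
        return {key: value for key, value in message.items() if key in KNOWN_FIELDS}

    def parse_strict(message: dict) -> dict:
        # Reject anything with unknown fields.
        unknown = set(message) - KNOWN_FIELDS
        if unknown:
            raise ValueError(f"unrecognized fields: {sorted(unknown)}")
        return message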

For example, credit-card networks have long been aware of the vulnerabilities associated with magnetic-stripe cards. Yet it has been a slow uphill battle to get issuing banks to replace existing cards and, especially, to get merchants to upgrade their point-of-sale terminals to support EMV. Incidentally that is a relatively centralized system: card networks such as Visa and MasterCard sit in the middle of every transaction, mediating the movement of funds from the bank that issued the credit card to the merchant. Visa and MasterCard call the shots around who gets to participate in this network and under what conditions, with some limits imposed by regulatory watchdogs worried about concentration in this space. In fact it was this considerable leverage over banks and merchants that allowed card networks to push the EMV upgrade in the US, by dangling economic incentives and penalties in front of both sides. Capitalizing on the climate of panic in the aftermath of the Target data breach, the networks were able to move forward with their upgrade objectives.

[continued in part II]

CP
