Bitcoin and the C programmer’s disease


Revenge of the C programmer

The Jargon File, a compendium of colorful terminology from the early days of computing later compiled into “The New Hacker’s Dictionary,” defines the C programmer’s disease as the tendency of software written in that particular language to feature arbitrary limits on its functionality:

C Programmer’s Disease: noun.
The tendency of the undisciplined C programmer to set arbitrary but supposedly generous static limits on table sizes (defined, if you’re lucky, by constants in header files) rather than taking the trouble to do proper dynamic storage allocation. If an application user later needs to put 68 elements into a table of size 50, the afflicted programmer reasons that he or she can easily reset the table size to 68 (or even as much as 70, to allow for future expansion) and recompile. This gives the programmer the comfortable feeling of having made the effort to satisfy the user’s (unreasonable) demands, …
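
For illustration, here is a hypothetical C++ sketch of the pattern the Jargon File describes; the names and the limit of 50 are invented for the example. One side hardwires a “supposedly generous” table size, the other lets storage grow with actual demand.

    // Hypothetical illustration of the "C programmer's disease" pattern.
    #include <cstdio>
    #include <vector>

    // Afflicted style: a "supposedly generous" static limit fixed at compile time.
    const int MAX_ENTRIES = 50;
    int table[MAX_ENTRIES];
    int count = 0;

    bool add_static(int value) {
        if (count >= MAX_ENTRIES) return false;   // element #51 is out of luck
        table[count++] = value;
        return true;
    }

    // Cured style: dynamic storage that grows as needed.
    std::vector<int> entries;

    void add_dynamic(int value) {
        entries.push_back(value);                 // limited only by available memory
    }

    int main() {
        int rejected = 0;
        for (int i = 0; i < 68; ++i) {            // the user who needs 68 elements
            if (!add_static(i)) ++rejected;
            add_dynamic(i);
        }
        std::printf("static table rejected %d of 68, vector holds %zu\n",
                    rejected, entries.size());
    }

The point is not the particular numbers but that the second version never had to guess at a limit in the first place.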

Imagine spreadsheets limited to 50 columns, word processors that assume no document will exceed 500 pages, or a social network that only lets you have one thousand friends. What makes such upper bounds capricious, earning them a place in the jargon and casting aspersions on the judgment of C programmers everywhere, is that they are not derived from any inherent limitation of the underlying hardware. Certainly handling a larger document takes more memory or disk space, and even the most powerful machine will max out eventually. But software afflicted with this problem pays no attention to how much of either resource the hardware actually possesses. Instead the designers, in their infinite wisdom, decided that no sane user needs more pages/columns/friends than what they have seen fit to define as a universal limit.

It is easy to look back and make fun of these decisions, because with the passage of time they look incredibly short-sighted. “640 kilobytes ought to be enough for anybody!” Bill Gates allegedly said of the initial memory limit of MS-DOS (although the veracity of that quote is often disputed). Software engineering has thankfully evolved beyond using C for everything. High-level languages these days make it much easier to do proper dynamic resource allocation, obviating the need to guess at limits in advance. Yet more subtle instances of hardwired limits keep cropping up in surprising places.

Blocked on a scaling solution

The scaling debate in Bitcoin is one of them. There is a fundamental parameter in the system, the so-called block size, which has been capped at a magic number of 1MB. That number has a profound effect on how many transactions can take place, in other words how many times funds can be moved from one person to another, the sine qua non of a payment system. When there are more transactions waiting than available block space, congestion results: transactions take longer to appear in a block, and miners can become more picky about which transactions to include.
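
To make the constraint concrete, here is a simplified C++ sketch of the consensus rule, paraphrased rather than quoted from Bitcoin Core: any block whose serialized size exceeds the hardwired constant is rejected, no matter how much bandwidth or memory the validating node has to spare.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Simplified paraphrase of the historical consensus rule, not the actual Core source.
    const std::size_t MAX_BLOCK_SIZE = 1000000;   // the magic 1MB, in bytes

    // Accept a block only if its serialized form fits under the hardwired cap.
    bool CheckBlockSize(const std::vector<std::uint8_t>& serializedBlock) {
        return serializedBlock.size() <= MAX_BLOCK_SIZE;
    }

Unlike an application’s table size, this constant is a consensus rule: a node that recompiles with a bigger number does not get bigger blocks, it simply starts accepting blocks that the rest of the network rejects.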

Each transaction includes a small fee paid to miners. In the early days of the network, these fees were so vanishingly low that Bitcoin was being touted as the killer app for any number of problems with entrenched middlemen. (In one example, someone moved $80M paying only cents in fees.) Losing too much of your profit margin to credit-card processing fees? Accept Bitcoin and skip the 2-3% “tax” charged by Visa/Mastercard. No viable alternative to intrusive advertising to support original content online? Use Bitcoin micro-payments to contribute a few cents to your favorite blogger each time you share one of their articles. Today transaction fees can exceed $1 on average. No one is seriously suggesting paying for coffee in Bitcoin any longer, but some other scenarios remain economically competitive, such as cross-border remittances, where much larger amounts are typically transferred and incumbents like Western Union charge near-usurious rates.

Magic numbers and arbitrary decisions

Strictly speaking the block-size cap is not a case of the C programmer’s disease. The fact that Bitcoin Core is authored in C++ has nothing to do with the existence of this limit. Indeed many other parameters are fully configurable or scale automatically to utilize available resources on the machine where the code runs. The block size is not an incidental property of the implementation; it is a deliberate decision built into the protocol, and even alternative implementations written in other languages are required to follow it. The seemingly innocuous limit was introduced to prevent disruption to the network caused by excessively large blocks. In other words, there are solid technical reasons for introducing some limit. Propagating blocks over the network gets harder as their size increases, a problem acutely experienced by the majority of mining power, which happens to be based in China and relies on high-latency networks behind the Great Firewall. Even verifying that a block is correctly produced is a problem, due to some design flaws in how Bitcoin transactions are signed. In the worst-case scenario the complexity of block verification scales quadratically: a transaction twice as large can take four times as much CPU time to verify. (A pathological block containing such a giant transaction was mined at least once, in what appears to have been a well-intentioned attempt by a miner to clean up previous errant transactions. Creating such a transaction is much easier than verifying it.)
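
A rough sketch of where the quadratic behavior comes from under the legacy signature-hashing scheme: verifying each input requires hashing a message roughly the size of the entire serialized transaction, so a transaction with twice as many inputs hashes twice as many messages, each twice as large. The per-input size below is an assumed figure for illustration, not a measurement.

    #include <cstdio>

    // Toy cost model for legacy (pre-segwit) signature hashing: each of the
    // n inputs hashes a message roughly the size of the whole transaction.
    // The 180 bytes per input is an assumption for illustration; real sizes vary.
    unsigned long long bytes_hashed(unsigned long long inputs,
                                    unsigned long long bytes_per_input = 180) {
        unsigned long long tx_size = inputs * bytes_per_input;   // O(n) transaction size
        return inputs * tx_size;                                 // n hashes of O(n) data = O(n^2)
    }

    int main() {
        const unsigned long long sizes[] = {1000, 2000, 4000};
        for (unsigned long long n : sizes) {
            std::printf("%llu inputs -> ~%llu MB hashed\n", n, bytes_hashed(n) / 1000000);
        }
        // Doubling the number of inputs roughly quadruples the bytes hashed.
    }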

In another sense, the C programmer’s attitude is very much at work here. Someone, somewhere made an “executive decision” that 1MB blocks are enough to sustain the Bitcoin network. Whether they intended that as a temporary stop-gap response to an ongoing incident, to be revisited later with a better solution, or as an absolute ceiling for now and ever is open to interpretation. But one thing is clear: that number is arbitrary. From the fact that a limit must exist, it does not follow that 1MB is that hallowed number. There is nothing magical about this quantity to confer an aura of inevitable finality on the status quo. It is a nice, round number pulled out of thin air. There was no theoretical model built to estimate the effect of block size on system properties such as propagation time, orphaned blocks, or the bandwidth required for a viable mining operation (the last one being critical to the idea of decentralization). No one solved a complex optimization problem involving varying block sizes and determined that an even 1000000 bytes is the ideal number. That was not done in 2010, much less in the present moment, where presumably different and better network conditions exist around bandwidth and latency. If anything, when academic attention turned to this problem, initial results based on simulation suggested that the present population of nodes can accommodate larger blocks.
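
To see how little analysis the number rests on, consider the kind of back-of-the-envelope arithmetic one could do, shown below as a toy C++ calculation with assumed figures. It captures only the easy part, raw throughput and download bandwidth at a given block size; the harder questions about propagation, orphan rates and mining viability are exactly the ones that were never modeled.

    #include <cstdio>

    int main() {
        // Assumed figures for illustration only; real values vary.
        const double block_size_bytes = 1000000.0;   // the 1MB cap
        const double block_interval_s = 600.0;       // roughly ten minutes per block
        const double avg_tx_bytes     = 400.0;       // assumed average transaction size

        double tx_per_block   = block_size_bytes / avg_tx_bytes;
        double tx_per_second  = tx_per_block / block_interval_s;
        double bandwidth_kbps = block_size_bytes * 8.0 / block_interval_s / 1000.0;

        std::printf("~%.0f transactions per block\n", tx_per_block);
        std::printf("~%.1f transactions per second\n", tx_per_second);
        std::printf("~%.1f kbit/s sustained just to download blocks\n", bandwidth_kbps);
        // Says nothing about propagation delay, orphaned blocks or the bandwidth
        // a viable mining operation needs; those were never modeled before 1MB stuck.
    }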

Blocksize and its discontents

Discontent around the block-size limit grew louder in 2015, opening the door to one of the more acrimonious episodes in Bitcoin history. The controversy eventually coalesced around two camps. The opening salvo came from a group of developers who pushed for creating an incompatible version called Bitcoin XT, with a much higher limit: initially 20MB, later “negotiated” down to 8MB. Activating this version would require a disruptive upgrade process across the board, a hard fork in which the network risks splintering into two unless the vast majority of nodes upgrade. Serious disruption can result if a sizable splinter faction continues to run the previous version, which rejects large blocks. Transactions appearing in these super-sized blocks would not be recognized by this group. In effect Bitcoin itself as an asset would splinter into two: for each Bitcoin there would have been one “Bitcoin XT” owned on the extended ledger with large blocks and one garden-variety, old-school Bitcoin owned on the original ledger. These two ledgers would start out identical but then evolve as parallel universes, diverging further with each transaction that appears on one chain without being mirrored in the other.

To fork or not to fork

If the XT logic for automatically activating a hard fork sounds like a reckless ultimatum to the network, the experience of the Ethereum project removed any doubt about just how disruptive and unpredictable such inflection points can get. An alternative crypto-currency built around smart contracts, Ethereum had to undertake its own emergency hard fork to bail out the too-big-to-fail DAO. The DAO (Decentralized Autonomous Organization) was an ambitious project to create a venture-capital firm as a smart contract running on Ethereum, with investors voting directly on proposals. It had amassed some $150M in funds when an enterprising crook noticed that the contract contained a security bug and exploited it to siphon funds away. The Ethereum Foundation sprang into action, arranging for a hard fork to undo the security breach and restore the stolen funds to DAO participants. But not everyone in the community was impressed. Equating this action to the crony capitalism and bailouts of failed institutions common in fiat currencies (precisely the interventionist streak that crypto-currencies were supposed to leave behind), a vocal minority declined to go along. Instead they dedicated their resources to keeping the original Ethereum ledger going, now rebranded as “Ethereum Classic.” To this day ETC survives as a crypto-currency with its own miners, its own markets for trading against other currencies (including USD) and, most importantly, its own blockchain. In that parallel universe the DAO theft was never reverted, and the alternate ending of the story has the thief riding off into the sunset holding bags of stolen virtual currency.

The XT proposal arrived on the scene a full year before Ethereum provided this object lesson in the dangers of going full-speed ahead with contentious forks. But the backlash against XT was nevertheless swift. Ultimately one of its key contributors rage-quit, calling Bitcoin a failed experiment. One year after that prescient comment, the Bitcoin price had tripled, proving Yogi Berra’s maxim about the difficulty of making predictions. But the scaling controversy would not go away. Blocks created by miners continued to edge closer to the absolute limit, and the fees required to get transactions into those blocks started to fluctuate and spike, as did confirmation times.

Meanwhile the Bitcoin Core team quietly pursued a more cautious, conservative approach, opting to introduce non-disruptive scaling improvements such as faster signature verification to reduce block-verification times. This path avoided creating any ticking time-bombs or implied upgrade-or-else threats for everyone in the ecosystem. But it also circumscribed what types of changes could be introduced, since maintaining backwards compatibility was a non-negotiable design goal. The most significant of these improvements was segregated witness. It moves part of the transaction data, the signatures or “witness,” outside the space allotted to transactions within a block. This also provides a scaling improvement of sorts, a virtual block-size increase without violating the sacred 1MB covenant: by slimming down the representation of transactions on the ledger, one could squeeze more of them into the same scarce space available in one block. The crucial difference: this feature could be introduced as a soft fork. No ultimatums to upgrade by a certain deadline, no risk of network-wide chaos in case of failure to upgrade. Miners indicate their intention to support segregated witness in the blocks they produce, and the feature is activated when a critical threshold is reached. If anything segregated witness was too deferential to miner votes, requiring an unusually high degree of consensus at 95% before going into effect.
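
The “virtual” increase can be made concrete with the block-weight accounting segregated witness introduced (BIP 141): non-witness bytes count four weight units, witness bytes count one, and a block may carry at most 4 million weight units. How much extra capacity that buys depends on what fraction of transaction bytes is witness data, which is why estimates are a range rather than a fixed multiple. The witness shares below are assumptions for illustration.

    #include <cstdio>

    // Block-weight accounting under segregated witness: non-witness bytes
    // weigh 4 units, witness bytes weigh 1, and a block holds at most
    // 4,000,000 weight units.
    const double MAX_WEIGHT = 4000000.0;

    // Effective block capacity in bytes, given the fraction of all transaction
    // bytes that is witness data. The fraction is an assumed input; the real
    // mix depends on usage and adoption.
    double effective_bytes(double witness_fraction) {
        double weight_per_byte = 4.0 * (1.0 - witness_fraction) + witness_fraction;
        return MAX_WEIGHT / weight_per_byte;
    }

    int main() {
        const double shares[] = {0.0, 0.3, 0.5, 2.0 / 3.0};
        for (double f : shares) {
            std::printf("witness share %.0f%% -> ~%.2f MB per block\n",
                        f * 100.0, effective_bytes(f) / 1000000.0);
        }
        // With no adoption the old 1MB ceiling is unchanged; a witness share of
        // about two-thirds is what it takes to reach the oft-quoted 2x figure.
    }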

Beyond kicking the can down the road

At the time of writing, blocks signaling support for segregated witness have plateaued at around 30%. Meanwhile Bitcoin Unlimited (BU) has inherited the crown from XT in pushing for disruptive hard forks, by opening the door to miners voting on block size. It has gained enough support among miners that a contentious fork is no longer out of the question. Several exchanges have signed on to a letter describing how Bitcoin Unlimited would be handled if it does fork into a parallel universe, and at least one exchange has already started trading futures on the outcome of the fork.

Instead of trying to make predictions about how this stand-off will play out, it is better to focus on the long-term challenge of scaling Bitcoin. The one-time increase in capacity enabled by segregated witness (up to 2x, depending on assumptions about adoption rate and mix of transactions) is no less arbitrary than the original 1MB limit that all sides are railing against. Even BU, despite the lack of limitations implied by its name, turns out to cap the block size at 256MB; and in a world where miners decide block size, it is far from clear that the result will be a relentless competition to increase it over time. Replacing one magic number pulled out of thin air with an equally bogus one that does not derive from any coherent reasoning built on empirical data is not a “scaling solution.” It is just an attempt to kick the can down the road. The same circumstances precipitating the current crisis (congested blocks, high and unpredictable transaction fees, periodic confirmation delays) will crop up again once network usage starts pushing against the next arbitrary limit.

Bitcoin needs a sustainable solution for scaling on-chain without playing a dangerous game of chicken with disruptive forks. Neither segregated witness nor Bitcoin Unlimited provides a vision for solving that problem. It is one thing to risk a disruptive hard fork once to solve the problem for good; it is irresponsible to engage in such brinkmanship as standard operating procedure.

CP
