Bitcoin for the unbanked: receding possibilities

In October 2016 the former COO of the Chinese Bitcoin exchange BTC-C generated plenty of controversy with a tweet that opened with this casual statement:

“Bitcoin isn’t for people that live on less than $2 a day.”

Casually dropped in the midst of one of those interminable arguments about Bitcoin scaling playing out on social media, this statement was appalling for its implied premise. It was a complete 180-degree turn from the rhetoric surrounding the rise of cryptocurrency, with Bitcoin always the torch-bearer. Virtual currencies were liberating forces, we were told, breaking the hold of entrenched institutions on finance that rigged markets and debased currencies. Arrayed against the forces of progress was a rotating cast of villains, different for each portrayal but largely interchangeable in terms of their unfair position: the Federal Reserve, too-big-to-fail banks requiring bailouts, government agencies asleep at the wheel clinging to outdated regulations or, worse, enabling those bailouts in a case of regulatory capture, the Visa/MasterCard duopoly controlling online payments. Bitcoin was going to be the new money system for everyman, unequivocally on the side of David against the faceless institutional Goliath. No need for banks, credit-card networks, payment processors, money-transmitter licenses. Anyone with an inexpensive smartphone running a Bitcoin wallet application could send and receive money directly with anyone else anywhere in the world. No army of middlemen circling for their cut of the transaction, no gatekeepers to decide who is worthy enough to partake in this network. Also no need to worry about the debasement of hard-earned currency by an out-of-control printing press operated at the behest of unaccountable bureaucrats.

Rhetorical excesses aside, there were plenty of solid use cases for Bitcoin in the early days suggesting that it could help the so-called “unbanked”— more than a billion people living in developing nations without access to financial services, subsisting strictly on a cash system. No bank accounts, no way to write checks or swipe credit cards for purchases, no credit history, no way to finance large purchases. In Africa nascent systems such as M-Pesa had already demonstrated that one could leapfrog from a standard cash economy straight to smartphone wallets. In a strange twist, these countries skipped entire generations of earlier payment systems— check clearing, ACH, wires, PIN debit and credit networks— going directly to mobile wallets while US tech companies struggled and floundered in their attempts to bootstrap mobile payments. Yet M-Pesa is still a centralized technology operated by the wireless carrier Vodafone. It is crucially dependent on the coverage of retail infrastructure, specifically kiosks where consumers can go to exchange cash for M-Pesa credits.

Bitcoin offers some compelling advantages over this model. With a decentralized system, no single actor has enough leverage to squeeze the ecosystem for additional profits. Safaricom can raise M-Pesa transaction fees arbitrarily. The closest analog to a pricing cartel in Bitcoin is the miners, but they face relentless pressure to keep fees competitive: if one miner decides to charge too much for mining Bitcoin transactions into the ledger, a more efficient one will come along and gladly collect the fees from those transactions. While consumers still need currency exchanges to convert between BTC and local fiat money, that function can be served by private vendors competing in a transparent market instead of being under the complete control of Safaricom/Vodafone. For those concerned about monetary policy, BTC offers an attractive, intrinsically deflationary model. The government of Kenya can churn out shillings and Safaricom can flood the market with M-Pesa credits just as easily as Hasbro can print more Monopoly money. But no one can fabricate Bitcoin out of thin air and cause runaway inflation in BTC-denominated prices.

Given all of these factors, it is not too hard to see why Bitcoin circa 2012 looked promising for developing nations. It is another case of the surprising last-mover advantage: instead of playing catch-up with developed countries by painstakingly building out “legacy” payment rails—check clearing, card networks, eventually NFC payments—emerging markets can leapfrog straight to the latest paradigm of virtual currency.

It did not work out that way. A quick look at Bitcoin exchange statistics reveals that the majority of trading in Bitcoin is concentrated in a few markets with existing, highly developed financial systems, not the emerging markets home to millions of unbanked consumers. (China is arguably the rare exception, because Bitcoin was one of the few ways around capital controls in place to prevent currency outflows, but that volume evaporated almost overnight after the central bank cracked down on exchanges.) It is not too difficult to see why the rosy picture painted above did not play out. For starters, BTC itself has been extremely volatile: from an all-time high near $1300 before the implosion of Mt Gox to dipping below $200 within two years, doubling again within a year and then embarking on a stratospheric climb towards $3000. For all the talk of inflationary policies, fluctuations in the value of Bitcoin would make even the most irresponsible, interventionist central banker look restrained by comparison. That volatility makes Bitcoin less than ideal for everyday commerce, much less for long-term contracts denominated in BTC.

One could argue that early volatility is just growing pains for a new-fangled currency as the market struggles to discover “correct” pricing. Alternatively it can be blamed on hordes of speculators with no actual use for BTC chasing this coveted asset simply because other people are also trying to buy it—a classic case of an asset bubble. Either way, optimists expect such speculative activity to eventually diminish in scale compared to the overall volume of cryptocurrency trading, resulting in a steady state with relatively stable exchange rates. But there is one more assumption built into this lofty vision of Bitcoin as a democratizing force that helps millions of consumers in developing countries fully participate in markets: low transaction costs. That premise looked solid in 2012 and anchored many other presumed use cases for Bitcoin, such as paying for that cup of coffee. All of these scenarios have been called into question by recent developments. In contrast to the problem of exchange-rate volatility, which may well improve as the market matures, the vision of low-cost, efficient payments is becoming less realistic.

The Bitcoin fee model is unusual to say the least. Most payment systems charge costs that are at least in part proportional to the value transferred. This follows from a natural assumption: someone moving a large amount of money derives greater utility from that transaction than a person moving a modest amount. It follows that they would be willing to pay more for the privilege of executing that transaction. (This is an oversimplification; it is also common to have fixed costs and discounts that kick in at high amounts.) Bitcoin throws that logic out the window, charging instead based on the approximate “complexity” of transactions. That complexity is indirectly measured by the amount of space required to represent the transaction on the blockchain. A transaction with a single source, a straightforward redeem script (“sign with this public key”) and a single destination output takes relatively few bytes to encode. One that combines multiple inputs, complex redemption conditions (“signed by 3 out of 5 keys”) and distributes those funds to multiple destinations takes up more space. Yet complexity is orthogonal to value transferred. This is what makes it possible to move $80 million USD with a few cents in transaction fees—an astonishing level of efficiency unequaled by any other payment system available to consumers—or to move $5 while paying half that amount in fees, which is extremely wasteful.

It is as if banks charged fees for cashing checks based on how much ink is on the check instead of its notional value. Yet this model makes sense given that space in the blockchain is itself a scarce resource. Each new block mined to extend the ledger can accommodate exactly 1MB worth of transactions. That scarcity creates natural competition among transactions trying to get mined into the next block, by providing sufficient incentives to miners.
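To make the fee model concrete, here is a back-of-the-envelope sketch in Python; the fee rate and exchange rate are illustrative assumptions, not actual network figures:

    # Bitcoin fees scale with transaction *size*, not value transferred.
    FEE_RATE = 200      # satoshis per byte, an assumed prevailing rate
    BTC_USD = 2500      # assumed exchange rate

    def fee_usd(tx_size_bytes):
        """Fee in USD for a transaction of the given size in bytes."""
        satoshis = FEE_RATE * tx_size_bytes
        return satoshis * 1e-8 * BTC_USD

    # A simple 1-input, 1-output payment (~226 bytes) pays the same fee
    # whether it moves $5 or $80 million:
    print(fee_usd(226))    # ~$1.13, regardless of the amount transferred

The same 226-byte transaction costs about a dollar whether it settles a coffee purchase or a corporate acquisition; only adding inputs, outputs or more elaborate redeem scripts changes the price.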

That brings us back to the question of developing markets. Back in the early days when ambitious visions of everyone paying for their next cup of coffee in bitcoin were being bandied about, those fees were negligible. Bitcoin was poised to undercut credit-card networks for retail purchases, massively undercut Western Union for international remittances and even outdo PayPal for efficient peer-to-peer payments. It even looked like the first realistic option for micro-payments, where very small amounts of money change hands very frequently: visitors to a website donating a few cents for each article read. Fast forward to 2017: blocks are full, the memory pool—the queue of outstanding transactions waiting to be confirmed in the ledger—has ballooned and transaction fees are no longer negligible. Bitcoin businesses that naturally attract frequent fund movements, such as exchanges, have resorted to policy changes passing transaction fees directly to customers. The only fighting chance for micro-payments today rests on the deployment of additional overlays such as the Lightning Network, implemented on top of the standard Bitcoin protocol.

Given the status quo it would be difficult to disagree with the statement that Bitcoin as it exists today has very little to offer citizens of developing nations looking for an alternative payment solution for everyday purchases. The network as deployed today is simply not capable of clearing a large number of small transactions. (They may still find some value in its deflationary nature, as with Venezuelan citizens hoarding BTC in the midst of their economic crisis.)

It did not have to be that way. The scarcity of space is an artificial consequence of the arbitrary 1MB limit, a relic of a tactical fix implemented in response to an unrelated problem without much consideration given to future consequences. One could imagine counterfactuals where the block-size limit is allowed to float, perhaps increasing automatically over time or adjusting in response to demand in the same way mining difficulty is constantly recalibrated for constant throughput. There are clear costs to increasing block size: the additional space required for storing a larger ledger places demands on all nodes participating in the network. Critics contend that by raising costs, such unchecked growth may force some nodes to give up, resulting in a less decentralized network. On the other hand, the ledger already expands every time a new block is mined, and the “cost” of running a full node, measured in raw disk space, goes up regardless. So the relevant question concerns the rate of increase. Is the increased burden outpacing Moore’s law to the point that running a full node becomes more expensive in real terms? Is the growth rate predictable enough for planning future capacity? (That is a strike against quantum leaps from 1MB → 8MB, because they leave little time for adjustment.)
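The arithmetic behind that question is simple enough to sketch, assuming one block roughly every 10 minutes:

    # Annual ledger growth for a given block size.
    BLOCKS_PER_YEAR = 6 * 24 * 365    # one block per ~10 minutes

    for block_size_mb in (1, 2, 8):
        growth_gb = block_size_mb * BLOCKS_PER_YEAR / 1024
        print(f"{block_size_mb}MB blocks -> ~{growth_gb:.0f} GB of new ledger per year")

    # 1MB blocks -> ~51 GB/year; 8MB blocks -> ~411 GB/year

At 1MB the ledger grows by roughly 51GB a year, well within reach of commodity storage; an overnight jump to 8MB multiplies that burden eightfold with no time for the ecosystem to adapt.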

Arguments for and against raising the block limit are being advanced daily, as are alternatives that improve throughput while leaving that sacred parameter alone. The problem is not a lack of ideas on scaling; there are too many possibilities coupled with too little consensus on which ones to pursue. The community has been unable to agree on a single solution, precipitating the current crisis with miners and users playing a game of chicken that could splinter the network on August 1st. High fees and low transaction rates have sabotaged many scenarios for using bitcoin that seemed perfectly within reach in the past. Dashed hopes for getting the millions of unbanked citizens of developing nations on board are just one part of that collateral damage.

CP

 

Wannacry: ransomware as diversionary tactic?

Digging into motives

It is conventional wisdom in information security that precise attribution for successful digital attacks is very difficult. Concealing the source of malware is much easier than, say, the launch point of an ICBM, and sophisticated attackers can engage in false-flag operations to frame innocent bystanders for their actions. So it is unusual that persuasive evidence has already emerged linking the ransomware Wannacry to DPRK. (Democratic People’s Republic of Korea, or North Korea for short— whenever the words “democratic” and “people” appear in the name of a country, you can be certain it is neither.) Even the NSA and British NCSC confirmed as much in off-the-record statements. That degree of confidence in the conclusion is itself surprising: nation-state sponsored groups are expected to be among the most sophisticated of all threat actors, and therefore more likely to exercise good operational security, hiding their tracks and avoiding mistakes that point back at the responsible party.

But assuming the attribution in this case is correct, it raises two questions:

  • Are nation states now carrying out financially motivated attacks?
  • Why was Wannacry so spectacularly unsuccessful at extracting payment from its targets?

Intelligence gathering vs get-rich-quick schemes

Managing information security risks calls for an understanding of the threat actors one is likely to run up against. The motivations and capabilities of the average script-kiddie are vastly different from those of an intelligence agency, as are the appropriate defensive measures. In particular, one needs a credible motive for attack.

A good chunk of online crime is financially motivated. Large-scale breaches against payment systems, such as TJ Maxx (2006), Target (2013) or Home Depot (2014), were driven by good old-fashioned greed. The perpetrators hoped to monetize stolen payment instruments, either by using them directly to conduct fraudulent purchases, or by offloading the card details to other enterprising criminals who were better placed to do that. (It is not trivial to do this effectively: the crooks must find a way to purchase goods that can be resold while minimizing the risk of getting caught or triggering the fraud-detection systems operated by card networks. While data on this point is scarce, only a small fraction of the spending limit available on a card can be exploited by the attacker before it is suspended.) That end goal in turn influences the choice of target based on a calculus of expected gain: the probability of successfully breaching the target multiplied by the amount of funds at stake. This provides the greatest flexibility in simultaneously going after multiple targets in the hopes that one will pan out; there is no reason to insist on breaking into company X if it turns out company Y is an easier target with just as much value at risk.

But financial gain is far from the only driver. Some threat actors are motivated by ideology. Unlike crooks chasing money, “hacktivists” seek to achieve political objectives or settle scores against companies perceived as causing harm. Phineas Phisher’s exquisite 0wnage and subsequent doxing of the ethically bankrupt Hacking Team is a textbook example. These groups take a highly principled approach to picking their targets, paying less attention to difficulty and devoting significant resources to a specific objective. Yet others are characterized by a complete lack of ideology; they are in it for the “lulz.” Targets are chosen arbitrarily and opportunistically, with no rhyme or reason other than being within range of attacker capabilities, the online equivalent of being at the wrong place at the wrong time.

At the other extreme, nation-state actors are the apex predators of the ecosystem. They combine massive arsenals of offensive tool-kits with a disciplined approach to selecting targets based on intelligence value. Here “nation-state” encompasses both offensive actions directly carried out by intelligence agencies and private groups funded or supported by such organizations to carry out proxy battles. No target is too small or too insignificant if there is valuable information to loot: China is equally at home going after boutique law firms defending political dissidents as going after the whole enchilada at Google.

Until now it was assumed that such groups were not after direct financial gain. There is certainly a time-honored tradition of industrial espionage carried out against foreign countries in pursuit of indirect financial gain for the home team. Yet one does not expect the NSA, GCHQ or even their less ethically-constrained brethren such as the FSB to operate credit-card skimming operations on the side.

From industrial espionage to the Bangladeshi job

North Korea is now challenging that premise. The original link between Wannacry and DPRK was the similarity of its code to previously known malware used in the attack on the central bank of Bangladesh. That heist netted the perpetrators over $80 million USD even after attempted recovery of stolen funds—and it would have been far more profitable, to the tune of $900M, were it not for careless mistakes by the attackers that blew the cover on the operation. These are significant numbers, especially for an embattled North Korea straining under the weight of economic sanctions.

This action had an undeniable profit motive; in fact such brazen theft of funds compromises any intelligence-gathering mission that may have been going on in parallel. Lazarus Group had achieved persistence on systems belonging to the Bank of Bangladesh and lurked for months while building custom techniques to evade monitoring. Such an entrenched presence would have supplied DPRK with a unique vantage point to spy on the movement of funds in Bangladesh for years to come—if they cared for that capability. By contrast a smash-and-grab attack that results in significant loss can not stay under the radar, and predictably leads to defenders diligently working to flush out any attacker presence from the system.

Wannacry as the amateur-hour of ransomware

That brings us to the strange case of Wannacry. On paper, ransomware is the epitome of financially motivated malware with zero information-gathering value, reflecting an interesting shift in tactics. The first generation of mass malware turned Windows PCs into zombies sending out spam while completely ignoring any data residing on those machines, effectively monetizing only their network bandwidth to support ancillary business models such as mass marketing or distributed denial-of-service. The second generation focused on information theft as traditionally understood, looking for special categories of data that can be monetized directly—passwords for online banking sites or credit-card details—and shipping these off to a server controlled by the attackers. Ransomware by contrast does not attempt to steal any information; it holds information hostage from its legitimate owner via encryption.

That modus operandi means ransomware has the unusual feature of having to negotiate with its victims for successful monetization. A spam bot operates quietly in the background; it does not show users a dialog offering to uninstall itself in exchange for payment. Likewise banking malware silently collects credentials for logging into financial institutions and ships these off to its operators, who already have plans to monetize that information by selling the credentials on dark markets. That path is preordained. Consumers do not get a right of first refusal to opt out of that transaction and keep their PayPal password secret by offering more money than prevailing underground rates. Ransomware is unique in expecting to get compensated directly by its own victims.

That in turn brings some semblance of market dynamics into the equation. While installing ransomware is not a voluntary act (unless you are a security researcher), the user still has a decision to make about paying the ransom demand. For example, if they had been regularly backing up all of their files, they could always choose to wipe their machine clean, reinstall the operating system to get back to a clean slate and recover using those backups. Even if the user is faced with partial loss of data, they may still deem the ransom price too high to warrant rescuing the lost information. This is where the reliability of the ransomware operation enters the picture, because malware is effectively a market for lemons. There is no honor among thieves. Even if the price is “reasonable,” there is no guarantee that successful decryption will follow delivery of the payment. (At least in the current incarnation of ransomware observed in the wild. In principle, smart contracts enable honest ransomware with delivery of payment contingent on the disclosure of decryption keys.) A user who does not get their files decrypted despite paying up is an unhappy customer. In this day and age of Yelp reviews, word gets around: other users facing the same decision may opt not to pay.

This is where Wannacry fails spectacularly: it is clear from reverse-engineering the binary that this operation could not possibly have supported any type of decryption based on payment. To the extent users have been able to recover their data, it has been due to fortunate design flaws in Wannacry, or at least a failure by the authors to understand quirks of the Windows crypto API, which keeps decryption keys around in memory longer than expected. In fact it is clear the Lazarus Group did not plan on providing a decryption service. Users were asked to send payment to exactly one of three Bitcoin addresses randomly selected from a list hard-coded into the binary. Given that infected systems numbered in the hundreds of thousands, it is not possible to identify which ones have paid—a prerequisite for honoring the promise that paying users receive a decryption key. The only plausible scenario would have been a global ransom: offering to release a master key that would unlock all machines once a specified amount was received in total from all affected users. But such a collectivized demand is far less likely to find takers than individual offers. There is still no guarantee of recovering your files, and now your success depends on other people cooperating. If the threshold is not reached, all donations are wasted. Meanwhile everyone has an incentive to free-ride, hoping that other people will chip in and they can collect the benefit when the decryption key is released.

To wit, those three Bitcoin addresses have collected a modest sum of 55 BTC at the time of writing, worth approximately $125K at current exchange rates. That figure is dwarfed by the take from the Bangladesh heist. It raises a significant question: if Wannacry was developed by a highly skilled threat actor with nation-state backing and yet, for all that talent, proved an abject failure at monetization, were there other motives behind it?

Unfollowing the money

We can not rule out the theory that Wannacry was a precursor of the finished product North Korea wanted to unleash, an incomplete beta version which accidentally escaped the lab and propagated. According to this view, the final version would have handed out individual Bitcoin addresses to every user and featured some type of cloud service to hand out decryption keys when payment was made. (Although it is difficult to imagine how that would work, given the enormous incentive for ISPs and law enforcement to shut down such a service.) Yet for unclear reasons—either by accident or perhaps to meet some arbitrary deadline—this half-baked version was unleashed, and since it is self-propagating malware, it could no longer be recalled.

An alternative explanation is that the whole ransomware aspect is a diversion. The true purpose of Wannacry was inflicting economic harm by destroying data and rendering systems unusable. That there is no mechanism for recovering data after payment is not a “bug” in the operation; it is 100% by design. Unlike a true extortion scheme, these perpetrators had no plans to profit from providing relief from the harms unleashed by their own creation. The objective was imposing costs on its victims, not obtaining additional revenue for themselves. The negligible amount of Bitcoin collected is only a side-show. Even if a few people did pay up initially, future victims would be discouraged after learning that the promised data recovery never arrives. If this theory is correct, the operational cover for Wannacry became a victim of its own success: instead of blending into the background as yet another ransomware scam, Wannacry was extensively studied and reverse-engineered, eventually unearthing the link to North Korea.

The main strike against this theory is the geographic distribution of Wannacry infections: Russia, India, several former Soviet republics, China and Iran are among the top 20 countries affected. While North Korea is greatly isolated and has few allies, these are not exactly the countries one would expect DPRK to prioritize targeting—Iran in particular has been implicated in supplying DPRK with technology for its nuclear program. While some of this may be driven by the prevalence of outdated/pirated versions of Windows not receiving security updates, it would have been trivial to design safeguards, applied after infection, to selectively target specific regions. For example Wannacry could have checked timezone and language settings on the machine before proceeding to encrypt files. (Malware in the wild carrying such checks has surfaced at least as early as 2009.) On the other hand, carving out such exceptions provides circumstantial evidence about the source of the attack. If malware has been tailored to avoid particular countries, the natural assumption is that its creators are affiliated with, or at least closely allied with, the nations spared from damage. By taking an equal-opportunity approach to harming friend and foe, Wannacry may have been trying to avoid giving such geopolitical clues.

CP

Designing “honest ransomware” with Ethereum smart contracts: postscript (part III)

[continued from part II]

Postscript: Shadow Brokers auction done right

Incidentally this protocol also works for the type of auction Shadow Brokers attempted in 2016—if they intended it in earnest, which does not appear to have been the case for the stash of NSA exploits they planned to dump. To recap: this threat actor had gotten its hands on an exploit kit associated with the NSA and initially offered to sell it to the highest bidder. This “auction” however was curiously designed to mirror the dollar auction from game theory, featuring a winner-takes-all property. Bids are placed by sending funds to a Bitcoin address; the highest bidder gets the stash of exploits while everyone else gets nothing—they forfeit any Bitcoin sent. Not surprisingly there were few takers for this model, given the odds that even the winner may end up with nothing.

While the auction setting introduces additional complexity, the protocols sketched earlier can help with a simpler version of the problem: a shady group claims to have a lucrative stash of documents up for sale in exchange for cryptocurrency. Given the nature of such underground transactions, both sides are concerned about the risk of being defrauded. The seller worries about delivering the stash without getting paid and the buyer is concerned about paying for worthless information. A variant of the fair-exchange payment protocol can be built on Ethereum smart contracts:

  1. Seller encrypts all documents using a hybrid-scheme with a fixed public-key and makes all ciphertexts available to the buyer. (The granularity of encryption need not be at the level of individual documents. Each page or 10K chunk of source code could be individually encrypted, as long as each fragment contains enough information for the buyer to make a judgment on its veracity.)
  2. Buyer and seller jointly select a random subset of ciphertexts to be opened by the seller, to verify that they conform to the uniform encryption format expected for the entire batch. This assumes the buyer has some way to validate the authenticity of individual fragments. (One fair way to pick that subset is sketched after this list.)
  3. Buyer launches an Ethereum smart-contract, designed to release payment on delivery of the private-key. He funds the contract with an amount of ether corresponding to the sale price.
  4. Seller invokes the contract method disclosing the private-key and collects the proceeds.
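Step 2 glosses over how two mutually distrustful parties agree on a random subset; neither side can be allowed to pick unilaterally. A standard commit-reveal coin flip solves this. The sketch below is one possible instantiation, with illustrative parameters:

    import hashlib, os, random

    def commit(seed: bytes) -> bytes:
        return hashlib.sha256(seed).digest()

    # Each side commits to a random seed before either seed is revealed.
    buyer_seed, seller_seed = os.urandom(32), os.urandom(32)
    buyer_commitment, seller_commitment = commit(buyer_seed), commit(seller_seed)

    # ... commitments are exchanged, then seeds are revealed and verified ...
    assert commit(buyer_seed) == buyer_commitment
    assert commit(seller_seed) == seller_commitment

    # XOR of the seeds drives a deterministic PRNG both sides can run.
    joint = bytes(a ^ b for a, b in zip(buyer_seed, seller_seed))
    rng = random.Random(joint)
    challenge = rng.sample(range(10_000), 100)   # open 100 of 10,000 ciphertexts

Neither party can bias the outcome without aborting the protocol, since each seed is fixed before the other is revealed.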

There is of course one last optional step for the buyer: call up the Ethereum Foundation and demand a hard-fork to reverse the payment. After all, the fairness expressed in a smart contract is only as reliable as the immutability of the blockchain that contract executes on.

CP

PS: Similar ideas are explored in a blog post on “The future of ransomware,” which in turn references the notion of zero-knowledge contingent payments first demonstrated in Bitcoin. That approach front-loads the work into developing cryptographic proof systems to verify that the encrypted data has the right structure, such as being the solution to a particular puzzle which can be verified by operating on encrypted data. It relies on disclosing the preimage of a hash (as opposed to a private key), which can be expressed even with the limited scripting capabilities of Bitcoin. But that approach runs into the same problem as verifiable encryption when applied to ransomware: the plaintext has no particular structure, and we only assume the user has access to an oracle that can answer thumbs up/down on whether the result of a decryption corresponds to an authentic file which had been hijacked by malware.

 

Designing “honest ransomware” with Ethereum smart contracts (part II)

[continued from part I]

There are special-case solutions for common public-key algorithms to prove that a given ciphertext decrypts to an alleged plaintext. For example, with RSA the key-holder can return the plaintext without removing its padding. Recall that RSA is typically used in conjunction with a padding scheme such as PKCS#1 v1.5 or OAEP. These schemes apply a randomized transformation to the plaintext prior to encryption and undo that transformation after decryption. Without knowing the padding used during encryption, it is not possible to check that a given unpadded plaintext corresponds to some known ciphertext. But once the fully padded version is revealed, the user can check both that it encrypts to the original ciphertext and that removing the padding results in the expected plaintext, because both of those steps are fully deterministic.
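A minimal sketch of that verification, assuming PKCS#1 v1.5 padding (the function name is mine; n and e are the RSA public key):

    def verify_rsa_decryption(ciphertext: int, padded: bytes, n: int, e: int) -> bytes:
        """Check a claimed decryption given only the *padded* plaintext block."""
        # 1. Re-encryption under the public key is deterministic and must
        #    reproduce the original ciphertext.
        if pow(int.from_bytes(padded, "big"), e, n) != ciphertext:
            raise ValueError("padded plaintext does not match ciphertext")
        # 2. Removing PKCS#1 v1.5 padding is also deterministic:
        #    0x00 0x02 <nonzero random bytes> 0x00 <message>
        if padded[:2] != b"\x00\x02":
            raise ValueError("malformed padding")
        return padded[2:].split(b"\x00", 1)[1]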

This straightforward approach does not carry over to other algorithms. For example ElGamal is a randomized public-key encryption scheme—each encryption operation uses a randomly chosen nonce, so reencrypting the exact same message can yield different ciphertexts. Yet it is not possible to recover the random nonce after the fact, even with knowledge of the private key. Unless the private-key owner takes additional steps to stash that nonce someplace, they are stuck in a strange position: they can decrypt any ciphertext but they can not simply prove the decryption is correct by sharing the result. That is not the case for RSA: if you can decrypt, you can also recover the transformed input complete with randomized padding. ElGamal calls for something more complex, such as a zero-knowledge proof of discrete-logarithm equality to show that the exponentiation of the random masking value (the first part of the ciphertext) by the private key was done correctly.

For a more generic solution, we can leverage blind decryption: ask the private-key holder to decrypt a ciphertext without that party realizing which ciphertext is being decrypted. This is typically achieved by exploiting homomorphic properties of cryptosystems. For instance raw RSA—without any padding—has a simple multiplicative relationship: the encryption of a product is the product of encryptions.

RSA-ENC(a·b) = RSA-ENC(a) * RSA-ENC(b)

where all multiplication is done modulo N associated with the key. A similar property also holds true for decryption where X and Y are ciphertexts:

RSA-DEC(X·Y) = RSA-DEC(X) * RSA-DEC(Y)

Looked at another way, we can decrypt a ciphertext indirectly by decrypting a different ciphertext with a known relationship to the original:

RSA-DEC(X) = RSA-DEC(X·Y) * RSA-DEC⁻¹(Y)

To decrypt X, we ask the ransomware operator to instead decrypt X·Y, as long as we know the decryption of Y. (Typically because we obtained Y by encrypting a random plaintext m of our choice to begin with.) Note the original encryption can still use a strict padding mode such as OAEP, as long as the decryption side is willing to handle arbitrary ciphertexts that result in plaintexts with no specific padding format.
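In code, the masking and unmasking steps look like this; a sketch assuming textbook RSA arithmetic on integers:

    import secrets

    def blind(X: int, n: int, e: int):
        """Mask ciphertext X so the decryptor can not tell which one it is."""
        r = secrets.randbelow(n - 2) + 2
        Y = pow(r, e, n)              # Y = ENC(r) for a plaintext r we chose
        return (X * Y) % n, r

    def unblind(masked_plaintext: int, r: int, n: int) -> int:
        """DEC(X*Y) = DEC(X) * r mod n, so divide out r to recover DEC(X)."""
        return (masked_plaintext * pow(r, -1, n)) % n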

This additional step prevents the ransomware author from cheating: X·Y looks like a random ciphertext, completely unrelated to the original encryption X of the symmetric key. Even if there were a cheat sheet listing every such X and its associated plaintext to answer challenges, there would be no way to correlate the particular challenge presented with one of those entries. Each challenge is equally likely to be derived from any ciphertext in the collection. The intuition is that if files were not encrypted according to a consistent scheme with the same RSA public key, it would not be possible to produce the correct answer when presented with such a random-looking ciphertext.

This step can be repeated for a small subset of files to achieve high confidence that most files have been encrypted according to the expected scheme. As long as the challenges are selected randomly, even a small number of successful tests provides high assurance against cheating. For example, suppose ransomware encrypted a million files but replaced 2% of them with random data instead of following the claimed hybrid-encryption process. If 100 files out of that collection are randomly selected for spot-checking, the odds are 87% that this departure from protocol will be caught. The downside is diminishing returns with increased coverage. While a handful of challenges can rule out cheating on a wide scale—for example, every other file being corrupted—it is not feasible to prove that 100% of the data is recoverable. If ransomware deliberately mangled exactly one file in the collection, it would take a great deal of luck to discover that; one must pick that specific file as a challenge. That also suggests a strategy of picking the files you care about most for the challenge phase, given that for every other file there is a small but non-zero probability of failure to recover. (Incidentally the ransomware author has a diametrically opposed interest in making sure users do not get to choose the challenge subset unilaterally. Otherwise they could pick the 100 files they care about most and abandon the rest.)
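The 87% figure follows from elementary probability: if a fraction p of files is corrupted and k files are sampled uniformly at random, the odds of catching at least one bad file are approximately 1 - (1 - p)^k.

    p, k = 0.02, 100
    print(1 - (1 - p) ** k)    # ~0.867, the 87% quoted above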

There is one more subtlety here: the decryption challenge returns a symmetric key, which is then used to undo the bulk symmetric encryption applied to the file. But there is still the problem of deciding whether the outcome of that decryption is “correct” in the sense that the plaintext was a file originally belonging to this user. Neither authenticated encryption nor integrity checks help with that. After all, ransomware could have replaced an MP3 music file with the “correct” AES-GCM encryption of random noise. Given the right AES key, that file will decrypt successfully, including validation of the GCM tag. It will not result in recovery of the original data, which is the only outcome the user cares about. To work around this we have to posit that users have access to an oracle that can examine a file (complete with all metadata such as directory path and timestamps) and determine whether it is one of the documents originally belonging to their collection. In practice this could be based on a catalog of hashes or digital signatures over documents, or in the worst case, manual inspection of files.

Once the user is convinced all files are indeed encrypted correctly, the next step is crafting a smart contract to release payment conditional on the private key being revealed. This contract will have a function intended to be invoked by the key-holder. After checking that the parameters supplied reveal enough information to recover the private key, the function sends funds to an address agreed upon in advance. There is one subtlety here: specifying the destination address as part of the function call, or inferring it from the message sender, results in a race condition. Since Ethereum contract calls are broadcast on the network before they are mined in, anyone could copy the disclosed private key and craft an alternative method invocation to shuttle funds someplace else, hoping to get mined in first. Fixing the destination during contract creation solves this problem. Even if someone else succeeds in preempting the original method invocation with a different one sending the exact same information, the funds are still delivered to the intended address.

Just in case the private-key holder never shows up, the contract also needs an “escape hatch”: a second, time-locked method that can be invoked by the user after a mandatory delay to recover unclaimed funds. It is critical that these are the only ways of withdrawing funds from the contract. If there were some other avenue for getting funds out, it would create a race condition: when the private-key holder invokes the contract to collect payment, the contract owner could observe that transaction (including the disclosed private key) and try to race it with a different transaction that siphons funds from the contract before it pays out.

In terms of disclosing a private-key in a manner that can be verified by Ethereum smart-contracts, there are two natural solutions:

  1. Staying in the RSA setting: a smart-contract method receives two large integers p and q, and verifies that their product is equal to the modulus N. While doable in principle, this requires implementing arbitrary-precision integer multiplication in Solidity, which does not natively support such operations. The underlying Ethereum Virtual Machine (EVM) has 256-bit integer primitives and supports multiplication of words into a 256-bit product, discarding the more significant bits. One could write a big-number library to multiply large numbers by splitting them into 128-bit chunks. But this runs into another practical issue around gas costs: Ethereum smart-contract execution costs money proportional to the complexity of operations. Checking whether the product of two 1024-bit factors is equal to a known RSA modulus can become an “expensive” operation.
  2. Switch to an elliptic-curve setting and leverage the fragility of ECDSA for deliberately disclosing private keys. Because Ethereum natively supports verification of ECDSA signatures over the secp256k1 curve, this makes for a straightforward implementation. So instead of trying to make RSA operations work in Ethereum, we alter the file-encryption model. ECIES or ElGamal can both be adapted to work over secp256k1. ElGamal in particular has a simple homomorphism that can be used to mask ciphertexts: multiplying both components of an ElGamal ciphertext by a scalar produces the encryption of the original message multiplied by that scalar. As with ECDSA, the public key is a point on the curve and the private key is the discrete logarithm of that point with respect to the generator. Since that key-pair is equally usable for ECDSA signatures, built-in EVM operations are sufficient to check for private-key disclosure. As before, the function expects two valid ECDSA signatures over fixed messages such as “hello” and “world.” But in addition to verifying these signatures—a primitive operation built into the EVM—the contract also confirms that the signatures share an identical nonce, which leaks the private key as sketched below.
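That fragility is concrete: two ECDSA signatures sharing a nonce leak the private key through simple modular arithmetic, as the following sketch shows (n is the order of the secp256k1 group; h1 and h2 are the message hashes):

    # Given signatures (r, s1) over h1 and (r, s2) over h2 with a shared
    # nonce k:  s = (h + r*d) / k mod n, so
    #   k = (h1 - h2) / (s1 - s2) mod n
    #   d = (s1*k - h1) / r mod n
    N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

    def recover_private_key(r, s1, s2, h1, h2, n=N):
        k = (h1 - h2) * pow(s1 - s2, -1, n) % n    # the shared nonce
        return (s1 * k - h1) * pow(r, -1, n) % n   # the private key d

A contract that verifies both signatures and checks for the repeated r value is therefore assured that anyone observing the call can compute the private key.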

Summary

To recap: an honest ransomware scheme—or “third-party backup encryption service”—can be implemented using smart contracts to make data recovery contingent on payment. We encrypt all files of interest using a hybrid encryption scheme with a single public key. When it is time to recover the data, the user and TBES execute an interactive challenge protocol, decrypting randomly selected files to verify that the encryption followed the expected format. (This step is redundant if encryption was done by the user, unless the integrity of encrypted backups is itself in question.) Assuming the proofs check out, the next step is for the user to create a smart contract on the Ethereum blockchain. This contract is parametrized by the agreed-upon amount of ether, the public key used for encryption and an address chosen by the TBES. Once the contract is set up and funded, TBES can invoke one of its methods to disclose the private key associated with that public key and collect the funds.

The logic of Ethereum smart-contract execution guarantees this exchange will be fair to both sides: payment only in exchange for a valid private key, and no way to back out of payment once the private key is delivered.

[continued]

CP

Updated: 6/19, to describe alternative solution for ElGamal.

Designing “honest ransomware” with Ethereum smart contracts (part I)

“But to live outside the law, you must be honest” – Bob Dylan

By all accounts Wannacry ransomware made quite the splash, bringing thousands of systems to a standstill and forcing victims to shell out Bitcoin with little hope of recovery. In fact the ransomware aspect may well have been a diversion. Attribution for such attacks is tricky but both Kaspersky Labs and Symantec research groups have linked Wannacry to the Lazarus Group, a threat actor associated with the DPRK. (WaPo recently reported that the NSA concurs.) Their previous claim to fame: massive theft of funds from the Bangladesh central bank by exploiting the SWIFT network. That heist netted somewhere in the neighborhood of $80M, but the take would have been much higher were it not for the attackers’ rookie mistakes that resulted in even larger transfers being stopped or reversed. By comparison Wannacry earned a pittance, less than $100K at the time of writing.

Worse, this malware is not even capable of living up to its raison d’etre: decrypting files after the attackers are paid off. For starters, only a handful of Bitcoin addresses were hard-coded in the binary, as opposed to unique deposit addresses for each victim. That makes it difficult to distinguish between different victims paying the bounty, which in turn violates the cardinal rule of ransomware: only users who pay the ransom get their data back. This is not so much a principle of fairness—there is no honor among thieves—as one of economic competitiveness: ransomware can only scale if victims are convinced that by paying the ransom they can recover their files. This is why successful ransomware campaigns in the past went so far as to feature helpful instructions educating users about Bitcoin and staffed customer-support operations to help “customers” recover their data. Wannacry seems to have taken little interest in living up to such lofty standards of customer service.

But this incident does raise a question: are users at the mercy of ransomware authors when it comes to recovering their data? Is there a way to guarantee that payment will result in disclosure of the decryption key? After all, the crooks are demanding payment in cryptocurrency. Even Bitcoin, with its relatively modest scripting language, can express complex conditions for payment. Is it possible to design a fair-exchange protocol where the decryption key is released if and only if the corresponding payment is made?

This scenario is admittedly contrived and unlikely to be implemented. If there is no honor among thieves, there is certainly no desire to adopt more transparent and fair payment mechanisms to protect consumers from getting ripped off by unscrupulous ransomware operators. But one can imagine more legitimate use cases, such as backup/escrow services that help users encrypt their data for long-term storage. To avoid a single point of failure, the encryption key would be split using a threshold secret-sharing scheme. Specifically, it is split into N shares such that any quorum of M can reconstitute the original secret. Each share is in turn encrypted to the public key of one trustee. When the time comes to decrypt the data, the consumer asks some subset of trustees to decrypt their shares. This interaction calls for a fair-exchange protocol where the consumer receives the decryption result if and only if the trustee gets paid for its assistance.
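For the curious, that M-of-N splitting is classic Shamir secret sharing. A minimal sketch over a prime field, with illustrative parameters and no production hardening:

    import secrets

    PRIME = 2**127 - 1    # a Mersenne prime, large enough for a 16-byte secret

    def split(secret: int, m: int, n: int):
        """Split secret into n shares; any m of them reconstruct it."""
        coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(m - 1)]
        return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
                for x in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at x=0 recovers the secret from m shares."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * -xj % PRIME
                    den = den * (xi - xj) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret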

Ethereum can solve this problem using the same idea behind fair exchange of cryptocurrency across different blockchains, with a few caveats. The smart contract sketched out in previous blog posts is designed to send funds when a caller discloses a specific private key. But there is a deeper problem around knowing which private key to look for. In theoretical cryptography this falls under the rubric of “verifiable encryption,” where it is possible to prove that some ciphertext is the encryption of an unspecified plaintext value meeting certain properties. Typically these constructs operate on abstract mathematical properties of the plaintext, such as proving that it is an even number. In the more concrete setting of ransomware, the plaintexts under consideration are not mathematical structures but large, complex data formats such as PDF documents. This model lends itself better to a less efficient statistical approach for verifying that the encryption process has been followed according to specification.

Let’s assume all files are encrypted using a hybrid-encryption scheme:

  • For each file/object to be encrypted, a random symmetric key is generated and the bulk data is encrypted using a symmetric block cipher such as AES.
  • The symmetric key is in turn encrypted under a fixed public-key cryptosystem such as RSA, using the public key of the trustee. This “wrapped” key is saved alongside the output of the bulk encryption.

This is similar to the format used by email encryption standards such as PGP and S/MIME. It is also how Wannacry operates, with an additional level of indirection: it generates a unique 2048-bit RSA keypair on each infected target and then encrypts that private key using a different, fixed RSA public key, presumably held by the ransomware crooks. That means revealing the private key used in step #2 is sufficient to decrypt all ciphertexts created according to this recipe.
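A minimal sketch of that hybrid scheme, using the Python cryptography package for illustration (Wannacry itself used the Windows crypto API; key sizes and algorithm choices here are assumptions):

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def hybrid_encrypt(plaintext: bytes, rsa_public_key):
        sym_key = AESGCM.generate_key(bit_length=256)   # fresh key per file
        nonce = os.urandom(12)
        bulk = AESGCM(sym_key).encrypt(nonce, plaintext, None)
        # Wrap the symmetric key under the fixed RSA public key (step #2).
        wrapped = rsa_public_key.encrypt(
            sym_key,
            padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                         algorithm=hashes.SHA256(), label=None))
        return wrapped, nonce, bulk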

There is one major difference between ransomware and the more legitimate, voluntary backup scenarios sketched out earlier: in the latter case the user can be certain of the public key used for the encryption—because they performed the encryption themselves. In the former situation, they have just stumbled upon a collection of ciphertexts along with a ransom note asserting that all data has been encrypted using the process above, with the private key held by the author. Some proof is required that this claim is legitimate and that the author is in possession of the private key required to recover the data. (Similar to fake DDoS threats, one can imagine fake ransomware authors reaching out to users to offer assistance, with no real capability to decrypt anything.)

A naive solution is to challenge the private-key holder to decrypt a handful of ciphertexts, effectively asking for free samples. But such a protocol can be cheated if it works by sending the full ciphertext, or even just the wrapped symmetric key produced in step #2. For all we know, the ransomware encrypts each file using random symmetric keys and then stores those keys in a database. (In other words, it does not use a single public key to wrap each of the symmetric keys; that part of the ciphertext is a decoy.) Such an operation could still respond to every challenge query successfully with the correct symmetric keys, by doing database lookups. But the encryption does not conform to the expected pattern above; there is no single private key to unlock all files. In effect the user would be paying for a bogus key that has no bearing on the ability to decrypt the ciphertexts of interest.

[continued]

CP

Two-factor authentication: a matter of time

How policy change in Russia locked Google users out of their account

Two-factor authentication is increasingly common as online services look to improve the protection afforded to customer accounts. Google started down this road in 2009, and its efforts were greatly accelerated when the company got 0wned by China in the Aurora attacks. [Full disclosure: this blogger worked on the Google security team 2007-2013] At the time, and to a large extent today, the most popular way of augmenting an existing password-based system with an additional factor involved one-time passcodes, or “OTP” for short. Unlike passwords, OTPs change each time and can not be captured once for indefinite access going forward. (Note this is not equivalent to saying they can not be phished—nothing prevents a crook from creating fake login pages that ask for both password and OTP, and in fact such attacks have been observed in the wild.)

Counters and ticking seconds

There are many ways to generate OTPs but they all follow a similar pattern: there is a secret key, often referred to as a “seed,” and this secret is combined with a variable factor such as the current time or an incrementing sequence number, using a one-way function. This one-way property is critical: it is a security requirement that even if an adversary observes multiple OTPs and knows the conditions under which they were generated (such as exact timing), they can not recover the secret seed required to generate future codes.

Perhaps the most well-known 2FA solution, and one of the oldest, is a proprietary offering from RSA called SecurID. It was initially implemented only on hardware tokens sold by the company. SecurID is a decidedly closed ecosystem, at least in its original incarnation: not only did customers have to buy the hardware from RSA, they also had to integrate with a service operated by the same company in order to verify OTP codes submitted by users. That is because only RSA knew the secret seed embedded in each token; customers did not receive these secrets or have any way to reprogram the token with their own.

HOTP

Such closed models may have been great for customer lock-in but would clearly not fly in a world accustomed to open standards, interoperability and transparency. (Not to mention the wisdom of relying on a third party for your authentication, a lesson that many companies including Lockheed-Martin would learn the hard way when RSA was breached by threat actors linked to China in 2011.) Other industry players began pushing for an open standard, eventually resulting in a new design called HOTP being published as an RFC. The letter “H” stands for HMAC, a modern cryptographic primitive with a sound security model, compared to the home-brew Rube-Goldberg contraption used in SecurID. HOTP uses an incrementing counter as the internal “state” of the token. Each time an OTP is generated, this counter is incremented by one.

That model relies on synchronization between the side generating OTP codes and the side responsible for verifying them. Both must use the same sequence number in order to arrive at the same OTP value. The sides can get out of sync due to any number of problems: imagine that you generated an OTP (bumping up your own sequence number) but your attempt to log in to a website failed because of a network timeout. The server never received the OTP and therefore its sequence number is one behind yours. In practice this is solved by checking a submitted OTP not just against the current sequence number N but against a range of values {N, N+1, N+2, …, N+t} for some tolerance value t. If a given OTP checks out against one of the later values in this sequence, the server updates its own copy of the counter, on the assumption that the client skipped past a few values. Even then things can go awry, since collisions are possible: a user can mistype an OTP which then happens to match one of these later values, incorrectly advancing the counter.
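A sketch of HOTP generation and windowed verification (the window size is an illustrative choice):

    import hmac, hashlib, struct

    def hotp(seed: bytes, counter: int, digits: int = 6) -> str:
        """RFC 4226: HMAC-SHA1 over the big-endian counter, then truncate."""
        mac = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                     # dynamic truncation
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10**digits).zfill(digits)

    def verify_hotp(seed: bytes, submitted: str, server_counter: int, window: int = 5):
        """Accept codes from {N, ..., N+window}; resynchronize on a match."""
        for n in range(server_counter, server_counter + window + 1):
            if hmac.compare_digest(hotp(seed, n), submitted):
                return n + 1        # new server counter after resync
        return None                 # reject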

TOTP

An alternative to maintaining a counter is using implicit state that both sides independently have access to without having to synchronize. Time is the most obvious example: as long as both the client generating the OTP value and the server verifying it have access to an accurate clock, they can agree on the state of the OTP generator. Again in practice some clock drift is permitted; instead of using a very accurate time down to the millisecond, it is quantized into intervals of say 30 or 60 seconds. OTP codes are then generated by applying the HMAC function to the secret seed and this interval count. This was standardized as TOTP, the Time-Based One-Time Password Algorithm.

There is one catch with TOTP: both sides must have an accurate source of time. This is easier on the server side verifying OTP codes, but more difficult on the client side where OTP generation takes place. The reason HOTP and sequence numbers historically came first is that they could be implemented offline, using compact hardware without network connectivity. While embedded devices can have a clock, the challenge is that those clocks require a battery to remain powered 24/7 and, more importantly, they eventually start drifting, running slower or faster than true time. A token that runs one second fast every day will be a full 6 minutes ahead after a year.

Fast forward to 2009, with the smartphone revolution already underway: our model envisioned mobile apps handling OTP generation. Unlike stand-alone tokens, apps running on a smartphone have access to a system clock that is constantly being synchronized as long as the device is online. That takes care of time drift—even when the phone is only sporadically connected to the internet—making TOTP more appealing.

Which time?

One of the first questions that comes up about TOTP is the effect of timezones. What happens when a user sitting in New York computes a TOTP that is submitted to a service in California for verification? In this case client and server are separated by three timezones. At first glance, since they disagree on the current local time, it looks like time-based OTP would break down in this model. Luckily that is not the case: as with most protocols relying on an accurate clock, TOTP calls on both participants to use an absolute frame of reference, namely UNIX epoch time. Defined as the number of seconds elapsed since midnight January 1st, 1970 in the Greenwich timezone, it does not depend on current location or daylight-saving adjustments. That means TOTP generators only need to worry about having an approximately correct clock, a problem that is easily solved when the 2FA app runs on a mobile device with internet connectivity, periodically checking a server in the cloud for authoritative time.
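In code, TOTP is just HOTP keyed on quantized epoch time; a sketch using the common 30-second interval:

    import hmac, hashlib, struct, time

    def totp(seed: bytes, interval: int = 30, digits: int = 6) -> str:
        """RFC 6238: HOTP where the counter is quantized UNIX epoch time."""
        counter = int(time.time()) // interval      # identical in every timezone
        mac = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10**digits).zfill(digits)

A client in New York and a server in California compute the same counter, because both count seconds from the same 1970 reference point.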

Daylight saving time considered harmful

Never underestimate the ability of the real world to throw a wrench in the plans. In 2011 Russia announced that it would not adjust back from daylight saving in the fall:

“President Medvedev has announced that Russia will not come off daylight saving time starting autumn 2011. Medvedev argued that switching clocks twice a year is harmful for people’s health and triggers stress.”

While the medical profession may continue to debate the effects of switching clocks on the general population, this change caused a good deal of frustration for software engineers. Typically the “local time” displayed to users is determined by starting from a reference time such as GMT and making adjustments for the local timezone and seasonal factors such as daylight saving. In the case of the Android operating system, those adjustments were hard-coded. When Russia did not switch back from daylight saving time as scheduled, those devices would end up displaying the “wrong” time, even with perfectly accurate internal clocks.

In principle, this is only a matter of updating the operating system to follow the new, health-conscious Russian regime. In reality, updating Android devices in the field has been a public quagmire of indifference and mutual hostility among device manufacturers, wireless carriers and Google. While the picture has improved drastically with Google moving to assert greater control over the update pipeline, in 2011 the situation was dire. Except for the handful of users on “pure” Google-experience devices such as the Nexus S, everyone else was at the mercy of their wireless carrier for software updates, and those carriers were far more interested in locking users into 2-3 year contracts by selling another subsidized device than in supporting existing units in the field. Critical security updates? Maybe, if you were lucky. Bug fixes and feature improvements? Forget about it.

Given that abject negligence from carriers and handset manufacturers, what is the average user to do when their phone displays 3 o’clock while everyone else is convinced it is 4 o’clock? This user will take matters into their own hands and fix it somehow. The “correct” way to do that is shifting the time zone over by one, effectively going from Kaliningrad to Moscow. But this is far from obvious: the more intuitive fix given this predicament is to manually adjust the system clock forward by an hour. (One soon discovers that automatic time adjustments must also be disabled, or the next check-in against an authoritative time server on the internet will promptly restore the “correct” time.) Problem solved: the phone now reports that the local time is 4 o’clock, as expected.

Off-by-one (hour)

Except for the unintended interaction with two-factor authentication, specifically TOTP, which uses the current time to generate temporary codes. Overriding the system clock shifts the UNIX epoch time too. Now the TOTP generator is being fed from a clock with a full one-hour skew. Garbage in, garbage out. Most TOTP implementations will try to correct for slight clock drift by checking a few adjacent intervals around the current time, where each “interval” is typically 30 or 60 seconds. But no sane deployment is going to look backward or forward as far as one hour, on the assumption that if your local clock is that far off, you are going to have many other problems.
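
Server-side, that tolerance typically looks something like the following sketch, reusing the hotp() helper from the earlier example. (The one-interval skew is an assumption; deployments tune this parameter.)

```python
import hmac, time

def verify_totp(secret: bytes, submitted: str,
                step: int = 30, skew: int = 1) -> bool:
    # Accept codes from the current interval plus `skew` adjacent
    # intervals on either side, tolerating modest client clock drift.
    now = int(time.time()) // step
    return any(hmac.compare_digest(hotp(secret, now + delta), submitted)
               for delta in range(-skew, skew + 1))
```

With skew=1 and 30-second intervals, a client clock up to about a minute off still works; a one-hour skew is 120 intervals away and fails every time.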

Sure enough, reports started trickling in that users in Russia were getting locked out of their Google accounts because the 2FA codes generated by Google Authenticator on Android were not working. (In this blogger’s recollection, our response was adding a special-case check for a handful of intervals around the +1 hour mark measured from the current time—not all intervals between now and the +1 hour mark. This has the effect of slightly lowering security by increasing the number of “valid” OTP codes accepted, since each interval typically corresponds to a different OTP code, modulo collisions.)
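
A hypothetical reconstruction of that workaround (this is not Google's actual code, just the shape of the fix described above):

```python
def verify_totp_dst(secret: bytes, submitted: str, step: int = 30) -> bool:
    # Usual +/-1 window, plus a small cluster of windows around the
    # +1 hour mark -- but NOT the ~120 intervals in between, which
    # would expand the set of acceptable codes far more.
    now = int(time.time()) // step
    hour = 3600 // step
    offsets = [-1, 0, 1, hour - 1, hour, hour + 1]
    return any(hmac.compare_digest(hotp(secret, now + delta), submitted)
               for delta in offsets)
```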

Real-world deployments have a way of rudely bringing about the “impossible” condition. Until this incident, it was commonplace to assert that the reliability of TOTP-based two-factor authentication is not affected by time zones or quirks of daylight saving time. In a narrow sense that statement is still true, but it would have been no consolation to the customers in Russia locked out of their own accounts. Looking for a place to point fingers, this is decidedly not a case of PEBKAC. Confronted with an obvious bug in their software, those Android users picked the most intuitive way of solving it. Surely they are not responsible for understanding the intricacies of local-time computation or the repercussions of shifting epoch time. Two other culprits emerge. Is the entire Android ecosystem to blame for not being able to deliver software upgrades to users, a problem the platform is still struggling with today? After all, the announcement in Russia came months ahead of the actual change, and iOS devices were not affected to the same extent. Or is the root cause a design flaw in Android, namely hard-coded rules about when daylight saving time kicks in? This information could have been retrieved from the cloud periodically, allowing the platform to respond gracefully when the powers-that-be declare DST a threat to the well-being of their citizenry.

Either way this incident is a great example of a security feature going awry because of decisions made in a completely different policy sphere affecting factors originally considered irrelevant to the system.

CP

 

Bitcoin and the ship of Theseus

Change and identity in a decentralized system

The Ship of Theseus is a philosophical conundrum about the continuity of identity in the face of change. Theseus and his ship sail the wide-open seas. Natural wear-and-tear takes its toll on the vessel, requiring its components to be replaced gradually over time. One day it is a few planks in the hull, the next season one of the masts is swapped out, followed by the sails. Eventually there comes a point where not a single nail or piece of fabric is left from the original build, and some parts have been replaced several times over. But unbeknownst to Theseus, a mysterious collector of maritime souvenirs has carefully preserved every component from the original ship taken out during repairs. (This is a variation on the original paradox, due to Hobbes.) In what may be the first case of retro design, this person meticulously reassembles the original components into their original configuration. The riddle on which much ink has been spilled: which one is the true ship of Theseus? The one that has been sailing the seas all this time, or the carefully restored one in the docks, which contains every last nut, bolt and rope from the original?

Bitcoin has been confronting a version of this riddle, most acutely during a few days in March when a hard-fork of the network appeared imminent. To recap: Bitcoin is a distributed ledger recording transactions and ownership of funds. This ledger is organized into “blocks,” with miners competing to tack new blocks onto the ledger, one block on average every 10 minutes. The catch is that there is a limit on the size of blocks, which constrains how many transactions can be processed. Currently that stands at 1 megabyte—a gratuitous and arbitrary limit which may have seemed generous back in 2010 when it was first introduced, with plenty of spare room left in blocks. But kicking the can down the road ends exactly as one would expect: the increasing popularity of the network all but guaranteed that the ceiling would be hit. The usual results of scarcity followed: transactions became both slower and more expensive. The expected time for a transaction to appear in a block increased, and the fees paid to miners for that privilege skyrocketed.
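
A back-of-the-envelope calculation shows how tight that ceiling is. (The average transaction size here is an assumption for illustration; real transactions vary widely.)

```python
BLOCK_SIZE = 1_000_000       # bytes: the 1 MB cap
AVG_TX_SIZE = 250            # bytes: assumed average transaction
BLOCK_INTERVAL = 600         # seconds: one block per ~10 minutes

tx_per_block = BLOCK_SIZE // AVG_TX_SIZE         # ~4,000 transactions
tx_per_second = tx_per_block / BLOCK_INTERVAL    # ~6.7 tx/sec, globally
```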

It is clear the situation calls for some form of scaling improvement. But there is no governance framework for Bitcoin. A system marvelously effective at bringing about distributed consensus at the technology level—everyone agrees on who owns what and which payments were sent—turns out to be terrible at producing consensus at the political level among its participants. Even the existence of a scaling crisis has been disputed. Some argue that Bitcoin excels as a settlement layer or store of value, and there is no reason to increase its on-chain capacity to handle everyday payment scenarios.

After much internecine fighting and several false starts, the community coalesced into two opposing camps. One side, mobilized under the Bitcoin Unlimited (BU) banner, seeks to increase block size with a disruptive change, ratcheting up the arbitrary 1MB cap to some other, equally arbitrary but higher limit. On the other side is a group pushing for segregated witness, a more complex proposal that solves multiple problems (including transaction malleability) but conveniently has the side-effect of providing an effective capacity increase. At the time of writing, this controversy remains deadlocked. Segregated witness must reach roughly 95% miner support to activate. It has stalled at around 30%, prompting supporters to give up on miners and seek an alternative approach called a user-activated soft-fork, or UASF. Meanwhile BU has been plagued by code-quality problems and DDoS attacks.

At some point in March, miner support for BU was hovering dangerously close to the magic 50% mark. If that threshold is crossed, those miners could realistically start producing large blocks. While anyone can mine a large block at any time, such blocks would be ignored by miners following the 1MB limit; it makes no sense to start producing them when they are only recognized by a minority. Such additions to the ledger would be quickly crowded out and discarded in favor of alternative blocks obeying the 1MB restriction. But suppose BU exceeds 50%. (With some safety margin thrown in; otherwise there is the risk of block reorganization, where the original chain catches up and the entire history of big blocks gets overwritten.) What happens if BU miners in the majority start producing those large blocks?
This was the question on everyone’s mind in March. It would result in a split or “forking” of the Bitcoin ledger. Instead of one ledger there would be two parallel ledgers, maintained according to different rules. All blocks up to the point of the fork would be identical: if you owned 1 bitcoin, you still have 1 bitcoin according to both ledgers. But new transactions after the fork point can diverge, appearing on only one ledger. In effect two parallel universes emerge, where the same funds are owned by different people.

That brings us full-circle to the philosophical riddle of Theseus: which one of these is “Bitcoin”? Major cryptocurrency exchanges opted for a pragmatic answer: the original chain is Bitcoin-proper. According to this interpretation, the alternate ledger with large blocks will be considered an alternative cryptocurrency traded under its own ticker symbol BTU. (Reuse of the acronym for “British Thermal Unit,” a measure of heat, provides unintended irony for those who consider BU to be a dumpster-fire.)

While that tactical response addressed the uncertainty in markets, it did not provide a coherent definition of what exactly counts as Bitcoin. In effect, the signatories were declaring that the chain with the large blocks would be relegated to “alternate coin” status regardless of hash power. For a system whose security is derived from miners’ hash power, a blanket declaration of the irrelevance of hash power is extraordinary. Arguably the harshest criticism came from left field: the Ethereum community. Ethereum itself had taken flak in the past for taking exactly the same stance of ignoring miner choices during the 2016 DAO bailout. Orchestrated by the Ethereum Foundation and widely panned as crony capitalism, that intervention resulted in a permanent split, with a minority chain, “Ethereum Classic,” continuing at roughly 10% of hash power. In a case of Orwellian terminology, this alternative chain is in reality the true continuation of the original Ethereum blockchain; what is now referred to as “Ethereum” incorporates the deus ex machina of the DAO intervention. But from a governance perspective, the most troubling aspect of the DAO debacle concerns how the legitimacy of the fork was ordained. When faced with a faction of the community expressing doubts about the wisdom of intervention, the Foundation insisted that regardless of what miners did, the branch reversing the DAO theft would become the official Ethereum branch. Such a priori declarations of the “correct chain” go against the design principle of miners providing integrity of the blockchain through costly, energy-intensive proof-of-work. Why waste all that electricity if you can just ask the Ethereum Foundation what the correct ledger is? In anointing a winner of the hard-fork by fiat, without regard for hash power, the Bitcoin community exhibited precisely the same disregard for market preferences.

Once unmoored from the economics of hash power, arguments about which chain is “legitimate” quickly devolve into philosophical questions about identity. If you hold that the 1MB block size is the sine qua non of Bitcoin, then any hard-fork modifying that property is by definition not Bitcoin. Yet some of the same individuals arguing that a change to 2MB blocks would result in a complete loss of identity have also proposed changing the proof-of-work function, after evidence emerged suggesting that mining hardware from a particular vendor may be exploiting a quirk of the existing PoW function to gain an edge. Changing the PoW is arguably a far more disruptive change than tweaking block size.

So it remains an open question what exactly defines Bitcoin, and to what extent the system can evolve over time while unambiguously retaining its identity as Bitcoin. Is it still Bitcoin if block sizes are allowed to increase based on demand? If the proof-of-work function is replaced by a different one? Or if the environmentally wasteful proof-of-work model is abandoned entirely in favor of a proof-of-stake approach? What if the distribution of coinbase rewards is altered to decrease continuously instead of having abrupt “halving” moments? Or, to take a more extreme example, what if the deflationary model with money supply capped at 21 million bitcoin is lifted, allowing the money supply to expand indefinitely? Is it still “Bitcoin,” or does that system deserve to be relegated to alt-coin status with an adjective attached to its name? A related question is who gets to make the branding determination. When Ethereum went through its hard-fork to bail out the DAO, it was the altered chain that was bestowed with the privilege of carrying the Ethereum name; the original, unmodified chain was relegated to second-class status as “Ethereum Classic.” Would the situation have been reversed if the Ethereum Foundation had instead opposed intervention, and the hard-fork had been driven by a grassroots community effort to rescue the DAO at all costs?

Having been pronounced dead multiple times, Bitcoin continues to defy the odds. It is already up more than 50% against the USD for 2017 at the time of writing, having survived a crackdown on exchanges in China driven by capital controls. Yet the contentious scaling debate shows no signs of slowing down. BU proponents continue to lobby for a disruptive hard-fork, while segregated-witness adherents play a game of chicken with user-activated soft-forks. Will the resulting system—or one of the resulting systems, in case the contentious fork results in a proliferation of incompatible blockchains—still qualify as “Bitcoin”? Beyond the crisis du jour, it remains unclear whether Bitcoin is capable of improving by incorporating new ideas, especially when those ideas call for a disruptive change that breaks backwards compatibility. If the community interprets every hard-fork as an identity crisis that calls into question the meaning of “Bitcoin,” the resulting stasis will place Bitcoin at a disadvantage compared to alternative cryptocurrencies that are more responsive to market demand. (To wit, the so-called “Bitcoin dominance index,” which measures the market capitalization of BTC as a fraction of all cryptocurrencies, is now at an all-time low, having dipped below the 50% mark.) There is something to be said for stability and consistency: whimsical changes and excessive interventionism of the type demonstrated during the Ethereum DAO hard-fork do not inspire confidence in the long-term reliability of a currency either. But Bitcoin so far has stubbornly occupied the opposite end of the spectrum, clinging to a literal, originalist interpretation of its identity as defined by Satoshi.

That is one way of dodging the paradox of Theseus: this ship may be taking on water, but at least every single one of its planks is original.

CP