The optimal Ethereum heist: attacking the Parity wallet (part III)

[continued from part II]

To recap: there are several mysterious questions around how the attacker(s) who discovered a critical vulnerability in a popular Ethereum smart-contract went about exploiting that flaw to steal funds:

  • They exploited only some of the vulnerable contracts, even though all targets were equally easy to locate
  • They skipped contracts holding larger balances in favor of lower-value targets

At first, this does not seem consistent with an attacker driven by profit. Armed with a 0-day exploit, the optimal strategy is to systematically plunder the richest targets first, on the assumption that once people get wind of the vulnerability, they will race to defend their contracts, reducing the chances of successful exploitation.
But on second thought, the attacker may have been operating under another constraint: potential PR backlash against the theft. Consider the three contracts that were targeted:

  • SwarmCity: a self-described “decentralized commerce platform” which raised funds with a sale of its SWT tokens.
  • Edgeless Casino: an online gambling website operating on the Ethereum blockchain, which crowdfunded itself by issuing EDG tokens.
  • æternity: a platform for “scalable smart contracts interfacing with real world data” according to its website. The development of this project was also funded by, you guessed it, an initial coin offering. [ https://www.coinstaker.com/initial-coin-offering/aeternity ]

Compare this to some examples of wallets that were left untouched in spite of having significant holdings:

  • BAT: Basic Attention Token, affiliated with the Brave browser project. Brave aims to shift the dominant revenue model for websites away from advertising (which leads to a race to the bottom in privacy, with increasingly invasive data collection on all users) and towards voluntary contributions powered by micro-payments.
  • ICONOMI: A meta-platform for managing other cryptocurrency assets
  • cofound.it: this one describes itself with a buzzword salad: a “distributed global platform that connects exceptional startups, experts and investors worldwide”

There are at least three theories on why this group was spared while the first, unlucky group was exploited, and all of them hinge on the same premise: the possibility of a hard-fork that would reverse all theft transactions.

Recall that this drama played out once before: the DAO contract had raised close to $150M in ether at the prevailing exchange rates when it was successfully attacked, with the perpetrator walking away with $80M of those funds collected from investors. Or more precisely, they would have walked away with that tidy sum were it not for the Ethereum Foundation stepping in with a deus ex machina. In an ironic echo of the 2008 crisis which partly inspired Satoshi’s development of Bitcoin (too-big-to-fail institutions on the verge of collapse, bailed out by intervention-happy regulators eager to rescue the ecosystem at all costs) the Foundation engineered the blockchain version of historical revisionism, returning stolen funds back to the DAO. At least that was the objective, but it did not exactly result in a clean undo. While the majority of hash power went along with this act of deliberate tampering with the ledger, a splinter faction continued running the original version, which became the altcoin Ethereum Classic.

That history raises an important question for any enterprising criminal contemplating large-scale mayhem on the Ethereum blockchain: what is the threshold for a hard-fork? At what point does the Foundation deem a particular address too big to fail? Is there a version of the Federal Reserve “systemically important financial institution” criteria that decides which ones merit a bail-out in case of security breaches? There are three plausible theories:

  1. Value-at-risk. The DAO breach resulted in the loss of $60M before the funds were returned with the bailout. The Parity attack netted ~$30M, suggesting the attacker stopped about halfway to that previous mark. But these numbers are deceptive, because the price of Ether in USD has appreciated more than 10x since the DAO debacle. Measured in the native currency, the current theft is dwarfed by the DAO.
  2. Public perception of the affected entities. The Brave browser presumably enjoys grass-roots support, because it is an underdog battling commercial behemoths (MSFT IE/Edge & Google Chrome) on behalf of users in a bid to improve user privacy online. This is exactly the type of project everyone wants to cheer on to a successful launch. Edgeless Casino, by contrast, is a dodgy gambling site sitting in a murky area of regulation; no one will shed a tear over its losses. Taking away money used to fund Brave development would likely result in a public outcry. Robbing the casino would merit a shrug, or even inspire schadenfreude.
  3. Old-school crony capitalism: only individuals with close connections to the Ethereum Foundation get bailed out. Unlike the first two theories, where public information about wallet owners can be used to make an informed assessment, this relationship is difficult for an attacker to gauge ahead of time.

Either way, the attacker succeeded by these criteria. No one is seriously contemplating a hard-fork to reverse this particular theft. (In fairness, the DAO had a built-in safeguard that stopped funds from moving for several weeks after the theft. That feature greatly simplified the intervention: because the funds could not move around or taint other addresses, the blockchain edits required to undo the theft were limited to a few transactions. By comparison, the Parity wallet thief is free to move funds around. Correctly undoing the breach may involve reversing not only the original theft transactions but also every other transaction dependent on them, in a ripple effect.)

So it turns out that crime on the Ethereum blockchain does pay after all—but only when the perpetrator has the good sense to stop short of crossing the line that would trigger a hard-fork, even if that means deliberately following a seemingly “suboptimal” attack strategy. The optimal Ethereum theft is one that walks away with an amount just below the threshold of value/significance/popularity that would invite intervention by means of a hard-fork.

CP


The optimal Ethereum heist: attacking the Parity wallet (part II)

[continued from part I]

A suboptimal heist?

Looking at how the attacker exploited the vulnerability shows that this was far from an elegant operation. First, notice the delays between the two calls made to each vulnerable contract. Here is another look at the pairs of calls used to exploit three different vulnerable wallets:

[Screenshot: pairs of calls used to exploit the Parity multi-signature wallet vulnerability against three different contracts]

In the first attack, there is a gap of 10 Ethereum blocks between the calls, corresponding to ~3 minutes at current block frequency. For the second attack, the calls are separated by 7 blocks. For the final victim, they are only 2 blocks apart, suggesting that the perpetrator improved their execution over time. Still, these wide gaps suggest that at least the first two attacks were being crafted manually, likely using an interactive REPL such as the geth Javascript console, which allows invoking arbitrary methods on any contract. In principle there is no reason to wait between calls: as long as a miner processes them in the correct order, they could even take place in the same block. Even if the perpetrator was proceeding cautiously, waiting to confirm that the first step had achieved the intended effect, there is no reason to wait more than one block before pulling the trigger on the second one. (Note that unlike Bitcoin, the Ethereum network does not have a perennial backlog of transactions or arbitrary fee hikes, unless an ICO is going on. Block timestamps are a close proxy for the attacker’s actual moves.) In other words, the attacker did not bother to automate the exploit by writing a script to make the calls in quick succession. Each victim appears to have been exploited with a hand-crafted sequence of calls.

That brings up the second mystery: choice of targets. The first successful attack is followed by several hours of inactivity, two more attacks in quick succession and finally, complete silence. There is more activity from the attacker address in the days following the heist, but those transactions all distribute the stolen ether into other addresses, likely in preparation for cashing out. There are no other instances of the pair of calls to a vulnerable contract. While it is conceivable this actor switched accounts to better cover their tracks, there are no other reported thefts besides the original attack and the white-hat response. Assuming the white-hat group is indeed a distinct actor and not simply another front for the attacker, the surprising conclusion is that the perpetrator operated on a leisurely schedule and stopped after targeting just three vulnerable contracts.

The next mystery is the 12-hour pause between the first and second attacks. That initial salvo netted a respectable 26,793 ETH in spoils, over $5M. But the perpetrator was clearly not content to rest on those laurels, as evidenced by the two encores that followed. Why wait that long? Zero-day exploits have “half-lives,” especially when their effects are noisy; and it is difficult to imagine a noisier demonstration of an exploit than burglarizing large sums on a public blockchain. Once the first vulnerable contract is hit, the clock starts ticking for everyone else to rediscover that vulnerability independently. In general such rediscovery poses two problems for the offense. First, other crooks will get into the game, racing the original finder to exploit remaining vulnerable targets. Second, defenders will be alerted to the existence of a vulnerability. They can craft a patch that closes the window of exploitation for everyone.

Granted, that second response only applies if it is possible to upgrade an application in place without losing its state. While this is true for applications running on desktop and mobile platforms (you can patch your web-browser or operating system with nothing more than a couple of minutes lost rebooting) it is not true for all platforms. For instance, smart-card architectures compliant with the Global Platform standard have no concept of “upgrading” an existing application: applets can only be deleted and reinstalled, with all associated data lost in the process. (And that is a security feature: it protects applications from malicious updates, even by the software publisher.) It turns out Ethereum contracts are far more similar to smart-card applets than they are to vanilla Windows apps: Ethereum has no concept of replacing the code behind a contract with new code. In fact such a capability would run counter to the ideal of immutable contracts. If the code governing a contract can be modified after the fact, it cannot be counted on to behave according to predefined rules in perpetuity. But all is not lost for the defenders: in an echo of the proverb “if you can’t beat them, join them,” white-hats can co-opt the exploit, using it preemptively to rescue funds from vulnerable contracts. This is exactly what happened during the DAO attack and again with the Parity wallet in this case.

Bottom line: once the existence of a vulnerability becomes public knowledge, the odds of successful exploitation for any one actor decline over time. (Paradoxically, the chances that any given vulnerable target will be exploited go up; there are many more threat actors armed with the required capabilities, but they are all competing against each other for the same pool of victims.) A rational attacker aware of these dynamics would pick off as many targets as possible in the shortest time, to preempt the competition. Instead there is a 12-hour self-imposed ceasefire after the first successful exploit and only two more thefts after that. Why?

It is certainly not the difficulty of locating vulnerable contracts that held up the attacker. All vulnerable wallets share the exact same code; they differ only in the constructor arguments specified at creation time. That means a standard blockchain explorer can help locate other contracts sharing the same code. For example, Etherscan has a web interface for searching out contracts similar to one specified by address:

[Screenshot: locating contracts with identical or very similar code to a given contract]

Given a security flaw in a widely used smart-contract, locating all instances of that vulnerable contract is shooting fish in a barrel. So why stop with three? The white-hat group has allegedly rescued 375K+ ether, more than twice the ~150K ETH the perpetrator managed to purloin. That is a lot of money left on the table.

Even more puzzling, the wallets attacked were not even the ones with the highest balances. Given the increasing probability of rediscovery over time, the optimal strategy is to go after the wallets with the highest balances first. But among the contracts rescued by the white-hat group are three with balances of 120K, 56K and 47K ETH, each one alone higher than the take from the first victim. Why pass on these more lucrative targets when the same level of effort redirected elsewhere (doing nothing beyond copy/pasting a different contract address into the exploit sequence) could have yielded higher returns?

By all indications, the execution of this attack looks sloppy:

  • No automation for delivering the exploit
  • Long pause between the first and second wave, giving defenders precious time to organize a counter-strike
  • Worst of all, suboptimal choice of victims that fails to maximize profit given complete freedom to choose targets

Yet viewed in another light, that last one may have been a deliberate decision to optimize for a different criterion: the chances of actually getting away with the theft and walking away with the ill-gotten gains.

CP

[continued]

The optimal Ethereum heist: attacking the Parity wallet (part I)

On Jul 19, news started circulating on social media that a critical vulnerability existed in the Parity multi-signature wallet. This was quickly followed by an even more startling update: the vulnerability was being actively exploited in the wild to steal funds from vulnerable wallets. By the time the dust settled, more than $30M (at the prevailing ETH/USD exchange rates) had been stolen. Meanwhile a “white-hat” group emerged, racing to use the exploit themselves to rescue another $85M held in other vulnerable contracts from another strike by the perpetrator, sowing further confusion about who exactly the good and bad guys are in this episode. That would have been unprecedented in any other setting, akin to a vigilante neighborhood-watch group preemptively looting a bank to secure its funds, knowing that a crew of professional bank-robbers was going around hitting other branches. (Except it already happened once before in Ethereum: the summer of 2016 witnessed another self-appointed white-hat coalition deliberately exploiting a known smart-contract vulnerability to rescue funds remaining in the DAO during another high-profile Ethereum contract debacle.) This post examines the nature of the vulnerability, the modus operandi used by the attacker and some unresolved questions around why they did not maximize the monetary gains from this exploit.

Reverse engineering the vulnerability

While the Parity team put out an emergency PSA, they did not disclose the nature of the vulnerability, perhaps out of the mistaken notion that doing so would facilitate additional attacks. (Ironically they also committed the fix to a public repository on GitHub, unwittingly 0-daying themselves.) But it turns out that the vulnerability was so blatant it would have been easy to identify from the pattern of exploits. Let’s start with one of the first pieces of information that emerged during this episode: the thief hit an Ethereum contract at address 0xbec591de75b8699a3ba52f073428822d0bfc0d7e and siphoned funds to his/her own address at 0xb3764761e297d6f121e79c32a65829cd1ddb4d32. This turns out to be all the information required to work backwards and reverse engineer the bug.

Looking at the transactions originating from the attacker address around this time, we see two function calls into the victim contract, closely spaced in time. In fact the same pattern is repeated for two other vulnerable contracts that were exploited:

[Screenshot: the attacker’s transactions in a blockchain explorer]

The “value” is displayed as 0 ether in this blockchain explorer, which is misleading: it means that no funds were transferred from the attacker to the victim as part of making this function call into the smart-contract. (That makes sense; when you are trying to rob an establishment, you usually do not send them more funds.) But if we were to look at the victim view, we would find that the second, later transaction in fact resulted in the vulnerable contract transferring 82,000 ETH to the attacker, more than $15M at prevailing exchange rates. Not bad for two function calls:

[Screenshot: the victim contract’s transaction history in a blockchain explorer]

Across all three victims, the coup de grâce is delivered in this second call, resulting in transfer of funds from the targeted contract to the attacker. We will keep this in mind while diving into the call details.

Examined in a blockchain explorer, the second call turns out to be a call to the execute() method of the contract. If the function arguments look familiar, that’s because the first one is the Ethereum network address of the attacker. The second one is the amount in wei to transfer to that destination address:

[Screenshot: decoded arguments of the execute() call]

Why this call succeeded in transferring funds is mysterious. The source code clearly designates the function with the modifier “onlyowner,” suggesting that the developer intended for the function to be only callable by one of the contract owners. Surely the attacker is not already an owner of the contract? (Otherwise this is just an ordinary legal dispute involving insider malfeasance, not a critical vulnerability in the contract logic.) Of course it is one thing to intend for that outcome, another to achieve it; machines can only execute code, not good intentions. But looking at the implementation of the modifier and following the call chains, everything appears to be in order. By all indications, if the caller is not one of the contract owners, the execute() function will not actually execute.

Solving this puzzle requires going back to the function call preceding the theft. We can theorize that the first call is a case of “prepping the battlefield,” placing the contract into a vulnerable state such that the second call will succeed. Looking at the details in a blockchain explorer provides the missing clue:

[Screenshot: decoded arguments of the initWallet() call]

That initWallet() function is supposed to be used for initializing the contract when it is originally created. It is called by the constructor and records the set of owners, the quorum required to authorize funds transfers and the daily withdrawal limit. Looking at the parameters, there is that familiar attacker address again, 0xb3764761e297d6f121e79c32a65829cd1ddb4d32, at parameter #5. There is also the number 1 passed in as an argument. So we can posit that the attacker called this function to overwrite the contract state, listing that address as the sole owner and indicating that approval from just one owner is enough to authorize the release of funds. That explains why the second call succeeded: by the time the “onlyowner” modifier was checked to authorize the funds transfer, ownership information had already been corrupted.

So there is the vulnerability: a sensitive function that should have been callable only internally, and only during initialization of the contract, was left exposed to external calls from anyone on the blockchain. In fairness the reason for that unexpected reachability is subtle: the Parity wallet is structured as a thin wrapper that delegates the bulk of the implementation to a much larger, shared wallet library. The vulnerable function is part of that shared library and cannot be invoked directly. But the fallback method in the outer wrapper forwards arbitrary calls to the library, effectively exposing even seemingly “internal” methods. Calling that particular function allows overwriting the ownership information, effectively redefining who controls the funds managed by the contract. Sure enough, a commit to the public repo on GitHub shortly after the announcement confirms this theory: the fix adds a new modifier to protect the function from being called a second time after the contract is already initialized.
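To make the failure mode concrete, here is a minimal Python model of the wrapper/library structure described above. This is purely an illustration under simplified assumptions; the real contracts are written in Solidity and the class and method names here are invented for the sketch:

    # Minimal model of the Parity bug: a wrapper forwards unrecognized calls
    # to a shared library whose initWallet() can be invoked again after setup.
    class WalletLibrary:
        def __init__(self):
            self.owners, self.required = [], 0

        def init_wallet(self, owners, required):
            # BUG: nothing prevents this from running again post-deployment
            self.owners, self.required = owners, required

        def execute(self, caller, to, value):
            if caller not in self.owners:     # the "onlyowner" check
                raise PermissionError("caller is not an owner")
            return f"sent {value} wei to {to}"

    class Wallet:
        def __init__(self, owners, required):
            self.lib = WalletLibrary()
            self.lib.init_wallet(owners, required)  # intended one-time call

        def fallback(self, method, *args):
            # forwards ANY unrecognized call to the library, exposing
            # seemingly "internal" methods to the outside world
            return getattr(self.lib, method)(*args)

    wallet = Wallet(owners=["alice", "bob"], required=2)
    wallet.fallback("init_wallet", ["mallory"], 1)  # step 1: seize ownership
    print(wallet.fallback("execute", "mallory", "0xmallory", 82_000))  # step 2: drain

Note how nothing in the model, or on the real blockchain, forces the attacker to wait between the two calls; they can be issued back-to-back.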

Making a solid case for bad language design

So that is the immediate cause. But as with most vulnerabilities, it is more instructive to search for a systemic root cause: intrinsic properties of the system that made this class of error likely. Absent root causes, every vulnerability looks like bad luck: someone, somewhere made an unfortunate error or committed an oversight that they promise will never happen again. In this case a closer look at Solidity, the programming language used for authoring smart-contracts, suggests that some design choices in the language increased the likelihood of these errors. Specifically, Solidity allows all methods in a contract to be invoked publicly by default. This fails the criterion of being secure by default: public methods are exposed to hostile inputs from anyone with access to the blockchain, in the same way that a service listening on a network port increases attack surface. Parity developers were also quick to blame Solidity in their own post-mortem:

Fourth, some blame for this bug lies with the Solidity language and, in its current incarnation, the difficulty with which one can understand the execution permissions over functions. […] We believe one or both of two ideas would help. One would be to change the default access mode of functions to “private”, rather than the eminently insecure “public”.

Ironically, the bug was introduced during a refactoring, which moved the initialization code out of the constructor and into its own function. Refactoring is “a disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior.” It turns out that in Solidity, repackaging a few lines of code into a separate function does alter its semantics: those lines of code can become externally reachable outside of the original call path. (Note that it is not possible to invoke the constructor twice; there was an implicit guarantee of one-time execution in the original version that is missing from the rearranged code.)

Language design aside, the contract itself contains some questionable logic. First, notice the complexity around managing ownership: this contract allows dynamically modifying the ownership of an existing contract, voting out members or introducing new ones. How often is that logic exercised in the wild? Is it worth the complexity? If it is an edge case, there are much simpler solutions: since the quorum for membership changes is identical to that for authorizing funds transfers, why not simply launch a contract with the new membership and transfer all the funds there? Adding insult to injury, Solidity has no notion of “constantness” for object fields, the way the “const” modifier can be applied to C++ members or “final” to Java fields. Confusingly, there is a concept of constant state variables, but such fields must be initialized at compile time with immediate values, in contrast to the more powerful C++ version where they can be initialized using variable inputs supplied at construction time. That makes it difficult to express the notion that specific contract properties, such as the ownership list, are immutable for the lifetime of the contract once constructed.

Given this background on why these smart-contracts were vulnerable to theft, the next post will look at some curious details around how attackers capitalized on the flaw for profit.

CP

[continued]

Bitcoin for the unbanked: receding possibilities

In October 2016 the former COO of the Chinese Bitcoin exchange BTC-C generated plenty of controversy with a nonchalant tweet that began with this casual statement:

“Bitcoin isn’t for people that live on less than $2 a day.”

Casually dropped in the midst of one of those interminable arguments about Bitcoin scaling playing out on social media, this statement was appalling for its implied premise. It was a complete 180-degree turn from the rhetoric surrounding the rise of cryptocurrency, with Bitcoin always the torch-bearer. Virtual currencies were liberating forces, we were told, breaking the hold of entrenched institutions on finance, institutions that rigged markets and debased currencies. Arrayed against the forces of progress was a rotating cast of villains, different for each portrayal but largely interchangeable in terms of their unfair position: the Federal Reserve, too-big-to-fail banks requiring bailouts, government agencies asleep at the wheel clinging to outdated regulations or, worse, enabling those bailouts in a case of regulatory capture, the Visa/MasterCard duopoly controlling online payments. Bitcoin was going to be the new money system for everyman, unequivocally on the side of David against the faceless institutional Goliath. No need for banks, credit-card networks, payment processors, money-transmitter licenses. Anyone with an inexpensive smartphone running a Bitcoin wallet application could send and receive money directly to and from anyone else anywhere in the world. No army of middle-men circling for their cut of the transaction, no gatekeepers to decide who is worthy enough to partake in this network. Also no need to worry about the debasement of hard-earned currency by an out-of-control printing press operated at the behest of unaccountable bureaucrats.

Rhetorical excesses aside, there were plenty of solid use cases for Bitcoin in the early days suggesting that it could help the so-called “unbanked”: more than a billion people living in developing nations without access to financial services, subsisting strictly on a cash economy. No bank accounts, no way to write checks or swipe credit-cards for purchases, no credit history, no way to finance large purchases. In Africa, nascent systems such as M-Pesa had already demonstrated that one could leap-frog from a standard cash economy to using smartphone wallets directly. In a strange twist, these countries skipped entire generations of earlier payment systems (check clearing, ACH, wires, PIN debit & credit networks) and went straight to mobile wallets, while US tech companies struggle and flounder in their attempts to bootstrap mobile payments. Yet M-Pesa is still a centralized technology operated by the wireless carrier Vodafone. It is crucially dependent on the coverage of retail infrastructure, specifically kiosks where consumers can go to exchange cash for M-Pesa credits.

Bitcoin offers some compelling advantages over this model. With a decentralized system, no single actor has enough leverage to squeeze the ecosystem for additional profits. Safaricom can raise M-Pesa transaction fees arbitrarily. The closest analog to a pricing cartel in Bitcoin is the miners, but they face relentless pressure to keep fees competitive: if one miner decides to charge too much for mining Bitcoin transactions into the ledger, a more efficient one will come along and gladly collect the fees from those transactions. While consumers still need currency exchanges to convert between BTC and local fiat money, that function can be served by private vendors competing in a transparent market, instead of resting under the complete control of Safaricom/Vodafone. For those concerned about monetary policy, BTC offers an attractive, intrinsically deflationary model. The government of Kenya can churn out shillings and Safaricom can flood the market with M-Pesa credits just as easily as Hasbro can print more Monopoly money. But no one can fabricate bitcoins out of thin air and cause runaway inflation in BTC prices.

Given all of these factors, it is not too hard to see why Bitcoin circa 2012 looked promising for developing nations. It is another case of the surprising last-mover advantage: instead of playing catch-up with developed countries by painstakingly building out “legacy” payment rails—check clearing, card networks, eventually NFC payments—emerging markets can leapfrog straight to the latest paradigm of virtual currency.

It did not work out that way. A quick look at Bitcoin exchange statistics reveals that the majority of trading in Bitcoin is concentrated in a few markets with existing, highly-developed financial systems, not the emerging markets home to millions of unbanked consumers. (China is arguably the rare exception, because Bitcoin was one of the few options around capital controls in place to prevent currency outflows, but that volume evaporated almost overnight after the central bank cracked down on exchanges.) It is not too difficult to see why the rosy picture painted above did not play out. For starters, BTC itself has been extremely volatile: from an all-time high near $1300 USD before the implosion of Mt Gox, to dipping below $200 two years later, doubling again within a year and then embarking on a stratospheric climb towards $3000. For all the talk of inflationary policies, fluctuations in the value of Bitcoin would make even the most irresponsible, interventionist central banker look restrained by comparison. That volatility makes Bitcoin less than ideal for everyday commerce, much less for long-term contracts denominated in BTC.

One could argue that early volatility is just growing pains for a new-fangled currency as the market struggles to discover “correct” pricing. Alternatively it can be blamed on hordes of speculators with no actual use for BTC chasing this coveted asset simply because other people are also trying to buy it, a classic case of an asset bubble. Either way, optimists expect such speculative activity to eventually diminish in scale compared to the overall volume of cryptocurrency trading, resulting in a steady state with relatively stable exchange rates. But there is one more assumption built into this lofty vision of Bitcoin as a democratizing force that helps millions of consumers in developing countries fully participate in markets: low transaction costs. That premise looked solid in 2012 and anchored many other presumed use-cases for Bitcoin, such as paying for that cup of coffee. All of these scenarios have been called into question by recent developments. In contrast with the problem of exchange-rate volatility, which may well improve as the market matures, the vision of low-cost, efficient payments is becoming less realistic.

The Bitcoin fee model is unusual, to say the least. Most payment systems charge costs that are at least in part proportional to the value transferred. This follows from a natural assumption: someone moving large amounts of money derives greater utility from that transaction than a person moving a modest amount. It follows that they would be willing to pay more for the privilege of executing that transaction. (This is an oversimplification; it is also common to have fixed costs and discounts that kick in at high amounts.) Bitcoin throws that logic out the window, charging instead based on the approximate “complexity” of a transaction. That complexity is indirectly measured by the amount of space required to represent the transaction on the blockchain. A transaction with a single source, a straightforward redeem script (“sign with this public-key”) and a single destination output takes relatively few bytes to encode. One that combines multiple inputs, complex redemption conditions (“signed by 3 out of 5 keys”) and distributes funds to multiple destinations takes up more space. Yet complexity is orthogonal to value transferred. This is what makes it possible to move $80 million USD with a few cents in transaction fees (an astonishing level of efficiency unequaled by any other payment system available to consumers) or to move $5 while paying half that amount in fees, which is extremely wasteful.

It’s as if banks charged fees for cashing checks based on how much ink there is on the check instead of its notional value. Yet this model makes sense given that space in the blockchain is itself a scarce resource. Each new block mined to extend the ledger can accommodate exactly 1MB worth of transactions. That scarcity creates natural competition among transactions trying to get mined into the next block, by providing sufficient incentives to miners.
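For illustration, here is a back-of-the-envelope Python sketch of how fees track transaction size rather than value, using the common rough heuristic of ~148 bytes per input and ~34 bytes per output for legacy pay-to-pubkey-hash transactions (indicative numbers, not a precise fee calculator):

    # Fees are priced per byte of scarce block space, independent of value moved
    def tx_size_bytes(n_inputs, n_outputs):
        return n_inputs * 148 + n_outputs * 34 + 10   # rough P2PKH estimate

    def fee_satoshis(n_inputs, n_outputs, sat_per_byte):
        return tx_size_bytes(n_inputs, n_outputs) * sat_per_byte

    # A simple 1-input payment costs the same whether it moves $5 or $80M:
    print(fee_satoshis(1, 2, sat_per_byte=50))    # 226 bytes -> 11,300 satoshis
    # Sweeping 25 small inputs is "complex" and costs ~17x more, value aside:
    print(fee_satoshis(25, 2, sat_per_byte=50))   # 3,778 bytes -> 188,900 satoshis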

That brings us back to the question of developing markets. Back in the early days, when ambitious visions of everyone paying for their next cup of coffee in bitcoin were being bandied about, those fees were negligible. Bitcoin was poised to undercut credit-card networks for retail purchases, massively undercut Western Union for international remittances and even outdo Paypal for efficient peer-to-peer payments. It even looked like the first realistic option for micro-payments, where very small amounts of money change hands very frequently: say, visitors to a web site donating a few cents for each article read. Fast forward to 2017: blocks are full, the memory pool (the queue of outstanding transactions waiting to be confirmed in the ledger) has ballooned and transaction fees are no longer negligible. Bitcoin businesses that naturally attract frequent fund movements, such as exchanges, have resorted to policy changes that pass transaction fees directly on to customers. The only fighting chance for micro-payments today rests on the deployment of additional overlays such as the Lightning Network, implemented on top of the standard Bitcoin protocol.

Given the status quo, it would be difficult to disagree with the statement that Bitcoin as it exists today has very little to offer citizens of developing nations looking for an alternative payment solution for everyday purchases. Indeed the network as deployed today is not capable of clearing a large number of small transactions. (They may still find some value in its deflationary nature, as with Venezuelan citizens hoarding BTC in the midst of their economic crisis.)

It did not have to be that way. The scarcity of space is an artificial consequence of the arbitrary 1MB limit, a relic of a tactical fix implemented in response to an unrelated problem without much consideration given to future consequences. One could imagine counterfactuals where the blocksize limit is allowed to float, perhaps increasing automatically over time or adjusting in response to demand, the same way mining difficulty is constantly recalibrated for constant throughput. There are clear costs to increasing blocksize: the additional space required for storing a larger ledger places demands on all nodes participating in the network. Critics contend that by raising costs, such unchecked growth may force some operators to give up, resulting in a less decentralized network. On the other hand, the ledger already expands every time a new block is mined, and the “cost” of running a full node, measured in raw disk space, does go up. So the relevant question concerns the rate of increase. Is the increased burden outpacing Moore’s law to the point that running a full node becomes more expensive in real terms? Is the growth rate predictable enough for planning future capacity? (That is a strike against quantum leaps such as 1MB → 8MB, because they leave little time for adjustment.)

Arguments for and against raising the block limit are being advanced daily, as are alternatives that improve throughput while leaving that sacred parameter alone. The problem is not a lack of ideas on scaling; there are too many possibilities coupled with too little consensus on which ones to pursue. The community has been unable to agree on a single solution, precipitating the current crisis, with miners and users playing a game of chicken that could splinter the network on August 1st. High fees and low transaction rates have sabotaged many scenarios for using bitcoin that seemed perfectly within reach in the past. Dashed hopes for getting the millions of unbanked citizens of developing nations onboard are just one part of that collateral damage.

CP


Wannacry: ransomware as diversionary tactic?

Digging into motives

It is conventional wisdom in information security that precise attribution for successful digital attacks is very difficult. Concealing the source of malware is much easier than, say, concealing the launch point of an ICBM, and sophisticated attackers can engage in false-flag operations to frame innocent bystanders for their actions. So it is unusual that persuasive evidence has already emerged linking the ransomware Wannacry to DPRK. (Democratic People’s Republic of Korea, or North Korea for short; whenever the words “democratic” and “people” appear in the name of a country, you can be certain it is neither.) Even the NSA and British NCSC confirmed as much in off-the-record statements. That degree of confidence in the conclusion is itself surprising: nation-state sponsored threat actors are expected to be among the most sophisticated of all, and therefore more likely to exercise good operational security, hiding their tracks and avoiding mistakes that point back at the responsible party.

But assuming the attribution in this case is correct, it raises two questions:

  • Are nation states now carrying out financially motivated attacks?
  • Why was Wannacry so spectacularly unsuccessful at extracting payment from its targets?

Intelligence gathering vs get-rich-quick schemes

Managing information security risks calls for an understanding of the threat actors one is likely to run up against. The motivations and capabilities of the average script-kiddie are vastly different from those of an intelligence agency, as are the appropriate defensive measures necessary to defend against them. In particular, one needs a credible motive for attack.

A good chunk of online crime is financially motivated. Large-scale breaches against payment systems, such as TJ Maxx (2006), Target (2013) or Home Depot (2014), were driven by good old-fashioned greed. The perpetrators hoped to monetize stolen payment instruments, either by using them directly to conduct fraudulent purchases, or by offloading the card details to other enterprising criminals who were better placed to do that. (It is not trivial to do this effectively: the crooks must find a way to purchase goods that can be resold while minimizing the risk of getting caught or triggering the fraud-detection systems operated by card networks. While data on this point is scarce, only a small fraction of the spending limit available on a card can be exploited by the attacker before it is suspended.) That end goal in turn influences the choice of target, based on a calculus of expected gain: the probability of successfully breaching the target multiplied by the amount of funds at stake. This calculus affords great flexibility in simultaneously going after multiple targets in the hope that one will pan out; there is no reason to insist on breaking into company X if it turns out company Y is an easier target with just as much value at risk.
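As a toy illustration of that calculus, candidate targets can be ranked by expected value (all numbers below are invented for the sketch):

    # Hypothetical targets: (probability of successful breach, funds at stake)
    targets = {
        "company X": (0.05, 40_000_000),   # hardened target, large prize
        "company Y": (0.30, 8_000_000),    # soft target, smaller prize
    }
    ranked = sorted(targets.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
    for name, (p, value) in ranked:
        print(f"{name}: expected gain ${p * value:,.0f}")
    # company Y ($2,400,000) outranks company X ($2,000,000)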

But financial gain is far from the only driver. Some threat actors are motivated by ideology. Unlike crooks chasing money, “hacktivists” seek to achieve political objectives or settle scores against companies perceived as causing harm. Phineas Phisher’s exquisite 0wnage and subsequent doxing of the ethically bankrupt Hacking Team is a textbook example. These groups take a highly principled approach to picking their targets, paying less attention to the difficulty and devoting significant resources to a specific objective. Yet other actors are characterized by a complete lack of ideology; they are in it for the “lulz.” Targets are chosen arbitrarily and opportunistically, with no rhyme or reason other than being within range of attacker capabilities: the online equivalent of being at the wrong place at the wrong time.

At the other extreme, nation-state actors are the apex predators of the ecosystem. They combine massive arsenals of offensive tool-kits with a disciplined approach to selecting targets based on intelligence value. Here “nation-state” encompasses both offensive actions directly carried out by intelligence agencies and private groups funded/supported by such organizations to carry out proxy battles. No target is too small or too insignificant if there is valuable information to loot: China is equally at home going after boutique law firms defending political dissidents as it is going after the whole enchilada at Google.

Until now it was assumed that such groups were not after direct financial gain. There is certainly a time-honored tradition of industrial espionage carried out against foreign countries in pursuit of indirect financial gain for the home team. Yet one does not expect the NSA, GCHQ or even their less ethically-constrained brethren such as the FSB to operate credit-card skimming operations on the side.

From industrial espionage to the Bangladeshi job

North Korea is now challenging that premise. The original link between Wannacry and DPRK was the similarity of its code to previously known malware used in the attack on the central bank of Bangladesh. That heist netted the perpetrators over $80 million USD even after attempted recovery of the stolen funds, and it would have been a lot more profitable, to the tune of $900M, were it not for careless mistakes made by the attackers that blew the cover on the operation. These are significant numbers, especially for an embattled North Korea straining under the weight of economic sanctions.

This action had an undeniable profit motive; in fact such brazen theft of funds compromises any intelligence-gathering mission that may have been going on in parallel. Lazarus Group had achieved persistence on systems belonging to the Bank of Bangladesh and lurked for months while building custom techniques to evade monitoring. Such an entrenched presence would have supplied DPRK with a unique vantage point to spy on the movement of funds in Bangladesh for years to come, if they cared for that capability. By contrast a smash-and-grab attack resulting in significant losses cannot stay under the radar; it predictably leads to defenders diligently working to flush out any attacker presence from the system.

Wannacry as the amateur-hour of ransomware

That brings us to the strange case of Wannacry. On paper, ransomware is the epitome of financially motivated malware with zero information-gathering value, reflecting an interesting shift in tactics. The first generation of mass malware turned Windows PCs into zombies sending out spam while completely ignoring any data residing on those machines, effectively monetizing only their network bandwidth to support ancillary business models such as mass marketing or distributed denial-of-service. The second generation focused on information theft as traditionally understood, looking for special categories of data that can be monetized directly, such as passwords for online banking sites or credit-card details, and shipping these off to a server controlled by the attackers. Ransomware, by contrast, does not attempt to steal any information; it holds information hostage from the legitimate owner via encryption.

That modus operandi means ransomware has the unusual feature of having to negotiate with its victims for successful monetization. A spam bot operates quietly in the background; it does not show users a dialog offering to uninstall itself in exchange for payment. Likewise banking malware silently collects credentials for logging into financial institutions and ships these off to its operators who already have existing plans to monetize that information by selling the credentials on dark markets. That path is already preordained. Consumers do not get a first right of refusal to opt out of that transaction and keep their PayPal password secret by offering more money than prevailing underground rates. Ransomware is unique in expecting to get compensated directly by its own victims.

That in turn brings some semblance of market dynamics into the equation. While installing ransomware is not a voluntary act (unless you are a security researcher), the user still has a decision to make about paying the ransom demanded. For example, if they had been regularly backing up all of their files, they could always choose to wipe their machine clean, reinstall the operating system to get back to a clean slate and recover using those backups. Even if the user is faced with partial loss of data, they may still deem the ransom price too high to warrant rescuing the lost information. This is where the reliability of the ransomware operation enters the picture, because malware is effectively a market for lemons. There is no honor among thieves. Even if the price is “reasonable,” there is no guarantee that successful decryption will follow delivery of the payment. (At least in the current incarnation of ransomware observed in the wild; in principle, smart-contracts enable honest ransomware with delivery of payment contingent on the disclosure of decryption keys.) A user who does not get their files decrypted despite paying up is an unhappy customer. In this day and age of Yelp reviews, word gets around: other users facing the same decision may opt not to pay.

This is where Wannacry fails spectacularly: it is clear from reverse-engineering the binary that this operation could not possibly have supported any type of decryption based on payment. To the extent users have been able to recover their data, it has been due to fortunate design flaws in Wannacry, or at least a failure by the authors to understand quirks of the Windows crypto API that keep decryption keys around longer than expected. In fact it is clear the Lazarus Group did not plan on providing a decryption service. Users were asked to send payment to exactly one of three Bitcoin addresses randomly selected from a list hard-coded into the binary. Given that infected systems numbered in the hundreds of thousands, it is not possible to identify which ones have paid, a prerequisite for honoring the promise that paying users receive a decryption key. The only plausible scenario would have been a global ransom: offering to release a master key that would unlock all machines once a specified total is received from all affected users. But such a collectivized demand is far less likely to find takers than individual offers. There is still no guarantee of recovering your files, and now your success depends on other people cooperating. If the threshold is not reached, all donations are wasted. Meanwhile everyone has an incentive to free-ride, hoping that other people will chip in so they can collect the benefit when the decryption key is released.

To wit, those three Bitcoin addresses have collected a modest sum of 55 BTC at the time of writing, worth approximately $125K at current exchange rates. That figure is dwarfed by the take from the Bangladesh heist. It raises a significant question: if Wannacry was developed by a highly skilled threat actor with nation-state backing and yet, for all that talent, proved an abject failure at monetization, were there other motives behind it?

Unfollowing the money

We cannot rule out the theory that Wannacry was a precursor of the finished product North Korea wanted to unleash, an incomplete beta version that accidentally escaped the lab and propagated. On this view, the final version would have handed out an individual Bitcoin address to every user and featured some type of cloud service to hand out decryption keys when payment is made. (Although it is difficult to imagine how that would work, given the enormous incentive for ISPs and law-enforcement to shut down such a service.) Yet for unclear reasons, either by accident or perhaps to meet some arbitrary deadline, this half-baked version was unleashed and, being self-propagating malware, could no longer be recalled.

An alternative explanation is that the whole ransomware aspect is a diversion. The true purpose of Wannacry is inflicting economic harm by destroying data and rendering systems unusable. That there is no mechanism for recovering data after payment is not a “bug” in the operation; it is 100% by design. Unlike a true extortion scheme, these perpetrators have no plans to profit by providing relief from the harms unleashed by their own creation. The objective is imposing costs on victims, not obtaining additional revenue for themselves. The negligible amount of Bitcoin collected is only a side-show. Even if a few people did pay up initially, future victims would be discouraged after learning that the promised data recovery never arrives. If this theory is correct, the operational cover for Wannacry became a victim of its own success: instead of blending into the background as yet another ransomware scam, Wannacry was extensively studied and reverse-engineered, eventually unearthing the link to North Korea.

The main strike against this theory is the geographic distribution of Wannacry infections: Russia, India, several former Soviet republics, China and Iran are among the top 20 countries affected. While North Korea is greatly isolated and has few allies, these are not exactly the countries one would expect DPRK to prioritize targeting; Iran in particular has been implicated in supplying DPRK with technology for its nuclear program. While some of this may be driven by the prevalence of outdated/pirated versions of Windows not receiving security updates, it would have been trivial to design safeguards that kick in after infection to selectively target specific regions. For example, Wannacry could have checked timezone and language settings on the machine before proceeding to encrypt files. (Malware in the wild carrying such checks has surfaced at least as early as 2009.) On the other hand, carving out such exceptions provides circumstantial evidence about the source of the attack: if malware has been tailored to avoid particular countries, the assumption is that its creators are affiliated with, or at least closely allied with, the nations spared from damage. By taking an equal-opportunity approach to harming friend and foe alike, Wannacry may have been trying to avoid giving such geopolitical clues.

CP

Designing “honest ransomware” with Ethereum smart contracts: postscript (part III)

[continued from part II]

Postscript: Shadow Brokers auction done right

Incidentally this protocol also works for the type of auction Shadow Brokers attempted in 2016, if they intended it in earnest, which does not appear to have been the case for the stash of NSA exploits they planned to dump. To recap: this threat-actor had gotten its hands on an exploit kit associated with the NSA and initially offered to sell it to the highest bidder. This “auction,” however, was curiously designed to mirror the dollar auction from game theory, featuring a winner-takes-all property: bids are placed by sending funds to a Bitcoin address; the highest bidder gets the stash of exploits while everyone else gets nothing, forfeiting any Bitcoin sent. Not surprisingly there were few takers for this model, given the odds that even the winner may end up with nothing.

While the auction setting introduces additional complexity, the protocols sketched earlier can help with a simpler version of the problem: a shady group claims to have a lucrative stash of documents up for sale in exchange for cryptocurrency. Given the nature of such underground transactions, both sides are concerned about the risk of being defrauded. The seller worries about delivering the stash without getting paid and the buyer is concerned about paying for worthless information. A variant of the fair-exchange payment protocol can be built on Ethereum smart contracts:

  1. Seller encrypts all documents using a hybrid-scheme with a fixed public-key and makes all ciphertexts available to the buyer. (The granularity of encryption need not be at the level of individual documents. Each page or 10K chunk of source code could be individually encrypted, as long as each fragment contains enough information for the buyer to make a judgment on its veracity.)
  2. Buyer and seller jointly select a random subset of ciphertexts to be opened by the seller, to verify that they conform to the uniform encryption format expected for the entire batch. This assumes the buyer has some way to validate the authenticity of individual fragments.
  3. Buyer launches an Ethereum smart-contract, designed to release payment on delivery of the private-key. He funds the contract with an amount of ether corresponding to the sale price.
  4. Seller invokes the contract method disclosing the private-key and collects the proceeds. (A toy model of this escrow logic appears below.)
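While the Solidity contract itself is out of scope here, the escrow logic of steps 3-4 can be modeled in a few lines of Python (toy RSA parameters and invented names; a real contract would need a more careful in-contract proof that the disclosed key is genuine):

    # Toy model of the key-for-payment escrow: payment is released only if the
    # disclosed RSA private exponent d actually inverts the public key (e, n).
    N, E = 3233, 17          # toy modulus 61*53 and public exponent
    CHALLENGE = 42           # test message agreed on in advance

    class KeyEscrow:
        def __init__(self, price):
            self.balance = price          # ether deposited by the buyer
            self.revealed_key = None

        def claim(self, d):
            # check that decrypting an encryption of the challenge round-trips;
            # a robust version would use several random challenges
            if pow(pow(CHALLENGE, E, N), d, N) != CHALLENGE:
                raise ValueError("invalid private key, payment withheld")
            self.revealed_key = d         # now public: buyer can decrypt the stash
            payout, self.balance = self.balance, 0
            return payout

    escrow = KeyEscrow(price=100)
    print(escrow.claim(2753))             # correct d for (17, 3233): pays out 100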

There is of course one last optional step for the buyer: call up the Ethereum Foundation and demand a hard-fork to reverse the payment. After all, the fairness expressed in a smart contract is only as reliable as the immutability of the blockchain that contract executes on.

CP

PS: Similar ideas are explored in a blog post on “The future of ransomware,” which in turn references the notion of zero-knowledge contingent payments first demonstrated in Bitcoin. That approach front-loads the work into developing cryptographic proof systems to verify that the encrypted data has the right structure, such as being the solution to a particular puzzle that can be verified by operating on encrypted data. It relies on disclosing the preimage of a hash (as opposed to a private-key), which can be expressed even with the limited scripting capabilities of Bitcoin. But that approach runs into the same problem as verifiable encryption when applied to ransomware: the plaintext has no particular structure, and we only assume the user has access to an oracle that can give a thumbs up/down on whether the result of a decryption corresponds to an authentic file hijacked by the malware.


Designing “honest ransomware” with Ethereum smart contracts (part II)

[continued from part I]

There are special-case solutions for common public-key algorithms to prove that a given ciphertext decrypts to an alleged plaintext. For example, with RSA the key-holder can return the plaintext without removing its padding. Recall that RSA is typically used in conjunction with a padding scheme such as PKCS1.5 or OAEP. These schemes apply a randomized transformation to the plaintext prior to encryption and undo that transformation after decryption. Without knowing the padding used during encryption, it is not possible to check that a given unpadded plaintext corresponds to some known ciphertext. But once the fully padded version is revealed, the user can check both that it encrypts to the original ciphertext and that removing the padding results in the expected plaintext, because both of those steps are fully deterministic.
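Here is a toy demonstration of that verification flow with textbook RSA integers (the pad() function below is a stand-in for a real scheme like OAEP, and the tiny key is purely illustrative):

    import secrets

    N, E, D = 3233, 17, 2753            # toy RSA key pair (n = 61*53)

    def pad(m):                         # randomized padding: random high byte
        return (secrets.randbelow(8) + 1) * 256 + m

    def unpad(mp):
        return mp % 256

    m = 200                             # original one-byte plaintext
    c = pow(pad(m), E, N)               # encrypt the padded message

    revealed = pow(c, D, N)             # key-holder reveals the PADDED plaintext

    assert pow(revealed, E, N) == c     # deterministic: re-encrypts to c
    assert unpad(revealed) == m         # and unpads to the expected plaintext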

This straightforward approach does not carry over to other algorithms. For example, ElGamal is a randomized public-key encryption scheme: each encryption operation uses a randomly chosen nonce, so re-encrypting the exact same message can yield different ciphertexts. Yet it is not possible to recover the random nonce after the fact, even with knowledge of the private key. Unless the private-key owner takes additional steps to stash that nonce someplace, they are stuck in a strange position: they can decrypt any ciphertext but they cannot simply prove the decryption is correct by sharing the result. That is not the case for RSA: if you can decrypt, you can also recover the transformed input complete with its randomized padding. ElGamal calls for something more complex, such as a zero-knowledge proof of discrete-logarithm equality to show that the exponentiation of the random masking value (the first part of the ciphertext) by the private-key was done correctly.

For a more generic solution, we can leverage blind decryption: ask the private-key holder to decrypt a ciphertext without that party realizing which ciphertext is being decrypted. This is typically achieved by exploiting homomorphic properties of cryptosystems. For instance, raw RSA (without any padding) has a simple multiplicative relationship: the encryption of a product is the product of encryptions.

RSA-ENC(a·b) = RSA-ENC(a) * RSA-ENC(b)

where all multiplication is done modulo N associated with the key. A similar property also holds true for decryption where X and Y are ciphertexts:

RSA-DEC(X·Y) = RSA-DEC(X) * RSA-DEC(Y)

Looked at another way, we can decrypt a ciphertext indirectly by decrypting a different ciphertext with a known relationship to the original:

RSA-DEC(X) = RSA-DEC(X·Y) * RSA-DEC(Y)⁻¹

where RSA-DEC(Y)⁻¹ denotes the multiplicative inverse of RSA-DEC(Y) modulo N.

To decrypt X, we ask the ransomware operator to instead decrypt X·Y, as long as we know the decryption of Y. (Typically because we obtained Y by encrypting a random plaintext m of our choice to begin with.) Note the original encryption can still use a strict padding mode such as OAEP, as long as the decryption side is willing to handle arbitrary ciphertexts that result in plaintexts with no specific padding format.
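
A sketch of this blinding step in Python, using textbook RSA and illustrative names:

    import secrets

    def make_blinded_challenge(n, e, x):
        # Pick a random plaintext m, encrypt it ourselves, and fold its
        # ciphertext y into the target x. The product is indistinguishable
        # from a random ciphertext to the party doing the decryption.
        m = secrets.randbelow(n - 2) + 2
        y = pow(m, e, n)
        return (x * y) % n, m

    def unblind_response(n, response, m):
        # response == RSA-DEC(x*y) == RSA-DEC(x) * m (mod n),
        # so dividing out m recovers the decryption of x.
        return (response * pow(m, -1, n)) % n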

This additional step prevents the ransomware author from cheating: X·Y looks like a random ciphertext, completely unrelated to the original encryption X of the symmetric key. Even if there were a cheat sheet listing every such ciphertext and its associated plaintext for answering challenges, there is no way to correlate the particular challenge presented with one of those entries. Each challenge is equally likely to be derived from any ciphertext in the collection. The intuition is that if files were not encrypted according to a consistent scheme with the same RSA public-key, it would not be possible to produce the correct answer when presented with such a random-looking ciphertext.

This step can be repeated for a small subset of files to achieve high confidence that most files have been encrypted according to the expected scheme. As long as the challenges are selected randomly, even a small number of successful tests provides high assurance against cheating. For example, suppose ransomware encrypted a million files but replaced 2% of them with random data instead of following the claimed hybrid-encryption process. If 100 files out of that collection are randomly selected for spot-checking, the odds are 87% that this departure from protocol will be caught. The downside is diminishing returns with increased coverage. While a handful of challenges can rule out cheating on a wide scale— for example, every other file being corrupted— it is not feasible to prove that 100% of the data is recoverable. If ransomware deliberately mangled exactly 1 file in the collection, it would take a great deal of luck to discover that: one must happen to pick that specific file as a challenge. That also suggests a strategy of picking the files you care about most for the challenge phase, given that for every other file there is a small but non-zero probability of failing to recover it. (Incidentally, the ransomware author has a diametrically opposed interest in making sure users do not get to choose the challenge subset unilaterally. Otherwise they could pick the 100 files they care about most and abandon the rest.)
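
The arithmetic behind that 87% figure, for anyone who wants to vary the parameters:

    def detection_probability(cheat_fraction, samples):
        # Chance that at least one of `samples` uniformly random
        # spot-checks lands on a mangled file.
        return 1 - (1 - cheat_fraction) ** samples

    print(detection_probability(0.02, 100))   # ~0.867, the 87% figure above
    print(detection_probability(1e-6, 100))   # ~0.0001: one bad file in a million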

There is one more subtlety here: the decryption challenge returns a symmetric key, which is then used to undo the bulk symmetric encryption applied to the file. But there is still the problem of deciding whether the outcome of that decryption is “correct” in the sense that the plaintext was a file originally belonging to this user. Neither authenticated encryption nor integrity checks help with that. After all, ransomware could have replaced an MP3 music file with the “correct” AES-GCM encryption of random noise. Given the right AES key, that file will decrypt successfully, including validation of the GCM tag. It will not result in recovery of the original data, which is the only outcome the user cares about. To work around this we have to posit that users have access to an oracle that can examine a file (complete with all meta-data such as directory path and timestamps) and determine whether it is one of the documents originally belonging to their collection. In practice this could be based on a catalog of hashes or digital signatures over documents, or in the worst-case scenario, manual inspection of files.
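
The hash-catalog variant of that oracle is simple enough to sketch; the catalog format here is an assumption for illustration:

    import hashlib

    def is_original_file(catalog, path, data):
        # catalog: {file path -> hex SHA-256 digest} recorded before the
        # files were handed over for encryption. Path and digest together
        # identify a document, covering the meta-data concern above.
        return catalog.get(path) == hashlib.sha256(data).hexdigest()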

Once the user is convinced all files are indeed encrypted correctly, the next step is crafting a smart-contract to release payment conditional on the private-key being revealed. This contract will have a function intended to be invoked by the key-holder. After checking that the parameters supplied reveal enough information to recover the private-key, the function sends funds to an address agreed upon in advance. There is one subtlety here: specifying the destination address as part of the function call, or inferring it from the message sender, results in a race condition. Since Ethereum contract calls are broadcast on the network before they are mined, anyone could copy the disclosed private-key and craft an alternative method invocation to shuttle funds somewhere else, hoping to get mined first. Fixing the destination during contract creation solves this problem. Even if someone else succeeds in preempting the original method invocation with a different one sending the exact same information, the funds are still delivered to the intended address.

Just in case the private-key holder never shows up, the contract also needs an “escape hatch”: a second, time-locked method that can be invoked by the user after a mandatory delay to recover unclaimed funds. It is critical that these are the only ways of withdrawing funds from the contract. If there were some other avenue for getting funds out, it would result in a race condition: when the private-key holder invokes the contract to collect payment, the contract owner could observe that transaction (including the disclosed private-key) and try to race them with a different transaction that siphons funds from the contract before it pays out.

In terms of disclosing a private-key in a manner that can be verified by Ethereum smart-contracts, there are two natural solutions:

  1. Staying in the RSA setting, the smart-contract has a method that receives two large integers p and q, and verifies that their product is equal to the modulus N. While doable in principle, this requires implementing arbitrary-precision integer multiplication in Solidity, which does not natively support such operations. The underlying Ethereum Virtual Machine (EVM) has 256-bit integer primitives and supports multiplication of words into a 256-bit product, discarding the more significant bits. One could write a big-number library to multiply large numbers by splitting them into 128-bit chunks. But this runs into another practical issue around gas costs: Ethereum smart-contract execution costs money proportional to the complexity of the operations. Checking whether the product of two 1024-bit factors is equal to a known RSA modulus can become an “expensive” operation.
  2. Switch to an elliptic-curve setting and leverage the fragility of ECDSA for deliberately disclosing private keys. Because Ethereum natively supports verification of ECDSA signatures over the secp256k1 curve, this makes for a straightforward implementation. So instead of trying to make RSA operations work in Ethereum, we alter the file-encryption model. ECIES or ElGamal can both be adapted to work over secp256k1. ElGamal in particular has a simple homomorphism that can be used to mask ciphertexts: multiplying both components of an ElGamal ciphertext by a scalar produces the encryption of the original message multiplied by that scalar. As with ECDSA, the public-key is a point on the curve and the private-key is the discrete logarithm of that point with respect to the generator. Since that key-pair is equally usable for ECDSA signatures, built-in EVM operations are sufficient to check for private-key disclosure. As before, the function expects two valid ECDSA signatures over fixed messages such as “hello” and “world.” But in addition to verifying these signatures— a primitive operation built into the EVM— the contract also confirms that the signatures share an identical nonce, which is enough to recover the private key (see the sketch after this list).
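
To see why a shared nonce is fatal, here is the key-recovery arithmetic in Python. The contract itself only needs to verify both signatures (for example via ecrecover) and compare their r components; anyone observing the disclosed values can then run the computation below. Names are illustrative:

    # Order of the secp256k1 group used by Ethereum.
    N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

    def recover_private_key(r, s1, h1, s2, h2):
        # Two ECDSA signatures (r, s1) over digest h1 and (r, s2) over h2
        # share a nonce k exactly when their r components match. Then:
        #   s1*k = h1 + r*x  and  s2*k = h2 + r*x  (mod N)
        k = ((h1 - h2) * pow(s1 - s2, -1, N)) % N   # solve for the nonce
        x = ((s1 * k - h1) * pow(r, -1, N)) % N     # then for the private key
        return x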

Summary

To recap: an honest ransomware scheme— or “third-party backup encryption service” (TBES)— can be implemented using smart-contracts to make data recovery contingent on payment. We encrypt all files of interest using a hybrid encryption scheme with a single public-key. When it is time to recover the data, the user and the TBES execute an interactive challenge protocol to decrypt randomly selected files, verifying that the encryption followed the expected format. (This step is redundant if the encryption was done by the user, unless the integrity of the encrypted backups is itself in question.) Assuming the proofs check out, the next step is for the user to create a smart-contract on the Ethereum blockchain. This contract is parametrized by the agreed-upon amount of ether, the public-key used for encryption, and an address chosen by the TBES. Once the contract is set up and funded, the TBES can invoke one of its methods to disclose the private-key associated with that public-key and collect the funds.

The logic of Ethereum smart-contract execution guarantees this exchange will be fair to both sides: payment happens only in exchange for a valid private-key, and there is no way to back out of payment once the private-key is delivered.

[continued]

CP

Updated: 6/19, to describe an alternative solution for ElGamal.