Chain-switching attacks and Bitcoin Cash: improving the odds

Incentives for withholding hash power

Can it make economic sense for a Bitcoin miner to strategically withdraw hash-power from the Bitcoin network? It is easy to imagine situations where a miner may choose to power down obsolete hardware that has become too inefficient, when the expected gain from operating those rigs no longer justifies the electricity costs. But could it make sense to suddenly power down perfectly competitive mining rigs and turn them back on at some future time?

Suppose Bob has 20% of mining power and decides to quit mining entirely at the end of the current adjustment period. Assuming that is the only change in the mining landscape—and this is a big leap of faith, considering that other participants may have spare capacity waiting on the sidelines—the result is that for the next adjustment period of 2016 blocks, the remaining miners will experience a slowdown. The difficulty level has not adjusted but only 80% of mining capacity remains, with the result that blocks will arrive 25% slower than average: once every 12.5 minutes instead of 10. The adjustment period will take correspondingly longer than the usual two weeks, after which point the difficulty is reduced to 80% of the original. If there were no other changes to overall mining capacity, the system would revert to its usual pace of producing blocks, since difficulty and hash-rate have once again achieved parity. But in comes Bob out of left field, powering up all of his mining rigs once again. Now the difficulty parameter is too low by 20%, or equivalently the network runs a 25% surplus hash-rate over the last estimate. Blocks will arrive once every 8 minutes on average and this state of affairs will continue for a shorter-than-usual adjustment period of ~11 days. At that point difficulty could have adjusted back to observed hash-rate, except Bob is not done: once again he powers down his entire operation, leaving the network with an overestimate of difficulty. The result is a cycle of alternating periods where difficulty is overestimated and underestimated, with blocks arriving too fast or too slow. This is the scenario outlined in a SecureList/Kaspersky blog post, which goes on to argue that miners have an incentive to game the system by strategically moving hash-power in and out in this fashion.
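This cycle can be sketched with a toy retargeting simulation. It uses the round numbers from the scenario and deliberately ignores details of the real protocol such as the 4x clamp on any single adjustment:

```python
# Toy model of Bitcoin difficulty retargeting under Bob's on/off cycling.
# Hash-rate is expressed in "difficulty units": 1.0 means blocks arrive
# every 10 minutes when difficulty is 1.0.
TARGET_MINUTES = 10.0
PERIOD_BLOCKS = 2016

def simulate(period_hashrates, difficulty=1.0):
    """One entry per adjustment period: the network hash-rate in force.
    Returns (average block interval in minutes, period length in days)."""
    results = []
    for rate in period_hashrates:
        interval = TARGET_MINUTES * difficulty / rate
        days = PERIOD_BLOCKS * interval / (60 * 24)
        results.append((interval, days))
        # Retarget: scale difficulty by observed speed relative to target.
        difficulty *= TARGET_MINUTES / interval
    return results

# Bob (20% of capacity) alternates between quitting and rejoining.
for interval, days in simulate([0.8, 1.0, 0.8, 1.0]):
    print(f"block every {interval:.1f} min, period lasts {days:.1f} days")
```

The output alternates between 12.5-minute blocks over 17.5 days and 8-minute blocks over 11.2 days, matching the figures in the scenario above.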

Economics of timing the (mining) market

Putting aside the question of whether other miners waiting on the sidelines would jump in to compensate if they saw difficulty levels fluctuate, let’s analyze this situation from the economic perspective of the individual miner (more likely, mining pool) we will call Bob. During the 17.5 days when his operation goes quiet, Bob collects no mining rewards at all. That is a significant opportunity cost: mining has fixed costs in specialized rigs and data-center infrastructure, as well as marginal costs in the electricity powering all that energy-hungry hardware. The good news for Bob is that during the next period, when he is back to business as usual, he still has a 20% share of overall network capacity. He collects the exact same proportion of rewards as before. The only difference is that those rewards arrive faster: all of those blocks come in 11.2 days instead of 14. Accordingly his economic competitiveness measured in bitcoins earned per second has improved.

So far, so good. But how does that compare against the idle time? That turns out to be the main drawback. It may look like Bob is screwing over the competition by temporarily withdrawing: everyone else is left to mine at artificially high difficulty for longer than usual. But this is illusory: the remaining players are also collecting a higher share of rewards than usual. (To take one example, if hash-rate were equally distributed between 5 mining pools, after one quits each of the remaining pools will see its share increase from 20% to 25%.) While the network is producing blocks 25% slower than usual—from the standpoint of miners, block rewards are correspondingly arriving less frequently—all of the miners left in the game are also winning 25% more of those blocks. Their economic productivity measured in bitcoins collected per second has not changed at all. That means profitability has not changed either: regardless of what they are paying for electricity, continuing to mine at the artificially higher difficulty produces the same gain/loss per unit of time as before. (Again, assuming everything else is held constant: while block rewards are fixed, mining fees are variable and could go either way. A congested blockchain with slower confirmation times could result in fewer people attempting to transact, putting downward pressure on fees. But conversely it could mean there is ever more urgency to get mined into the next block, resulting in higher fees.)
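The claim that the remaining miners’ productivity is unchanged reduces to simple arithmetic, sketched here with the scenario’s round numbers (12.5 BTC is the block subsidy at the time of writing):

```python
# After a 20% miner quits: the survivors win a larger share of blocks,
# but blocks arrive more slowly. The reward *rate* is unchanged.
block_reward = 12.5                           # BTC per block, fixed
before_share, after_share = 0.20, 0.25        # one of five equal pools
before_interval, after_interval = 10.0, 12.5  # minutes per block

before_rate = before_share * block_reward / before_interval
after_rate = after_share * block_reward / after_interval
print(before_rate, after_rate)  # identical BTC-per-minute rate
```

Since electricity costs per unit of time are also unchanged, profitability for everyone who keeps mining is exactly what it was before Bob left.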

Meanwhile Bob receives diddly-squat during the entire time his rigs sit idle. That represents a complete loss of productivity for an even longer period of time; recall that the next adjustment period of artificially high difficulty lasts longer than 14 days while the remaining miners struggle to produce blocks. That extended stretch of zero bitcoins per second more than cancels out any advantage enjoyed during the easy-mining period. Given only the Bitcoin blockchain, it is difficult to see how strategically timing entry into and exit out of mining could improve the economics for the miner playing that game.
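Comparing Bob’s take over one full off/on cycle against never leaving makes the point concrete. The 17.5-day slow period and 11.2-day fast period come from the scenario; the rest is arithmetic:

```python
# Bob's earnings over one off/on cycle vs. mining continuously.
blocks_per_period, reward, share = 2016, 12.5, 0.20
idle_days, fast_days = 17.5, 11.2      # slow period (Bob out), fast period (Bob back)
cycle_days = idle_days + fast_days

# Cycling: nothing during the idle stretch, 20% of one 2016-block period after.
cycling = share * reward * blocks_per_period
# Steady mining: 20% of blocks arriving every 10 minutes (144 per day).
steady = share * reward * cycle_days * 24 * 6

print(f"cycling: {cycling:.0f} BTC vs steady: {steady:.0f} BTC over {cycle_days} days")
```

Under these assumptions the cycling strategy earns roughly half of what continuous mining would over the same stretch of calendar time.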

An alternative chain is the playground of idle miners

Unless, of course, the miner can find something better to do with all of that idle hardware sitting around. Enter Bitcoin Cash. Given two blockchains with compatible proof-of-work functions (in the broadest sense, meaning that hardware designed to mine one can be redirected to mine the other without significant loss of efficiency) a new strategy becomes available to miners: instead of idling their rigs, they can simply switch over to the other chain.

The effectiveness of that strategy depends on several variables. Perhaps the most obvious one is the relative profitability of mining the alt-coin compared to Bitcoin itself. In general there is no a priori rule that mining alt-coins must be equally profitable in terms of rewards to hash-rate invested. While cost of hash-rate is relatively predictable in hardware costs and electricity, the rewards are highly volatile given the sharp fluctuations in cryptocurrency markets. More importantly, given that other blockchains such as Ethereum employ a different proof-of-work function than Bitcoin, it is often not possible to simply redirect existing hardware to mine an entirely different blockchain.

Restricting our attention to chains with compatible proof-of-work functions, there is a market dynamic that keeps their profitability at approximately the same level in steady state: if a significant disparity exists, miners can jump ship to mine on the other chain. That incentive applies even to miners ideologically committed to a particular currency. Suppose a miner only cares about Bitcoin but sees an opportunity to earn the equivalent of 10% more USD on a different chain with sufficient liquidity. The rational choice is:

  • Mine that alternative currency
  • Immediately cash out all rewards to USD or another fiat currency
  • Use the proceeds to buy back bitcoin on the open market

Provided the discrepancy exceeds the transaction costs of making those trades, that strategy still earns more bitcoin for the adaptable miner. In the case of Bitcoin and Bitcoin Cash, relative profitability has been fluctuating wildly, partly because BCH difficulty adjustments have been very slow to kick in after the fork. But it is plausible that in steady state—assuming the forked chain survives, that is—profitability will remain in the same neighborhood, if not exactly equal.
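The mine-and-swap arithmetic can be sketched as follows. The BTC price, the 10% premium and the 1% per-trade fee are illustrative assumptions, not market data:

```python
# Mine the alt-coin, cash out to USD, buy BTC on the open market.
def btc_earned_via_altcoin(alt_reward_usd, btc_price_usd, fee_rate=0.01):
    """Apply trading fees twice: once selling the alt-coin, once buying BTC."""
    usd_after_fees = alt_reward_usd * (1 - fee_rate) ** 2
    return usd_after_fees / btc_price_usd

btc_price = 4000.0   # assumed BTC/USD price
direct = 1.0         # BTC earned per unit of hash-rate mining BTC directly
switched = btc_earned_via_altcoin(1.10 * btc_price, btc_price)
print(f"direct: {direct:.3f} BTC, via alt-coin: {switched:.3f} BTC")
```

With a 10% premium and 1% fees on each leg, the detour through the alt-coin still nets about 7.8% more bitcoin, so the swap remains worthwhile until fees eat the entire spread.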

The other wildcard is how difficulty adjustments work on the alternative chain. Recall that the sudden introduction or departure of hash-power has a transient effect on the overall profitability of a blockchain. This effect is short-lived as long as the difficulty level is adjusted to account for the total mining capacity expected. With some exceptions around edge-cases (such as the rapid, emergency adjustments to deal with the massive drop in hash-rate in the aftermath of the hard-fork) Bitcoin Cash has similar scheduling rules to Bitcoin: the network aims for 10-minute intervals between blocks and adjusts difficulty after every 2016 blocks to hit that target. However these two chains will not necessarily agree on when each adjustment period starts or ends. Even in steady state, they are likely to appear as two out-of-phase waves with the same frequency rather than perfectly synchronized ones.

Luckily for our hypothetical miner Bob engaged in a chain-switching attack, synchronization is not necessary. After withdrawing mining capacity from BTC, the miner can simply switch to mining BCH starting at some arbitrary point in the middle of a BCH adjustment period. (Note that at the time of writing, overall BCH hash-rate is much lower than BTC. If we continue using the 20% figure as before, that amount of BTC capacity switching sides would completely dominate the minority chain, effectively constituting a 51% attack.) Until that period completes, the network operates at an increased pace, doling out rewards too fast because the presence of the new actor has not yet been factored into the difficulty calculation. Everyone, including Bob, who is responsible for this temporary spike, is collecting BCH at a rate faster than what one would predict given their share of mining power. After the adjustment, the target block-time is restored and expected reward per unit time gets trimmed for everyone. But Bob continues to earn some reward, albeit in a different cryptocurrency, and can keep doing so until the BTC chain readjusts its difficulty to create an environment where jumping back in is appealing again. This improves the economics of chain-switching compared to simply idling all rigs for an adjustment period.
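To see how lopsided this gets, here is a rough sketch. The figure for BCH’s hash-rate relative to BTC is an assumption chosen only for illustration:

```python
# Bob moves 20% of BTC's capacity onto a minority chain that had, say,
# the equivalent of 8% of BTC's capacity (assumed figure).
bch_rate, bob_rate = 0.08, 0.20   # in units of total BTC hash-rate
combined = bch_rate + bob_rate

print(f"BCH blocks arrive {combined / bch_rate:.1f}x faster until retarget")
print(f"Bob's share of the minority chain: {bob_rate / combined:.0%}")
```

Under these numbers BCH blocks arrive 3.5x faster until the next retarget, and Bob alone holds a 71% majority of the minority chain: exactly the 51%-attack territory noted above.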

But counteracting that is another factor: the act of Bob completely withdrawing from BTC and redirecting all capacity at BCH has lowered the relative profitability of BCH. There is now more hash rate chasing the same amount of block rewards in BCH, while BTC is experiencing a competitive vacuum. (This assumes BCH and BTC prices have not magically moved in the opposite direction to correct for hash-rate. While hash-power naturally follows valuation by miners voting with their rigs, it is unclear what causal mechanism exists for market-price of a cryptocurrency traded on multiple exchanges to adjust to gain/loss of hash-power.) That may in turn create incentives for hash-power to naturally defect from BCH and switch back to BTC, compensating for the original drop.

To summarize, while it appears that a miner can cause network instability and fluctuation in block times by strategically exiting and reentering the market, that behavior is unlikely to be more profitable than the simpler strategy of mining at the same rate 24/7. The introduction of a second compatible chain, however, does alter the dynamics by compensating for lost income during the idle time. But even in that scenario, it is unclear how other actors would respond, because a natural equilibrium exists for the distribution of hash power between compatible chains, given rational miners with the freedom to choose based on pricing. More research is necessary to shed light on these competitive dynamics.



Tabs, spaces and the straitjacket of coding conventions

(An attempt to account for StackOverflow survey results)

In June StackOverflow published one of the more surprising results from their developer survey: “Developers Who Use Spaces Make More Money Than Those Who Use Tabs.” This is a counterintuitive result, considering that the question of spaces versus tabs has always been considered a matter of personal preference, with no impact on the actual quality of code written. So much so that this debate made it into an episode of “Silicon Valley,” a show predicated on repackaging the follies of our technology sector for popular consumption. Yet here is a survey from a highly-regarded website with a statistically significant number of respondents, suggesting that this superficial stylistic difference affects career prospects. It would be akin to learning that traders who wore suspenders generated 10% more profits than their colleagues wearing belts. What is going on here? After all, switching from tabs to spaces is trivial and can be automated with a text editor. Should engineers around the world embark on a global search-and-replace across every file to give themselves a raise?

To their credit, the StackOverflow authors realize this claim sounds absurd on the surface and try to find alternative explanations in terms of hidden variables that could account for the observed difference. Going back to our parallel from finance: if there were a convention that people trading commodities wear belts while those speculating in equities prefer suspenders, that would account for the difference in observed returns. In effect the belt/suspenders question is not a causal factor; it is just a proxy for an underlying “hidden” variable which influences the outcome in reality. If we could perform a controlled experiment where the belt-wearing traders were all given a wardrobe make-over and switched to suspenders, they would sadly not turn into rainmakers for their employers.

Yet in the case of tabs vs spaces, the observed difference in salary persists after controlling for obvious variables such as choice of programming language and specific area of development such as web/mobile/embedded. Across every category, the difference is in the same direction: those using spaces earn more. More importantly, the StackOverflow data looks at median salary instead of averages. Compared to an average, that statistic is much less susceptible to being skewed by a handful of “spaces” fanatics earning outsized salaries in a particular niche (such as ICO development).

Here is an alternative explanation: for a large number of developers, the choice of tabs vs spaces is not a reflection of personal preference but dictated by the coding conventions of their project. The observed tabs/spaces difference would then follow naturally from intrinsic differences in compensation between organizations that adopt strict formatting guidelines (specifically, mandating spaces) and those that are relatively liberal about formatting.

Coding style guides are common in large organizations to maintain consistent standards across their code base. These standards can cover everything from high-level, substantive topics such as which language features are permitted, down to trivial ones such as the placement of braces and line-breaks. For example, the Google C++ style guide opines on everything from the use of namespaces to the evils of virtual inheritance and operator overloading—verboten at Google. (In fact, by disallowing most advanced C++ features, this style of coding effectively turns C++ development at Google into glorified C-with-classes circa the 1990s.) The goal of consistency is to make it as easy as possible for engineer Alice to dive into a new code-base and review or improve code written by engineer Bob. If Bob were allowed to use arcane language features or esoteric design patterns in his work, it would be a lot more difficult for Alice to come up to speed and contribute. A more cynical interpretation is that conventions make engineers interchangeable, helping the organization at the expense of individual employees and overall productivity.

Indentation falls into that second category of “cosmetic” choices that have no impact on the semantics of code: how far each nested block of code is offset from the enclosing block, and whether that layout effect is achieved using a single tab character or some fixed number of spaces. (Granted, there are languages such as Python where indentation does change the meaning of what code does. But even in that case, it remains immaterial whether that information is conveyed using spaces or tabs.)
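The Python caveat is worth a concrete illustration. In these two functions the indentation level of one line changes the behavior, yet whether that indentation is spelled with tabs or spaces is irrelevant to the interpreter:

```python
# Indentation is semantic in Python: these two functions differ only in
# how far the final statement is indented, and behave differently.
def last_item(items):
    for item in items:
        result = item
    return result          # dedented: runs once, after the loop finishes

def first_item(items):
    for item in items:
        result = item
        return result      # indented: runs inside the loop's first pass

print(last_item([1, 2, 3]), first_item([1, 2, 3]))  # -> 3 1
```

Swapping every four-space indent for a tab (consistently) leaves both functions unchanged, which is exactly the sense in which tabs-vs-spaces remains cosmetic even in Python.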

Now if Alice is employed by a large company that mandates spaces while Bob is working for a scrappy start-up with a laissez-faire approach to formatting, they may end up using different indentation styles even if Alice started out with no preference either way. The difference in compensation could simply reflect the underlying difference in the type of organizations that adopt coding conventions, assuming such conventions are more likely to favor spaces over tabs—as in the case of C++/Java/Python at Google or C# at MSFT. In that case the higher earnings for the spaces camp could be an artifact of large, established employers relying on high cash compensation while small start-ups lean heavily on equity such as stock options to attract candidates.

There is an ambiguity here in the phrasing of the survey question: is it asking about preference or status quo? The question “spaces or tabs” can be interpreted two ways:

  • Are you currently using spaces or tabs?
  • If you had your druthers, would you prefer to use spaces or tabs?

The first question is influenced by choice of employer, and the apparent correlation with salary could then be explained away as an artifact of a handful of large, successful tech companies having settled on coding conventions with spaces. The second is strictly a matter of individual preference; it would indeed be a very surprising result if developers who had an innate preference for spaces, absent any organizational mandate, somehow proved to be better compensated. (Of course it is also possible that company-mandated coding conventions are over time embraced and internalized by the individuals subject to those rules. Coders who start out without any preference may later come to decide that there is “one true way” of indentation, namely the one they have been using all along.)


The optimal Ethereum heist: attacking the Parity wallet (part III)

[continued from part II]

To recap: there are several unanswered questions around how the attacker(s) who discovered a critical vulnerability in a popular Ethereum smart-contract went about exploiting that flaw to steal funds:

  • They exploited only some of the vulnerable contracts, even though all targets are equally easy to locate
  • They skipped contracts holding more funds in favor of lower-value targets

At first, this does not seem consistent with an attacker driven by profit. Armed with a 0-day exploit, the optimal strategy is to systematically plunder the richest targets first—on the assumption that once people get wind of the vulnerability, they will race to defend their contracts, reducing the chances of successful exploitation.

But on second thought, the attacker may have been operating with another constraint in mind: potential PR backlash against the theft. Consider the three contracts that were targeted:

  • SwarmCity: a self-described “decentralized commerce platform” which raised funds with a resale of its SWT tokens.
  • Edgeless Casino: an online gambling website operating on the Ethereum blockchain, which crowdfunded itself by issuing EDG tokens.
  • æternity: a platform for “scalable smart contracts interfacing with real world data” according to its website. The development of this project was also funded by, you guessed it, an initial coin offering.

Compare this to some examples of wallets that were left untouched in spite of having significant holdings:

  • BAT: Basic Attention Token, affiliated with the Brave browser project. Brave aims to shift the dominant revenue model for websites away from advertising (which leads to a race to the bottom in privacy, with increasingly invasive data collection on all users) and towards voluntary contributions powered by micro-payments.
  • ICONOMI: A meta-platform for managing other cryptocurrency assets
  • A third wallet, self-described with a buzzword salad of “distributed global platform that connects exceptional startups, experts and investors worldwide”

There are several theories on why this group was spared while the earlier, unlucky group was exploited, and they all hinge on the same premise: the possibility of a hard-fork that would reverse all theft transactions.

Recall that this drama played out once before: the DAO contract had raised close to $150M in ether at the prevailing exchange rates when it was successfully attacked, with the perpetrator walking away with $80M of those funds collected from investors. Or more precisely, they would have walked away with that tidy sum were it not for the Ethereum Foundation stepping in with a deus ex machina. In an ironic echo of the 2008 crisis which partly inspired Satoshi’s development of Bitcoin—too-big-to-fail institutions on the verge of collapse bailed out by intervention-happy regulators eager to rescue the ecosystem at all costs—the Foundation engineered the blockchain version of historical revisionism, returning stolen funds back to the DAO. At least that was the objective, but it did not exactly result in a clean undo. While the majority of hash power went along with this act of deliberate tampering with the ledger, a splinter faction continued running the original chain, which became the altcoin Ethereum Classic.

That history raises an important question for any enterprising criminal contemplating large-scale mayhem on the Ethereum blockchain: what is the threshold for a hard-fork? At what point does the Foundation deem a particular address too-big-to-fail? Is there a version of the Federal Reserve’s “systemically important financial institution” criteria for deciding which addresses merit a bail-out in case of security breaches? There are three plausible theories:

  1. Value-at-risk. The DAO breach resulted in the loss of $60M before the funds were returned with the bailout. The Parity attack netted ~$30M, suggesting the attacker stopped about half-way to that previous mark. But these numbers are deceptive, because the price of ether in USD has appreciated more than 10x since the DAO debacle. Measured in the native currency, the current theft is dwarfed by the DAO.
  2. Public perception of the affected entities. The Brave browser presumably enjoys grass-roots support, because it is an underdog battling commercial behemoths (MSFT IE/Edge & Google Chrome) in a bid to improve user privacy online. This is exactly the type of project everyone wants to cheer on to a successful launch. Edgeless Casino is a dodgy gambling site sitting in a murky area of regulation that no one will shed any tears over. Taking away money used to fund Brave development would likely result in a public outcry. Robbing the casino would merit a shrug or even inspire schadenfreude.
  3. Old-school crony capitalism: only individuals with close connections to the Ethereum Foundation get bailed out. Unlike previous cases where public information about wallet owners can be used to make an informed assessment, this relationship is more difficult for an attacker to gauge ahead of time.
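The value-at-risk comparison in the first theory can be made concrete with rough numbers. The ETH/USD prices below are assumptions for illustration, not historical quotes:

```python
# Measured in ETH rather than USD, the Parity theft is far smaller than
# the DAO's, despite the superficially similar dollar figures.
dao_usd, dao_eth_price = 60e6, 15.0          # assumed ~$15/ETH in mid-2016
parity_usd, parity_eth_price = 30e6, 200.0   # assumed ~$200/ETH in mid-2017

dao_eth = dao_usd / dao_eth_price
parity_eth = parity_usd / parity_eth_price
print(f"DAO: {dao_eth:,.0f} ETH vs Parity: {parity_eth:,.0f} ETH")
```

Under these assumptions the DAO drain was on the order of millions of ETH while the Parity theft was roughly 150K ETH, consistent with the 10x price appreciation noted above.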

Either way, the attacker succeeded by these criteria: no one is seriously contemplating a hard-fork to reverse this particular theft. (In fairness, the DAO had a built-in safeguard that stopped funds from moving for several weeks after the theft. That feature greatly simplified the intervention: because the funds could not move around or taint other addresses, the blockchain edits required to undo the theft were limited to a few transactions. By comparison, the Parity wallet thief is free to move funds around. Correctly undoing the breach may involve reversing not only the original theft transactions but also every other transaction dependent on them, in a ripple effect.)

So it turns out that crime on the Ethereum blockchain does pay after all—but only when the perpetrator has the good sense to stop short of crossing the line that would trigger a hard-fork, even if that means deliberately following a seemingly “suboptimal” attack strategy. The optimal Ethereum theft is one that walks away with an amount just below the threshold of value/significance/popularity that would invite intervention by means of a hard-fork.




The optimal Ethereum heist: attacking the Parity wallet (part II)

[continued from part I]

A suboptimal heist?

Looking at how the attacker exploited the vulnerability shows that it was far from the most elegant operation. First, notice the delays between the two calls made to each vulnerable contract. Here is another look at the pairs of calls used to exploit three different vulnerable wallets:


Pairs of calls used to exploit the Parity multi-signature wallet vulnerability against three different contracts

In the first attack, there is a gap of 10 Ethereum blocks between the calls, corresponding to ~3 minutes at current block frequency. In the second attack, the calls are separated by 7 blocks. For the final victim, they are only 2 blocks apart, suggesting that the perpetrator improved their execution over time. Still, these wide gaps suggest that at least the first two attacks were being crafted manually, likely using an interactive REPL such as the geth JavaScript console, which allows invoking arbitrary methods on any contract. In principle there is no reason to wait between calls: as long as a miner processes them in the correct order, they could even take place in the same block. Even if the perpetrator was proceeding cautiously, waiting to confirm that the first step had achieved the intended effect, there is no reason to wait more than one block before pulling the trigger on the second one. (Note that unlike Bitcoin, the Ethereum network does not have a perennial backlog of transactions or arbitrary fee hikes—unless an ICO is going on. Block timestamps are a close proxy for actual attacker moves.) In other words, the attacker did not bother automating the exploit by writing a script to make the calls in quick succession. Each victim appears to have been exploited with a hand-crafted sequence of calls.

That brings up the second mystery: choice of targets. The first successful attack is followed by several hours of inactivity, two more attacks in quick succession and finally, complete silence. There is more activity from the attacker address in the days following the heist, but those transactions all distribute the stolen ether into other addresses, likely in preparation for cashing out. There are no other instances of the pair of calls to a vulnerable contract. While it is conceivable this actor switched accounts to better cover their tracks, there are no other reported thefts besides the original attack and the white-hat response. Assuming the white-hat group is indeed a distinct actor and not simply another front for the attacker, the surprising conclusion is that the perpetrator operated on a leisurely schedule and stopped after targeting just three vulnerable contracts.

The next mystery is the 12-hour pause between the first and second attacks. That initial salvo netted a respectable 26793 ETH in spoils—over $5M. But the perpetrator was clearly not content to rest on those laurels, as evidenced by the two encores that followed. Why wait that long? Zero-day exploits have “half-lives,” especially when their effects are noisy; and it is difficult to imagine a noisier demonstration of an exploit than burglarizing large sums on a public blockchain. Once the first vulnerable contract is hit, the clock starts ticking for everyone else to rediscover that vulnerability independently. In general such rediscovery poses two problems for the offense. First, other crooks will get into the game, racing the original finder to exploit remaining vulnerable targets. Second, defenders will be alerted to the existence of a vulnerability and can craft a patch that closes the window of exploitation for everyone.

Granted, that second response only applies if it is possible to upgrade an application in place without losing its state. While this is true for applications running on desktop and mobile platforms (you can patch your web-browser or operating system with nothing more than a couple of minutes lost rebooting) it is not true of all platforms. For instance, smart-card architectures compliant with the GlobalPlatform standard have no concept of “upgrading” an existing application—applets can only be deleted and reinstalled, with all associated data lost in the process. (And that is a security feature: it protects applications from malicious updates, even by the software publisher.) It turns out Ethereum contracts are far more similar to smart-card applets than to vanilla Windows apps: Ethereum has no concept of replacing the code behind a contract with new code. In fact such a capability would run counter to the ideal of immutable contracts: if the code governing a contract can be modified after the fact, it cannot be counted on to behave according to predefined rules in perpetuity. But all is not lost for the defenders. In an echo of the proverb “if you can’t beat them, join them,” white-hats can co-opt the exploit, using it preemptively to rescue funds from vulnerable contracts. This is exactly what happened during the DAO attack and again with the Parity wallet in this case.

Bottom line: once the existence of a vulnerability becomes public knowledge, the odds of successful exploitation for any one actor decline over time. (Paradoxically, the chance that any given vulnerable target will be exploited goes up; there are many more threat actors armed with the required capabilities. But they are all competing against each other for the same pool of victims.) A rational attacker aware of these dynamics would pick off as many targets as possible in the shortest time to preempt the competition. Instead there is a 12-hour self-imposed ceasefire after the first successful exploit and only two more thefts after that. Why?

It is certainly not the difficulty of locating vulnerable contracts that held up the attacker. All vulnerable wallets share the exact same code; they are only differentiated by the arguments to the constructor specified at creation time. That means a standard blockchain explorer can help locate other contracts sharing the same code. For example, Etherscan has a web interface to search for contracts similar to one specified by address:


Locating contracts with identical or very similar code to a given contract

Given a security flaw in a widely used smart-contract, locating all instances of that vulnerable contract is like shooting fish in a barrel. So why stop at three? The white-hat group has allegedly rescued 375K+ ether, more than twice the ~150K ETH the perpetrator managed to purloin. That is a lot of money left on the table.

Even more puzzling, the wallets attacked did not even have the highest balances. Given the increasing probability of rediscovery over time, the optimal strategy is going after the wallets with the highest balances first. But among contracts rescued by the white-hat group are three with balances of 120K, 56K and 47K ETH respectively, each one alone higher than the take from the first victim. Why pass on these more lucrative targets when the same level of effort redirected elsewhere—doing nothing beyond copy/pasting a different contract address into the exploit sequence—could have yielded higher returns?

By all indications, the execution of this attack looks sloppy:

  • No automation for delivering the exploit
  • Long pause between the first and second wave, giving defenders precious time to organize a counter-strike
  • Worst of all, suboptimal choice of victims that fails to maximize profit given complete freedom to choose targets

Yet viewed in another light, that last one may have been a deliberate decision to optimize for a different criterion: the chances of actually getting away with the theft and keeping the ill-gotten gains.



The optimal Ethereum heist: attacking the Parity wallet (part I)

On July 19 news started circulating on social media that a critical vulnerability existed in the Parity multi-signature wallet. This was quickly followed by an even more startling update: the vulnerability was being actively exploited in the wild to steal funds from vulnerable wallets. By the time the dust settled, more than $30M (at prevailing ETH/USD exchange rates) had been stolen. Meanwhile a “white-hat” group emerged, racing to use the exploit themselves to rescue another $85M held in other vulnerable contracts from another strike by the perpetrator, sowing further confusion around who exactly the good and bad guys are in this episode. That would have been unprecedented in any other setting—akin to a vigilante neighborhood-watch group preemptively looting a bank to secure its funds, knowing that a group of professional bank robbers was going around hitting other branches. (Except it already happened once before in Ethereum: the summer of 2016 witnessed another self-appointed white-hat coalition deliberately exploiting a known smart-contract vulnerability to rescue funds remaining in the DAO during another high-profile Ethereum contract debacle.) This post examines the nature of the vulnerability, the modus operandi used by the attacker and some unresolved questions around why they did not maximize the monetary gains from this exploit.

Reverse engineering the vulnerability

While the Parity team put out an emergency PSA, they did not disclose the nature of the vulnerability, perhaps out of the mistaken notion that doing so would facilitate additional attacks. (Ironically they also committed the fix to a public repository on Github, unwittingly 0-daying themselves.) But it turns out the vulnerability was so blatant that it would have been easy to identify from the pattern of exploits alone. Let’s start with one of the first pieces of information that emerged during this episode: the thief hit an Ethereum contract at address 0xbec591de75b8699a3ba52f073428822d0bfc0d7e and siphoned funds to his/her own address at 0xb3764761e297d6f121e79c32a65829cd1ddb4d32. That turns out to be all the information required to work backwards and reverse engineer the bug.

Looking at the transactions originating from the attacker address around this time, we see two function calls into the victim contract, closely spaced in time. In fact this same pattern is repeated for the two other vulnerable contracts that were exploited:
The “value” is displayed as 0 ether in this blockchain explorer, which is misleading—it means no funds were transferred from the attacker to the victim as part of making this function call into the smart-contract. (That makes sense; when you are trying to rob an establishment, you usually do not send them more funds.) But looking at the victim’s view, we find that the second, later transaction in fact resulted in the vulnerable contract transferring 82,000 ETH to the attacker—more than $15M at prevailing exchange rates, not bad for two function calls:

Screen Shot 2017-07-19 at 13.38.39.png

Across all three victims, the coup de grâce is delivered in this second call, resulting in transfer of funds from the targeted contract to the attacker. We will keep this in mind while diving into the call details.

Viewed in a blockchain explorer, the second transaction is a call to the execute() method of the contract. If the function arguments look familiar, that’s because the first one is the Ethereum network address of the attacker. The second one is the amount in wei to transfer to that destination address:

Screen Shot 2017-07-19 at 16.49.59.png
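The argument layout visible in the explorer follows standard Ethereum ABI encoding: a 4-byte function selector followed by 32-byte words. A minimal decoder sketch; the selector below is a made-up placeholder, while the address and amount mirror the figures discussed above:

```python
def decode_call(calldata_hex: str):
    """Split ABI-encoded calldata into a 4-byte selector and 32-byte words."""
    data = bytes.fromhex(calldata_hex.removeprefix("0x"))
    selector, body = data[:4], data[4:]
    words = [body[i:i + 32] for i in range(0, len(body), 32)]
    return selector.hex(), words

attacker = "b3764761e297d6f121e79c32a65829cd1ddb4d32"  # destination address
value_wei = 82_000 * 10**18                            # 82,000 ETH in wei

calldata = ("aabbccdd"                    # hypothetical 4-byte function selector
            + attacker.rjust(64, "0")     # arg 1: address, left-padded to 32 bytes
            + format(value_wei, "064x"))  # arg 2: uint256 amount

selector, (addr_word, value_word) = decode_call(calldata)
```

Reading the two words back recovers exactly the attacker address and the 82,000 ETH amount, which is how the parameters shown by the explorer can be checked by hand.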

Why this call succeeded in transferring funds is mysterious. The source code clearly designates the function with the modifier “onlyowner,” suggesting that the developer intended for the function to be only callable by one of the contract owners. Surely the attacker is not already an owner of the contract? (Otherwise this is just an ordinary legal dispute involving insider malfeasance, not a critical vulnerability in the contract logic.) Of course it is one thing to intend for that outcome, another to achieve it; machines can only execute code, not good intentions. But looking at the implementation of the modifier and following the call chains, everything appears to be in order. By all indications, if the caller is not one of the contract owners, the execute() function will not actually execute.

Solving this puzzle requires going back to the function call preceding the theft. We can theorize that this first call is a case of “prepping the battlefield,” placing the contract into a vulnerable state such that the second call will succeed. Looking at the details in a blockchain explorer provides the missing clue:

Screen Shot 2017-07-19 at 13.50.07.png

That initWallet() function is supposed to be used for initializing the contract when it is originally created. It is called by the constructor and records the set of owners, the quorum required to authorize funds transfer and the daily withdrawal limit. Looking at the parameters, there is that familiar attacker address again, 0xb3764761e297d6f121e79c32a65829cd1ddb4d32, at parameter #5. There is also the number 1 passed as an argument. So we can posit that the attacker called this function to overwrite the contract state, listing that address as the sole owner and indicating that approval from just one owner is enough to authorize release of funds. That explains why the second call succeeded: by the time the “onlyowner” modifier was checked to authorize the funds transfer, ownership information had already been corrupted.

So there is the vulnerability: a sensitive function that should have been callable only internally—and only during initialization of the contract—was left exposed to external calls from anyone on the blockchain. In fairness the reason for that unexpected reachability is subtle: the Parity wallet is structured as a thin wrapper that delegates the bulk of the implementation to a much larger, shared wallet library. The vulnerable function is part of that shared library and can not be invoked directly. But the fallback method in the outer wrapper forwards arbitrary calls to the library, effectively exposing even seemingly “internal” methods. Calling that particular function allows overwriting the ownership information, effectively redefining who controls the funds managed by the contract. Sure enough, a commit to the public repo on Github shortly after the announcement confirms this theory: the fix adds a new modifier to prevent the function from being called a second time after the contract is already initialized.
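The whole delegation pattern can be modeled in a few lines. This Python sketch is only an analogy for Solidity’s fallback/delegatecall mechanics described above, not actual contract code; the class and method names are hypothetical stand-ins:

```python
class WalletLibrary:
    """Shared library holding the real logic, including initialization."""
    def initWallet(self, state, owners, required):
        state["owners"] = owners        # no guard: silently overwrites prior owners
        state["required"] = required

    def execute(self, state, caller, to, value):
        if caller not in state["owners"]:  # the "onlyowner" check
            raise PermissionError("not an owner")
        return f"sent {value} to {to}"

class Wallet:
    """Thin wrapper: any call it does not define is forwarded to the library."""
    def __init__(self, owners):
        self.state = {}
        self.lib = WalletLibrary()
        self.lib.initWallet(self.state, owners, required=len(owners))

    def __getattr__(self, name):
        # Fallback: forwards *every* unknown call to the library -- even
        # initWallet, which was only meant to run once at construction.
        method = getattr(self.lib, name)
        return lambda *args, **kw: method(self.state, *args, **kw)

wallet = Wallet(owners=["alice", "bob"])
wallet.initWallet(owners=["mallory"], required=1)             # battlefield prep
loot = wallet.execute("mallory", to="mallory", value=82_000)  # now passes onlyowner
```

The fix maps directly onto this model: initWallet needs a guard that refuses to run when `state["owners"]` is already populated.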

Making a solid case for bad language design

So that is the immediate cause. But as with most vulnerabilities, it is more instructive to search for a systemic root cause—intrinsic properties of the system that made this class of error likely. Absent root causes, every vulnerability looks like bad luck: someone, somewhere made an unfortunate error or committed an oversight that they promise will never happen again. In this case a closer look at Solidity, the programming language used for authoring smart-contracts, suggests that some design choices in the language increased the likelihood of these errors. Specifically, Solidity allows all methods in a contract to be invoked publicly by default. This fails the criterion of being secure by default: public methods are exposed to hostile inputs from anyone with access to the blockchain, in the same way that a service listening on a network port increases attack surface. Parity developers were also quick to blame Solidity in their own post-mortem:

Fourth, some blame for this bug lies with the Solidity language and, in its current incarnation, the difficulty with which one can understand the execution permissions over functions. […] We believe one or both of two ideas would help. One would be to change the default access mode of functions to “private”, rather than the eminently insecure “public”.

Ironically the bug was introduced during a refactoring, which moved the initialization code out of the constructor and into its own function. Refactoring is “a disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior.” It turns out that in the case of Solidity, repackaging a few lines of code into a separate function does alter its semantics: those lines of code can become externally reachable outside of the original call path. (Note that it is not possible to invoke the constructor twice; there was an implicit guarantee of one-time execution present in the original version that is missing from the rearranged code.)

Language design aside, the contract itself contains some questionable logic. First notice the complexity around managing ownership: this contract allows dynamically modifying the ownership of an existing contract, voting out members or introducing new ones. How often is that logic exercised in the wild? Is it worth introducing this complexity? If it is an edge case, there are much simpler solutions: since the quorum for membership changes is identical to that for authorizing funds transfer, why not simply launch a contract with the new membership and transfer all the funds there? Adding insult to injury, Solidity has no notion of “constness” for object fields, the way the “const” modifier can be applied to C++ members or “final” to Java fields. Confusingly, there is the concept of constant state variables, but such fields must be initialized at compile time via immediate values—in contrast to the more powerful C++ version, where they can be initialized using inputs supplied at construction time. That makes it difficult to express the notion that specific contract properties, such as the ownership list, are immutable for the lifetime of the contract once constructed.

Given this background on why these smart-contracts were vulnerable to theft, the next post will look at some curious details around how attackers capitalized on the flaw for profit.



Bitcoin for the unbanked: receding possibilities

In October 2016 the former COO of the Chinese Bitcoin exchange BTC-C generated plenty of controversy with a tweet starting out with this casual statement:

“Bitcoin isn’t for people that live on less than $2 a day.”

Casually dropped in the midst of one of those interminable arguments about Bitcoin scaling playing out on social media, this statement was appalling for its implied premise. It was a complete 180-degree turn from the rhetoric surrounding the rise of cryptocurrency, with Bitcoin always the torch-bearer. Virtual currencies are liberating forces, we were told, breaking up the hold of entrenched institutions on a financial system that rigged markets and debased currencies. Arrayed against the forces of progress was a rotating cast of villains, different for each portrayal but largely interchangeable in terms of their unfair position: the Federal Reserve, too-big-to-fail banks requiring bailouts, government agencies asleep at the wheel clinging to outdated regulations—or worse, enabling those bailouts in a case of regulatory capture—and the Visa/MasterCard duopoly controlling online payments. Bitcoin was going to be the new money system for the everyman, unequivocally on the side of David against the faceless institutional Goliath. No need for banks, credit-card networks, payment processors or money-transmitter licenses. Anyone with an inexpensive smartphone running a Bitcoin wallet application could send and receive money directly with anyone else anywhere in the world. No army of middle-men circling for their cut of the transaction, no gatekeepers to decide who is worthy enough to partake in this network. Also no need to worry about the debasement of hard-earned currency by an out-of-control printing press operated at the behest of unaccountable bureaucrats.

Rhetorical excesses aside, there were plenty of solid use cases for Bitcoin in the early days suggesting that it could help the so-called “unbanked”—more than a billion people living in developing nations without access to financial services, subsisting strictly on a cash economy. No bank accounts, no way to write checks or swipe credit-cards for purchases, no credit history, no way to finance large purchases. In Africa nascent systems such as M-Pesa had already demonstrated that one could leap-frog from a standard cash economy to using smartphone wallets directly. In a strange twist, these countries skipped entire generations of earlier payment systems—check clearing, ACH, wires, PIN debit and credit networks—jumping straight to mobile wallets, while US tech companies struggle and flounder in their attempts to bootstrap mobile payments. Yet M-Pesa is still a centralized technology, operated by the wireless carrier Safaricom together with Vodafone. It is crucially dependent on the coverage of retail infrastructure, specifically kiosks where consumers can go to exchange cash for M-Pesa credits.

Bitcoin offers some compelling advantages over this model. In a decentralized system, no single actor has enough leverage to squeeze the ecosystem for additional profits, whereas Safaricom can raise M-Pesa transaction fees arbitrarily. The closest analog to a pricing cartel in Bitcoin are the miners, but they face relentless pressure to keep fees competitive: if one miner decides to charge too much for mining Bitcoin transactions into the ledger, a more efficient one will come along and gladly collect the fees from those transactions. While consumers still need currency exchanges to convert between BTC and local fiat money, that function can be served by private vendors competing in a transparent market instead of being under the complete control of Safaricom/Vodafone. For those concerned about monetary policy, BTC offers an attractive, intrinsically deflationary model. The government of Kenya can churn out shillings and Safaricom can flood the market with M-Pesa credits just as easily as Hasbro can print more Monopoly money. But no one can fabricate bitcoins out of thin air and debase the currency with runaway inflation.

Given all of these factors, it is not too hard to see why Bitcoin circa 2012 looked promising for developing nations. It is another case of the surprising last-mover advantage: instead of playing catch-up with developed countries by painstakingly building out “legacy” payment rails—check clearing, card networks, eventually NFC payments—emerging markets can leapfrog straight to the latest paradigm of virtual currency.

It did not work out that way. A quick look at Bitcoin exchange statistics reveals that the majority of trading in Bitcoin is concentrated in a few markets with existing, highly developed financial systems, not the emerging markets home to millions of unbanked consumers. (China is arguably the rare exception, because Bitcoin was one of the few ways around capital controls in place to prevent currency outflows, but that volume evaporated almost overnight after the central bank cracked down on exchanges.) It is not too difficult to see why the rosy picture painted above did not play out. For starters, BTC itself has been extremely volatile: from an all-time high near $1300 USD before the implosion of Mt Gox, to dipping below $200 within two years, doubling again within a year and then embarking on a stratospheric climb towards $3000. For all the talk of inflationary policies, fluctuations in the value of Bitcoin would make even the most irresponsible, interventionist central banker look restrained by comparison. That volatility makes Bitcoin less than ideal for everyday commerce, much less for entering into long-term contracts denominated in BTC.

One could argue that early volatility is just growing pains for a new-fangled currency as the market struggles to discover “correct” pricing. Alternatively it can be blamed on hordes of speculators with no actual use for BTC chasing this coveted asset simply because other people are also trying to buy it—a classic case of an asset bubble. Either way, optimists expect such speculative activity to eventually diminish in scale compared to the overall volume of cryptocurrency trading, resulting in a steady state with relatively stable exchange rates. But there is one more assumption built into this lofty vision of Bitcoin as a democratizing force that helps millions of consumers in developing countries fully participate in markets: low transaction costs. That premise looked solid in 2012 and anchored many other presumed use-cases for Bitcoin, such as paying for that cup of coffee. All of these scenarios have been called into question by recent developments. In contrast with the problem of exchange-rate volatility, which may well improve as the market matures, the vision of low-cost, efficient payments is becoming less realistic.

The Bitcoin fee model is unusual, to say the least. Most payment systems charge fees that are at least in part proportional to the value transferred. This follows a natural assumption: someone moving large amounts of money derives greater utility from that transaction than a person moving a modest amount. It follows that they would be willing to pay more for the privilege of executing that transaction. (This is an oversimplification; it is also common to have fixed costs and discounts that kick in at high amounts.) Bitcoin throws that logic out the window, charging instead based on the approximate “complexity” of a transaction. That complexity is indirectly measured by the amount of space required to represent the transaction on the blockchain. A transaction with a single input, a straightforward redeem script (“sign with this public-key”) and a single destination output takes relatively few bytes to encode. One that combines multiple inputs, complex redemption conditions (“signed by 3 out of 5 keys”) and distributes those funds to multiple destinations takes up more space. Yet complexity is orthogonal to value transferred. This is what makes it possible to move $80 million USD with a few cents in transaction fees—an astonishing level of efficiency unequaled by any other payment system available to consumers—or to move $5 while paying half that amount in fees, which is extremely wasteful.
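The disconnect between fee and value can be made concrete. A minimal sketch; the fee rate and transaction sizes below are illustrative ballpark figures (real rates fluctuate with mempool congestion). Note that the fee function takes no amount-transferred parameter at all:

```python
SATOSHI_PER_BTC = 100_000_000

def tx_fee_btc(size_bytes: int, sat_per_byte: int) -> float:
    """Fee prices blockchain space alone; there is no 'value moved' input."""
    return size_bytes * sat_per_byte / SATOSHI_PER_BTC

# Same illustrative fee rate for both transactions:
simple_tx  = tx_fee_btc(size_bytes=226, sat_per_byte=50)  # 1 input, 2 outputs
complex_tx = tx_fee_btc(size_bytes=700, sat_per_byte=50)  # many inputs, multisig

# Whether simple_tx moves $80M or $5, its fee is identical (~0.000113 BTC);
# only the bulkier, more "complex" transaction pays more.
```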

It is as if banks charged fees for cashing checks based on how much ink there is on the check instead of its notional value. Yet this model makes sense given that space in the blockchain is itself a scarce resource. Each new block mined to extend the ledger can accommodate exactly 1MB worth of transactions. That scarcity creates natural competition among transactions trying to get mined into the next block, by providing sufficient incentives to miners.

That brings us back to the question of developing markets. Back in the early days, when ambitious visions of everyone paying for their next cup of coffee in bitcoin were being bandied about, those fees were negligible. Bitcoin was poised to undercut credit-card networks for retail purchases, massively undercut Western Union for international remittances and even outdo PayPal for efficient peer-to-peer payments. It even looked like the first realistic option for micro-payments, where very small amounts of money change hands very frequently: visitors to a website donating a few cents for each article read. Fast forward to 2017: blocks are full, the memory pool—the queue of outstanding transactions waiting to be confirmed in the ledger—has ballooned and transaction fees are no longer negligible. Bitcoin businesses that naturally attract frequent fund movements, such as exchanges, have resorted to policy changes that pass transaction fees directly to customers. The only fighting chance for micro-payments today rests on the deployment of additional overlays such as the Lightning Network, implemented on top of the standard Bitcoin protocol.

Given the status quo, it would be difficult to disagree with the statement that Bitcoin as it exists today has very little to offer citizens of developing nations looking for an alternative payment solution for everyday purchases. Indeed the network as deployed today is not capable of clearing a large number of small transactions. (They may still find some value in its deflationary nature, as with Venezuelan citizens hoarding BTC in the midst of their economic crisis.)

It did not have to be that way. The scarcity of space is an artificial consequence of the arbitrary 1MB limit, a relic of a tactical fix implemented in response to an unrelated problem without much consideration given to future consequences. One could imagine counterfactuals where the blocksize limit is allowed to float, perhaps increasing automatically over time or adjusting in response to demand, in the same way mining difficulty is periodically recalibrated to maintain constant throughput. There are clear costs to increasing blocksize: the additional space required for storing a larger ledger places demands on all nodes participating in the network. Critics contend that by raising costs, such unchecked growth may force some operators to give up, resulting in a less decentralized network. On the other hand, the ledger is already expanding every time a new block is mined, and the “cost” of running a full node, measured in raw disk space, does go up. So the relevant question concerns the rate of increase. Is the increased burden outpacing Moore’s law to the point that running a full node becomes more expensive in real terms? Is the growth rate predictable enough for planning future capacity? (That is a strike against quantum leaps such as 1MB → 8MB, because they leave little time for adjustment.)
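The storage math behind that question is simple. A sketch of worst-case ledger growth at different block sizes, under the idealized assumptions that every block is full and blocks arrive exactly on schedule:

```python
def ledger_growth_gb_per_year(block_mb: float, interval_min: float = 10) -> float:
    """Worst-case ledger growth: every block full, arriving on schedule."""
    blocks_per_year = 365 * 24 * 60 / interval_min
    return block_mb * blocks_per_year / 1024  # MB -> GB

growth_1mb = ledger_growth_gb_per_year(1)  # ~51 GB/year at the current limit
growth_8mb = ledger_growth_gb_per_year(8)  # ~411 GB/year after an 8x jump
```

Roughly 51 GB/year is comfortably within commodity-disk territory; an abrupt jump to ~411 GB/year is the kind of step change that gives node operators little time to adjust.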

Arguments for and against raising the block limit are being advanced daily, as are alternatives that improve throughput while leaving that sacred parameter alone. The problem is not a lack of ideas on scaling; there are too many possibilities coupled with too little consensus on which ones to pursue. The community has been unable to agree on a single solution, precipitating the current crisis, with miners and users playing a game of chicken that could splinter the network on August 1st. High fees and low transaction rates have sabotaged many scenarios for using bitcoin that seemed perfectly within reach in the past. Dashed hopes for getting the millions of unbanked citizens of developing nations on board are just one part of that collateral damage.



Wannacry: ransomware as diversionary tactic?

Digging into motives

It is conventional wisdom in information security that precise attribution for successful digital attacks is very difficult. Concealing the source of malware is much easier than, say, the launch point of an ICBM, and sophisticated attackers can engage in false-flag operations to frame innocent bystanders for their actions. So it is unusual that persuasive evidence has already emerged linking the ransomware Wannacry to the DPRK. (Democratic People’s Republic of Korea, or North Korea for short—whenever the words “democratic” and “people” appear in the name of a country, you can be certain it is neither.) Even the NSA and British NCSC confirmed as much in off-the-record statements. That degree of confidence in the conclusion is itself surprising: nation-state sponsored groups are expected to be among the most sophisticated of all threat actors, and therefore more likely to exercise good operational security in hiding their tracks and avoiding mistakes that point back at the responsible party.

But assuming the attribution in this case is correct, it raises two questions:

  • Are nation states now carrying out financially motivated attacks?
  • Why was Wannacry so spectacularly unsuccessful at extracting payment from its targets?

Intelligence gathering vs get-rich-quick schemes

Managing information security risks calls for an understanding of the threat actors one is likely to run up against. The motivations and capabilities of the average script-kiddie are vastly different from those of an intelligence agency, as are the measures necessary to defend against them. In particular, one needs a credible motive for the attack.

A good chunk of online crime is financially motivated. Large-scale breaches against payment systems, such as TJ Maxx (2006), Target (2013) or Home Depot (2014), were driven by good old-fashioned greed. The perpetrators hoped to monetize stolen payment instruments, either by using them directly to conduct fraudulent purchases, or by offloading the card details to other enterprising criminals better placed to do that. (It is not trivial to do this effectively: the crooks must find a way to purchase goods that can be resold while minimizing the risk of getting caught or triggering the fraud-detection systems operated by card networks. While data on this point is scarce, only a small fraction of the spending limit available on a card can be exploited by the attacker before it is suspended.) That end goal in turn influences the choice of target based on a calculus of expected gain: the probability of successfully breaching the target multiplied by the amount of funds at stake. This allows great flexibility in simultaneously going after multiple targets in the hopes that one will pan out; there is no reason to insist on breaking into company X if it turns out company Y is an easier target with just as much value at risk.

But financial gain is far from the only driver. Some threat actors are motivated by ideology. Unlike crooks chasing money, “hacktivists” seek to achieve political objectives or settle scores against companies perceived as causing harm. Phineas Phisher’s exquisite 0wnage and subsequent doxing of the ethically bankrupt Hacking Team is a textbook example. These groups have a highly principled approach to picking their targets, paying less attention to the difficulty and devoting significant resources to a specific objective. Yet others are characterized by a complete lack of ideology; they are in it for the “lulz.” Targets are chosen arbitrarily and opportunistically, with no rhyme or reason other than being within range of attacker capabilities—the online equivalent of being at the wrong place at the wrong time.

On the other extreme, nation-state actors are the apex predators of the ecosystem. They combine massive arsenals of offensive tool-kits with a disciplined approach to selecting targets based on intelligence value. Here “nation-state” encompasses both offensive actions carried out directly by intelligence agencies and private groups funded or supported by such organizations to fight proxy battles. No target is too small or too insignificant if there is valuable information to loot: China is equally at home going after boutique law firms defending political dissidents as going after the whole enchilada at Google.

Until now it was assumed that such groups were not after direct financial gain. There certainly is a time-honored tradition of industrial espionage carried out against foreign countries in pursuit of indirect financial gain for the home team. Yet one does not expect the NSA, GCHQ or even their less ethically-constrained brethren such as FSB to operate credit-card skimming operations on the side.

From industrial espionage to the Bangladeshi job

North Korea is now challenging that premise. The original link between Wannacry and the DPRK was the similarity of its code to previously known malware used in the attack on the central bank of Bangladesh. That heist netted the perpetrators over $80 million USD even after the attempted recovery of stolen funds—and it would have been a lot more profitable, to the tune of $900M, were it not for careless mistakes by the attackers that blew the cover on the operation. These are significant numbers, especially for an embattled North Korea straining under the weight of economic sanctions.

This action had an undeniable profit motive; in fact such brazen theft of funds compromises any intelligence-gathering mission that may have been going on in parallel. The Lazarus Group had achieved persistence on systems belonging to Bangladesh Bank and lurked for months while building custom techniques to evade monitoring. Such an entrenched presence would have supplied the DPRK with a unique vantage point to spy on the movement of funds in Bangladesh for years to come—if they cared for that capability. By contrast a smash-and-grab attack resulting in significant losses can not stay under the radar, and predictably leads to defenders diligently working to flush any attacker presence from the system.

Wannacry as the amateur-hour of ransomware

That brings us to the strange case of Wannacry. On paper, ransomware is the epitome of financially motivated malware with zero intelligence-gathering value, reflecting an interesting shift in tactics. The first generation of mass malware turned Windows PCs into zombies sending out spam while completely ignoring any data residing on those machines, effectively monetizing only their network bandwidth to support ancillary business models such as mass marketing or distributed denial-of-service. The second generation focused on information theft as traditionally understood, looking for special categories of data that can be monetized directly—passwords for online banking sites or credit-card details—and shipping these off to a server controlled by the attackers. Ransomware by contrast does not attempt to steal any information; it holds information hostage from its legitimate owner via encryption.

That modus operandi means ransomware has the unusual feature of having to negotiate with its victims for successful monetization. A spam bot operates quietly in the background; it does not show users a dialog offering to uninstall itself in exchange for payment. Likewise banking malware silently collects credentials for logging into financial institutions and ships these off to its operators, who already have plans to monetize that information by selling the credentials on dark markets. That path is preordained. Consumers do not get a right of first refusal to opt out of that transaction and keep their PayPal password secret by offering more money than prevailing underground rates. Ransomware is unique in expecting to get compensated directly by its own victims.

That in turn brings some semblance of market dynamics into the equation. While installing ransomware is not a voluntary act (unless you are a security researcher), the user still has a decision to make about paying the ransom demanded. For example, if they had been regularly backing up all of their files, they could always choose to wipe their machine clean, reinstall the operating system to get back to a clean slate and recover using those backups. Even if the user is faced with partial loss of data, they may still deem the ransom price too high to warrant rescuing the lost information. This is where the reliability of the ransomware operation enters the picture, because malware is effectively a market for lemons. There is no honor among thieves. Even if the price is “reasonable,” there is no guarantee that successful decryption will follow delivery of the payment. (At least in the current incarnation of ransomware observed in the wild. In principle, smart-contracts enable honest ransomware, with delivery of payment contingent on the disclosure of decryption keys.) A user who does not get their files decrypted despite paying up is an unhappy customer. In this day and age of Yelp reviews, word gets around: other users facing the same decision may opt not to pay.

This is where Wannacry fails spectacularly: it is clear from reverse-engineering the binary that this operation could not possibly have supported any type of decryption based on payment. To the extent users have been able to recover their data, it has been due to fortunate design flaws in Wannacry, or at least a failure by the authors to understand quirks of the Windows crypto API that keep decryption keys in memory longer than expected. In fact it is clear the Lazarus Group did not plan on providing a decryption service. Users were asked to send payment to one of three Bitcoin addresses randomly selected from a list hard-coded into the binary. Given that infected systems numbered in the hundreds of thousands, it is not possible to identify which ones have paid—a prerequisite for honoring the promise that paying users receive a decryption key. The only plausible scenario would have been a global ransom: offering to release a master key that would unlock all machines once a specified amount is sent in total from all affected users. But such a collectivized demand is far less likely to find takers than individual offers. There is still no guarantee of recovering your files, and now your success depends on other people cooperating. If the threshold is not reached, all contributions are wasted. Meanwhile everyone has an incentive to free-ride, hoping that other people will chip in so they can collect the benefit when the decryption key is released.
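The attribution problem is easy to demonstrate with a toy model. With three shared addresses, two entirely different sets of paying victims produce identical on-chain states, so the operators cannot tell whose files to decrypt. Victim names, address labels and the ransom amount below are all made up for illustration:

```python
from collections import Counter

RANSOM_USD = 300  # flat demand per infected machine; figure is illustrative

def chain_view(payments):
    """All the public ledger reveals: total received per ransom address."""
    totals = Counter(addr for _victim, addr in payments)
    return {addr: count * RANSOM_USD for addr, count in totals.items()}

addresses = ["addr1", "addr2", "addr3"]  # stand-ins for the hard-coded addresses

# Two different sets of paying victims...
world_a = [("alice", addresses[0]), ("bob", addresses[1])]
world_b = [("carol", addresses[0]), ("dave", addresses[1])]

# ...are indistinguishable from the ledger alone: payments cannot be tied back
# to individual machines, so selective decryption is impossible.
indistinguishable = chain_view(world_a) == chain_view(world_b)
```

Per-victim addresses (one fresh address embedded in each infection) would make the mapping trivial, which is exactly what competently run ransomware operations do.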

To wit, those three Bitcoin addresses have collected a modest sum of 55 BTC at the time of writing, worth approximately $125K at current exchange rates. That figure is dwarfed by the take from the Bangladesh heist. It raises a significant question: if Wannacry was developed by a highly skilled threat actor with nation-state backing, and for all that talent proved an abject failure at monetization, were there other motives behind it?
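As a sanity check on those figures, the implied exchange rate works out to roughly $2,270 per BTC. That rate is back-calculated from the numbers above, not a quoted market price:

```python
btc_collected = 55      # total across the three hard-coded addresses
usd_per_btc = 2_270     # implied rate, back-calculated; illustrative only
total_usd = btc_collected * usd_per_btc
print(f"${total_usd:,}")  # → $124,850, i.e. roughly $125K
```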

Unfollowing the money

We cannot rule out the theory that Wannacry was a precursor of the finished product North Korea wanted to unleash, an incomplete beta version that accidentally escaped the lab setting and propagated. According to this view, the final version would have handed out an individual Bitcoin address to every victim and featured some type of cloud service to hand out decryption keys once payment was made. (Although it is difficult to imagine how that would work, given the enormous incentive for ISPs and law-enforcement to shut down such a service.) Yet for unclear reasons, either by accident or perhaps to meet some arbitrary deadline, this half-baked version was unleashed; being self-propagating malware, it could no longer be recalled.

An alternative explanation is that the whole ransomware aspect is a diversion. The true purpose of Wannacry was inflicting economic harm by destroying data and rendering systems unusable. That there is no mechanism for recovering data after payment is not a “bug” in the operation; it is 100% by design. Unlike a true extortion scheme, these perpetrators had no plans to profit from providing any relief from the harms unleashed by their own creation. The objective was imposing costs on victims, not obtaining additional revenue for themselves. The negligible amount of Bitcoin collected is only a side-show: even if a few people did pay up initially, future victims would be discouraged after learning that the promised data recovery never arrived. If this theory is correct, the operational cover for Wannacry became a victim of its own success: instead of blending into the background as yet another ransomware scam, Wannacry was extensively studied and reverse-engineered, eventually unearthing the link to North Korea.

The main strike against this theory is the geographic distribution of Wannacry infections: Russia, India, several former Soviet republics, China and Iran are among the top 20 countries affected. While North Korea is greatly isolated and has few allies, these are not exactly the countries one would expect DPRK to prioritize targeting; Iran in particular has been implicated in supplying DPRK with technology for its nuclear program. While some of this spread may be driven by the prevalence of outdated or pirated versions of Windows not receiving security updates, it would have been trivial to build in safeguards that run after infection to selectively spare specific regions. For example, Wannacry could have checked timezone and language settings on the machine before proceeding to encrypt files. (Malware in the wild carrying such checks has surfaced at least as early as 2009.)
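Such a safeguard would have taken only a few lines of code. A minimal sketch of the idea in Python, where the spared-language set and timezone band are invented for illustration and nothing like this function exists in the actual Wannacry binary:

```python
# Hypothetical set of language codes an attacker might wish to spare.
SPARED_LANGUAGES = {"ru", "zh", "fa", "hi"}  # illustrative only

def should_proceed(lang, utc_offset_hours):
    """Return False if machine settings suggest a region to be spared.

    On a real machine the inputs would come from the OS, e.g. the
    configured locale (such as "ru_RU") and the clock's UTC offset.
    """
    if lang and lang.split("_")[0].lower() in SPARED_LANGUAGES:
        return False
    if 3 <= utc_offset_hours <= 9:   # rough band covering Russia and China
        return False
    return True

print(should_proceed("en_US", -5))  # True: no safeguard triggered
print(should_proceed("ru_RU", 3))   # False: language check trips
```

Both signals are trivially available to code already running on the machine, which is why their absence from Wannacry is notable.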
On the other hand, carving out such exceptions provides circumstantial evidence about the source of an attack. If malware has been tailored to avoid particular countries, the natural assumption is that its creators are affiliated with, or at least closely allied with, the nations spared from damage. By taking an equal-opportunity approach to harming friend and foe, Wannacry may have been trying to avoid giving such geopolitical clues.