Wannacry: ransomware as diversionary tactic?

Digging into motives

It is conventional wisdom in information security that precise attribution for successful digital attacks is very difficult. Concealing the source of malware is much easier than, say, the launch point of an ICBM, and sophisticated attackers can engage in false-flag operations to frame innocent bystanders for their actions. So it is unusual that persuasive evidence has already emerged linking the ransomware Wannacry to the DPRK. (Democratic People’s Republic of Korea, or North Korea for short— whenever the words “democratic” and “people” appear in the name of a country, you can be certain it is neither.) Even the NSA and British NCSC confirmed as much in off-the-record statements. That degree of confidence in the conclusion is itself surprising: nation-state sponsored threat actors are expected to be among the most sophisticated of all, and therefore more likely to exercise good operational security in hiding their tracks and avoiding mistakes that point back at the responsible party.

But assuming the attribution in this case is correct, it raises two questions:

  • Are nation states now carrying out financially motivated attacks?
  • Why was Wannacry so spectacularly unsuccessful at extracting payment from its targets?

Intelligence gathering vs get-rich-quick schemes

Managing information security risks calls for an understanding of the threat actors one is likely to run up against. The motivations and capabilities of the average script-kiddie are vastly different from those of an intelligence agency, as are the defensive measures appropriate for countering them. In particular, one needs a credible motive for attack.

A good chunk of online crime is financially motivated. Large-scale breaches against payment systems, such as TJ Maxx (2006), Target (2013) or Home Depot (2014), were driven by good old-fashioned greed. The perpetrators hoped to monetize stolen payment instruments, either by using them directly to conduct fraudulent purchases, or by offloading the card details to other enterprising criminals better placed to do that. (It is not trivial to do this effectively: the crooks must find a way to purchase goods that can be resold while minimizing the risk of getting caught or triggering the fraud-detection systems operated by card networks. While data on this point is scarce, only a small fraction of the spending limit available on a card can be exploited by the attacker before it is suspended.) That end goal in turn influences the choice of target based on a calculus of expected gain: probability of successfully breaching the target multiplied by the amount of funds at stake. This calculus favors flexibility, simultaneously going after multiple targets in the hopes that one will pan out; there is no reason to insist on breaking into company X if it turns out company Y will be an easier target with just as much value at risk.

But financial gain is far from the only driver. Some threat actors are motivated by ideology. Unlike crooks chasing money, “hacktivists” seek to achieve political objectives or settle scores against companies perceived as causing harm. Phineas Phisher’s exquisite 0wnage and subsequent doxing of the ethically bankrupt Hacking Team is a textbook example. These groups have a highly principled approach to picking their targets, paying less attention to the difficulty and devoting significant resources to a specific objective. Yet others are characterized by a complete lack of ideology; they are in it for the “lulz.” Targets are chosen arbitrarily and opportunistically, with no rhyme or reason other than being within range of attacker capabilities, the online equivalent of being at the wrong place at the wrong time.

On the other extreme, nation-state actors are the apex predators of the ecosystem. They combine massive arsenals of offensive tool-kits with a disciplined approach to selecting targets based on intelligence value. Here “nation-state” encompasses both offensive actions directly carried out by intelligence agencies and proxy battles waged by private groups funded or supported by such organizations. No target is too small or too insignificant if there is valuable information to loot: China is equally at home going after boutique law firms defending political dissidents as going after the whole enchilada at Google.

Until now it was assumed that such groups were not after direct financial gain. There certainly is a time-honored tradition of industrial espionage carried out against foreign countries in pursuit of indirect financial gain for the home team. Yet one does not expect the NSA, GCHQ or even their less ethically-constrained brethren such as the FSB to operate credit-card skimming operations on the side.

From industrial espionage to the Bangladeshi job

North Korea is now challenging that premise. The original link between Wannacry and the DPRK was the similarity of its code to previously known malware used in the attack on the central bank of Bangladesh. That heist netted the perpetrators over $80 million USD even after attempted recovery of stolen funds—and it would have been far more profitable, to the tune of $900M, were it not for careless mistakes made by the attackers that blew the cover on the operation. These are significant numbers, especially for an embattled North Korea straining under the weight of economic sanctions.

This action had an undeniable profit motive; in fact such brazen theft of funds compromises any intelligence-gathering mission that may have been going on in parallel. Lazarus Group had achieved persistence on systems belonging to Bangladesh Bank and lurked for months while building custom techniques to evade monitoring. Such entrenched presence would have supplied the DPRK with a unique vantage point to spy on the movement of funds in Bangladesh for years to come—if they cared for that capability. By contrast a smash-and-grab attack that results in significant loss can not stay under the radar, and predictably leads to defenders diligently working to flush out any attacker presence from the system.

Wannacry as the amateur-hour of ransomware

That brings us to the strange case of Wannacry. On paper, ransomware is the epitome of financially motivated malware with zero information-gathering value, reflecting an interesting shift in tactics. The first generation of mass malware turned Windows PCs into zombies sending out spam while completely ignoring any data that may reside on those machines; effectively only monetizing their network bandwidth to support ancillary business models such as mass marketing or distributed-denial-of-service. The second generation focused on information theft as traditionally understood, looking for special categories of data that can be monetized directly such as passwords for online banking sites or credit-card details, and shipping these off to a server controlled by the attackers. Ransomware by contrast does not attempt to steal any information; it holds information hostage from the legitimate owner via encryption.

That modus operandi means ransomware has the unusual feature of having to negotiate with its victims for successful monetization. A spam bot operates quietly in the background; it does not show users a dialog offering to uninstall itself in exchange for payment. Likewise banking malware silently collects credentials for logging into financial institutions and ships them off to its operators, who have existing plans to monetize that information by selling the credentials on dark markets. That path is preordained: consumers do not get a right of first refusal to opt out of the transaction and keep their PayPal password secret by offering more money than prevailing underground rates. Ransomware is unique in expecting to get compensated directly by its own victims.

That in turn brings some semblance of market dynamics into the equation. While installing ransomware is not a voluntary act (unless you are a security researcher), the user still has a decision to make about paying the ransom demanded. For example if they had been regularly backing up all of their files, they could always choose to wipe their machine clean, reinstall the operating system to get back to a clean slate and recover using those backups. Even if the user is faced with partial loss of data, they may still deem the ransom price too high to warrant rescuing the lost information. This is where the reliability of the ransomware operation enters into the picture, because malware is effectively a market for lemons. There is no honor among thieves. Even if the price is “reasonable” there is no guarantee that successful decryption will follow after delivering the payment. (At least in the current incarnation of ransomware observed in the wild. In principle, smart-contracts enable honest ransomware with delivery of payment contingent on the disclosure of decryption keys.) A user who does not get their files decrypted despite paying up is an unhappy customer. In this day and age of Yelp reviews, word gets around: other users facing the same decision may opt for not paying.

This is where Wannacry fails spectacularly: it is clear from reverse-engineering the binary that this operation could not possibly have supported any type of decryption based on payment. To the extent users have been able to recover their data, it has been due to fortunate design flaws in Wannacry, or at least a failure by the authors to understand quirks of the Windows crypto API which keeps decryption keys around longer than expected. In fact it is clear the Lazarus Group did not plan on providing a decryption service. Users were asked to send payment to exactly one of three Bitcoin addresses randomly selected from a list hard-coded into the binary. Given that infected systems numbered in the hundreds of thousands, it is not possible to identify which ones have paid— a prerequisite for honoring the promise that paying users receive a decryption key. The only plausible scenario would have been a global ransom: offering to release a master key that would unlock all machines once a specified amount is sent in total from all affected users. But such a collectivized demand is far less likely to find any takers compared to individual offers. There is still no guarantee of recovering your files, and now your success depends on other people cooperating. If the threshold is not reached, all donations are wasted. Meanwhile everyone has an incentive to free-ride, hoping that other people will chip in so they can collect the benefit when the decryption key is released.

To wit, those three Bitcoin addresses have collected a modest sum of 55BTC at the time of writing, worth approximately $125K at current exchange rates. That figure is dwarfed by the take from the Bangladesh heist. That raises a significant question: if Wannacry was developed by a highly skilled threat actor with nation-state backing and yet, for all that talent, proved an abject failure at monetization, were there other motives behind it?

Unfollowing the money

We can not rule out the theory that Wannacry was a precursor of the finished product North Korea wanted to unleash, an incomplete beta version that accidentally escaped the lab setting and propagated. According to this view, the final version would have handed out individual Bitcoin addresses to every user and featured some type of cloud service to hand out decryption keys when payment is made. (Although it is difficult to imagine how that would work, given the enormous incentive for ISPs and law-enforcement to shut down such a service.) Yet for unclear reasons—either by accident or perhaps to meet some arbitrary deadline—this half-baked version was unleashed and, being self-propagating malware, could no longer be recalled.

An alternative explanation is that the whole ransomware aspect is a diversion. The true purpose of Wannacry is inflicting economic harm by destroying data and rendering systems unusable. That there is no mechanism for recovering data after payment is not a “bug” in the operation; it is 100% by design. Unlike a true extortion scheme, these perpetrators have no plans to profit from providing any relief from the harms unleashed by their own creation. The objective is imposing costs on victims, not obtaining additional revenue for themselves. The negligible amount of Bitcoin collected is only a side-show. Even if a few people did pay up initially, future victims would be discouraged after learning that the promised data recovery never arrives. If this theory is correct, the operational cover for Wannacry became a victim of its own success: instead of blending into the background as yet another ransomware scam, Wannacry was extensively studied and reverse-engineered, eventually unearthing the link to North Korea.

The main strike against this theory is the geographic distribution of Wannacry infections: Russia, India, several former Soviet republics, China and Iran are among the top 20 countries affected. While North Korea is greatly isolated and has few allies, these are not exactly the countries one would expect the DPRK to prioritize targeting—Iran in particular has been implicated in supplying the DPRK with technology for its nuclear program. While some of this may be driven by the prevalence of outdated/pirated versions of Windows not receiving security updates, it would have been trivial to design checks that run after infection to selectively target specific regions. For example Wannacry could have inspected timezone and language settings on the machine before proceeding to encrypt files. (Malware in the wild carrying such checks has surfaced at least as early as 2009.) On the other hand carving out such exceptions provides circumstantial evidence about the source of the attack: if malware has been tailored to avoid particular countries, the assumption is that its creators are affiliated with, or at least closely allied with, the nations spared from damage. By taking an equal-opportunity approach to harming friend and foe, Wannacry may have been trying to avoid giving away such geopolitical clues.


Designing “honest ransomware” with Ethereum smart contracts: postscript (part III)

[continued from part II]

Postscript: Shadow Brokers auction done right

Incidentally this protocol also works for the type of auction Shadow Brokers attempted in 2016— if they intended it in earnest, which does not appear to have been the case for the stash of NSA exploits they planned to dump. To recap: this threat-actor had gotten its hands on an exploit kit associated with the NSA and initially offered to sell it to the highest bidder. This “auction” however was curiously designed to mirror the dollar auction from game theory, featuring a winner-takes-all property: bids are placed by sending funds to a Bitcoin address, and the highest bidder gets the stash of exploits while everyone else gets nothing— they forfeit any Bitcoin sent. Not surprisingly there were few takers for this model, given the odds that even the winner may end up with nothing.

While the auction setting introduces additional complexity, the protocols sketched earlier can help with a simpler version of the problem: a shady group claims to have a lucrative stash of documents up for sale in exchange for cryptocurrency. Given the nature of such underground transactions, both sides are concerned about the risk of being defrauded. The seller worries about delivering the stash without getting paid and the buyer is concerned about paying for worthless information. A variant of the fair-exchange payment protocol can be built on Ethereum smart contracts:

  1. Seller encrypts all documents using a hybrid-scheme with a fixed public-key and makes all ciphertexts available to the buyer. (The granularity of encryption need not be at the level of individual documents. Each page or 10K chunk of source code could be individually encrypted, as long as each fragment contains enough information for the buyer to make a judgment on its veracity.)
  2. Buyer and seller jointly select a random subset of ciphertexts to be opened by the seller, to verify that they conform to the uniform encryption format expected for the entire batch. This assumes the buyer has some way to validate the authenticity of individual fragments. (One way to perform the joint selection without trusting either side is sketched after this list.)
  3. Buyer launches an Ethereum smart-contract, designed to release payment on delivery of the private-key. He funds the contract with an amount of ether corresponding to the sale price.
  4. Seller invokes the contract method disclosing the private-key and collects the proceeds.
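
One way to implement the joint selection in step 2, so that neither side can bias which ciphertexts get opened, is a commit-reveal coin flip: each party commits to a random string, both reveal, and the XOR of the secrets seeds a PRNG that picks the indices. A minimal Python sketch, where the batch size and sample count stand in for pre-agreed parameters:

    import hashlib, os, random

    def commit(value: bytes) -> bytes:
        # Hash-based commitment: binding, and hiding given enough entropy.
        return hashlib.sha256(value).digest()

    # Each side picks 32 random bytes and publishes only the commitment.
    buyer_secret, seller_secret = os.urandom(32), os.urandom(32)
    buyer_commitment = commit(buyer_secret)
    seller_commitment = commit(seller_secret)

    # After commitments are exchanged, secrets are revealed and checked.
    assert commit(buyer_secret) == buyer_commitment
    assert commit(seller_secret) == seller_commitment

    # XOR of the two secrets seeds a deterministic PRNG; neither side
    # could have predicted or steered the outcome when committing.
    seed = bytes(a ^ b for a, b in zip(buyer_secret, seller_secret))
    rng = random.Random(seed)

    TOTAL_CIPHERTEXTS = 1_000_000   # agreed batch size (illustrative)
    SAMPLE_SIZE = 100               # agreed number of spot-checks
    challenge_indices = rng.sample(range(TOTAL_CIPHERTEXTS), SAMPLE_SIZE)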

There is of course one last optional step for the buyer: call up the Ethereum Foundation and demand a hard-fork to reverse the payment. After all, the fairness expressed in a smart contract is only as reliable as the immutability of the blockchain that contract executes on.


PS: Similar ideas are explored in a blog post on “The future of ransomware”, which in turn references the notion of zero-knowledge contingent payments first demonstrated in Bitcoin. That approach front-loads the work into developing cryptographic proof systems to verify that the encrypted data has the right structure, such as being the solution to a particular puzzle which can be verified by operating on encrypted data. It relies on disclosing the preimage of a hash (as opposed to a private-key), which can be expressed even with the limited scripting capabilities of Bitcoin. But that approach runs into the same problem as verifiable encryption when applied to ransomware: the plaintext has no particular structure, and we only assume the user has access to an oracle that can answer thumbs up/down on whether the result of a decryption corresponds to an authentic file which had been hijacked by malware.


Designing “honest ransomware” with Ethereum smart contracts (part II)

[continued from part I]

There are special-case solutions for common public-key algorithms to prove that a given ciphertext decrypts to an alleged plaintext. For example, with RSA the key-holder can return the plaintext without removing its padding. Recall that RSA is typically used in conjunction with a padding scheme such as PKCS1.5 or OAEP. These schemes apply a randomized transformation to the plaintext prior to encryption and undo that transformation after decryption. Without knowing the padding used during encryption, it is not possible to check that a given unpadded plaintext corresponds to some known ciphertext. But once the fully padded version is revealed, the user can check both that it encrypts to the original ciphertext and that removing the padding results in the expected plaintext, because both of those steps are fully deterministic.
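
To make that concrete, here is a toy Python sketch of the verification step using textbook RSA with PKCS#1 v1.5-style padding; the hard-coded Mersenne primes and the simplified padding logic are illustrative only, not production parameters:

    import os

    # Toy RSA key; real deployments would use a proper 2048-bit modulus.
    p, q = 2**127 - 1, 2**61 - 1          # both Mersenne primes
    N, e = p * q, 65537
    d = pow(e, -1, (p - 1) * (q - 1))     # private exponent

    def pad(message: bytes, size: int) -> bytes:
        # PKCS#1 v1.5-style: 00 02 <random nonzero bytes> 00 <message>
        filler = bytes(b % 255 + 1 for b in os.urandom(size - len(message) - 3))
        return b"\x00\x02" + filler + b"\x00" + message

    def unpad(padded: bytes) -> bytes:
        # Deterministic: strip everything up to the first zero separator.
        assert padded[0:2] == b"\x00\x02"
        return padded[padded.index(b"\x00", 2) + 1:]

    size = (N.bit_length() + 7) // 8      # modulus size in bytes
    padded = pad(b"secret", size)
    c = pow(int.from_bytes(padded, "big"), e, N)

    # Key-holder reveals the *padded* plaintext instead of the bare message.
    revealed = pow(c, d, N).to_bytes(size, "big")

    # Anyone can now verify both fully deterministic steps:
    assert pow(int.from_bytes(revealed, "big"), e, N) == c
    assert unpad(revealed) == b"secret"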

This straightforward approach does not carry over to other algorithms. For example ElGamal is a randomized public-key encryption scheme—each encryption operation uses a randomly chosen nonce, so reencrypting the exact same message can yield different ciphertexts. Yet it is not possible to recover the random nonce used after the fact, even with knowledge of the private key.  Unless the private-key owner takes additional steps to stash that nonce someplace, they are stuck in a strange position: they can decrypt any ciphertext but they can not simply prove the decryption is correct by sharing the result. That is not the case for RSA: if you can decrypt, you can also recover the transformed input complete with randomized padding. ElGamal calls for something more complex, such as a zero-knowledge proof of discrete logarithm equality to show that the exponentiation of the random masking value (first part of the ciphertext) by the private-key was done correctly.
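
For illustration, a toy Python sketch of that zero-knowledge proof of discrete logarithm equality (the Chaum-Pedersen protocol, made non-interactive via a Fiat-Shamir challenge); the tiny group parameters are chosen for readability, not security:

    import hashlib

    q = 509                 # subgroup order (prime)
    p = 2 * q + 1           # safe prime, p = 1019
    g = 4                   # generator of the order-q subgroup

    x = 123                 # private key
    h = pow(g, x, p)        # public key

    # ElGamal encryption of m with nonce r: (a, b) = (g^r, m * h^r)
    m, r = 42, 77
    a, b = pow(g, r, p), (m * pow(h, r, p)) % p

    # Decryption: m = b / a^x. The prover publishes s = a^x and proves
    # log_g(h) == log_a(s), i.e. the same x was used, without revealing x.
    s = pow(a, x, p)
    assert (b * pow(s, -1, p)) % p == m

    # Chaum-Pedersen proof with Fiat-Shamir challenge:
    w = 311                                       # prover's random nonce
    t1, t2 = pow(g, w, p), pow(a, w, p)
    c = int.from_bytes(hashlib.sha256(
        f"{g},{h},{a},{s},{t1},{t2}".encode()).digest(), "big") % q
    z = (w + c * x) % q

    # Verifier checks both relations against the same challenge:
    assert pow(g, z, p) == (t1 * pow(h, c, p)) % p
    assert pow(a, z, p) == (t2 * pow(s, c, p)) % p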

For a more generic solution, we can leverage blind decryption: ask the private-key holder to decrypt a ciphertext without that party realizing which ciphertext is being decrypted. This is typically achieved by exploiting homomorphic properties of cryptosystems. For instance raw RSA—without any padding— has a simple multiplicative relationship: encryption of a product is the product of encryptions.

RSA-ENC(a·b) = RSA-ENC(a) * RSA-ENC(b)

where all multiplication is done modulo N associated with the key. A similar property also holds true for decryption where X and Y are ciphertexts:


Looked at another way, we can decrypt a ciphertext indirectly by decrypting a different ciphertext with a known relationship to the original:

RSA-DEC(X) = RSA-DEC(X · Y) * RSA-DEC(Y)^(-1)

where the inverse is computed modulo N.
To decrypt X, we ask the ransomware operator to instead decrypt X·Y, as long as we know the decryption of Y. (Typically because we obtained Y by encrypting a random plaintext m of our choice to begin with.) Note the original encryption can still use a strict padding mode such as OAEP, as long as the decryption side is willing to handle arbitrary ciphertexts that result in plaintext with no specific padding format.
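
A toy Python sketch of this blinding step with textbook (unpadded) RSA; the key here is illustrative, and a real exchange would run over the ransomware operator's actual modulus:

    # Toy RSA key, as before; raw RSA keeps the multiplicative homomorphism.
    p, q = 2**127 - 1, 2**61 - 1
    N, e = p * q, 65537
    d = pow(e, -1, (p - 1) * (q - 1))

    wrapped_key = 123456789                 # the wrapped symmetric key
    X = pow(wrapped_key, e, N)              # ciphertext found on disk

    # User picks a random blinding value m and computes Y = ENC(m).
    m = 987654321
    Y = pow(m, e, N)
    challenge = (X * Y) % N                 # looks like a random ciphertext

    # Operator decrypts the challenge without learning which original
    # ciphertext it was derived from.
    response = pow(challenge, d, N)

    # Since RSA-DEC(X * Y) = RSA-DEC(X) * m, dividing out m recovers the key.
    recovered = (response * pow(m, -1, N)) % N
    assert recovered == wrapped_key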

This additional step prevents the ransomware author from cheating: X·Y looks like a random ciphertext, completely unrelated to the original encryption X of the symmetric key. Even if there were a cheat sheet listing every such X and its associated plaintext to answer challenges, there is no way to correlate the particular challenge presented with one of those entries: each challenge is equally likely to be derived from any ciphertext in the collection. The intuition is that if files were not encrypted according to a consistent scheme with the same RSA public-key, it would not be possible to produce the correct answer when presented with such a random-looking ciphertext.

This step can be repeated for a small subset of files to achieve high confidence that most files have been encrypted according to the expected scheme. As long as the challenges are selected randomly, even a small number of successful tests provides high assurance against cheating. For example, suppose ransomware encrypted a million files but replaced 2% of them with random data instead of following the claimed hybrid-encryption process. If 100 files out of that collection are randomly selected for spot-checking, odds are 87% that this departure from protocol will be caught. The downside is diminishing returns with increased coverage. While a handful of challenges can rule out cheating on a wide scale— for example every other file being corrupted— it is not feasible to prove that 100% of the data is recoverable. If ransomware deliberately mangled exactly one file in the collection, it would take a great deal of luck to discover that; one must pick that specific file as a challenge. That also suggests a strategy of picking the files you care about most for the challenge phase, given that for every other file there is a small but non-zero probability of failure to recover. (Incidentally the ransomware author has a diametrically opposed interest in making sure users do not get to choose the challenge subset unilaterally. Otherwise they could pick the 100 files they care about most and abandon the rest.)
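
As a sanity check on that 87% figure, treating the 100 draws as independent (a good approximation for a collection this large):

    cheat_fraction = 0.02    # ransomware corrupted 2% of the files
    samples = 100            # number of randomly chosen spot-checks

    # Probability that at least one corrupted file lands in the sample:
    p_caught = 1 - (1 - cheat_fraction) ** samples
    print(f"{p_caught:.2%}")          # prints 86.74%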

There is one more subtlety here: the decryption challenge returns a symmetric key, which is then used to undo the bulk symmetric-encryption applied to the file. But there is still the problem of deciding whether the outcome of that decryption is “correct” in the sense that the plaintext was a file originally belonging to this user. Neither authenticated encryption nor integrity checks help with that. After all, ransomware could have replaced an MP3 music file with the “correct” AES-GCM encryption of random noise. Given the right AES key, that file will decrypt successfully, including validation of the GCM tag. It will not result in recovery of the original data, which is the only outcome the user cares about. To work around this we have to posit that users have access to an oracle that can examine a file (complete with all meta-data such as directory path and timestamps) and determine whether it is one of the documents originally belonging to their collection. In practice this could be based on a catalog of hashes or digital signatures over documents, or in the worst-case scenario, manual inspection of files.
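
In the simplest case that oracle is just a catalog of cryptographic hashes computed while the files were still intact, say as part of a backup routine. A minimal sketch, with hypothetical paths and placeholder digests purely for illustration:

    import hashlib

    # Built ahead of time, before any infection, keyed by file path.
    catalog = {
        "/home/user/music/song.mp3": "hex sha-256 digest goes here",
        "/home/user/docs/taxes.pdf": "hex sha-256 digest goes here",
    }

    def is_original(path: str, decrypted: bytes) -> bool:
        # Thumbs up/down: does the decrypted content match what we had?
        return catalog.get(path) == hashlib.sha256(decrypted).hexdigest()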

Once the user is convinced all files are indeed encrypted correctly, the next step is crafting a smart-contract to release payment conditional on the private-key being revealed. This contract will have a function intended to be invoked by the key-holder. After checking that the parameters supplied reveal enough information to recover the private-key, the function sends funds to an address agreed upon in advance. There is one subtlety here: specifying the destination address as part of the function call, or inferring it from the message sender, results in a race condition. Since Ethereum contract calls are broadcast on the network before they are mined in, anyone could copy the disclosed private-key and craft an alternative method invocation to shuttle funds someplace else, hoping to get mined in first. Fixing the destination during contract creation solves this problem. Even if someone else succeeds in preempting the original method invocation with a different one sending the exact same information, the funds are still delivered to the intended address.

Just in case the private-key holder never shows up, the contract also needs an “escape hatch.” That is a second, time-locked method that can be invoked by the user after a mandatory delay to recover unclaimed funds. It is critical that these are the only ways of withdrawing any funds from the contract. If there was some other avenue for getting funds out, it would result in a race condition. When the private-key holder invokes the contract to collect payment, the contract owner can observe that transaction (including the disclosed private-key) and try to race them with a different transaction that siphons funds from the contract before it can pay out.
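
To make the argument concrete, here is a Python model of the intended contract logic: a simulation for exposition rather than actual Solidity, with all names and the stubbed-out helpers being illustrative assumptions:

    # Stubs standing in for real key derivation and ether transfer.
    def derive_pubkey(private_key):
        return ("pub", private_key)

    def send(address, amount):
        print(f"sending {amount} wei to {address}")

    class FairExchangeContract:
        def __init__(self, balance, expected_pubkey, payout_address,
                     owner_address, deadline):
            self.balance = balance
            self.expected_pubkey = expected_pubkey
            self.payout_address = payout_address    # fixed at creation time
            self.owner_address = owner_address
            self.deadline = deadline

        def claim(self, private_key):
            # Path 1: anyone may submit the key, but funds only ever flow to
            # the address fixed at creation; front-running a pending claim
            # with a copied key gains the attacker nothing.
            if self.balance and derive_pubkey(private_key) == self.expected_pubkey:
                send(self.payout_address, self.balance)
                self.balance = 0

        def refund(self, caller, now):
            # Path 2: time-locked escape hatch, usable only by the contract
            # owner after the deadline, if the key-holder never shows up.
            if self.balance and caller == self.owner_address and now > self.deadline:
                send(self.owner_address, self.balance)
                self.balance = 0

    contract = FairExchangeContract(
        balance=10**18, expected_pubkey=("pub", 42),
        payout_address="0xSELLER", owner_address="0xBUYER", deadline=1000)
    contract.claim(private_key=42)    # pays 0xSELLER no matter who submits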

In terms of disclosing a private-key in a manner that can be verified by Ethereum smart-contracts, there are two natural solutions:

  1. Staying in the RSA setting, a smart-contract method receives two large integers p and q, and verifies that their product is equal to the modulus N. While doable in principle, this requires implementing arbitrary-precision integer multiplication in Solidity, which does not natively support such operations. The underlying Ethereum Virtual Machine (EVM) has 256-bit integer primitives and supports multiplication of words into a 256-bit product, discarding more significant bits. One could write a big-number library to multiply large numbers by splitting them into 128-bit chunks. But this runs into another practical issue around gas costs: Ethereum smart-contract execution costs money proportional to the complexity of operations. Checking whether the product of two 1024-bit factors is equal to a known RSA modulus can become an “expensive” operation.
  2. Switch to an elliptic-curve setting and leverage the fragility of ECDSA for deliberately disclosing private keys. Because Ethereum natively supports verification of ECDSA signatures over the secp256k1 curve, this makes for a straightforward implementation. So instead of trying to make RSA operations work in Ethereum, we alter the file-encryption model. ECIES or ElGamal can both be adapted to work over secp256k1. ElGamal in particular has a simple homomorphism that can be used to mask ciphertexts: multiplying both components of an ElGamal ciphertext by a scalar produces the encryption of the original message multiplied by that scalar. As with ECDSA, the public-key is a point on the curve and the private-key is the discrete logarithm of that point with respect to the generator. Since that key-pair is equally usable for ECDSA signatures, built-in EVM operations are sufficient to check for private-key disclosure. As before, the function expects two valid ECDSA signatures over fixed messages such as “hello” and “world.” But in addition to verifying these signatures— a primitive operation built into the EVM— the contract also confirms the signatures share an identical nonce, which is exactly the condition that leaks the private key (see the sketch below).
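
The arithmetic behind that key-disclosure trick can be verified in pure Python: two ECDSA signatures sharing a nonce leak the private key. A self-contained sketch over secp256k1 (the curve constants are the standard published parameters; the key and nonce values are arbitrary examples):

    import hashlib

    # secp256k1 domain parameters.
    P = 2**256 - 2**32 - 977
    N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
    G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
         0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

    def ec_add(A, B):
        # Affine point addition; None represents the point at infinity.
        if A is None: return B
        if B is None: return A
        if A[0] == B[0] and (A[1] + B[1]) % P == 0: return None
        if A == B:
            lam = 3 * A[0] * A[0] * pow(2 * A[1], -1, P) % P
        else:
            lam = (B[1] - A[1]) * pow(B[0] - A[0], -1, P) % P
        x = (lam * lam - A[0] - B[0]) % P
        return (x, (lam * (A[0] - x) - A[1]) % P)

    def ec_mul(k, A):
        # Double-and-add scalar multiplication.
        R = None
        while k:
            if k & 1: R = ec_add(R, A)
            A, k = ec_add(A, A), k >> 1
        return R

    def ecdsa_sign(h, x, k):
        r = ec_mul(k, G)[0] % N
        s = pow(k, -1, N) * (h + r * x) % N
        return r, s

    def msg_hash(m):
        return int.from_bytes(hashlib.sha256(m).digest(), "big") % N

    x, k = 0xC0FFEE, 0x1CE5          # example private key and (reused!) nonce
    h1, h2 = msg_hash(b"hello"), msg_hash(b"world")
    r1, s1 = ecdsa_sign(h1, x, k)
    r2, s2 = ecdsa_sign(h2, x, k)
    assert r1 == r2                  # shared nonce shows up as identical r

    # Anyone can now recover the nonce, then the private key:
    k_rec = (h1 - h2) * pow(s1 - s2, -1, N) % N
    x_rec = (s1 * k_rec - h1) * pow(r1, -1, N) % N
    assert (k_rec, x_rec) == (k, x)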


To recap: an honest ransomware scheme— or “third-party backup encryption service”— can be implemented using smart-contracts to make data recovery contingent on payment. We encrypt all files of interest using a hybrid encryption scheme with a single public-key. When it is time to recover the data, the user and TBES execute an interactive challenge protocol to decrypt randomly selected files, verifying that the encryption followed the expected format. (This step is redundant if encryption was done by the user, unless the integrity of encrypted backups is itself in question.) Assuming the proofs check out, the next step is for the user to create a smart-contract on the Ethereum blockchain. This contract is parametrized by the amount of Ether agreed upon, the public-key used for encryption and an address chosen by the TBES. Once the contract is set up and funded, the TBES can invoke one of its methods to disclose the private-key associated with that public-key and collect the funds.

The logic of Ethereum smart-contract execution guarantees this exchange will be fair to both sides: payment only in exchange for valid private-key and no way to get out of payment once private-key is delivered.



Updated: 6/19, to describe alternative solution for ElGamal.

Designing “honest ransomware” with Ethereum smart contracts (part I)

“But to live outside the law, you must be honest” – Bob Dylan

By all accounts Wannacry ransomware made quite the splash, bringing thousands of systems to a standstill and forcing victims to shell out Bitcoin with little hope of recovery. In fact the ransomware aspect may well have been a diversion. Attribution for such attacks is tricky but both Kaspersky Labs and Symantec research groups have linked Wannacry to the Lazarus Group, a threat actor associated with the DPRK. (WaPo recently reported that the NSA concurs.) Their previous claim to fame: massive theft of funds from the Bangladesh central bank by exploiting the SWIFT network. That heist netted somewhere in the neighborhood of $80M, but the take would have been much higher were it not for the attackers’ rookie mistakes that resulted in even larger transfers being stopped or reversed. By comparison Wannacry earned a pittance, less than $100K at the time of writing.

Worse, this malware is not even capable of living up to its raison d’etre: decrypting files after the attackers are paid off. For starters, only a handful of Bitcoin addresses were hard-coded in the binary, as opposed to unique deposit addresses for each victim. That makes it difficult to distinguish between different victims paying the bounty, which in turn violates the cardinal rule of ransomware: only users who pay the ransom get their data back. This is not so much a principle of fairness— there is no honor among thieves— as it is one of economic competitiveness: ransomware can only scale if victims are convinced that paying the ransom will recover their files. This is why successful ransomware campaigns in the past went so far as to feature helpful instructions to educate users about Bitcoin and staff customer-support operations to help “customers” recover their data. Wannacry seems to have taken little interest in living up to such lofty standards of customer service.

But this incident does raise a question: are users at the mercy of ransomware authors when it comes to recovering their data? Is there a way to guarantee that payment will result in disclosure of the decryption key? After all, the crooks are demanding payment in cryptocurrency. Even Bitcoin with its relatively modest scripting language can express complex conditions for payment. Is it possible to design a fair-exchange protocol where decryption key is released if and only if corresponding payment is made?

This scenario is admittedly contrived and unlikely to be implemented. If there is no honor among thieves, there is certainly no desire to adopt more transparent and fair payment mechanisms to protect consumers from getting ripped off by unscrupulous ransomware operators. But one can imagine more legitimate use-cases, such as backup/escrow services that assist users in encrypting their data for long-term storage. To avoid a single point of failure, the encryption key would be split using a threshold secret-sharing scheme: it is divided into N shares such that any quorum of M can reconstitute the original secret. Each share is in turn encrypted to the public-key of one trustee. When the time comes to decrypt the data, the consumer asks some subset of trustees to decrypt their shares. This interaction calls for a fair-exchange protocol where the consumer receives the decryption result if and only if the trustee gets paid for its assistance.
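
For reference, a toy Python sketch of that M-of-N sharing step using Shamir's scheme (3-of-5 here; the field prime and key size are illustrative choices):

    import random

    PRIME = 2**127 - 1       # field must exceed the largest possible secret

    def make_shares(secret, m, n):
        # Random polynomial of degree m-1 with the secret as constant term.
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(m - 1)]
        return [(i, sum(c * pow(i, j, PRIME) for j, c in enumerate(coeffs)) % PRIME)
                for i in range(1, n + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 recovers the constant term.
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * -xj % PRIME
                    den = den * (xi - xj) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    key = random.randrange(2**126)          # the symmetric encryption key
    shares = make_shares(key, m=3, n=5)
    assert reconstruct(shares[:3]) == key   # any 3 of the 5 shares suffice
    assert reconstruct(shares[1:4]) == key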

Ethereum can solve this problem using the same idea behind fair-exchange of cryptocurrency across different blockchains, with a few caveats. The smart contract sketched out in previous blog-posts is designed to send funds when a caller discloses a specific private-key. But there is a deeper problem around knowing which private-key to look for. In theoretical cryptography this falls under the rubric of “verifiable encryption,” where it is possible to prove that some ciphertext is the encryption of an unspecified plaintext value meeting certain properties. Typically these constructs operate on abstract mathematical properties of the plaintext, such as proving that it is an even number. In the more concrete setting of ransomware, the plaintexts under consideration are not mathematical structures but large, complex data formats such as PDF documents. This model lends itself better to a less-efficient statistical approach for verifying that the encryption process has been followed according to specification.

Let’s assume all files are encrypted using a hybrid-encryption scheme:

  • For each file/object to be encrypted, a random symmetric key is generated and bulk data is encrypted using a symmetric block-cipher such as AES.
  • The symmetric key is in turn encrypted to the public-key of the trustee, using a public-key cryptosystem such as RSA. This “wrapped” key is saved alongside the output of the bulk encryption. (A sketch of the scheme follows this list.)
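
A minimal sketch of this hybrid scheme using the pyca/cryptography package, with AES-GCM for the bulk encryption and RSA-OAEP for the key wrap; in the backup scenario the RSA key-pair would belong to the trustee:

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Trustee's key-pair; only the public half is needed to encrypt.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    def encrypt_file(data: bytes):
        sym_key, nonce = os.urandom(32), os.urandom(12)
        ciphertext = AESGCM(sym_key).encrypt(nonce, data, None)
        wrapped_key = public_key.encrypt(sym_key, oaep)   # step 2: key wrap
        return wrapped_key, nonce, ciphertext

    def decrypt_file(wrapped_key, nonce, ciphertext):
        sym_key = private_key.decrypt(wrapped_key, oaep)
        return AESGCM(sym_key).decrypt(nonce, ciphertext, None)

    blob = encrypt_file(b"important document")
    assert decrypt_file(*blob) == b"important document"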

The scheme is similar to the format used by email encryption standards such as PGP and S/MIME. It is also how Wannacry operates, with an additional level of indirection: it generates a unique 2048-bit RSA keypair on each infected target and then encrypts that private-key using a different, fixed RSA public-key, presumably held by the ransomware crooks. That means revealing the private-key used in step #2 is sufficient to decrypt all ciphertexts created according to this recipe.

There is one major difference between ransomware and the more legitimate, voluntary backup scenario sketched out earlier: in the latter case the user can be certain of the public-key used for the encryption, because they performed the encryption themselves. In the former situation, they have just stumbled upon a collection of ciphertexts along with a ransom note asserting that all data has been encrypted using the process above with a private-key held by the author. Some proof is required that this claim is legitimate and the author is in possession of the private-key required to recover their data. (Similar to fake DDoS threats, one can imagine fake ransomware authors reaching out to users to offer assistance, with no real capability to decrypt anything.)

A naive solution is to challenge the private-key holder to decrypt a handful of ciphertexts, effectively asking for free samples. But such a protocol can be cheated if it works by sending the full ciphertext or even the wrapped symmetric-key produced in step #2. For all we know, the ransomware encrypts each file using random symmetric keys and then stores those keys in a database. (In other words, it does not use a single public-key to wrap each of the symmetric keys; that part of the ciphertext is a decoy.) This operation could still respond to every challenge query successfully with the correct symmetric keys, by doing database lookups. But the encryption does not conform to the expected pattern above; there is no single private-key to unlock all files. In effect the user would be paying for a bogus key that has no bearing on the ability to decrypt ciphertexts of interest.



Two-factor authentication: a matter of time

How a policy change in Russia locked Google users out of their accounts

Two-factor authentication is becoming increasingly common for online services looking to improve the protection afforded to customer accounts. Google started down this road in 2009, and its efforts were greatly accelerated when the company got 0wned by China in the Aurora attacks. [Full disclosure: this blogger worked on Google security team 2007-2013] At the time, and to a large extent today, the most popular way of augmenting an existing password-based system with an additional factor involved one-time passcodes or “OTP” for short. Unlike passwords, OTPs change each time and can not be captured once for indefinite access going forward. (Note this is not equivalent to saying they can not be phished—nothing prevents a crook from creating fake login pages which ask for both password and OTP, and in fact such attacks have been observed in the wild.)

Counters and ticking seconds

There are many ways to generate OTPs but they all follow a similar pattern: there is a secret key, often referred to as a “seed,” and this secret is combined with a variable factor such as the current time or an incrementing sequence number using a one-way function. The one-way property is critical: it is a security requirement that even if an adversary can observe multiple OTPs and knows the conditions they were generated in (such as exact timing), they can not recover the secret seed required to generate future codes.

Perhaps the most well-known 2FA solution, and one of the oldest, is a proprietary offering from RSA called SecurID. It was initially implemented only on hardware tokens sold by the company. SecurID is a decidedly closed ecosystem, at least in its original incarnation: not only did customers have to buy the hardware from RSA, they also had to integrate with a service operated by the same company in order to verify OTP codes submitted by users. That is because only RSA knew the secret seed embedded in each token; customers did not receive these secrets or have any way to reprogram the token with their own.


Such closed models may have been great for customer lock-in but would clearly not fly in a world accustomed to open standards, interoperability and transparency. (Not to mention the wisdom of relying on a third-party for your authentication, a lesson that many companies including Lockheed-Martin would learn the hard way when RSA was breached by threat-actors linked to China in 2011.) Other industry players began pushing for an open standard, eventually resulting in a new design called HOTP being published as an RFC. The letter “H” stands for HMAC, a modern cryptographic primitive with a sound security model, compared to the home-brew Rube-Goldberg contraption used in SecurID. HOTP uses an incrementing counter as the internal “state” of the token. Each time an OTP is generated, this counter is incremented by one.

That model relies on synchronization between the side generating OTP codes and the side responsible for verifying them. Both must use the same sequence number in order to arrive at the same OTP value. The sides can get out of sync due to any number of problems: imagine that you generated an OTP (bumping up your own sequence number) but your attempt to login to a website failed because of a network timeout. The server never received the OTP and therefore its sequence number is one behind yours. In practice this is solved by checking a submitted OTP not just against the current sequence number N but a range of values {N, N+1, N+2, …, N + t} for some tolerance value t. If a given OTP checks out against one of the later values in this sequence, the server updates its own copy of the counter, on the assumption that the client skipped past a few values. Even then things can go awry since collisions are possible: a user can accidentally mistype an OTP which then happens to match one of these later numbers, incorrectly advancing the counter.
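
A sketch of HOTP generation and that server-side resynchronization window in Python, following the RFC 4226 truncation rule (the tolerance value here is an illustrative choice; the seed is the RFC's published test secret):

    import hashlib, hmac

    def hotp(seed: bytes, counter: int, digits: int = 6) -> str:
        mac = hmac.new(seed, counter.to_bytes(8, "big"), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                        # dynamic truncation
        code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
        return str(code % 10**digits).zfill(digits)

    def verify(seed: bytes, submitted: str, server_counter: int, tolerance: int = 5):
        # Check {N, N+1, ..., N+t}; on a match further ahead, the server
        # adopts the matching counter to resynchronize with the client.
        for n in range(server_counter, server_counter + tolerance + 1):
            if hmac.compare_digest(hotp(seed, n), submitted):
                return True, n + 1                     # new server counter
        return False, server_counter

    seed = b"12345678901234567890"
    assert hotp(seed, 0) == "755224"   # RFC 4226 test vector

    # Client burned two codes on failed logins; server is still at 7.
    ok, new_counter = verify(seed, hotp(seed, 9), server_counter=7)
    assert ok and new_counter == 10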


An alternative to maintaining a counter is using implicit state that both sides independently have access to without having to synchronize. Time is the most obvious example: as long as both the client generating the OTP value and the server verifying it have access to an accurate clock, they can agree on the state of the OTP generator. Again in practice some clock drift is permitted; instead of using a very accurate time down to the millisecond, it is quantized into intervals of say 30 or 60 seconds. OTP codes are then generated by applying the HMAC function to the secret seed and this time-interval count. This was standardized as TOTP, the Time-Based One-Time Password Algorithm.
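
TOTP then reduces to HOTP with the counter derived from the clock, quantized into the agreed interval; a sketch reusing the hotp() helper from the block above:

    import time

    def totp(seed: bytes, interval: int = 30, digits: int = 6) -> str:
        counter = int(time.time()) // interval    # quantized UNIX epoch time
        return hotp(seed, counter, digits)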

There is one catch with TOTP: both sides must have an accurate source of time. This is easier on the server-side verifying OTP codes, but more difficult client-side where OTP generation takes place. The reason HOTP and sequence-numbers historically came first is that they could be implemented offline, using compact hardware without network connectivity. While embedded devices can have a clock, the challenge is that those clocks require a battery to remain powered 24/7 and, more importantly, they eventually start drifting, running slower or faster than true time. A token that runs one second too fast every day will be a full six minutes ahead after a year.

Fast forward to 2009, with the smartphone revolution already underway: our model envisioned mobile apps handling OTP generation. Unlike stand-alone tokens, apps running on a smartphone have access to a system clock that is constantly being synchronized as long as the device is online. That takes care of time drift—even when the phone is only sporadically connected to the internet—making TOTP more appealing.

Which time?

One of the first questions that comes up about TOTP is the effect of time-zones. What happens when a user sitting in New York computes a TOTP that is submitted to a service in California for verification? In this case client and server are separated by three time-zones. At first glance it looks like time-based OTP would break down in this model, since the two sides disagree on the current time. Luckily that is not the case: as with most protocols relying on an accurate clock, TOTP calls on both participants to use an absolute frame of reference, namely UNIX epoch time. Defined as the number of seconds elapsed since midnight January 1st, 1970 in the Greenwich timezone, it does not depend on current location or daylight saving adjustments. That means TOTP generators only need to worry about having an approximately correct clock, a problem that is easily solved when the 2FA app runs on a mobile device with internet connectivity, periodically checking some server in the cloud for authoritative time.

Daylight saving time considered harmful

Never underestimate the ability of the real world to throw a wrench in the plans. In 2011 Russia announced that it would not adjust back from daylight saving in the fall:

“President Medvedev has announced that Russia will not come off daylight saving time starting autumn 2011. Medvedev argued that switching clocks twice a year is harmful for people’s health and triggers stress. “

While the medical profession may continue to debate the effects of switching clocks on the general population, this change caused a good deal of frustration for software engineers. Typically the “local time” displayed to users is determined by starting from a reference time such as GMT and making adjustments for the local timezone and seasonal factors such as daylight saving. In the case of the Android operating system, those adjustments were hard-coded. With Russia no longer switching back from DST as scheduled, those devices would end up displaying the “wrong” time, even with perfectly accurate internal clocks.

In principle, this is only a matter of updating the operating system to follow the new, health-conscious Russian regime. In reality of course updating Android devices in the field has been a public quagmire of indifference and mutual hostility amongst device manufacturers, wireless carriers and Google. While the picture has improved drastically with Google moving to assert greater control over the update pipeline, in 2011 the situation was dire. Except for the handful of users on “pure” Google-experience devices such as Nexus S, everyone else was at the mercy of their wireless carrier for receiving software updates and those carriers were far more interested in locking users into 2-3 year contracts by selling another subsidized device than supporting existing units in the field. Critical security updates? Maybe, if you are lucky. Bug fixes and feature improvements? Forget about it.

Given that abject negligence from carriers and handset manufacturers, what is the average user to do when their phone displays 3 o’clock while everyone else is convinced it is 4 o’clock? They will take matters into their own hands and fix it somehow. The “correct” way to do that is shifting the timezone over by one, effectively going from Kaliningrad to Moscow. But this is far from obvious: the more intuitive fix given this predicament is to manually adjust the system clock forward by an hour. (One soon discovers that automatic time adjustments must also be disabled, or the next check-in against an authoritative time-server on the Internet will promptly restore the “correct” time.) Problem solved; the phone now reports that the local time is 4 o’clock as expected.

Off-by-one (hour)

Except for the unintended interaction with two-factor authentication, specifically TOTP, which uses the current time to generate temporary codes. Overriding the system clock shifts the UNIX epoch time too; now the TOTP generator is being fed from a clock with a full one-hour skew. Garbage in, garbage out. Most TOTP implementations will correct for slight clock drift by checking a few adjacent intervals around the current time, where each “interval” is typically 30 or 60 seconds. But no sane deployment is going to look back or forward as far as one hour, on the assumption that if your local clock is that far off, you are going to have many other problems.

Sure enough, reports started trickling in that users in Russia were getting locked out of their Google accounts because the 2FA codes generated by Google Authenticator on Android were not working. (In this blogger’s recollection, our response was adding a special-case check for a handful of intervals around the +1 hour mark measured from the current time— not all intervals between now and the +1 hour mark. This has the effect of slightly lowering security by increasing the number of “valid” OTP codes accepted, since each interval typically corresponds to a different OTP code, modulo collisions.)
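
That fix might have looked something like the sketch below, reusing the hotp() helper from earlier: in addition to the usual drift window around the current time, check a small cluster of intervals around one hour ahead. (This is a reconstruction from the recollection above, not Google's actual code; the window sizes are illustrative.)

    import time

    def verify_totp(seed: bytes, submitted: str, interval: int = 30) -> bool:
        now = int(time.time()) // interval
        hour = 3600 // interval             # one hour, measured in intervals
        # Usual +/-1 drift window, plus a few intervals around +1 hour for
        # clients whose clocks were manually pushed forward to "fix" DST.
        candidates = [now - 1, now, now + 1,
                      now + hour - 1, now + hour, now + hour + 1]
        return any(hmac.compare_digest(hotp(seed, c), submitted)
                   for c in candidates)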

Real-world deployments have a way of rudely bringing about the “impossible” condition. Until this incident, it was commonplace to assert that the reliability of TOTP-based two-factor authentication is not affected by timezones or quirks of daylight saving time. In a narrow sense that statement is still true, but that would have been no consolation to the customers in Russia locked out of their own accounts. Looking for a place to point fingers, this is decidedly not a case of PEBKAC. Confronted with an obvious bug in their software, those Android users picked the most intuitive way of solving it. Surely they are not responsible for understanding the intricacies of local-time computation or the repercussions of shifting epoch time. Two other culprits emerge. Is the entire Android ecosystem to blame for not being able to deliver software upgrades to users, a problem the platform is still struggling with today? After all, the announcement in Russia came months ahead of the actual change, and iOS devices were not affected to the same extent. Or is the root cause a design flaw in Android, having hard-coded rules about when daylight saving time kicks in? This information could have been retrieved from the cloud periodically, allowing the platform to respond gracefully when the powers-that-be declare DST a threat to the well-being of their citizenry.

Either way this incident is a great example of a security feature going awry because of decisions made in a completely different policy sphere affecting factors originally considered irrelevant to the system.