Use and misuse of code-signing (part I)

Or, there is no “evil bit” in X509

A recent paper from CCS 2015 highlights the incidence of digitally signed PUP (potentially unwanted programs): malicious applications that harm users by spying on them, stealing private information or otherwise acting against their interests. While malware is a dime a dozen, and the occurrence of malware digitally signed with valid certificates is not new either, this is one of the first systematic studies of how malware authors operate when it comes to code signing. But before evaluating the premise of the paper, let’s step back and revisit the background on code-signing in general and MSFT Authenticode in particular.

ActiveX: actively courting trouble

Rewinding the calendar back to the mid-90s: the web is still in its infancy and browsers highly primitive in their capabilities compared to native applications. These are the “Dark Ages” before AJAX, HTML5 and similar modern standards which make web applications competitive with their native counterparts. Meanwhile JavaScript itself is still new and awfully slow. Sun Microsystems introduced Java applets as an alternative client-side programming model to augment web pages. Ever paranoid, MSFT responds in the standard MSFT way: by retreating to the familiar ground of Windows and trying to bridge the gap from good-old Win32 programming to this scary, novel web platform. ActiveX controls were the solution the company seized on in hopes of continuing the hegemony of the Win32 API. Developers would not have to learn any new tricks. They would write native C/C++ applications using COM and invoking native Windows API as before—conveniently guaranteeing that it could only run on Windows— but they could now deliver that code over the web, embedded into web pages. And if the customers visiting those web pages were running a different operating system such as Linux or running on different hardware such as DEC Alpha? Tough luck.

Code identity as proxy for trust

Putting aside the sheer Redmond-centric nature of this vision, there is one minor problem: unlike the JavaScript interpreter, these ActiveX controls execute native code with full access to operating system APIs. They are not confined by an artificial sandbox. That creates plenty of room to wreak havoc with the machine: read files, delete data, interfere with the functioning of other applications. Even for pre-Trustworthy-Computing MSFT with nary a care in the world for security, that was an untenable situation: if any webpage you visit could take over your PC, surfing the web becomes a dangerous game.

There are different ways to solve this problem, such as constraining the power of these applications delivered over the web. (That is exactly what JavaScript and Java aim for with a sandbox.) Code-signing was the solution MSFT pushed: code retains full privileges but it must carry some proof of its origin. That would allow consumers to make an informed decision about whether to trust the application, based on the reputation of the publisher. Clearly there is nothing unique to ActiveX controls about code-signing. The same idea applies to ordinary Windows applications, and sure enough it was extended to cover them. Before there were centralized “App Stores” and “App Markets” for purchasing applications, it was common for software to be downloaded straight from the web-page of the publisher or even a third-party distributor website aggregating applications. The exact same problem of trust arises here: how can consumers decide whether some application is trustworthy? The MSFT approach translates that into a different question: is the author of this application trustworthy?

Returning to the paper, the researchers make a valuable contribution in demonstrating that revocation is not quite working as expected. But the argument is undermined by a flawed model. (Let’s chalk up the minor errors to lack of fact-checking or failure to read specs: for example asserting that Authenticode was introduced in Windows 2000 when it predates that, or stating that only SHA1 is supported when MSFT signtool has supported SHA256 for some time.) There are two major conceptual flaws in this argument:
The first is a misunderstanding of the meaning of revocation, at least as defined by PKIX standards. More fundamentally, there is a misunderstanding of what code-signing and the identity of the publisher represent, and of the limits of what can be accomplished by revocation.

Revocation and time-stamping

The first case of confusion concerns how revocation dates are used: the authors have “discovered” that malware signed and timestamped continues to validate even after the certificate has been revoked. To which the proper response is: no kidding, that is the whole point of time-stamping. It allows signatures to survive expiration or revocation of the digital certificate associated with that signature. This behavior is 100% by design and makes sense for the intended scenarios.

Consider expiration. Suppose Acme Inc obtains a digital certificate valid for exactly one year, say the calendar year 2015. Acme then uses this certificate to sign some applications published on various websites. Fast forward to 2016, and a consumer has downloaded this application and attempts to validate its pedigree. The certificate itself has expired. Without time-stamping, that would be a problem because there is no way to know whether the application was signed when the certificate was still valid. With time-stamping, there is a third-party asserting that the signature happened while the certificate was still valid. (Emphasis on third-party; it is not the publisher providing the timestamp because they have an incentive to backdate signatures.)

Likewise the semantics of revocation involve a point-in-time change in trust status. All usages of the key afterwards are considered void; usage before that time is still acceptable. That moment is intended to capture the transition point when assertions made in the certificate are no longer true. Recall that X509 digital certificates encode statements made by the CA about an entity, such as “public key 0x1234… belongs to the organization Acme Inc which is headquartered in New York, USA.” While competent CAs are responsible for verifying the validity of these facts prior to issuance, not even the most diligent CA can escape the fact that their validity can change afterwards. For example the private-key can be compromised and uploaded to Pastebin, implying that it is no longer in the sole possession of Acme. Or the company could change its name and move its business registration to Timbuktu, a location different from the state and country specified in the original certificate. Going back to the above example of the Acme certificate valid in 2015: suppose that half-way through the calendar year the Acme private-key is compromised. Clearly signatures produced after that date can not be reliably attributed to Acme: it could be Acme or it could be the miscreants who stole the private-key. On the other hand signatures made before, as determined by a third-party trusted timestamp, should not be affected by events that occurred later.**

In some scenarios this distinction between before/after is moot. If an email message was encrypted using the public-key found in an S/MIME certificate months ago, it is not possible for the sender to go back in time and recall the message now that the certificate is revoked. Likewise authentication happens in real-time and it is not possible to “undo” previous instances when a revoked certificate was accepted. Digital signatures, on the other hand, are different: the trust status of a certificate is repeatedly evaluated at future dates when verifying a signature created in the past. Intuitively, signatures created before revocation time should still be afforded full trust, while those created afterwards are considered bogus. Authenticode follows this intuition. Signatures time-stamped prior to the revocation instant continue to validate, while those produced afterwards (or lacking a time-stamp altogether) are considered invalid. The alternative does not scale: if all trust magically evaporated due to revocation, one would have to go back and re-create all signatures.
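That trust decision reduces to a point-in-time comparison. The following is illustrative logic only, not the actual Authenticode implementation:

```python
from datetime import datetime

def signature_trusted(timestamp, not_before, not_after, revoked_at=None):
    """A timestamped signature survives later expiration or revocation."""
    if timestamp is None:
        # Without a trusted timestamp, trust cannot outlive the certificate.
        return False
    if not (not_before <= timestamp <= not_after):
        return False  # signed outside the certificate validity window
    if revoked_at is not None and timestamp >= revoked_at:
        return False  # signed at or after the revocation instant
    return True

# Certificate valid for calendar year 2015, key compromised mid-year:
nb, na = datetime(2015, 1, 1), datetime(2015, 12, 31)
revoked = datetime(2015, 7, 1)
assert signature_trusted(datetime(2015, 3, 1), nb, na, revoked)      # before revocation
assert not signature_trusted(datetime(2015, 9, 1), nb, na, revoked)  # after revocation
assert not signature_trusted(None, nb, na)                           # no timestamp
```

Note that the “hard-revocation” proposal discussed below amounts to setting `revoked_at` equal to `not_before`, which makes every timestamp fail the check.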

To the extent that there is a problem here, it is an operational error on the part of CAs in choosing the revocation time. When software publishers are caught red-handed signing malware and this behavior is reported to certificate authorities, it appears that CAs are setting the revocation date to the time of the report, as opposed to rolling it all the way back to the original issuance time of the certificate. That means signed malware continues to validate successfully according to Authenticode policy, as long as the crooks remembered to timestamp their signatures. (Not exactly a high bar for crooks, considering that Verisign and others operate free, publicly accessible time-stamping services.) The paper recommends “hard-revocation,” which is made-up terminology for setting the revocation time all the way back to the issuance time of the certificate, or more precisely the notBefore date. This is effectively saying some assertion made in the certificate was wrong to begin with and the CA should never have issued it in the first place. From a pragmatic stance, that will certainly have the intended effect of invalidating all signatures. No unsuspecting user will accidentally trust the application because of a valid signature. (Assuming of course that users are not overriding Authenticode warnings. Unlike the case of web-browser SSL indicators, which have been studied extensively, there is comparatively little research on whether users pay attention to code-signing UI.) While that is an admirable goal, this ambitious project to combat malware by changing CA behavior is predicated on a misunderstanding of what code-signing and digital certificates stand for.



** In practice this is complicated by the difficulty of determining the precise time of key-compromise and typically involves conservatively estimating on the early side.

The problem with devops secret-management patterns

Popular configuration management systems such as Chef, Puppet and Ansible all include some variant of a secret-management solution: a way to handle the passwords, cryptographic keys and similar sensitive information that must be deployed to specific servers in a data-center environment while limiting access by other machines or even persons involved in operating the infrastructure.

Two of these, Chef encrypted data-bags and Ansible vault, share a fundamental design flaw. They store long-term secrets as files encrypted using a symmetric key. When it is time to add a new secret or modify an existing one, the engineer responsible for introducing the change will decrypt the file using that key, make changes and reencrypt using the same key. Depending on the design, decryption during an actual deployment can happen locally on the engineer’s machine (as in the case of Ansible) or remotely on the server where those secrets are intended to be used.

This model is broken for two closely related reasons.

Coupling read and write-access

Changing secrets also implies being able to view existing ones. In other words, adding a secret to an encrypted store or modifying an existing one (in keeping with best practices, secrets are rotated periodically, right?) requires knowledge of the same passphrase that also allows viewing the current collection of secrets.

Under normal circumstances, “read-only” access is considered less sensitive than “write” access. But when it comes to managing secrets, this wisdom is inverted: being able to steal a secret by reading a file is usually more dangerous than being able to clobber the contents of that file without learning what existed before.**

Scaling problems

Shared passphrases do not scale well in a team context. “Three can keep a secret, if two of them are dead,” said Benjamin Franklin. Imagine a team of 10 site-reliability engineers in charge of managing secrets. The encrypted data-bag or vault can be checked into a version control system such as git. But the passphrase encrypting it must be managed out of band. (If the passphrase itself were available at the same location, it becomes a case of locking the door and putting the key under the doormat.) That means coordinating a secret shared among multiple people, which creates two problems:

  • The attack surface of secrets is increased. An attacker need only compromise one of those 10 individuals to unlock all the data.
  • It increases the complexity of revoking access. Consider what happens when an employee with access to decrypt this file leaves the company. It is not enough to generate a new passphrase to reencrypt the file. Under worst-case assumptions, the actual secrets contained in that file were visible to that employee and could have been copied. At least some of the most sensitive ones (such as authentication keys to third-party services) may have to be assumed compromised and require rotation.

Separating read and write access

There is a simple solution to these problems. Instead of using symmetric cryptography, secret files can be encrypted using public-key cryptography. Every engineer has a copy of the public-key and can edit the file (or fragments of the file depending on format, as long as secret payloads are encrypted) to add/remove secrets. But the corresponding private-key required to decrypt the secrets does not have to be distributed. In fact, since these secrets are intended for distribution to servers in a data-center, the private-key can reside fully in the operational environment. Anyone could add secrets by editing a file on their laptop; but the results are decrypted and made available to machines only in the data-center.
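The asymmetry can be demonstrated with textbook RSA and toy parameters. This is strictly an illustration of the read/write split, NOT usable cryptography; a real implementation would use a vetted hybrid scheme (e.g. RSA-OAEP to wrap a symmetric key for an authenticated cipher):

```python
# Toy textbook RSA with tiny primes (demonstration only, NOT secure):
# everyone holds (n, e) and can add secrets; only the data-center
# environment holds d and can read them back.
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent: grants write access
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent: grants read access

def add_secret(m):
    return pow(m, e, n)             # any engineer's laptop can do this

def read_secret(c):
    return pow(c, d, n)             # only possible where the private key lives

c = add_secret(42)
assert c != 42                      # ciphertext does not reveal the value
assert read_secret(c) == 42         # recoverable only with the private key
```

The point of the sketch: the encryption step requires no secret material at all, so nothing of lasting value resides on engineers’ laptops.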

If it turns out that one of the employees editing the file had a compromised laptop, the attacker can only observe the secrets specifically added by that person. Similarly if that person leaves the company, only the specific secrets he/she added need to be considered for rotation. Because that employee never had the ability to decrypt the entire file, the remaining secrets were not accessible.

Case-study: Puppet

An example of getting this model right is Puppet encrypted hiera. The original PGP-based approach to storing encrypted data is now deprecated. But there is an alternative called hiera-eyaml that works by encrypting individual fields in YAML files. By default it uses public-key encryption in PKCS7 format as implemented by OpenSSL. The downside is that it suffers from low-level problems, such as the lack of an integrity check on ciphertexts and no binding between keys and values.

Improving Chef encrypted data-bags

An equivalent approach was implemented by a former colleague at Airbnb for Chef. Chef data-bags define a series of key/value pairs. Encryption is applied at the level of individual items, covering only the value payload. The original Chef design was a case of amateur hour: version 0 used the same initialization vector for all CBC ciphertexts. Later versions fixed that problem and added an integrity check. Switching to asymmetric encryption allows for a clean slate. Since the entire toolchain for editing as well as decrypting data-bags has to be replaced, there is no requirement to follow the existing choice of cryptographic primitives.

On the other hand, it is still useful to apply encryption independently to each value, as opposed to the entire file. That allows for distributed edits: secrets can be added or modified by different people without being able to see other secrets. It’s worth pointing out that this can introduce additional problems because ciphertexts are not strongly bound to their keys. For example, one can move values around, pasting an encryption key into a field meant to hold a password or API key, resulting in the secret being used in an unexpected scenario. (Puppet hiera-eyaml has the same problem.) These can be addressed by including the key-name and other meta-data in the construction of the ciphertext; a natural solution is to use those attributes as additional data in an authenticated-encryption mode such as AES-GCM.
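The binding idea can be illustrated with standard-library primitives via encrypt-then-MAC, where the MAC covers the field name. This is a hedged sketch with invented names and format; it stands in for AES-GCM with associated data, which the Python standard library does not provide:

```python
import hashlib, hmac, os

def _keystream(key, nonce, length):
    # SHA256 in counter mode as a stand-in stream cipher (illustrative only)
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def seal(enc_key, mac_key, field_name, plaintext):
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(enc_key, nonce, len(plaintext))))
    # The MAC covers the field name, so the ciphertext is bound to its key
    tag = hmac.new(mac_key, field_name.encode() + nonce + ct, hashlib.sha256).digest()
    return nonce, ct, tag

def open_(enc_key, mac_key, field_name, nonce, ct, tag):
    expected = hmac.new(mac_key, field_name.encode() + nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("ciphertext not bound to this field")
    return bytes(a ^ b for a, b in zip(ct, _keystream(enc_key, nonce, len(ct))))

ek, mk = os.urandom(32), os.urandom(32)
blob = seal(ek, mk, "db_password", b"hunter2")
assert open_(ek, mk, "db_password", *blob) == b"hunter2"
# Pasting the same ciphertext under a different field name is detected:
moved = False
try:
    open_(ek, mk, "api_key", *blob)
except ValueError:
    moved = True
assert moved
```

With AES-GCM the same effect is obtained more cleanly by passing the key name as the associated-data argument.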

Changes to the edit process

With symmetric encryption, edits were straightforward: all values are decrypted to create a plain data-bag written into a temporary file. This file can be loaded in a favorite text editor, modified and saved. The updated contents are then encrypted from scratch with the same key.

With asymmetric keys, this approach will not fly because existing values can not be decrypted. Instead the file is run through an intermediate parser to extract the structure, namely the sequence of keys defined, into a plain file which contains only blank values. Edits are made on this intermediate representation. New key/value pairs can be added, or new values can be specified for an existing key. These correspond to defining a new secret or updating an existing one, respectively. (Note that “update” in this context strictly means overwriting with a new secret; there is no mechanism to incrementally edit the previous secret in place.) After edits are complete, another utility merges the output with the original, encrypted data-bag. For keys that were not modified, existing ciphertexts are taken from the original data-bag. For new/updated keys, the plaintext values supplied during edit session are encrypted using the appropriate RSA public-key to create the new values.
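The merge step might look like the following sketch (hypothetical helper names, not the actual tooling):

```python
def merge_databag(original_encrypted, edited, encrypt):
    """Keep existing ciphertexts for untouched keys; encrypt only new values."""
    merged = {}
    for key, value in edited.items():
        if value == "" and key in original_encrypted:
            # left blank during the edit session: reuse the old ciphertext
            merged[key] = original_encrypted[key]
        else:
            # new key or updated value: encrypt with the RSA public key
            merged[key] = encrypt(value)
    return merged

# A stand-in for public-key encryption, to keep the sketch self-contained:
encrypt = lambda v: "enc(%s)" % v
original = {"db_password": "enc(old)", "api_key": "enc(abc)"}
edited = {"db_password": "", "api_key": "rotated", "new_secret": "s3cret"}
assert merge_databag(original, edited, encrypt) == {
    "db_password": "enc(old)",    # untouched ciphertext carried over
    "api_key": "enc(rotated)",    # updated secret, freshly encrypted
    "new_secret": "enc(s3cret)",  # brand new secret
}
```

Reusing ciphertexts for unmodified keys is also what keeps version-control diffs meaningful, as noted below.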

The main benefit is that all of these steps can be executed by any engineer/SRE with commit access to the repository where these encrypted data-bags are maintained. Unlike the case of existing Chef data-bags, there is no secret key to be shared with every team member who may someday need to update secrets.

A secondary benefit is that when files are updated, diff comparisons accurately reflect the differences, which means standard utilities for merging, resolving conflicts and cherry-picking work as before. In contrast, existing Chef symmetric encryption selects a random IV for each invocation: simply decrypting and reencrypting a file with no changes will still result in every value being updated. (In principle the edit utility could have compensated for this by detecting changes and reusing ciphertexts when possible, but Chef does not attempt that.)

Changes to deployment

Since Chef does not natively grok the new format, deployment also requires an intermediate step to convert the asymmetrically-encrypted databags into plain databags. This step is performed on the final hop, on the server(s) where the encrypted data-bag is deployed and its contents are required. That is the only time when the RSA private key is used. It does not have to be available anywhere other than on the machines where the secrets themselves will be used.


** For completeness, it is possible under some circumstances to exploit write-access for disclosing secrets. For example, by surgically modifying parts of a cryptographic key and forcing the system to perform operations with the corrupted key, one can learn information about the original (unaltered) secret. This class of attacks falls under the rubric of differential fault analysis.

Multi-signature and correlated risks: the case of Bitfinex

Bitfinex has not yet provided a detailed account of the recent security breach that resulted in the loss of 120,000 bitcoins from the exchange. Much of what has been written about the incident is incomplete, speculation or opportunistic marketing. Fingers have been pointed at seemingly everyone, including, strangely, the CFTC, for contributing to the problem. (While casting regulators as villains seems de rigueur in cryptocurrency land these days, that particular argument was debunked elsewhere.) Others have questioned whether there is a problem with the BitGo service or even an intrinsic problem with the notion of multi-signature in Bitcoin.

Some of these questions can not be answered until more information about the incident is released, either by Bitfinex or perhaps by the perpetrators; it is not uncommon for boastful attackers to publish detailed information about their methods after a successful heist. But we can dispel some of the misconceptions about the effect of multi-signature on risk management.

Multi-signature background

In the Bitcoin protocol, control over funds is represented by knowledge of cryptographic secrets, also known as private keys. These secrets are used for digitally signing transactions that move funds from one address to another. For example, it could be moving funds from a consumer wallet to a merchant wallet, when paying for a cup of coffee in Bitcoin. Multi-signature is an extension to this model where more than one key is required to authorize a transaction. This is typically denoted by two numbers as “M-of-N”: a group of N total keys, of which any quorum of M is sufficient to sign the transaction.

Multisig achieves two seemingly incompatible objectives:

  • Better security. It is more difficult for an attacker to steal 3 secrets than it is to steal a single one. (More about that assumption below.)
  • Higher availability & resiliency. Theft is not the only risk associated with cryptographic secrets. There is also the far more mundane problem of simply losing the key due to hardware failures, forgotten credentials, etc. As long as N > M, the system can tolerate complete destruction of some keys provided there are enough left to constitute a quorum.
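The quorum rule itself is simple. Here is a minimal sketch; note that real Bitcoin multi-signature is enforced by the script interpreter on-chain, not by application code like this:

```python
def quorum_met(signing_keys, authorized_keys, m):
    """M-of-N check: count distinct authorized keys that produced a signature."""
    return len(set(signing_keys) & set(authorized_keys)) >= m

authorized = {"k1", "k2", "k3"}                          # N = 3 keys
assert quorum_met({"k1", "k3"}, authorized, 2)           # any 2-of-3 works
assert not quorum_met({"k2"}, authorized, 2)             # one key is not enough
assert not quorum_met({"k1", "evil"}, authorized, 2)     # unauthorized keys don't count
assert quorum_met({"k1", "k2"}, authorized - {"k3"}, 2)  # survives total loss of k3
```

The last assertion captures the resiliency point above: with N > M, destroying one key still leaves a valid quorum.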

Multi-signature with third-party cosigners

Bitfinex did not have a cold-wallet in the traditional sense: storing Bitcoins on an offline system, one that is not accessible from the Internet even indirectly. Instead the exchange leveraged the BitGo co-signing engine to hold customer funds in a multi-signature wallet. This scheme involves a 2-of-3 configuration with specific roles intended for each key:

  • One key is held by the “customer,” in this case Bitfinex (Not to be confused with end-users who are customers of Bitfinex)
  • Second key is held by BitGo
  • Third key is an offline “recovery” key held by Bitfinex. It is the escape-hatch to use for restoring unilateral control over funds, in case BitGo is unable/unwilling to participate

In order to withdraw funds out of Bitfinex, a transaction must be signed by at least two of these three keys. Assuming the final key is truly kept offline under wraps and only invoked for emergencies, the standard process involves using the first two keys. Bitfinex has possession of the first key** and can produce that signature locally. But getting the second one involves a call to the BitGo API. This call requires authentication: BitGo must only co-sign transactions originating from Bitfinex, which requires that it have a way to ascertain the identity of the caller. The parameters of this authentication protocol are defined by BitGo; they are outside the scope of blockchain or Bitcoin specifications.

Correlated vs independent risks

The assertion that multi-signature improves security was predicated on an assumption: it is more difficult for an attacker to gain control of multiple cryptographic keys than to get hold of one. But is that necessarily true?


Consider a gate with multiple padlocks on it. Does the addition of a second or third padlock make it any harder to open this gate? The answer depends on many factors.

  • Are the locks keyed the same? If they are, there is still a “SPOF” or single point of failure: getting hold of that key makes it as easy to open a gate with 10 locks as one protected by a single lock.
  • Are the locks the same model? Even if they have individual keys, locks of the same type may share a common systemic flaw, such as being susceptible to the same lock-picking technique.

The general observation is that multiple locks improve security only against threats that are independent, or uncorrelated. Less obviously, whether risks are uncorrelated is a function of the threat model. To a casual thief armed with bolt-cutters, the second lock doubles the amount of effort required even if it were keyed identically: they have to work twice as hard to physically cut through two locks. But for a more sophisticated attacker who plans on stealing that one key ahead of time, it makes no difference. Armed with the key, opening two locks is not substantially more difficult than opening one. The same holds true if the locks are different but both keys are kept under the doormat in front of the gate. Here again the probability of the second lock being breached is highly correlated with the probability that the first lock was breached.

When multi-signature does not help

Consider Bitcoin funds tied to a single private-key stored on a server. Would it help if we transferred those funds to a new Bitcoin address with a multisig configuration of 2 keys stored on the same server? Unlikely: virtually any attack that can compromise one key on that server is going to get the second one with equal ease. That includes code execution vulnerabilities in the software running on the box, “evil-maid” attacks with physical access, and malicious operators. The difficulty of some attacks might increase ever so slightly: for example if there was a subtle side-channel leak, it may now require twice as much work to extract both keys. But in general, the cost of the attack does not double by virtue of having doubled the number of keys.

Now consider the same multi-signature arrangement, except the second private-key is stored on a different server, loaded with a different operating system and wallet software, located in a different data-center managed by a different group of engineers. In this scenario the cost of executing attacks has increased appreciably, because it is not possible for the attacker to “reuse” resources between them. Breaking into a data-center operated by hosting provider X does not allow also breaking into one operated by company Y. Likewise finding a remote-code execution vulnerability in the first OS does not guarantee an identical vulnerability in the second one. Granted, the stars may align and the attacker may discover a single weakness shared by both systems. But that is more difficult than breaking into a single server to recover two keys at once.

Multi-signature with co-signers

Assuming the above description of Bitfinex operations is correct, the Bitfinex operational environment that runs the service must be in possession of two sets of credentials:

  • One of the three multi-signature keys; these are ECDSA private keys
  • Credentials for authenticating to the BitGo API

These need not reside on the same server. They may not even reside in the same data-center, as far as physical locations go. But logically there are machines within the Bitfinex “live” system that hold these credentials. Why? Because users can ask to withdraw funds at any time and both pieces are required to make that happen. The fact that users can go to a web page, press a button and later receive funds indicates that there are servers somewhere within the Bitfinex environment capable of wielding them.

Corollary: if an attack breaches that environment, the adversary will be in possession of both secrets. To the extent that correlated risks exist in this environment (for example, a centralized fleet-management system such as Active Directory that grants access to all machines in a data-center) they reduce the value of having multiple keys.


BitGo API designers were aware of this limitation and attempted to compensate for it. Their API supports limits on funds movement. For example, it is possible to set a daily limit in advance such that requests to co-sign amounts greater than this threshold will be rejected. Even if the customer systems were completely breached and both sets of credentials compromised, BitGo would cap losses in any given 24-hour period to that limit by refusing to sign additional Bitcoin transactions.
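Such a limit might be enforced along these lines. This is a hedged sketch with hypothetical names, not BitGo’s actual implementation:

```python
from datetime import datetime, timedelta

class CosignPolicy:
    """Refuse to co-sign once a rolling 24-hour spending limit is reached."""
    def __init__(self, daily_limit):
        self.daily_limit = daily_limit
        self.history = []  # (time, amount) of previously co-signed transactions

    def cosign(self, amount, now):
        window_start = now - timedelta(hours=24)
        recent = sum(a for t, a in self.history if t > window_start)
        if recent + amount > self.daily_limit:
            return False  # over the 24-hour limit: decline to co-sign
        self.history.append((now, amount))
        return True

policy = CosignPolicy(daily_limit=100)
t0 = datetime(2016, 8, 1, 9, 0)
assert policy.cosign(60, t0)
assert not policy.cosign(50, t0 + timedelta(hours=1))  # 110 > 100, rejected
assert policy.cosign(40, t0 + timedelta(hours=2))      # exactly at the limit
assert policy.cosign(50, t0 + timedelta(hours=25))     # earliest spend aged out
```

The key property is that the check runs on the co-signer’s side, so a fully compromised customer cannot bypass it; the weakness described next is that the same API credentials could also remove the policy.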

By all indications, such limits were in effect for Bitfinex. News reports indicate that the attack was able to work around them. Details are murky, but looking at BitGo API documentation offers some possible explanations. It is possible to remove policies by calling the same API, authenticating with the same credentials as one would use for ordinary transaction signing. So if the adversary breached Bitfinex systems and gained access to a valid authentication token for BitGo, that token would have been sufficient for lifting the spending limit.

This points to the tricky nature of API design and the counter-intuitive observation that some things are best left unautomated. Critical policies such as spending limits are modified very infrequently. It’s not clear that a programmatic API to remove policies is necessary. Requiring out-of-band authentication by calling/emailing BitGo to modify an existing policy would have introduced useful friction and sanity checks into the process. At a minimum, a different set of credentials could have been required for such privileged operations, compared to ordinary wallet actions. Now BitGo did have one mitigation available: if a wallet is “shared” among 2 or more administrators, lifting a spending limit requires approval by at least one other admin. The documentation (retrieved August 2016) recommends using that setup:

If a wallet carries a balance and there are more than two “admin” users associated with a Wallet, any policy change will require approval by another administrator before it will take effect (if there are no additional “admin” users, this will not be necessary). It is thus highly recommended to create wallets with at least 2 administrators by performing a wallet share. This way, policy can be effective even if a single user is compromised.

One possibility is that Bitfinex had only a single-administrator setup. Another possibility is a subtle problem in the wallet-sharing API. For example, the documentation notes that removing a share requires approval by at least one other admin, ruling out the possibility of going from 2 administrators to 1 unilaterally. But if adding another admin was not subject to the same restriction, one could execute a Sybil attack: create another share which appears to belong to another user but is in fact controlled by the attacker. This effectively grants the adversary two shares, which is enough to subvert policy checks.

Multi-signature in perspective

Until more details are published about this incident, the source of the single point of failure remains unknown. BitGo has gone on the record stating that its system was not breached and its API performed according to spec. Notwithstanding those assurances, Bitfinex has stopped relying on the BitGo API for funds management and reverted to a traditional, offline cold-wallet system. Meanwhile pundits have jumped on the occasion to question the value proposition of multi-signature, in a complete about-face from 2014 when they were embracing multi-signature as the security silver bullet. This newfound skepticism may have a useful benefit: closer scrutiny of key-management beyond counting shares, and a better understanding of correlated risks.


** For simplicity, we assume there is just one “unspent transaction output” or UTXO in the picture with a single set of keys. In general, funds will be stored across hundreds or thousands of UTXO, each with their own unique 2-of-3 key sets that are derived from a hierarchical deterministic (HD) key generation scheme such as BIP32.

Getting by without passwords: web-authentication (part II)

In the second part of this series on web authentication without passwords, we look at the Firefox approach for interfacing with cryptographic hardware. Recall that Chrome follows the platform pattern (“when in Rome, do as the Romans do”) and uses the “native” middleware for each OS: CryptoAPI for Windows, tokend on OSX and PKCS#11 on Linux. By contrast, Firefox opts for what Emerson would have called foolish consistency: it always uses PKCS#11, even on platforms such as Windows where that interface is not the norm. It also turns out to be much less user-friendly. By giving up on functionality already built into the OS for automatic detection and management of cryptographic hardware, it forces users to jump through new hoops to get their gadgets working with Firefox.

Specifically, before using a PIV card or token with Firefox, we first have to tell Firefox how to interface with those objects. That means pointing it at a PKCS#11 module on disk. First, open the hamburger menu and choose Preferences to bring up a new tab with various settings. Next navigate to where most users rarely venture: the “Advanced” group. Choose the “Certificates” tab and click the “Security Devices” button:

Screen Shot 2016-06-27 at 00.48.10.png

Firefox advanced settings, certificates tab

This brings up a terse list of recognized cryptographic hardware grouped by their associated module:

Screen Shot 2016-06-27 at 00.48.29.png

“Security Devices” view in Firefox

There are already a few modules loaded, but none of them are useful for the purpose of using a PIV card/token with Firefox. Time to bring in reinforcements from the veritable OpenSC project:

Screen Shot 2016-06-27 at 00.48.51.png

Loading a new PKCS#11 module

The module name is arbitrary (“OpenSC” is used here for simplicity); the more important part is locating the file on disk. The OSX file-picker dialog does not make it easy to search for shared libraries. Navigating directly to the directory containing the module, typically /usr/local/lib or /usr/lib, and selecting it there is the easiest option. (Already this is veering into user-unfriendly territory; getting this far requires significant knowledge on the part of users to locate shared libraries on disk.)

With OpenSC loaded, Firefox now displays another token slot present:

Screen Shot 2016-06-30 at 12.21.11.png

With OpenSC module loaded, PIV cards/token visible

Caveat emptor: this part of the codebase appears to be very unstable. “Log In” fails regardless of the PIN presented, and simply removing a token can crash Firefox.

With tokens recognized, we can revisit the scenario from the previous post. Start a simple TLS web server emulated by openssl, configured to request optional client authentication without any restrictions on acceptable CAs. (This example uses a self-signed certificate for the server and will require adding an exception to get past the initial error dialog.) Visiting the page in Firefox with a PIV card/token attached brings up this prompt:

Firefox certificate authentication prompt

When there are multiple certificates on the card, the drop-down allows switching between them. Compared to the clean UI in Chrome, the Firefox version is busy and dense with information drawn from the X509 certificate fields.

Choosing a certificate and clicking OK proceeds to the next step to collect the PIN for authenticating to the card:

Firefox smart-card PIN prompt

After entering the correct PIN we have our mutually authenticated connection set up to retrieve a web page from openssl running as server:

Screen Shot 2016-07-14 at 22.00.27.png

Firefox retrieving a page from OpenSSL web server, using client-authentication

Having walked through how web authentication without passwords works in two popular, cross-platform web browsers, the next post will look at arguments for/against deploying this approach.



Bitcoin’s meta problem: governance (part II)

To explain why Bitcoin governance appears dysfunctional, it helps to explain why changing Bitcoin is so difficult to begin with: any group or organization tasked with that responsibility faces very long odds. Comparisons to two other large-scale systems are instructive.

Case study: the web

The world-wide web is a highly decentralized system with millions of servers providing content to billions of clients using HTML and HTTP. Software powering those clients and servers is highly diverse. MSFT, Mozilla and Google are duking it out for web-browser market share (plus Apple, if one includes the iPhone), while on the server side nginx and Apache are the top predators. Meanwhile there is a large collection of software for authoring and designing web pages, with the expectation that they will be rendered by those web browsers. Mostly for historical reasons, ownership of the standards is divided between IETF for the transport protocol HTTP and W3C for the presentation layer of HTML. But neither organization has any influence over the market or power to compel participants in the system to implement its standards. IETF specifications even carry the tentative, modest designation of “RFC” or “request for comments.” There is no punishment for failing to render HTML properly (if there were, Internet Explorer would have been banished eons ago) or, for that matter, for introducing proprietary extensions.

That seems like a recipe for getting stuck, with no way to nudge the entire ecosystem to adopt improved versions of the protocol, or to prevent fragmentation where every vendor introduces its own slightly incompatible variant in search of competitive advantage. (As an afterthought, they might even ask the standards body to sanction these extensions in the next version of the “standard.”) But in reality the web has shown remarkable plasticity in evolving and adopting new functionality. From XmlHttpRequest, which fueled the AJAX paradigm for designing responsive websites, to security features like Content Security Policy and improved versions of the TLS protocol, web browsers and servers continue to add new functionality. Three salient properties of the system help:

  • The system is highly tolerant of change and experimentation. Web browsers ignore unknown HTTP headers in the server response. Similarly they ignore unknown HTML tags and attempt to render the page as best they can. If someone introduces a new tag or HTTP header that is recognized by only one browser, it will not break the rest of the web. Some UI element may be missing and some functionality may be reduced, but these are usually not critical failures. (Better yet, it is easy to target based on audience capability, serving one version of the page to new browsers and a different one to legacy versions.) That means it is not necessary to get buy-in from every single user before rolling out a new browser feature. The feature may get traction as websites adopt it, putting competitive pressure on other browsers to support it, followed by eventual standardization. That was the path for X-Frame-Options and similar security headers originally introduced by IE. Or it may crater and remain yet another cautionary tale about attempts to foist proprietary vendor crud on the rest of the web- as was the case with most other MSFT “extensions” to HTML, including VBScript, ActiveX controls and behaviors.
  • There is competition among different implementations. (This was not always the case; MSFT Internet Explorer enjoyed a virtual monopoly in the early 2000s, which not coincidentally was a period of stagnation in the development of the web.)
  • There exists a standardization process for formalizing changes and this process has credible claim to impartiality. While software vendors participate in work carried out by these groups, no single vendor exercises unilateral control over the direction of standards. (At least that is the theory- hijacking of standards group to lend the imprimatur of W3C or IETF on what is effectively a finished product already implemented by one vendor is not uncommon.)

The challenge of changing Bitcoin

Bitcoin is the exact opposite of the web.

Intolerant of experimentation

Because money is at stake, all nodes on the network have to agree on what constitutes a valid transaction. There needs to be consensus about the state of the blockchain ledger at all times. Consensus can drift temporarily when multiple miners come up with new blocks at the same time and it is unclear at first which will emerge as the “final” word on the ledger. But the system is designed to eliminate such disagreements quickly and have everyone converge on a single winning chain. What it cannot tolerate is a situation where nodes permanently disagree about which block is valid because some new feature is recognized by only a fraction of nodes. That makes it tricky to introduce new functionality without playing a game of chicken with upgrade deadlines.

There is a notion of soft-forks for introducing new features, which “only” requires a majority of mining power to upgrade as opposed to everyone. These are situations where the change happens to be backwards compatible, in the sense that a node that does not upgrade will not reject valid transactions using the new feature. But it may incorrectly accept bogus transactions, because it is not aware of the additional criteria implied by that feature. Counter-intuitive as that sounds, this approach works because individual nodes only accept transactions once they are confirmed by being included by miners in the blockchain. As long as the majority of miners have upgraded to enforce the new rules, bogus transactions will not make it into the ledger. This soft-fork approach has been flexible enough to implement a surprising number of improvements, most recently segregated witness. But there are limits: raising the blocksize limit cannot be done this way, because non-upgraded nodes would outright reject blocks exceeding the hardcoded limit even if miners mint them. That would require a hard-fork, the disruptive model where everyone must upgrade by a particular deadline. Those who fail face the danger of splitting off into a parallel universe where transactions move funds in ways that are not reflected in the “real” ledger recognized by everyone else.
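A toy model helps illustrate why soft-forks work: the new rule is strictly more restrictive, so old nodes accept a superset of what upgraded nodes accept, and upgraded miners act as gatekeepers. The rules below are made up for illustration; this is not actual consensus code:

```python
# Toy model of a soft fork. A soft fork only *tightens* validity, so an
# old node accepts everything a new node accepts (plus some bogus extras).
OLD_RULES = [lambda tx: tx["fee"] >= 0]                 # original validity rules
NEW_RULES = OLD_RULES + [lambda tx: tx.get("witness")]  # soft fork adds a rule

def valid(tx, rules):
    return all(rule(tx) for rule in rules)

def mine(mempool, rules):
    # upgraded miners only include transactions passing the NEW rules
    return [tx for tx in mempool if valid(tx, rules)]

bogus = {"fee": 1}                  # lacks the data the new rule demands
print(valid(bogus, OLD_RULES))      # -> True: an old node would accept it...
block = mine([bogus], NEW_RULES)
print(bogus in block)               # -> False: ...but miners never confirm it
```

Since old nodes only act on confirmed transactions, a majority of upgraded miners is enough to keep the bogus transaction out of the ledger everyone sees. A blocksize increase breaks this model because it *loosens* validity: old nodes would reject the bigger blocks outright.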

No diversity in implementations

Until recently, virtually all Bitcoin peers were running a single piece of software (Bitcoin Core) maintained by the aptly named core team. Even today that version retains over 80% market share while its closest competitors are forks that are identical in all but one feature, namely the contentious question of how to increase maximum blocksize.

No impartial standard group

The closest thing to an organization maintaining a “specification” and deciding which Bitcoin improvement proposals (BIPs) get implemented is the core team itself. It’s as if W3C were not only laying down the rules of HTML, but also shipping the web browser and web server used by everyone in the world. Yet for all that power, the group still has no mandate or authority to compel software upgrades. It can release updates to the official client with new features, but it remains up to miners and individual nodes to adopt that release.

Between a rock and a hard-fork

This leaves Bitcoin stuck in its current equilibrium. Without the flexibility of experimenting with new features in a local manner by anyone with a good idea, all protocol improvements must be coordinated by a centralized group- the ultimate irony for a decentralized system. That group is vested with significant power. In the absence of any articulated principles around priorities- keeping the system decentralized, keeping miners happy, enabling greater Bitcoin adoption etc.- all of its decisions will be subject to second-guessing, met with skepticism or accusations of bias. Without the mandate or authority to compel upgrades across the network, even a well-meaning centralized planning committee will find it difficult to make drastic improvements or radical changes, lest the system descend into chaos with a dreaded hard-fork. Without the appetite to risk hard-forks, every improvement must be painstakingly packaged as a soft-fork, stacking the deck against timely interventions when the network is reaching a breaking point.


Making USB even more dicey: encrypted drives

Imagine you were expecting documents from a business associate. Being reasonably concerned with op-sec, they want to encrypt the information en route. Being old-fashioned, they also opt for snail-mailing an actual physical drive instead of using PGP or S/MIME to deliver an electronic copy. Is it safe to connect the USB drive that came out of that envelope? If this is bringing back memories of BadUSB, let’s take the exercise one step further: suppose this drive uses hardware encryption and requires installing custom applications before users can access the encrypted side. But conveniently there is a small unencrypted partition containing those applications, ready to install. Do you feel lucky today?

This is not hypothetical- it is how typical encrypted USB drives work.

Lost/stolen USB drives have been implicated in data breaches and encryption-at-rest is a solid mitigation against such threats. But this well-meaning attempt to improve security against one class of risks ends up reducing operational security overall against a different one: users are trained to install random applications from a USB drive that shows up in their mailbox. (It’s not a stretch to extend that bad habit to drives found anywhere, based on recent research.) Meanwhile the vendors are not helping their own case with a broken software distribution model and unsigned applications. Here is a more detailed peek at the Kingston DataTraveler Vault.

On connecting the drive, it will appear as a CD-ROM. This is not unusual; USB devices can encapsulate multiple interfaces as a single composite device. In this case the encrypted contents are not visible yet because the host PC does not have the requisite software. Looking into the contents of that emulated “CD” we find different solutions intended to cover Windows, OSX and Linux.


Considering this is an enterprise product and most enterprises are still Windows shops, one would expect the most streamlined experience here. Indeed, Windows itself recognizes the presence of an installer as soon as the drive is connected and asks the user what to do. (Actually asking for user input is a major improvement over past Windows versions, cursed with the overzealous autorun feature that happily executed random binaries from any connected removable drive.)

If we decline this offer and decide to take a closer look at the installer, we can see that it has been digitally signed by the manufacturer using Authenticode, a de facto Windows standard for verifying the origin of binaries:

Screen Shot 2016-06-20 at 21.05.59.png

Looking at the digital signature of the installer in Explorer

The same signature can be examined in more detail using the signtool utility:


Using signtool to examine Authenticode signature details

The signature includes a trusted timestamp hinting at the age of the binary. (Note the certificate is expired but the signature is still valid. This is possible only because the third-party timestamp provides evidence that the signature was originally produced at a point in time while the certificate was still valid.)
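That logic can be sketched as follows- a deliberately simplified model of timestamp countersignatures, not the actual Authenticode verification algorithm, with an invented validity window for illustration:

```python
from datetime import date

# Simplified model of Authenticode timestamp semantics: a signature
# remains valid after certificate expiry, provided a trusted timestamp
# proves it was produced inside the certificate's validity window.
def signature_valid(signed_at, not_before, not_after, today):
    if signed_at is None:                        # no countersignature:
        return not_before <= today <= not_after  # cert must still be valid now
    return not_before <= signed_at <= not_after  # else check time of signing

cert = (date(2010, 1, 1), date(2013, 1, 1))      # illustrative validity window
# Timestamped while the cert was valid: still good years later.
print(signature_valid(date(2012, 6, 1), *cert, today=date(2016, 6, 20)))  # -> True
# Same cert, no timestamp: verification fails once the cert expires.
print(signature_valid(None, *cert, today=date(2016, 6, 20)))              # -> False
```

This is why timestamping matters for shipped binaries: without it, every signature would silently rot on the certificate's expiry date.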

This particular signature uses the deprecated SHA1 hash algorithm, but we will give Kingston a pass on that. The urgency of migrating to better options such as SHA256 did not become apparent until long after this software had shipped. Authenticode features also include checking for certificate revocation, so we can be reasonably confident that Kingston is still in control of the private-key used to sign their binaries and they did not experience a Stuxnet moment. (Alternative view: if they have been 0wned, they are not aware of it and have not asked Verisign to revoke their certificate.)

Overall there is reasonable evidence suggesting that this application is indeed the legitimate one published by Kingston, although it may be slightly outdated given timestamps.


OSX applications can be signed and the codesign utility can be used to verify those signatures. Did Kingston take advantage of it? No:


What- me worry about code signing?

Not that it matters; a support page suggests that installing that application would have been a lost cause anyway:

The changes Apple made in MacOS 10.11 disabled the functionality of our secure USB drives. It will cause the error ‘Unable to start the DTXX application…’ or ‘ERR_COULD_NOT_START’ when you attempt to log into the drive. We recommend that you update your Kingston secure USB drive by downloading and installing one of the updates provided in links below.

(As an aside, it is invariably Apple’s fault when OS updates break existing applications.)

Luckily it is much easier to verify the integrity of this latest version- the download link uses SSL (although the support page linking to the download does not, allowing a MITM attack to substitute a different link). More importantly, Kingston got around to signing this one:



It is a rare surprise when any vendor attempts to get an enterprise scenario working on Linux. At this point, asking for some proof of the authenticity of binaries might be too much and the screenshot above confirms that.

In fairness, Linux does not have a native standard for signing binaries. There are some exceptions- boot loaders are signed with Authenticode thanks to UEFI requirements, and some distributions such as Fedora also require kernel modules to be signed when secure-boot is in use. There are also higher-level application frameworks such as Java and Docker with their own primitive code-signing schemes. But the state of the art for code authentication in open source is PGP signatures. Typically there is a file containing hashes of the released files, which is cleartext-signed using one of the project developers’ keys. (As for establishing trust in that key? Typically it is published on key servers and also available for download over SSL from the project pages- thus paradoxically relying on the hierarchical certificate-authority model of SSL to bootstrap trust in the distributed PGP web-of-trust.) Luckily Kingston did not bother with any of that; the Linux directory just contains a handful of ELF binaries without the slightest hint of how to verify their integrity.
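The hash-manifest pattern is easy to sketch. This omits the PGP signature check itself (assume the manifest's cleartext signature was already verified), and the file name and contents are made up:

```python
import hashlib

# Sketch of the hash-manifest pattern used in open-source releases: a
# (PGP-signed) text file lists "digest  filename" pairs; after verifying
# the signature on the manifest, each file is checked against its digest.
def verify_manifest(manifest_text, files):
    """files: dict mapping filename -> bytes of downloaded content."""
    for line in manifest_text.strip().splitlines():
        digest, name = line.split()
        actual = hashlib.sha256(files[name]).hexdigest()
        if actual != digest:
            return False        # tampered or corrupted download
    return True

payload = b"binary contents"
manifest = hashlib.sha256(payload).hexdigest() + "  installer.bin"
print(verify_manifest(manifest, {"installer.bin": payload}))      # -> True
print(verify_manifest(manifest, {"installer.bin": b"tampered"}))  # -> False
```

The security of the scheme reduces entirely to trust in the key that signed the manifest- which is where the web-of-trust bootstrapping problem described above comes in.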

On balance

Where does that leave this approach to distributing files?

  • Users on recent versions of Windows have a fighting chance of verifying that they are running a signed binary. (Even that small victory comes with a long list of caveats: “signed” is not a synonym for “trustworthy.” There are dangerous applications which are properly signed, including remote-access tools.)
  • Users on OSX have to go out of their way to download the application from less dubious sources, by hunting around for it on the vendor website
  • Linux users should abandon hope and follow standard operating procedure: launch a disposable, temporary virtual machine to run dubious binaries and hope they did not include a VM-escape exploit.

Even in 100% Windows environments, there is collateral damage: training users that it is perfectly reasonable to run random binaries that arrive on a USB drive. In the absence of additional controls such as group policy, users are one click away from bypassing the unsigned-binary warning to execute something truly dangerous. (In fact, given that Windows has a sophisticated plug-and-play framework with the capability to download drivers from Windows Update on demand, it is puzzling that any user action to install an application is required.)

Bottom line: encrypting data on removable drives is a good idea. But there are better ways of doing that without setting a bad opsec example by encouraging users to install random applications of dubious provenance.


Ethereum and lessons on how to wreck a decentralized system

For what has been billed as a decentralized platform for smart-contracts, Ethereum is proving surprisingly amenable to central control. It turns out that when push comes to shove and a too-big-to-fail Ethereum application appropriately called The DAO is on the verge of losing all investor money, proponents of trustless systems start agitating for interventions and bailouts that would make Bernanke blush.

With the damage toll from The DAO attack at 3.5M ether and counting, the Ethereum team is gearing up to introduce a generic blacklist for Ethereum clients, to stop the stolen DAO funds from being cashed out. Starting with the vulnerable DAO contract itself, this blacklist will contain the list of “bad actors” on the network who will be prevented from participating in transactions- and for good reason: after all, somebody said they are bad people.

But it is well known in the security field that blacklists are a fragile design. Trying to enumerate all known bad actors in a system without a strong notion of identity is playing a game of whack-a-mole. Banned miscreants disappear and reappear under a different pseudonym, starting over with a clean reputation. It is much better to whitelist known good entities and only let those people onto the network than to try kicking out bad apples after the fact.
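A few lines of Python make the whack-a-mole dynamic concrete: when addresses are just digests of freely minted keys, a ban on one address costs the attacker nothing. (The derivation below is deliberately simplified- real Ethereum addresses are computed with Keccak-256 over the public key, not SHA-256 over raw random bytes.)

```python
import hashlib
import os

# Toy illustration of why address blacklists fail when identities are free:
# a banned actor simply derives a fresh address from a fresh key.
def new_address():
    key = os.urandom(32)                         # anyone can mint a key pair
    return hashlib.sha256(key).hexdigest()[:40]  # address derived from the key

blacklist = set()
attacker = new_address()
blacklist.add(attacker)       # the "bad actor" is banned...
attacker = new_address()      # ...and instantly reappears under a new identity
print(attacker in blacklist)  # -> False: the ban is evaded for free
```

Whitelisting inverts the economics: instead of the defenders chasing an unbounded supply of fresh pseudonyms, newcomers must pay the cost of earning admission- which is exactly the identity layer the satirical "proposal" below takes to its logical extreme.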

So if the objective is to destroy the decentralized nature of Ethereum by exerting control on who gets access to the network, here is a proposal for doing it far more effectively and in a scalable way:

  • Today anyone can participate in the Ethereum network by generating a cryptographic key. Your network “address” is derived from the public key. Armed with an address, users can send/receive Ether from other participants, launch new contracts or interact with existing ones. That’s not good. How can we distinguish the good guys in white hats from the bad guys in black hats if the hoi polloi are allowed on the network without so much as a background check?
  • Instead let us require that all Ethereum public-keys be certified with an X509 certificate, issued by a trusted third-party CA after vetting the identity of that person. This is the same system that underlies confidence in the web. It guarantees that when consumers visit a dubious website asking for their bank login, they will feel much better after being tricked into giving it away.
  • There is undeniably some “friction” involved in getting certificates (Compounded by the fact that the enrollment process will only work on Windows XP at the outset.) It is necessary to incentivize users to stick with the righteous path. To that end, all Ethereum wallets will display a shiny padlock icon when they receive payments from a certified contract.
    • To be clear: certified Ethereum addresses do not receive any preferential treatment from miners, nor are any safer than plain uncertified addresses. It remains a core principle that all Ethereum addresses are equal. But some are more equal than others.
  • Certificate issuance is a very competitive low-margin business for CAs, with a race to the bottom in prices. In order to help boost CA bottom-lines, an enhanced type of Ethereum certificate called Extraneous Validation or EV will be introduced requiring consumers to submit DNA samples for highest assurance levels. (Privacy concerns will be allayed by discarding those samples without actually checking them, in keeping with the traditional standards of CA due-diligence.) EV rating will naturally include a premium user experience too: compliant wallet software shall display full-screen animation of coins raining down from the sky whenever EV addresses/contracts are printed.
    • In addition, transactions involving EV addresses must be stored at lower regions of the process heap. Higher memory locations are akin to nose-bleed seats and unworthy of favored Ethereum contracts.

It is expected that the Ethereum Politburo will take up this proposal as part of their next 5-year plan.

Long live the decentralized revolution.