Use and misuse of code-signing (part II)


[continued from part I]

There is no “evil-bit”

X509 certificates represent assertions about identity. They are not assertions about competence, good intentions, code quality or sound software engineering practices. Code-signing solutions including Authenticode can only communicate information about the identity of the software publisher— the answer to the question: “who authored this piece of software?” That is the raison d’être of certificate authorities and why they ostensibly charge hefty sums for their services. When developer Alice wants to obtain a code-signing certificate with her name on it, the CA must perform due diligence to confirm that it is really Alice requesting the certificate. If an Alice certificate is mistakenly issued to Bob, applications written by Bob will suddenly be attributed to Alice, unfairly trading on her reputation and quite possibly tarnishing that reputation in the process. In the real world, code-signing certificates are typically issued not to individuals toiling alone— although many independent developers have obtained one for personal use— but to large companies with hundreds or thousands of engineers. The principle is the same either way: a code-signing certificate for MSFT must not be handed out willy-nilly to random strangers who are not affiliated with MSFT. (Incidentally that exact scenario was one of the early debacles in the checkered history of public CAs.)

Nothing in this scheme vouches for the integrity of the software publisher or the fairness of their business model. CAs are only asserting that they have carefully verified the identity of the developer prior to issuing the certificate. Whether the software signed by that developer is “good” or suitable for any particular purpose is outside the scope of that statement. In that sense, there is nothing wrong— as far as X509 is concerned— with a perfectly valid digital certificate being used to sign malicious code. There is no evil bit required in a digital certificate for publishers planning to ship malware. For that matter there is no “competent bit” to indicate that software published by otherwise well-meaning developers will not cause harm nevertheless due to inadvertent bugs or dangerous vulnerabilities. (Otherwise no one could issue certificates to Adobe.)
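To make that concrete, here is a minimal sketch, assuming the Python cryptography package and a hypothetical PEM file named publisher.pem, of what a code-signing certificate actually carries: a subject identity vouched for by the issuer, and an extended key usage marking the key as fit for code signing. Nothing in the structure expresses intent or quality.

# Minimal sketch: inspect the assertions carried by a code-signing certificate.
# Assumes the "cryptography" package and a hypothetical file publisher.pem.
from cryptography import x509
from cryptography.x509.oid import ExtendedKeyUsageOID

with open("publisher.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

# The subject distinguished name: the identity the CA vouches for.
print("Subject:", cert.subject.rfc4514_string())
print("Issuer: ", cert.issuer.rfc4514_string())

# Extended Key Usage marks the key as usable for code signing. It is a
# statement of permitted purpose, not a judgment about the code itself.
eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value
print("Code-signing EKU present:", ExtendedKeyUsageOID.CODE_SIGNING in eku)
# There is no field anywhere in the certificate for "intent" or "quality".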

1990s called, they want their trust-model back

This observation is by no means new. Very early on in the development of Authenticode, in 1997, a developer made this point loud and clear. He obtained a valid digital certificate from Verisign and used it to sign an ActiveX control dubbed “Internet Exploder” [sic], designed to shut down a machine when it was embedded on a web page. That particular payload was innocuous and at worst a minor nuisance, but the message was unambiguous: the same signed ActiveX control could just as easily have reformatted the drive or stolen information. “Signed” does not equal “trustworthy.”

Chalk it up to the naivete of the 1990s. One imagines a program manager at MSFT arguing this is good enough: “Surely no criminal will be foolish enough to self-incriminate by signing malware with their own company identity?” Yet a decade later that exact scenario was observed in the wild. What went wrong? The missing ingredient is deterrence. There is no global malware-police to chase after every malware outfit, even when they operate brazenly in the open, leaving a digitally authenticated trail of evidence in their wake. Requiring everyone to wear identity badges only creates meaningful deterrence when there are consequences to being caught engaging in criminal activity while flashing those badges.

Confusing authentication and trust

Confusing authentication with authorization is a common mistake in information security. It is particularly tempting to blur the line when authorization can be revoked by deliberately failing authentication. A signed ActiveX control is causing potential harm to users? Let’s revoke the certificate and that signature will no longer verify. This conceptual shortcut is often a sign that a system lacks proper authorization design: when the only choices are binary yes/no, one resorts to denying authorization by blocking authentication.

Developer identity is neither necessary nor sufficient for establishing trust. It is not necessary because there is plenty of perfectly useful open-source software maintained by talented developers known only by their GitHub handle, without direct attribution of each line of code to a person identified by their legal name. It is not sufficient either, because knowing that some application was authored by Bob is not useful on its own, unless one has additional information about Bob’s qualifications as a software publisher. In other words: reputation. In the absence of any information about Bob, there is no way to decide if he is a fly-by-night spyware operation or an honest developer with years of experience shipping quality code.

Certificate authorities as reluctant malware-police

Interestingly enough, that 1997 incident set another precedent: Verisign responded by revoking the certificate, alleging that signing this deliberately harmful ActiveX control violated the certificate policy the developer agreed to as a condition of issuance. Putting aside the enforceability of TOUs and click-through agreements, it is downright unrealistic to demand that certificate authorities start policing developers on questions of policy completely unrelated to verifying their identity. It’s as if the DMV had been tasked with revoking driver’s licenses for people who are late on their credit-card payments.

That also explains why revoking certificates for misbehaving vendors is not an effective way to stop a developer from churning out malware. As the paper points out, there are many ways to game the system, all of which are used in the wild by companies with a track record of publishing harmful applications:

  • CA shopping: after being booted from one CA, simply walk over to their competitor to get another certificate for the exact same corporate entity
  • Cosmetic changes: get certificates for the same company with slightly modified information (e.g. a variant of the address or company name) from the same CA
  • Starting over: create a different shell company doing exactly the same line of business, starting over with a clean slate

In effect CAs are playing whack-a-mole with malware authors, something they are neither qualified nor motivated to do. In the absence of a reputation system, the ecosystem is stuck with a model where revoking trust in malicious code requires revoking the identity of the author. This is a very different use of revocation than what the X509 standard envisioned. Here are the possible reasons defined in the specification (these values also appear in the published revocation status):

CRLReason ::= ENUMERATED {
     unspecified             (0),
     keyCompromise           (1),
     cACompromise            (2),
     affiliationChanged      (3),
     superseded              (4),
     cessationOfOperation    (5),
     certificateHold         (6),
     -- value 7 is not used
     removeFromCRL           (8),
     privilegeWithdrawn      (9),
     aACompromise           (10) }

Note there is no option called “published malicious application.” That’s because none of the assertions made by the CA are invalidated upon discovering that a software publisher is churning out malware. Compare that to key compromise (reason #1 above), where the private key of the publisher has been obtained by an attacker. In that case a critical assertion has been voided: the public key appearing in the certificate no longer speaks exclusively for the certificate holder. Similarly a change of affiliation could arise when an employee leaves a company: a certificate issued in the past now contains inaccurate information in the “organization” and “organizational unit” fields. There is no analog for the discovery of signed malware, other than a vague reference to compliance with the certificate policy. (In fairness, the policy itself can appear as a URL in the certificate, but it requires careful legal analysis to answer the question of how exactly the certificate subject has diverged from that policy.)
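For illustration, here is a minimal sketch, again assuming the Python cryptography package and a hypothetical DER-encoded CRL file fetched from a certificate’s distribution point, of how those published reason codes surface when reading a CRL. Whatever prompted the revocation of a malware publisher’s certificate has to be shoehorned into one of the values above.

# Minimal sketch: enumerate revoked certificates in a CRL along with the
# published reason code. Assumes the "cryptography" package and a
# hypothetical file publisher_ca.crl downloaded from a CRL distribution point.
from cryptography import x509

with open("publisher_ca.crl", "rb") as f:
    crl = x509.load_der_x509_crl(f.read())

for entry in crl:
    try:
        reason = entry.extensions.get_extension_for_class(x509.CRLReason).value.reason
    except x509.ExtensionNotFound:
        reason = x509.ReasonFlags.unspecified  # no reason code published
    print(hex(entry.serial_number), reason)
    # "reason" is one of the enumerated values shown above, such as
    # key_compromise or affiliation_changed; there is no value meaning
    # "published a malicious application".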

Code-signing is not the only area where this mission creep has occurred, but it is arguably the one where the highest demands are placed on the actors least capable of fulfilling those expectations. Compare this to the issuance of certificates for SSL: phishing websites pop up impersonating popular services, perhaps with a subtle misspelling of the name, complete with a valid SSL certificate. Here there may be valid legal grounds to ask the responsible CA to revoke a certificate, because there may be a trademark claim. (Not that it does any good, since the implementation of revocation in popular browsers ranges from half-hearted to comically flawed.) Luckily web browsers have other ways to stop users from visiting harmful websites: for example, Safe Browsing and SmartScreen maintain blacklists of malicious pages. There is no reason to wait for the CA to take any action, and for malicious sites that are not using SSL it would not be possible anyway.

Code-signing presents a different problem. In open software ecosystems, reputation systems are rudimentary. Antivirus applications can recognize specific instances of malware, but most applications start from a presumption of innocence. In the absence of other contextual clues, the mere existence of a verifiable developer identity becomes a proxy for the trust decision: unsigned applications are suspect, signed ones get a free pass. At least, until it becomes evident that the signed application was harmful. At that point, the most reliable way of withdrawing trust is to invalidate signatures by revoking the certificate. This uphill battle requires enlisting CAs in a game of whack-a-mole, even when they performed their job correctly in the first place.

This problem is unique to open models for software distribution, where applications can be sourced from anywhere on the web. By contrast, the type of tightly controlled “walled-garden” ecosystem Apple favors with its own App Store rarely has to worry about revoking anything, even though it may use code signing. If Apple deems an application harmful, it can be simply yanked from the store. (For that matter, since Apple has remote control over devices in the field, they can also uninstall existing copies from users’ devices.)

Reputation systems can solve this problem without resorting to restrictive walled-gardens or locking down application distribution to a single centralized service responsible for quality. They would also take CAs out of the business of policing miscreants, a job they are uniquely ill-suited for. In order to block software published by Bob, it is not necessary to revoke Bob’s certificate. It is sufficient instead to signal a very low reputation for Bob. This also moves the conflict one level higher, because reputations are attached to persons or companies, not to specific certificates. Getting more certificates from another CA after one has been revoked does not help Bob. As long as the reputation system can correlate the identities involved, the dismal reputation will follow Bob. Instead of asking CAs to reject customers who had certificates revoked by a different CA, the reputation system allows CAs to do their job and focus on their core business: vetting the identity of certificate subjects. It is up to the reputation system to link different certificates based on a common identity, or even to link related families of malware published by seemingly distinct entities acting on behalf of the same malware shop.
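As a rough illustration of that design, consider the toy sketch below. Every name and table in it is hypothetical, and the hard part in practice is the correlation step that maps many certificates back to one publisher. The point is only that the trust decision keys on the publisher’s reputation, so a fresh certificate from a different CA does not reset the score.

# Toy sketch of the design described above: reputation is attached to the
# publisher behind a certificate, not to the certificate itself. All names
# and data here are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Certificate:
    fingerprint: str   # e.g. a hash of the certificate
    subject: str       # e.g. "CN=Bob Software LLC, C=US"

# Hypothetical correlation table: many certificates map to one publisher.
PUBLISHER_OF = {
    "fp-original-cert": "bob-software",
    "fp-reissued-cert": "bob-software",   # new CA, same shop
}

# Hypothetical reputation scores; the dismal score follows the publisher.
REPUTATION = {"bob-software": -100}

def trust_decision(cert: Certificate, signature_valid: bool) -> str:
    if not signature_valid:
        return "block"                    # authentication failed outright
    publisher = PUBLISHER_OF.get(cert.fingerprint)
    if publisher is None:
        return "prompt"                   # unknown publisher: no free pass
    score = REPUTATION.get(publisher, 0)
    return "allow" if score >= 0 else "block"   # authorization by reputation

# Example: a newly issued certificate for the same shop is still blocked.
print(trust_decision(Certificate("fp-reissued-cert", "CN=Bob Software LLC"), True))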

CP
