Is it an odd coincidence that all of the milestone, precedent-setting court cases around controversial laws have involved highly unsavory characters in the hot seat, vigorously defended by organizations with impeccable reputations for taking principled stands? First there was Lori Drew, who bullied a teenaged girl into suicide on MySpace on behalf of her daughter, to settle some high-school social scene vendetta. She was prosecuted for a terms-of-use (TOU) violation on MySpace– remember that other social network?– and eventually acquitted on appeal. Now there is Andrew Auernheimer, better known by the handle weev, Internet troll and self-described “hacker,” handed a hefty sentence for his role in the AT&T attack that led to the disclosure of private information on iPad owners. In some circles he has already become a cause célèbre, lionized in the hero-as-outlaw mold. EFF quickly jumped into the fray to announce that it would be joining the defense attorneys for the appeal. (With apologies to Godwin’s Law, the whole episode has echoes of the ACLU Skokie controversy from 1977.) For all the parallels that the self-martyrizing Mr. Auernheimer tried to draw between his case and that of Aaron Swartz, there is no comparison: Mr. Swartz was a widely respected and popular figure promoting open access. Auernheimer has an extensive rap sheet that includes intimidation, harassment and denial-of-service attacks.
It’s as if advocacy organizations decided to over-compensate for their failure to help Aaron Swartz by pledging their loyalty at all costs to the next defendant picked out of a hat, and much to their chagrin were handed a second-rate cartoon villain as their figurehead.
On the one hand, schadenfreude is easy when the likes of Ms. Drew and Mr. Auernheimer get their karmic comeuppance. If anything, there is a degree of disappointment that Ms. Drew escaped with only a tarnished reputation, without any penalties under the law. Her more recent counterpart did not quite walk away scot-free. On the other hand, there is that problem of principle. This case was prosecuted under the Computer Fraud and Abuse Act, or CFAA, dubbed “the most outrageous law you’ve never heard of” by Tim Wu in a recent New Yorker article. Badly decided litigation can establish precedent, cast a long shadow over future cases, and even deter beneficial security research out of fear of similar lawsuits. Several researchers have warned that Auernheimer’s fate will create chilling effects for legitimate research. But it is difficult to see these events as the downfall of a well-intentioned security researcher punished by an ungrateful, retaliatory vendor.
As many commentators noted, the vulnerability itself was laughably simple: manipulating a URL gave unauthorized access to other people’s data. Changing numbers, using nothing more sophisticated than a web browser and keyboard, yielded personal information. AT&T failed at web security 101. But the ease of discovering a vulnerability has never been a measure of good intent. The pertinent question is what is done after the flaw is discovered. There is a long-standing debate over the meaning of “responsible disclosure,” centered on exactly how researchers can minimize user harm and create the right incentives for vendors. Go public immediately, wait as long as necessary for the vendor to deploy mitigations, or give them an ultimatum with a fixed deadline? The argument rages on. Auernheimer deserves the benefit of the doubt for his utilitarian argument that shaming AT&T by going public is the most effective way to avert similar mistakes in the future– nothing impresses the value of security on developers quite as forcefully as living through a public incident. (The vulnerability had already been fixed by AT&T before disclosure, and Gawker redacted the data set appropriately in its coverage.)
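To make concrete what “changing numbers in a URL” means, here is a minimal sketch of the class of flaw involved– an insecure direct object reference, where a predictable identifier in a URL is the only thing gating access to per-user data. The endpoint, hostname and parameter name below are invented for illustration; they are not the actual AT&T URL or identifier format:

```python
# Sketch of an insecure direct object reference (IDOR): if the server
# returns account data keyed solely on a sequential identifier in the
# URL, then enumerating identifiers enumerates other users' records.
# The domain and parameter are hypothetical, for illustration only.

def build_lookup_url(device_id: int) -> str:
    """Build the (hypothetical) per-device lookup URL for one identifier."""
    return f"https://example.com/account-lookup?device_id={device_id}"

def enumerate_urls(start: int, count: int) -> list:
    """Walk a sequential ID space: change the number, get another record."""
    return [build_lookup_url(i) for i in range(start, start + count)]

if __name__ == "__main__":
    # Three consecutive identifiers yield three different users' URLs.
    for url in enumerate_urls(1000, 3):
        print(url)
```

The point of the sketch is that there is no authentication step anywhere in the loop: the “exploit” is nothing but incrementing an integer, which is why commentators called the flaw web security 101.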
But a bright line does exist between doing vulnerability/exploit research (the two are intricately linked) and using the exploits at large scale, indiscriminately, against bystanders. It is one thing to note that the AT&T website has an obvious vulnerability, or even to run a few examples to verify it. Hosted services are black boxes with no visibility into their internal structure, so discovering such a vulnerability usually requires trying an “exploit” against a live site containing user data, and accidentally stumbling on other people’s information. Call it collateral damage. So far, so good– this is well within the realm of garden-variety web security research. But Mr. Auernheimer crossed the line when he ran an exhaustive search using the vulnerability to extract data for 100K+ iPad users, saved all of it (merely counting the number of vulnerable records had a fighting chance of being called “research”) and handed the entire dump over to Gawker. This is difficult to file away under the guise of intellectual curiosity. Many researchers find vulnerabilities in popular software running on millions of computers without feeling compelled to go find and compromise all of those machines with their exploit. More concretely, exploit writers specializing in IE or Firefox bugs do not, generally speaking, run their exploit against thousands of IE/Firefox users and collect a trophy from each one before disclosing their findings to the press.
There is no question that CFAA is outdated, utterly divorced from the complexities of online security today, and plain dangerous. It is an instrument of selective justice, subject to egregious overreach and prosecutorial bullying in the hands of public officials with creative theories of criminality. The Aaron Swartz case drove that point home forcefully. One can only hope that weev’s highly dubious case and incoherent post-hoc rationalizations will not distract from the real arguments for overhauling CFAA.