Browser in the middle: 25 years after the MSFT antitrust trial

In May 1998 the US Department of Justice and the Attorneys General of 20 states, along with the District of Columbia, sued Microsoft in federal court, alleging predatory strategies and anticompetitive business practices. At the heart of the lawsuit was the web browser Internet Explorer, and the strong-arm tactics MSFT adopted with business partners to increase the share of IE over the competing Netscape Navigator. Twenty-five years later, in a drastically altered technology landscape, the DOJ is going after Google for its monopoly power in search and advertising. With the benefit of hindsight, there are many lessons in the MSFT experience that could offer useful parallels for the new era of antitrust enforcement, as both sides prepare for the trial in September.

The first browser wars

By all indications, MSFT was late to the Internet disruption. The company badly missed the swift rise of the open web, instead investing in once-promising ideas such as interactive TV or walled-garden online services in the style of early Prodigy. It was not until Bill Gates's 1995 "Internet Tidal Wave" memo that the company began to mobilize its resources. Some of the changes were laughably amateurish— teams hiring for dedicated "Internet program manager" roles. Others proved more strategic, including the decision to build a new browser. In the rush to get something out the door, the first version of Internet Explorer was based on Spyglass Mosaic, a commercial version of NCSA Mosaic, the first popular browser, developed at the University of Illinois. (The team behind Mosaic would go on to create Netscape Navigator.) Even the name itself betrayed the Windows-centric and incrementalist attitude prevalent in Redmond: "Explorer" was the name of the Windows GUI or "shell" for browsing content on the local machine. Internet Explorer would be its networked cousin, helping users explore the wild wild web.

By the time IE 1.0 shipped in August 1995, Netscape already had a commanding lead in market share, not to mention the better product as measured in features and functionality. But by this time MSFT had mobilized its considerable resources, greatly expanding investment in the browser team and replacing Spyglass code with its own proprietary implementation. IE3 was the first credible version to have some semblance of feature parity with Navigator, having added support for frames, cascading style sheets, and JavaScript. It was also the first time MSFT went on the offensive, responding with its own proprietary alternatives to technologies introduced by Netscape. Navigator had the Netscape Plugin API (NPAPI) for writing browser extensions; IE introduced ActiveX— completely incompatible with NPAPI and entirely built on other MSFT-centric technologies including COM and OLE. Over the next two years this pattern would repeat as IE and Navigator duked it out for market share by introducing competing technologies. Netscape allowed web pages to run dynamic content with a new scripting language, JavaScript; MSFT would support it in the name of compatibility but also subtly try to undermine JS by pushing VBScript, based on the Visual Basic language so familiar to existing Windows developers.

Bundle of trouble

While competition heated up over functionality— and over chasing fads, such as the "push" craze of the late 1990s that resulted in the Channel Definition Format— there was one weapon uniquely available to MSFT for grabbing market share: shipping IE with Windows. Netscape depended on users downloading the software from its website. Quaint as this sounds in 2024, that was a significant barrier to adoption in an age when most of the world had not yet made the transition to being online. How does one download Navigator from the official Netscape website without a web browser to begin with? MSFT had a well-established channel exempt from this bootstrapping problem: copies of Windows distributed "offline" using decidedly low-tech means, whether shrink-wrapped boxes of CDs or images preinstalled on new PCs. In principle Netscape could seek out similar arrangements with Dell or HP to include its browser instead. Unless of course MSFT made the OEMs an offer they could not refuse.

That became the core of the government's accusation of anticompetitive practices: MSFT pushed for exclusive deals, pressuring partners such as PC manufacturers (OEMs, or "original equipment manufacturers" in industry lingo) not only to include a copy of Internet Explorer with prominent desktop placement but also to rule out shipping any alternative browsers. Redmond clearly had far more leverage than Mountain View over PC manufacturers: shipping any browser at all was icing on the cake, but a copy of the reigning OS was practically mandatory.

What started out as a sales/marketing strategy rapidly crossed over into the realm of software engineering when later releases of Windows began to integrate Internet Explorer in what MSFT claimed was an inextricable fashion. The government objected to this characterization: IE was an additional piece of software downloaded from the web or installed from CDs at the consumer's discretion. Shipping a copy with Windows out-of-the-box may have been a convenience, saving users the effort of jumping through those installation hoops, but surely a version of Windows could also be distributed without this optional component.

When MSFT objected that these versions of Windows could not function properly without IE, the government sought out a parade of expert witnesses to disprove the claim. What followed was a comedy of errors on both sides. One expert declared mission accomplished after removing the icon and the primary executable, forgetting about all of the shared libraries (dynamic link libraries, or DLLs in Windows parlance) that provide the majority of browser functionality. IE was designed to be modular, to allow "embedding" the rendering engine or even subsets of functionality such as the HTTP stack into as many applications as possible. The actual "Internet Explorer" icon users clicked on was only the tip of the iceberg. Deleting that was the equivalent of arguing that the electrical system in a car can be safely removed by smashing the headlights and noting that the car still drives fine without lights. Meanwhile MSFT botched its own demonstration of how a more comprehensive removal of all browser components breaks OS functionality. A key piece of evidence entered by the defense was purportedly a screen recording from a PC showing everything that goes wrong with Windows when IE components are missing. Plaintiffs' lawyers were quick to point out strange discontinuities and changes in the footage, eventually forcing MSFT into an embarrassing admission that the demonstration had been spliced together from multiple sequences.

Holding back the tide

The next decade of developments would vindicate MSFT, proving that company leadership was fully justified in worrying about the impact of the web. MSFT mobilized to keep Windows relevant, playing the game on two fronts:

  1. Inject Windows dependencies into the web platform, ensuring that even if websites were accessible on any platform in theory, they worked best on Windows viewed in IE. Pushing ActiveX was a good example. Instead of pushing to standardize cross-platform APIs, IE added appealing features such as the initial incarnation of XMLHttpRequest as ActiveX controls. Another example was the addition of Windows-specific quirks into the MSFT version of Java. This provoked a lawsuit from Sun for violating the "Java" trademark with an incompatible implementation. MSFT responded by deciding to remove the JVM from every product that had previously shipped it.
  2. Stop further investments in the browser once it became clear that IE had won the browser wars. The development of IE4 involved a massive spike of resources. That release also marked the turning of the tide, with IE starting to win out in comparisons against Navigator 4. IE5 was an incremental effort by comparison. By IE6, the team had been reduced to a shadow of its former self, where it would remain for the next ten years until Google Chrome came on the scene. (Even the "security push" of the early 2000s, culminating in Windows XP SP2, focused narrowly on cleaning up the cesspool of vulnerabilities in the IE codebase. It was never about adding features and enhancing functionality for a more capable web.)
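The ActiveX incarnation of XMLHttpRequest mentioned in the first point left a lasting mark on web code: for roughly a decade, any page that wanted to fetch data in the background had to probe for two spellings of the same capability. A minimal sketch of that detection dance follows; the `env` parameter is a test seam standing in for the browser's global object, not part of the historical idiom.

```javascript
// Feature detection for background HTTP requests, as pages had to write it
// in the IE6 era. XMLHttpRequest began life as an ActiveX control in IE5;
// other browsers later exposed a native constructor, which IE7 eventually
// adopted as well. `env` stands in for the browser's global object so the
// helper can be exercised outside a browser.
function createRequest(env) {
  if (typeof env.XMLHttpRequest !== "undefined") {
    // Standards path: Mozilla, Safari, Opera, and IE7+
    return new env.XMLHttpRequest();
  }
  if (typeof env.ActiveXObject !== "undefined") {
    // Legacy IE path: the same functionality hidden behind a COM ProgID
    return new env.ActiveXObject("Microsoft.XMLHTTP");
  }
  throw new Error("no way to issue background HTTP requests");
}
```

In a real page the argument would simply be `window`; libraries such as Prototype and later jQuery existed in no small part to paper over exactly this kind of divergence.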

This lack of investment from MSFT had repercussions far beyond the Redmond campus. It effectively put the web platform into a deep freeze. HTML and JavaScript had evolved very quickly in the 1990s. HTML2 was published as an RFC in 1995. Later the World Wide Web Consortium took up the mantle of standardizing HTML; HTML 3.2 came out in early 1997. It took less than a year for HTML4 to be published as an official W3C "recommendation"— what would be called a standard in any other organization. This was a time of rapid evolution for the web, with Netscape, MSFT and many other companies participating to drive the evolution forward. It would be another 17 years before HTML5 followed.

Granted, MSFT had its own horse in the race with MSN, building out web properties and making key investments such as the acquisition of Hotmail. Some even achieved a modicum of success, such as the travel site Expedia, which was spun out into a public company in 1999. But a clear consensus had emerged inside the company around the nature of software development. Applications accessed through a web browser were fine for "simple" tasks, characterized by limited functionality with correspondingly low performance expectations: minimalist UI, a laggy or unresponsive interface, only accessible with an Internet connection and even then constrained by the bandwidth limits of the time. Anything more required native applications, installed locally and designed to target the Windows API. These were also called "rich clients" in a not-so-subtle dig at the implied inferiority of web applications.

Given that bifurcated mindset, it is no surprise the web browser became an afterthought in the early 2000s. IE had emerged triumphant from the first browser wars, while Netscape disappeared into the corporate bureaucracy of AOL following the acquisition. Mozilla Firefox was just starting to emerge phoenix-like from the open-sourced remains of the Navigator codebase, far from posing any threat to market share. The much-heralded Java applets that were going to restore parity between the browser and native applications failed to materialize. There were no web-based word processors or spreadsheets to compete against Office. In fact there seemed to be hardly any profitable applications on the web at all, with sites still trying to work out the economics of "free" services funded by increasingly annoying advertising.

Meanwhile MSFT itself had walked away from the antitrust trial mostly unscathed. After losing the initial round in federal court following a badly botched defense, the company handily won at the appellate court. In a scathing ruling the circuit court not only reversed the breakup order but found the trial judge to have engaged in unethical, biased conduct. Facing a retrial under a new judge, the DOJ blinked and decided it was no longer seeking a structural remedy. The dramatic antitrust trial of the decade ended with a whimper: the parties agreed to a mild settlement that required MSFT to modify its licensing practices and better document its APIs for third parties to develop interoperable software.

This outcome was widely panned by industry pundits as a slap on the wrist, raising concerns that it left the company without any constraints against continuing the same pattern of anticompetitive behavior. In hindsight the trial did have an important consequence that was difficult to observe from the outside: it changed the rules of engagement within MSFT. Highly motivated to avoid another extended legal confrontation that would drag on the share price and distract attention, leadership grew more cautious about pushing the envelope on business practices. It may have been too little too late for Netscape, but this shift in mindset meant that when the next credible challenger to IE materialized in the shape of Google Chrome, the browser was left to fend for itself, competing strictly on its own merits. There would be no help from the OS monopoly.

Second chances for the web

More than any other company, Google was responsible for revitalizing the web as a capable platform for rich applications. For much of the 2000s, it appeared that the battle for developer mindshare had settled into a stalemate: HTML and JavaScript were good for basic applications (augmented by the ubiquitous Adobe Flash for extra pizzazz when necessary) but any heavy lifting— CPU-intensive computing, fancy graphics, interacting with peripheral devices— required a locally installed desktop application. Posting updates on your social life and sharing hot takes on recent events? Web browsers proved perfectly adequate for that. But if you planned to crunch numbers in a spreadsheet with complex formulas, touch up high-resolution pictures or hold a video conference, the consensus held that you needed "real" software written in a low-level language such as C/C++ and interfacing directly with the operating system API.

Google challenged that orthodoxy, seeking to move more applications to the cloud. It was Google that kept pushing the limits of what existing browsers could do, often with surprising results. Gmail was an eye-opener for its fast, responsive UI as much as for the generous gigabyte of storage every user received and the controversial revenue model driven by contextual advertising based on the content of emails. Google Maps— an acquisition, unlike the home-grown Gmail, which had started out as one engineer's side project— and later Street View proved that even high-resolution imagery overlaid with local search results could be delivered over existing browsers with a decent user experience. Google Docs and Spreadsheets (also acquisitions) were even more ambitious undertakings aimed at the enterprise segment cornered by MSFT Office until that point.

These were mere opening moves in the overall strategic plan: every application running in the cloud, accessed through a web browser. Standing in the way of that grand vision was the inadequacy of existing browsers. They were limited in principle by the modest capabilities of the standard HTML and JavaScript APIs defined at the time, without venturing into proprietary, platform-dependent extensions such as Flash, Silverlight and ActiveX. They were hamstrung even further in practice by the mediocre implementation of those capabilities in the dominant browser of the day, namely Internet Explorer. What good would innovative cloud applications do when users had to access them through a buggy, slow browser riddled with software vulnerabilities? (There is no small measure of irony in the fact that the 2009 "Aurora" breach of Google by a Chinese APT started with an IE6 zero-day vulnerability.)

Google was quick to recognize the web browser as a vital component of its business strategy, in much the same way MSFT had correctly perceived the danger Netscape posed. Initially Google put its weight behind Mozilla Firefox. The search deal to become the default engine for Firefox (realistically, did anyone want Bing?) provided much of the revenue for the fledgling browser early on. While swearing by the benefits of having an open-source alternative to the sclerotic IE, Google would soon realize that a development model driven by democratic consensus came with one undesirable downside: despite being a major source of revenue for Firefox, it could exert only so much influence over the product roadmap. For Google, controlling its own fate made it all but inevitable that it would embark on its own browser project.

Browser wars 2.0

Chrome was the ultimate Trojan horse for advancing the Google strategy: wrapped in the mantle of "open source" without any of the checks and balances of an outside developer community deciding which features get prioritized (a tactic that Android would soon perfect in the even more cut-throat setting of mobile platforms). That lack of constraints allowed Google to move quickly and decisively on the main objective: advancing the web platform. Simply shipping a faster and safer browser would not have been enough to achieve parity with desktop applications. HTML and JavaScript themselves had to evolve.

More than anything else, Chrome gave Google a seat at the table for the standardization of future web technologies. While work on HTML5 had started in 2004 at the instigation of Firefox and Opera representatives, it was not until Chrome reignited the browser wars that bits and pieces of the specification began to find their way into working code. Crucially, the presence of a viable alternative to IE meant standardization efforts were no longer an academic exercise. The finished output of a W3C working group is called a "recommendation." There is no false modesty in that terminology, because at the end of the day W3C has no authority or even indirect influence to compel browser publishers to implement anything. In a world where most users are running an outdated version of IE (most desktops were stuck on IE6 or IE7) the W3C can keep cranking out enhancements to HTML5 on paper without delivering any tangible benefit to users. It is difficult enough to incentivize websites to take advantage of new features; the path of least resistance already dictates coding for the least common denominator. Suppose some website crucially depends on a browser feature missing for the 10% of visitors who are running an ancient version of IE. Whether those users do not care enough to upgrade, or perhaps cannot upgrade (as with enterprise users at the mercy of their IT department for software choices), they will be shut out of the website, representing a lost revenue opportunity. By contrast a competitor that demands less of its customers' software, or one ambitious enough to invest in backwards compatibility, will have no problem monetizing that segment.

The credibility of a web browser backed by the might of Google shifted that calculus. The clear trend of Chrome and Firefox capturing market share from IE (and crucially, the declining share of legacy IE versions) made it easier to justify building new applications for a modern web incorporating the latest and greatest from the W3C drawing board: canvas, WebSockets, WebRTC, offline mode, drag & drop, web storage… It no longer seemed like questionable business judgment to bet on that trend and build novel applications assuming a target audience with modern browsers. In 2009 YouTube engineers snuck in a banner threatening to cut off support for IE6, careful to stay under the radar lest their new overlords at Google object to the protest. By 2012 the tide had turned to the point that an Australian retailer began imposing a surcharge on IE7 users to offset the cost of catering to their ancient browser.
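That bet usually boiled down to a capability check at page load, in the spirit popularized by libraries like Modernizr. The sketch below is illustrative: the specific features tested and the route names are invented for this example, and the `win` parameter stands in for the browser's `window` object so the logic can run anywhere.

```javascript
// Decide whether a visitor's browser can handle the rich version of an app.
// The checks mirror the HTML5-era features named above; the list is
// illustrative, not canonical.
function supportsModernApp(win) {
  return typeof win.WebSocket !== "undefined" &&                // web sockets
         typeof win.localStorage !== "undefined" &&             // web storage
         typeof win.CanvasRenderingContext2D !== "undefined";   // canvas 2D
}

// A site would then route the visitor to the rich app or a basic fallback.
function landingPage(win) {
  return supportsModernApp(win) ? "/app" : "/basic";
}
```

In a real page one would call `landingPage(window)`; the point is that once legacy IE's share dipped low enough, the `/basic` branch became an acceptable cost rather than the main product.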

While the second round of the browser wars is not quite over, some conclusions are obvious. Google Chrome has a decisive lead over all other browsers, especially in the desktop market. Firefox's share is declining, creating doubts about the future of the only independent open-source web browser that can claim the mantle of representing users as stakeholders. As for MSFT, despite getting its act together and investing in auto-update functionality to avoid another case of the "IE6 installed-base" legacy problem, Internet Explorer steadily lost market share throughout the 2010s. Technology publications cheered on every milestone, such as the demise of IE6 and the "flipping" point when Google Chrome reached 50%. Eventually Redmond gave up and decided to start over with a new browser dubbed "Edge," premised on a full rewrite instead of incremental tweaks. That has not fared much better either. After triumphantly unveiling a new HTML rendering engine written from scratch to replace IE's "Trident," MSFT quickly threw in the towel, announcing that it would adopt Blink— the engine from Chrome. (Inasmuch as the MSFT of the 1990s was irrationally combative in rejecting technology not invented in Redmond, its current incarnation has no qualms about admitting defeat and making pragmatic business decisions to leverage competing platforms.) Despite Google's multiple legal skirmishes with EU regulators over its advertising and browser dominance, there are no credible challengers to Chrome on the desktop today. When it comes to market power, Google Chrome is the new IE.

The browser disruption in hindsight

Did MSFT overreact to the Netscape Navigator threat and knee-cap itself by inviting a regulatory showdown through its aggressive business tactics? Subsequent history vindicates company leadership in correctly judging the disruption potential, but not necessarily the response. The browser was indeed a critical piece of software— it literally became the window through which users experience the infinite variety of content and applications beyond the narrow confines of their local device. Platform-agnostic and outside the control of the companies providing the hardware and software powering that local device, it was an escape hatch out of the "Wintel" duopoly. Winning the battle against Netscape defused that immediate threat for MSFT. Windows was not reduced to "a poorly debugged set of device drivers," as Netscape's Marc Andreessen had once quipped.

An expansive take on “operating system”

MSFT was ahead of its time in another respect: browsers are now considered an intrinsic component of the operating system, a building block for other applications to leverage. Today a consumer OS shipping without at least a rudimentary browser out-of-the-box would be an anomaly. To pick two examples comparable to Windows:

  • MacOS has included Safari since the Panther release in 2003.
  • Ubuntu desktop releases come with Firefox as the default browser.

On the mobile front, browser bundling is not only standard but pervasive in its reach:

  • iOS not only ships with a mobile version of Safari, but the WebKit rendering engine is tightly integrated into the operating system as the mandatory embedded browser to be leveraged by all other apps that intend to display web content. In fact until recently Apple forbade shipping any alternative browser not built on WebKit. The version of "Chrome" for iOS is nothing more than a glossy paint job over the same internals powering Safari. Crucially, Apple can enforce this policy. Unlike desktop platforms with their open ecosystem where users are free to source software from anywhere, mobile devices are closed appliances. Apple exerts complete control over software distribution for iOS.
  • Android releases have included Google Chrome since 2012. Unlike Apple, Google places no restrictions on alternative browsers distributed as independent applications. However, embedded web views in Android are still based on the Chrome rendering engine.

During the antitrust trial, some astute observers pointed out that only a few years earlier even the most rudimentary networking functionality— namely the all-important TCP/IP stack— had been an optional component in Windows. Today it is not only the web browser that has become table stakes. Here are three examples of functionality once considered strictly distinct lines of business from providing an operating system:

  1. Productivity suites: MacOS comes with Pages for word processing, Numbers for spreadsheets and Keynote for crafting slide decks. Similarly many Linux distributions include the LibreOffice suite, which provides open-source replacements for Word, Excel, PowerPoint etc. (This is a line even MSFT did not cross: to this day no version of Windows includes a copy of the "Office suite" understood as a set of native applications.)
  2. Video conferencing and real-time collaboration: Again each vendor has been putting forward its preferred solution, with Google including Meet (previously Hangouts), Apple promoting FaceTime and MSFT pivoting to Teams after giving up on Skype.
  3. Cloud storage: To pick an example where the integration runs much deeper, Apple devices have seamless access to iCloud storage, while Android and ChromeOS are tightly coupled to Google Drive for backups. Once the raison d'être of the unicorn startups Dropbox and Box, this functionality has been steadily incorporated into the operating system, casting doubt on the commercial prospects of those public companies. Even MSFT has not shied away from integrating its competing OneDrive service with Windows.

There are multiple reasons why these examples raise few eyebrows in the antitrust camp. In some cases the applications are copycats or also-rans: Apple's productivity suite can interoperate with MSFT Office formats (owing in large part to an EU consent decree that forced MSFT to start documenting its proprietary formats) but still remains a pale imitation of the real thing. In other cases, the added functionality is not considered a strategic platform or has little impact on the competitive landscape. FaceTime is strictly a consumer-oriented product that has no bearing on the lucrative enterprise market. While Teams and Meet have commercial aspirations, they face strong headwinds competing against established players such as Zoom and WebEx that specialize in this space. No one is arguing that Zoom is somehow disadvantaged on Android because it has to be installed as a separate application from the Play Store. But even when integration obviously favors an adjacent business unit— as in the case of mobile platforms creating entrenched dependencies on the cloud storage offering from the same company— there is a growing recognition that the definition of an "operating system" is subject to expansion. Actions that once may have been portrayed as leveraging a platform monopoly to take over another market— Apple & Google rendering Dropbox irrelevant— become the natural outcome of evolving customer expectations.

Safari on iOS may look like a separate application with its own icon, but it is also the underlying software that powers the embedded "web views" all other iOS apps use when displaying web content inside their interface. Google Chrome provides a similar function for Android apps by default. No one in their right mind would resurrect the DOJ argument of the 1990s that a browser is an entirely separate piece of functionality and that weaving it into the OS is an arbitrary marketing choice without engineering merit. (Of course that still leaves open the question of whether the built-in component should be swappable and/or extensible. Much like authentication or cryptography capabilities, for which modern platforms provide an extensibility mechanism to replace the default, out-of-the-box software with alternatives, it is fair to insist that the platform allow substituting a replacement browser designated by the consumer.) Google turned the whole model upside down with Chromebooks, building an entire operating system around a web browser.

All hail the new browser monopoly

Control over the browser temporarily handed MSFT significant leeway over the future direction of the web platform. If that platform remained surprisingly stagnant afterwards— compared to its frantic pace of innovation during the 1990s— that was mainly because MSFT had neither the urgency nor the vision to take it to the next level. (Witness the smart tags debacle.) Meanwhile the W3C ran around in circles, alternating between incremental tweaks— introducing XHTML, HTML repackaged as well-formed XML— and ambitious visions of a "semantic web." The latter imagined a clean separation of content from style, two distinct layers that HTML munged together, making it possible for software to extract information, process it and combine it in novel ways for the benefit of users. Outside the W3C there were few takers. Critics derided it as angle brackets everywhere: XSLT, XPath, XQuery, XLink. The semantic web never got the large-scale demonstration that would have tested its premise. For a user sitting in front of a browser and accessing websites, its immediate benefits would have been difficult to articulate. Over time Google and ChatGPT would prove machines were more than adequate at grokking unstructured information on web pages even without the benefit of XML tagging.

Luckily for the web, plenty of startups did have more compelling visions of how the web should work and what future possibilities could be realized— given the right capabilities. This dovetailed nicely with the shift in emphasis from shipping software to operating services. (It certainly helped that the economics were favorable. Instead of selling a piece of software once for a lump sum and hoping the customer upgrades when the next version comes out, what if you could count on a recurring stream of revenue from monthly subscriptions?) The common refrain among all of these entrepreneurs: the web browser had become the bottleneck. PCs kept getting faster and even operating systems became more capable over time, but websites could only access a tiny fraction of those resources through HTML and JavaScript APIs, and only through a notoriously buggy, fragile implementation held together by duct tape: Internet Explorer.

In hindsight it is clear something had to change; there was too much market pressure against a decrepit piece of software guarding an increasingly untenable OS monopoly. Surprisingly that change came in the form of not one but two major developments in the 2010s. One shift had nothing to do with browsers: smartphones gave developers a compelling new way to reach users. It was a clean slate, with powerful new APIs unconstrained by the web platform. MSFT did not have a credible response to the rise of iOS and Android any more than it did to Chrome. Windows Mobile never made much headway with device manufacturers, despite or perhaps because of the Nokia acquisition. It had even less success winning over developers, failing to complete the virtuous cycle between supply and demand that drives platforms. (At one point a desperate MSFT started outright offering money to publishers of popular apps to port their iOS and Android apps to Windows Mobile.)

Perhaps the strongest evidence that MSFT judged the risk accurately comes from Google Chrome itself. Where MSFT saw a one-sided threat to the Windows and Office revenue streams, Google perceived a balanced mix of opportunity and risk. The "right" browser could accelerate the shift from local software to web applications— such as the Google Apps suite— by closing the perceived functionality gap between them. The "wrong" browser would continue to frustrate that shift, or even push the web towards another dead-end proprietary model tightly coupled to one competitor. Continued investment in Chrome is how the odds get tilted towards the first outcome. Having watched MSFT squander its browser monopoly with years of neglect, Google knows better than to rest on its laurels.

CP

The elusive nature of ownership in Web3

A critical take on Read-Write-Own

In the recently published "Read Write Own," Chris Dixon makes the case that blockchains allow consumers to capture more of the returns from the value generated in a network because of strongly enshrined rules of ownership. This is an argument about fairness: the value of networks derives from the contributions of participants. Whether it is Facebook users sharing updates with their network or Twitter/X influencers opining on the latest trends, it is Metcalfe's law that allows these systems to become so valuable. But as the history of social networks has demonstrated time and again, that value accrues to a handful of employees and investors who control the company. Not only do customers not capture any of those returns (hence the oft-used analogy of "sharecroppers" working Facebook's land), they are stuck with the negative externalities, including degraded privacy, disinformation and, in the case of Facebook, repercussions that spill out into the real world, including outbreaks of violence.

The linchpin of this argument is that blockchains can guarantee ownership in ways that the two prevailing alternatives (“protocol networks” such as SMTP or HTTP and the better-known “corporate networks” such as Twitter) cannot. Twitter can take away any handle, shadow-ban the account or modify its ranking algorithms to reduce its distribution. By comparison, if you own a virtual good such as an NFT issued on a blockchain, no one can interfere with your rightful ownership of that asset. This blog post delves into some counterarguments on why this sense of ownership may prove illusory in most cases. The arguments will run from the least likely and theoretical to the most probable, in each case demonstrating ways these vaunted property rights fail.

Immutability of blockchains

The first shibboleth that we can dispense with is the idea that blockchains operate according to immutable rules cast in stone. An early dramatic illustration of this came about in 2016, as a result of the DAO attack on Ethereum. The DAO was effectively a joint investment project operated by a smart-contract on the Ethereum chain. Unfortunately that contract had a serious bug, resulting in a critical security vulnerability. An attacker exploited that vulnerability to drain most of the funds, to the tune of roughly $150MM notional at the time.

This left the Ethereum project with a difficult choice. They could double down on the doctrine that Code-Is-Law and let the theft stand: argue that the “attacker” did nothing wrong, since they used the contract in exactly the way it was implemented. (Incidentally, that is a mischaracterization of the way Larry Lessig intended that phrase. “Code and Other Laws of Cyberspace,” where the phrase originates, was prescient in warning about the dangers of allowing privately developed software, or “West Coast Code” as Lessig termed it, to usurp democratically created laws, or “East Coast Code,” in regulating behavior.) Or they could orchestrate a difficult, disruptive hard-fork to change the rules governing the blockchain and rewrite history to pretend the DAO breach never occurred. This option would return stolen funds to investors.

Without reopening the charged debate around which option was “correct” from an ideological perspective, we note the Ethereum Foundation emphatically took the second route. From the attacker’s perspective, their “ownership” of stolen ether proved very short-lived.

While this episode demonstrated the limits of blockchain immutability, it is also the least relevant to the sense of property rights that most users are concerned about. Despite fears that the DAO rescue could set a precedent and force the Ethereum Foundation to repeatedly bail out vulnerable projects, no such hard-forks followed. Over the years much larger security failures occurred on Ethereum (measured in notional dollar value), with the majority attributed with high confidence to rogue states such as North Korea. None of them merited so much as a serious discussion of whether another hard-fork was justified to undo the theft and restore the funds to rightful owners. If hundreds of millions of dollars in tokens ending up in the coffers of a sanctioned state does not warrant breaking blockchain immutability, it is fair to say the average NFT holder has little reason to fear that some property dispute will result in a blockchain-scale reorganization that takes away their pixelated monkey images.

Smart-contract design: backdoors and compliance “features”

Much more relevant to the threat model of a typical participant is the way virtual assets are managed on chain: by smart-contracts that are developed by private companies and often subject to private control. Limiting our focus to Ethereum for now, recall that the only “native” asset on chain is ether. All other assets, such as fungible ERC-20 tokens and collectible NFTs, must be defined by smart contracts— in other words, software that someone authors. Those contracts govern the operation of the asset: conditions under which it can be “minted”— created out of thin air— transferred or destroyed. To take a concrete example: a stablecoin such as USDC, issued by Circle, is designed to be pegged 1:1 to the US dollar. More USDC is issued on chain when Circle the company receives fiat deposits from a counterparty requesting virtual assets. Similarly USDC must be taken out of circulation or “burned” when a counterparty returns their virtual dollars and demands ordinary dollars back in a bank account.

None of this is surprising. As long as the contract properly enforces rules around who can invoke those actions on chain, this is exactly how one envisions a stablecoin operating. (There is a separate question around whether the 1:1 backing is maintained, but that can only be resolved by off-chain audits. It is outside the scope of enforcement by blockchain rules.) Less appreciated is the fact that most stablecoin contracts also grant the operator the ability to freeze funds or even seize assets from any participant. This is not a hypothetical capability; issuers have not shied away from using it when necessary. To pick two examples: Tether froze tokens traced to the 2020 KuCoin exchange hack, and Circle blacklisted addresses associated with Tornado Cash after OFAC sanctioned the mixer in 2022.

While the existence of such a “backdoor” or “God mode” may sound sinister in general, these specific interventions are hardly objectionable. But they serve to illustrate the general point: even if blockchains themselves are immutable and arbitrary hard-forks a relic of the past, virtual assets are governed not by “native” rules ordained by the blockchain, but by independent software authored by the entity originating that asset. That code can include arbitrary logic granting the issuer any right they wish to reserve.
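To make the shape of such contract logic concrete, here is a minimal sketch in Python of a stablecoin-style ledger with issuer-only mint, burn, freeze and seize operations. This is for exposition only: real stablecoin contracts are written in Solidity and deployed on Ethereum, and every name and rule here is invented rather than taken from any actual contract.

```python
# Toy model of an ERC-20-style stablecoin ledger with an issuer backdoor.
# Illustrative sketch only; not a real contract.

class StablecoinLedger:
    def __init__(self, issuer):
        self.issuer = issuer    # the privileged address
        self.balances = {}      # address -> token balance
        self.frozen = set()     # addresses blacklisted by the issuer

    def _require_issuer(self, caller):
        if caller != self.issuer:
            raise PermissionError("issuer-only operation")

    def mint(self, caller, to, amount):
        # Issuer creates tokens when fiat deposits arrive off-chain.
        self._require_issuer(caller)
        self.balances[to] = self.balances.get(to, 0) + amount

    def burn(self, caller, frm, amount):
        # Issuer destroys tokens when fiat is redeemed.
        self._require_issuer(caller)
        if self.balances.get(frm, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[frm] -= amount

    def transfer(self, caller, to, amount):
        # Ordinary transfers fail if either party has been frozen.
        if caller in self.frozen or to in self.frozen:
            raise PermissionError("address frozen by issuer")
        if self.balances.get(caller, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[caller] -= amount
        self.balances[to] = self.balances.get(to, 0) + amount

    def freeze(self, caller, target):
        # The "God mode": issuer can lock any holder out of their funds.
        self._require_issuer(caller)
        self.frozen.add(target)

    def seize(self, caller, target):
        # Issuer can even confiscate a holder's entire balance.
        self._require_issuer(caller)
        seized = self.balances.get(target, 0)
        self.balances[target] = 0
        self.balances[self.issuer] = self.balances.get(self.issuer, 0) + seized
        return seized
```

Note that `freeze` and `seize` coexist with perfectly ordinary transfer logic; nothing in the blockchain itself distinguishes the “backdoor” methods from the rest of the contract.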

To be clear, that logic will be visible on chain for anyone to view. Most prominent smart-contracts today have their source code published for inspection. (For example, here is the Circle USD contract.) Even if the contract did not disclose its source code, the logic can be reverse engineered from the low-level EVM bytecode available on chain. In that sense there should be no “surprises” about whether an issuer can seize an NFT or refuse to honor a transfer privately agreed upon by two parties. One could argue that users will not purchase virtual assets from issuers whose contract logic grants them such broad privileges to override property rights. But that is a question of market power and whether any meaningful alternative exists for consumers who want to vote with their wallet. It may well become the norm that all virtual assets are subject to permanent control by the issuer, something users accept without a second thought, much like the terms-of-use agreements one clicks through when registering for advertising-supported services. The precedent with stablecoins is not encouraging: Tether and Circle are by far the two largest stablecoins by market capitalization. The existence of administrative overrides in their code was no secret. Even multiple invocations of that power have not resulted in a mass exodus of customers to alternative stablecoins.

When ownership rights can be ignored

Let’s posit that popular virtual assets will be managed by “fair” smart-contracts without designed-in backdoors that would enable infringement of ownership rights. This brings us to the most intractable problem: real-world systems are not bound by ownership rights expressed on the blockchain.

Consider the prototypical example of ownership that proponents argue can benefit from blockchains: in-game virtual goods. Suppose your game character has earned a magical sword after significant time spent completing challenges. In most games today, your ownership of that virtual sword is recorded as an entry in the internal database of the game studio, subject to their whims. You may be allowed to trade it, but only on a sanctioned platform most likely affiliated with the same studio. The studio could confiscate that item because you were overdue on payments or unwittingly violated some other rule in the virtual universe. They could even make the item “disappear” one day if they decide there are too many of these swords or they grant an unfair advantage. If that virtual sword was instead represented by an NFT on chain, the argument runs, the game studio would be constrained in these types of capricious actions. You could even take the same item to another gaming universe created by a different publisher.

On the face of it, this argument looks sound, subject to the caveats about the smart-contract not having backdoors. But it is a case of confusing the map with the territory. There is no need for the game publisher to tamper with on-chain state in order to manipulate property rights; nothing prevents the game software from simply ignoring on-chain state. The blockchain could very well reflect that you are the rightful owner of that sword while in-game logic refuses to render your character holding that object. The game software is not running on the blockchain or in any way constrained by the Ethereum network, or even by the smart-contract managing virtual goods. It is running on servers controlled by a single company— the game studio. That software may, at its discretion, consult the Ethereum blockchain to check on ownership assignments. That is not the same as being constrained by on-chain state. Just because the blockchain ledger indicates you are the rightful owner of a sword or avatar does not automatically force the game rendering software to depict your character with those attributes in the game universe. In fact the publisher may deliberately depart from on-chain state for good reasons. Suppose an investigation determines that Bob bought that virtual sword from someone who stole it from Alice. Or there have been multiple complaints about a user-designed avatar being offensive and violating community standards. Few would object to the game universe being rendered in a way that is inconsistent with on-chain ownership records under these circumstances. Yet the general principle stands: users are still subject to the judgment of one centralized entity on when it is “fair game” to ignore blockchain state and operate as if that virtual asset did not exist.
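The dynamic described above can be sketched in a few lines of Python. Everything here is hypothetical— the item names, the ownership data, and the denylist are invented— but it captures the essential asymmetry: the server reads chain state, then applies its own policy before rendering.

```python
# Sketch of a hypothetical game server that consults on-chain ownership
# but is in no way bound by it. All names and data are invented.

# Stand-in for a blockchain query: maps item IDs to their on-chain owners.
ONCHAIN_OWNERS = {
    "magic-sword-1": "alice",
    "offensive-avatar-7": "bob",
}

# Server-side policy, invisible to the chain: items the studio has
# decided not to honor, whatever the ledger says.
SERVER_DENYLIST = {"offensive-avatar-7"}

def onchain_items_of(player):
    """What the blockchain says the player owns."""
    return {item for item, owner in ONCHAIN_OWNERS.items() if owner == player}

def rendered_items_of(player):
    """What the game actually shows: chain state minus studio overrides."""
    return onchain_items_of(player) - SERVER_DENYLIST
```

Here Bob indisputably owns the avatar on chain, yet the rendered game universe never shows it; the override lives entirely in code the studio controls.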

Case of the disappearing NFT

An instructive case of “pretend-it-does-not-exist” took place in 2021, when Moxie Marlinspike created a proof-of-concept NFT that renders differently depending on which website it is viewed from. Moxie listed the NFT on OpenSea, at the time the leading marketplace for trading NFTs. While it was intended in good spirit as a humorous demonstration of the mutability and transience of NFTs, OpenSea was not amused. Not only did they take down the listing, but the NFT was removed from the results returned by the OpenSea API. As it turns out, a lot of websites rely on that API for NFT inventories. Once OpenSea dropped the NFT from its API, it was as if the NFT did not exist. To be clear: OpenSea did not and could not make any changes to blockchain state. The NFT was still there on chain and Moxie was its rightful owner as far as the Ethereum network was concerned. But once the OpenSea API started returning alternative facts, the NFT vanished from view for every service relying on that API instead of directly inspecting the blockchain. (It turns out there were a lot of them, further reinforcing Moxie’s critique of the extent of centralization.)

Suppose customers disagree with the policy of the game studio. What recourse do they have? Not much within that particular game universe, any more than the average user has leverage with Twitter or Facebook to reverse their trust & safety decisions. Users can certainly try to take the same item to another game, but there are limits to portability. While blockchain state is universal, game universes are not. A magic sword from a medieval setting will not do much good in a Call of Duty title set in WW2.

In that sense, owners of virtual game assets are in a more difficult situation than Moxie with his problematic NFT. OpenSea can disregard that NFT but cannot preclude listing it on competing marketplaces, or even a private sale to a willing buyer who values it on collectible or artistic merits. It would be much the same if OpenSea for some bizarre reason came to insist that you do not own a bitcoin that is rightfully yours on the blockchain. OpenSea persisting in such a delusion would not detract in any way from the value of your bitcoin. Plenty of sensible buyers exist elsewhere who can form an independent judgment about blockchain state and accept that bitcoin in exchange for services. But when the value of a virtual asset is determined primarily by its function within a single ecosystem— namely a game universe controlled by a centralized publisher— what those independent observers think about ownership status carries little weight.

CP