We can bill you: antagonistic gadgets and dystopian visions of Philip K Dick

Dystopian visions

Acclaimed science-fiction author Isaac Asimov’s robot stories revolved around a set of three laws that all robots were expected to obey:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The prioritization leaves no ambiguity in the relationship between robots and their creators. Regardless of their level of artificial intelligence and autonomy, robots were to avoid harm to human beings. Long before sentient robots running amok and turning against their creators became a staple of science fiction (Mary Shelley’s 19th-century novel “Frankenstein” could be seen as their predecessor), Asimov was systematically formulating the intended relationship. The ethical implications of artificial intelligence are a recurring theme today: can our own creations end up undermining humanity after they achieve sentience? But there are far more subtle and less imaginative ways that technology works against people in everyday settings, with no emergent AI to blame. This version too was predicted by science fiction.

Three decades after Asimov, the dystopian imagination of Philip K Dick produced a more conflicted relationship between man and his creations. In the 1969 novel “Ubik”, the protagonist inhabits a world where advanced technology controls basic household functions, from kitchen appliances to the locks on the doors. But there is a twist: all of these gadgets operate on what today would be called a subscription model. The coffee-maker refuses to brew the morning cup of joe until coins are inserted. (For all the richness and wild imagination of his alternate realities, PKD did not bother devising an alternative payment system for this future world.) When the protagonist runs out of money, he is held hostage at home; his front door will not open without coins.

Compared to some of the more fanciful alternate universes brought to life in PKD fiction— Germany winning World War II in “The Man in the High Castle” or an omniscient police state preemptively arresting criminals before they commit crimes as in “The Minority Report”— this level of dystopia is mild, almost benign. But it also bears a striking resemblance to where the tech industry is stridently marching. Consumers are increasingly losing control over their devices— devices which they have fully and rightfully paid for. Not only is the question of ownership being challenged, with increasing restrictions on what they can do to hardware they have every right to expect full control over, but those devices are actively working against the interests of the consumer, doing the bidding of third parties, be it the manufacturer, the service provider or possibly the government.

License to tinker

There is a long tradition in American culture of hobbyists tinkering with their gadgets, one that predates the Internet or the personal computer. Perhaps automobiles and motorcycles were the first technologies that lent themselves to mass tinkering. Mass production made cars accessible to everyone and, for those with a knack for spending long hours in the garage, they were relatively easy to modify: for different aesthetics or better performance under the hood. In one sense the hot-rodders of the 1950s and 1960s were among the cultural predecessors of today’s software hobbyists. Cars at the time were relatively low-tech; with carburetors and manual transmissions being the norm, a few months of high-school shop class provided adequate background. But more importantly, the platform was tinkering-friendly. Manufacturers did not go out of their way to prevent buyers from modifying their hardware. That is partly related to technical limitations: it is not as if cars could be equipped with tamper-detection sensors to immobilize the vehicle if the owner installed parts the manufacturer did not approve of. But more importantly, ease of customization was itself considered a competitive advantage. In fact some of the most cherished vehicles of the 20th century, including muscle cars, V-twin motorcycles and air-cooled Volkswagens, owed part of their iconic status to their vibrant aftermarket for mods.

Natural limits existed on how far owners could modify their vehicle. To drive on public roads, it had to be road-legal after all. One could install a different exhaust system to improve engine sound, but not have flames shooting out the back. More subtly, an economic disincentive existed: owners risked giving up warranty coverage for modified parts, a significant consideration given that Detroit was not exactly known for high-quality, low-defect manufacturing at the time. But even that setback was localized. Replace the stereo or rewire the speakers yourself, and you could no longer complain about electrical-system malfunctions. But you would still expect the transmission to operate as advertised and the manufacturer to continue honoring any warranty coverage for the drivetrain. There was no warning sticker anywhere that loosening this or that bolt would void the entire warranty on every other part of the vehicle. Crucially, consumers were given a meaningful choice: you are free to modify the car for personal expression in exchange for giving up warranty claims against the manufacturer.

From honor code to software enforcement

Cars from the golden era of hot-rodding were relatively dumb gadgets. Part of the reason manufacturers did not have much of a say in how owners could modify their vehicle is that they had no feasible technology to enforce those restrictions once the proud new owner drove it off the lot. By contrast, software can enforce very specific restrictions on how a particular system operates. In fact it can impose entirely arbitrary limitations to disallow specific uses of the hardware, even when the hardware itself is perfectly capable of performing those functions.

Here is an example. In the early days, Windows NT 3.51 had two editions, workstation and server, differentiated by the type of scenario they were intended for. The high-end server SKU supported machines with up to 8 processors while the workstation maxed out at 2. If you happened to have more powerful hardware, even if you did not need any of the bells-and-whistles of server, you had to spring for the more expensive product. (Note: there is a significant difference between uniprocessor and multiprocessor kernels; juggling multiple CPUs requires substantial changes, but going from 2 to 8 processors does not.) What was the major difference between those editions? From an economic perspective, $800 measured in 1996 dollars. From a technology perspective, a handful of bytes in a registry key describing which type of installation had occurred. As noted in a 1996 article titled “Differences Between NT Server and Workstation Are Minimal”:

“We have found that NTS and NTW have identical kernels; in fact, NT is a single operating system with two modes. Only two registry settings are needed to switch between these two modes in NT 4.0, and only one setting in NT 3.51. This is extremely significant, and calls into question the related legal limitations and costly upgrades that currently face NTW users.”

There is no intrinsic technical reason why the lower-priced edition could not take advantage of more powerful hardware, or for that matter, allow more than 10 concurrent connections to function as a web server— a limit Microsoft later relented on after customer backlash. These are arbitrary calls made by someone on the sales team who, in their infinite wisdom, concluded that customers with expensive hardware or websites ought to pay more for their operating system.
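A minimal sketch of the idea, in Python rather than actual Windows code: one binary consulting a single configuration value to decide which limits to enforce. The mode names loosely mirror the real ProductType registry values (“WinNT” for workstation, “ServerNT” for server), but the lookup table and function are hypothetical simplifications for illustration.

```python
# Illustrative sketch: the same "kernel" code enforcing different CPU caps
# depending on one configuration string, not on any hardware difference.
# Mode names echo the NT ProductType registry value; limits are simplified.

EDITION_LIMITS = {
    "WinNT": 2,      # workstation edition: capped at 2 processors
    "ServerNT": 8,   # server edition: up to 8 processors
}

def allowed_cpus(product_type: str, physical_cpus: int) -> int:
    """How many CPUs the OS will actually use. The hardware may be capable
    of more; the license 'mode' is what caps it."""
    return min(physical_cpus, EDITION_LIMITS[product_type])

# An 8-CPU machine running the workstation edition uses only 2 of them;
# flip one string and the identical code unlocks all 8.
assert allowed_cpus("WinNT", 8) == 2
assert allowed_cpus("ServerNT", 8) == 8
```

The point of the sketch is that the limit lives entirely in a data value, which is exactly why editing a couple of registry settings was enough to blur the line between the two products.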

Two tangents are worth exploring about this case. First, the proprietary nature of the software and its licensing model is crucial for enforcing these types of policies. Arbitrary restrictions would not fly with open-source software: if a clueless vendor shipped a version of Linux with a random limit on the number of CPUs or memory that did not originate from any technical limitation, customers could modify the source code to lift that restriction. Second, the ability to enforce draconian restrictions dreamt up by marketing is greatly constrained by platform limitations. That is because the personal computer is an open platform. Even with a proprietary operating system such as Windows, users get full control over their machine. You could edit the registry or tamper with OS logic to trigger an identity crisis between workstation and server. Granted, that would be an almost certain violation of the shrink-wrap license nobody read when installing the OS, and MSFT would not look kindly upon the practice if carried out at large scale. It is one thing for hobbyists to demonstrate the possibility as a symbolic gesture; it is another level of malicious intent for an enterprise with thousands of Windows licenses to engage in systematic software piracy by giving themselves a free upgrade. So at the end of the day, enforcement still relied on messy social norms and imperfect contractual obligations. Software did not aspire to replace the conscience of the consumer, to stop them from perceived wrongdoing at all costs.

Insert quarters to continue

In fact software licensing in the enterprise has a long history of such arbitrary restrictions, enforced through a combination of business logic implemented in proprietary software and dubious reliance on overarching “terms of use” that discourage tampering with said logic. To this day copies of Windows Server are sold with client access licenses, dictating the number of concurrent users that the server is willing to support. If the system is licensed for 10 clients, the eleventh user attempting to connect will be turned away regardless of how much spare CPU or memory capacity is left. You must purchase more licenses. In other words: insert quarters to continue.
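The enforcement logic is trivial, which is rather the point: a few lines of bookkeeping, not any hardware constraint, turn away the eleventh user. A hypothetical sketch (class and method names are invented for illustration, not taken from any real licensing code):

```python
# Hypothetical per-seat license gate: the server rejects the (N+1)th
# concurrent user no matter how much spare capacity the hardware has.

class LicenseGate:
    def __init__(self, licensed_clients: int):
        self.licensed_clients = licensed_clients
        self.active: set[str] = set()

    def connect(self, user: str) -> bool:
        """Admit the user only if a licensed 'seat' is free."""
        if user in self.active:
            return True
        if len(self.active) >= self.licensed_clients:
            return False  # insert quarters to continue
        self.active.add(user)
        return True

    def disconnect(self, user: str) -> None:
        self.active.discard(user)

gate = LicenseGate(licensed_clients=10)
for i in range(10):
    assert gate.connect(f"user{i}")
assert not gate.connect("user10")   # eleventh concurrent user is turned away
gate.disconnect("user0")
assert gate.connect("user10")       # a freed seat can be reused
```

Nothing in the gate consults CPU load or free memory; the cap exists purely in the license count.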

Yet this looks very different from Philip K Dick’s dystopian coffeemaker and does not elicit anywhere near the same level of indignation. There are several reasons for that. First, enterprise software has acclimatized to the notion of discriminatory pricing. Vendors extract higher prices from companies that are in a position to pay. The reasoning goes: if you can afford that fancy server with a dozen CPUs and boatloads of memory, surely you can also spring for the high-end edition of Windows Server that will agree to fully utilize the hardware? Second, the complex negotiations around software licensing are rarely surfaced to end users. It is the responsibility of the IT department to work out how many licenses are required and determine the right mix of hardware and software to support the business. If an employee is unable to perform her job because she is turned away by a server that has reached its cap on simultaneous users— an arbitrary limit that exists only in the realm of software licensing, it must be noted, not in the absolute resources available in the underlying hardware— she is not expected to solve that problem by taking out her credit card and personally paying for the additional license. Finally, this scenario is removed from everyday considerations. Not everyone works in a large enterprise subject to stringent licensing rules, and even for those unlucky enough to run into this situation, the inconvenience created by an uncooperative server is relatively mild, a far cry from the front door that refuses to open and locks its occupants inside.

From open platforms to appliances

One of the more disconcerting trends of the past decade is that what used to be the norm in the enterprise segment is now trickling down into the consumer space. We may not have coffeemakers operating on a subscription model yet, and doors that demand payment for performing their basic function would likely never pass fire-code regulations. But consumer electronics have gradually started imposing greater restrictions on their alleged owners, restrictions that are equally arbitrary and disconnected from the capabilities of the hardware, chosen unilaterally by their manufacturers. Consider some examples:

  • Region coding in DVD players. DVD players are designed to play only content manufactured for a specific region, even though in principle there is nothing that prevents the hardware from playing discs purchased anywhere in the world. Why? Because of disparities in purchasing power, DVDs are priced much lower in developing regions than they are in Western countries. If DVD players sold to American consumers could play discs from any region, it would suddenly become possible to “arbitrage” this price difference by purchasing cheap DVDs in, say, Taiwan and playing them in the US. Region coding protects the revenue model of content providers, which depends crucially on price discrimination: charging US consumers more than Taiwanese consumers for the same title, because they can afford to pay higher prices for movies.
  • Generalizing from the state of DVD players, any Digital Rights Management (or as it has been derisively called, “digital restrictions management”) technology is an attempt to hamper the capabilities of software/hardware platforms to further the interests of content owners. While the rest of the software industry is focused on doing more with existing resources— squeezing more performance out of the CPU, adding more features that users will enjoy— those working on DRM are trying to get devices to do less. Information is inherently copyable; DRM tries to stop users from copying bits. By default audio and video signals can be freely sent to any output device; HDCP tries to restrict where they can be routed in the name of battling piracy. The restrictions do not even stop at content itself. Because the PC platform is inherently open, DRM enforcement inevitably takes an expansive view of its mission and begins to monitor the user for perfectly valid activities that could potentially undermine DRM, such as installing unsigned device drivers or enabling kernel-mode debugging on Windows.
  • Many cell phones sold in North America are “locked” to a specific carrier, typically the one the customer bought the phone from. It is not possible to switch to another wireless carrier while keeping the device. Again there is no technical reason for this. Much like the number of processors that an operating system will agree to run on, it is an arbitrary setting. (In fact it takes more work to implement such checks.) The standard excuse is that the cost of the device is heavily subsidized by the carrier, recouped through hidden costs buried in the service contract. But this argument fails basic sanity checks. Presumably the subsidy is paid off after some number of months, yet phones remain locked. Meanwhile customers who bring their own unlocked device are not rewarded with any special discounts, effectively distorting the market. And carriers already charge an early termination fee to customers who walk away from their contract prematurely; surely that fee could also be set to cover the lost subsidy?
  • Speaking of cell phones, they are increasingly becoming locked-down appliances, to use the terminology from Zittrain’s “The future of the Internet,” instead of open computing platforms. Virtually all PCs allow users to replace the operating system. Not a fan of Windows 8? Feel free to wipe the slate clean and install Linux. Today consumers can even purchase PCs preloaded with Linux to escape the dreaded “Microsoft tax,” where the cost of a Windows license is implicitly factored into hardware prices. And if the idea of Linux-on-the-desktop turns out to be wishful thinking yet again, you can repent and install Windows 10 on that PC which came with Ubuntu out of the box. By contrast phones ship with one operating system picked by the all-knowing manufacturer, and it is very difficult to change that. On the surface, consumers have plenty of choice because they can pick from thousands of apps written for that operating system. Yet one level below, they are stuck with the operating system as an immutable choice. In fact, some Android devices never receive software updates from the manufacturer or carrier, so they are “immutable” in a very real sense. Users must go out of their way to exploit a security vulnerability in order to jailbreak/root their devices to replace the OS wholesale or even extend its capabilities in ways the manufacturer did not envision. OEMs further exploit this confusion to discourage users from tinkering with their devices, trying to equate such practices with weakening security— as if users are better off sticking to their abandoned “stock” OS with known vulnerabilities that will never get patched.

Unfree at any speed

Automotive technology too is evolving in the direction of locked-down appliances. Cars remained relatively dumb until the 1990s, when microprocessors slowly started making their way into every system, starting with engine management. On the way to becoming more software-driven, effectively computers-on-wheels, something funny happened: the vehicle gained a greater capability to sense present conditions and, more importantly, it became capable of reacting to these inputs. Initially this looks like an unalloyed good. All of the initial applications were uncontroversial, improving occupant safety: antilock brakes, airbags and traction control. All depend on software monitoring input from sensors and promptly responding to signals indicating that a dangerous condition is imminent.

The next phase may be less clear-cut, as enterprising companies continue pushing the line between choice and coercion. Insurers such as Geico offer pay-per-mile plans that use gadgets attached to the OBD-II port to collect statistics on how far the vehicle is driven, and presumably on how aggressively the driver attacks corners. While some may consider this an invasion of privacy, at least there is a clear opt-out: do not sign up for that plan. In other cases, the opt-out becomes ambiguous. GM found itself in a pickle over the Stingray Corvette recording occupants with a camera in the rearview mirror. This was a feature, not a bug, designed to create YouTube-worthy videos while the car was being put through its paces. But if occupants are not aware that they are being recorded, it is not clear they consented to appearing as extras in a Sebastian-Vettel role-playing game. At the extreme end of the informed-consent scale is the use of remote immobilizers on vehicles sold to consumers with subprime credit. In these cases the dealers literally get a remote kill-switch for disabling operation of the vehicle if the consumer fails to stay current on payments. (At least that is the idea; the NYT reports allegations of mistaken or unwarranted remote shutdowns by unscrupulous lenders.) One imagines the next version of these gadgets will incorporate a credit-card reader to better approximate the PKD dystopia. Insert quarters to continue.

What is at stake here is a question of fairness and rights, but not in the legal sense. Very little has changed about the mechanics of consumer financing: purchasing a car on credit still obligates the borrower to make payments promptly until the balance is paid off. Failure to fulfill that obligation entitles the seller to repossess the vehicle. This is not some new-fangled notion of how to handle loans in default; the right to repossess or foreclose has always existed on the books. In practice, exercising that right often required some dramatic, made-for-TV adventures in tracking down the consumer or vehicle in question. Software has greatly amplified the ability of lenders to enforce their rights and collect on their entitlements under the law.

From outright ownership to permanent tenancy

Closely related is a shift from ownership to subscription models. Software has made it possible to recast what used to be one-time purchases into ongoing subscriptions or pay-per-use models. Powerful social norms exist around which goods are distributed according to one or the other model. No one expects to pay for electricity or cable with a lump-sum payment once and call it a day, receiving free service in perpetuity. If you stop paying for cable, the screen will eventually go blank. By contrast hardware gadgets such as television sets are expected to operate according to a different model: once you bring it home, it is yours. It may have been purchased with borrowed money, with credit extended by the store or your own credit-card issuer. But consumers would be outraged if their bank, BestBuy or the TV manufacturer remotely reached out to brick their television in response to late payments. Even under most subscription models, there are strict limitations on how service providers can retaliate against consumers breaking the contract. If you stop paying for water, the utility can shut off the future supply of water. They cannot send vigilantes over to your house to drain the water tank or “take back” water you are presumably no longer entitled to.

Such absurd scenarios can and do happen in software. Perhaps missing the symbolism, Amazon remotely wiped copies of George Orwell’s 1984 from Kindles over copyright problems. (The irony could only be exceeded if Amazon threatened to remove copies of Philip K Dick’s “Ubik” unless customers paid up.) These were not die-hard Orwell fans or DMCA protestors deliberately pirating the novel; they had purchased their copies from the official Amazon store. Yet the company defended its decision, arguing that the publisher who had offered those novels on its marketplace lacked the proper rights. Kindle is a locked-down appliance where Amazon calls the shots and customers have no recourse, however arbitrary those decisions appear.

What about computers? It used to be the case that if you bought a PC, it was yours for the keeping. It would continue running until its hardware failed. In 2006 Microsoft launched FlexGo, a pay-as-you-go model for PC ownership in emerging markets. Echoing the words of the used-car salesman on the benefits bestowed on consumers, while barely suppressing a sense of colonialist contempt, a spokesperson for a partnering bank in Brazil enthused: “Our lower-income customers are excited to finally buy their first PC with minimal upfront investment, paying for time as they need it, and owning a computer with superior features and genuine software.” (Emphasis on genuine software, since consumers in China or Brazil never had any problem getting their hands on pirated versions of Windows.) MSFT takes a more measured approach in touting the benefits of this alternative: “Customers can get a full featured Windows-enabled PC with low entry costs that they can access using prepaid cards or through a monthly subscription.” Insert quarters to continue.

FlexGo did not crater like “Bob,” Vista or others in the pantheon of MSFT disasters. Instead it faded into obscurity, having bet on the wrong vision of “making computing accessible,” soon rendered irrelevant on both financial and technology grounds. Hardware prices continued to drop. Better access to banking services and consumer credit meant citizens in developing countries got flexible payment options to buy a regular PC, without an OEM or software vendor in the loop to supervise the loan or tweak the operating system to enforce alternative licensing models. More dramatically, the emergence of smartphones cast into doubt whether everyone in Brazil actually needed that “full-featured Windows-enabled PC” in the first place to cross the digital divide.

FlexGo may have disappeared but the siren song of subscription models still exerts its pull on the technology industry. Economics favor such models on both sides. Compared to the infrequent purchase of big-ticket items, the steady revenue stream from monthly subscribers smooths out seasonal fluctuations in revenue. From the consumer perspective, making “small” monthly payments over time instead of one big lump payment may look more appealing due to cognitive biases.

If anything the waning of the PC as the dominant platform paves the way for this transformation. Manufacturers can push locked-down “appliances” without the historical baggage associated with the notion of a personal computer. Ideas that would never fly on the PC platform, practices that would provoke widespread consumer outrage and derision— locked boot-loaders, mandatory data collection, always-on microphones and cameras, remote kill capabilities— can become the new normal in a world of locked-down appliances. In this ecosystem users no longer “own” their devices in the traditional sense, even if the devices were paid for in full and no one can legally show up at the door to demand their return. These gadgets suffer from a serious case of split-personality disorder. On the one hand they are designed to provide some useful service to their presumed “owner;” this is the ostensible purpose they are advertised and purchased for. At the same time the gadget software contains business logic to serve the interests of the device manufacturer, service provider or whoever happens to actually control the bits running there. These two goals are not always aligned. In a hypothetical universe with efficient markets, one would expect strong correlation: if a gadget deliberately sacrificed functionality to protect the manufacturer’s platform or artificially sustain an untenable revenue model, enlightened consumers would flock to an alternative from a competitor not saddled with such baggage. In reality such competitive dynamics operate imperfectly if at all, and the winner-takes-all nature of many market segments means it is very difficult for a new entrant to make significant gains against entrenched leaders by touting openness or user control as a distinguishing feature. (Case in point: the troubled history of open-source mobile-phone projects and their failure to reach mass adoption.)

Going against the grain?

If there are forces counteracting the irresistible pull of locked-down appliances, they will face an uneven playing field. The share of PCs continues to decline among all consumer devices; Android has recently surpassed Windows as the most common operating system on the Internet. Meanwhile the highly fashionable Internet of Things (IoT) notion is predicated on black-box devices which are not programmable or extensible by their ostensible owners. It turns out that in some cases they are not even managed by the manufacturer; just ask the owners of IP cameras whose devices were unwittingly enrolled into the Mirai botnet.

Consumers looking for an alternative face a paradoxical situation. On the one hand, there is a dearth of off-the-shelf solutions designed with user rights in mind. The “market” favors polished solutions such as the Nest thermostat, where hardware, software and cloud services are inextricably bundled together. Suppose you are a fan of the hardware but skeptical about how much private information it is sending to a cloud service provider? Tough luck; there is no cherry-picking allowed. On the other hand, there has never been a better time to be tinkering with hardware: Arduino, Raspberry Pi and a host of other low-cost embedded platforms have made it easier than ever to put together your own custom solution. This is still a case of payment in exchange for preserving user rights, but the “payment” takes the form of additional time spent engineering and operating home-brew solutions. More worrisome is that such capabilities are available only to a small number of people, distinguished by their ability to renegotiate the terms service providers attempt to impose on their customer base. While that capability is to be celebrated— and it is why every successful jailbreak of a locked-down appliance is celebrated in the security community— it is fundamentally undemocratic by virtue of being restricted to a new ruling class of technocrats.

CP

[Update: Edited Feb 27th to correct typo.]

The missing identity layer for DeFi

Bootstrapping permissioned applications

To paraphrase the famous 1993 New Yorker cartoon: “On the blockchain, nobody knows that you are a dog.” All participants are identified by opaque addresses with no connection to their real-world identity. Privacy by default is a virtue, but even those who voluntarily want to link addresses to their identity have few good options that would be persuasive. This blog post can lay claim to the address 0xa12Db34D434A073cBEE0162bB99c0A3121698879 on Ethereum, but can readers be certain? (Maybe the Ethereum Name Service, or ENS, can help.) On the one hand, there is an undeniable egalitarian ethos here: if the only relevant facts about an address are those represented on-chain— its balance in cryptocurrency, holdings of NFTs, track record of participating in DAO governance votes— there is no way to discriminate between addresses based on such “irrelevant” factors as the citizenship or geographic location of the person or entity controlling that address. Yet such discrimination based on real-world identity is exactly what many scenarios call for. To cite a few examples:

  1. Combating illicit financing of sanctioned entities. This is particularly relevant given that rogue states including North Korea have increasingly pivoted to committing digital asset theft as their access to the mainstream financial system is cut off.
  2. Launching a regulated financial service where the target audience must be limited by law, for example to citizens of a particular country only.
  3. The flip side of the coin: excluding participants from a particular country (for example, the United States) in order to avoid triggering additional regulatory requirements that would come into play when serving customers in that jurisdiction.
  4. Limiting participation in high-risk products to accredited investors only. While this may seem trivial to check by inspecting the balance on-chain, the relevant criterion is the total holdings of that person, which are unlikely to be all concentrated in one address.
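The last example is worth making concrete: the check that matters aggregates over every address a person controls, so no single-address balance lookup can answer it. A sketch under invented assumptions (the addresses, balances and threshold below are all hypothetical, and a dictionary stands in for on-chain balance queries):

```python
# Why the accredited-investor check cannot be done per-address: the relevant
# figure is a person's total holdings, spread across addresses they control.
# All addresses, balances and the threshold here are hypothetical.

on_chain_balances = {     # stand-in for on-chain balance lookups
    "0xaaa...": 40_000,
    "0xbbb...": 700_000,
    "0xccc...": 500_000,
}

def total_holdings(linked_addresses) -> int:
    """Aggregate balances over every address attested to the same identity."""
    return sum(on_chain_balances.get(addr, 0) for addr in linked_addresses)

ACCREDITED_THRESHOLD = 1_000_000  # hypothetical cutoff

# Every individual address falls short of the threshold...
assert all(bal < ACCREDITED_THRESHOLD for bal in on_chain_balances.values())
# ...but the identity controlling all three clears it comfortably.
assert total_holdings(on_chain_balances) >= ACCREDITED_THRESHOLD
```

An identity layer that links addresses to a verified person is precisely what makes an aggregate query like this answerable.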

As things stand, there are at best some half-baked solutions to the first problem. Blockchain analytics companies such as Chainalysis, TRM Labs and Elliptic surveil public blockchains, tracing the movement of funds associated with known criminal activity as these actors hop from address to address. Customers of these services can in turn receive intelligence about the state of an address, or even an application such as a lending pool. Chainalysis even makes this information conveniently accessible on-chain: the company maintains smart contracts on Ethereum and other EVM-compatible chains containing a list of OFAC-sanctioned addresses. Any other contract can consult this registry to check on the status of an address it is interacting with.

The problem with these services is three-fold:

  1. The classifications are reactive. New addresses are innocent until proven guilty, reclassified only after they are involved in illicit activity. At that point the damage has been done: other participants may have interacted with the address or allowed it to participate in their decentralized applications. In some cases it may be possible to unwind specific transactions or isolate the impact. In other situations, such as a lending pool where funds from multiple participants are effectively blended together, it is difficult to identify which transactions are now “tainted” by association and which ones are clean.
  2. “Not a terrorist organization” is a low bar to meet. Even if this could be ascertained promptly and with 100% accuracy, most applications have additional requirements of their participants. Some of the examples alluded to above include location, country of citizenship or accredited-investor status. Excluding the tiny fraction of bad actors in the cross-hairs of FinCEN is useful but insufficient for building the types of regulated dapps that can take DeFi mainstream.
  3. All of these services follow a “blacklist” model: excluding known bad actors. It is a well-known principle in information security that this model is inferior to “whitelisting”— only accepting known good actors. In other words, a blacklist fails open: any address not on the list is assumed clean by default. The onus is on the maintainer of the list to keep up with the thousands of new addresses that crop up, not to mention any sign of nefarious activity by existing addresses previously considered safe. By contrast, whitelists require an affirmative step before addresses are considered trusted. If the maintainer is slow to react, the system fails safe: a legitimate address is treated as untrusted until the administrator gets around to including it.
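The fail-open versus fail-safe distinction drawn in the last point can be captured in a few lines. A brand-new address, unknown to either list, is admitted by a blacklist but turned away by a whitelist:

```python
def blacklist_allows(blacklist, address):
    # Fails open: anything not yet flagged is presumed clean.
    return address not in blacklist

def whitelist_allows(whitelist, address):
    # Fails safe: anything not yet vetted is presumed untrusted.
    return address in whitelist

flagged = {"0xabc"}    # addresses already tied to illicit activity
vetted = {"0xdef"}     # addresses that passed an affirmative check

new_address = "0x123"  # brand-new address, status unknown
print(blacklist_allows(flagged, new_address))  # True:  slips through
print(whitelist_allows(vetted, new_address))   # False: held at the door
```

The asymmetry matters because new addresses are free to create on most chains: an adversary can always outrun a blacklist maintainer simply by rotating addresses.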

What would an ideal identity verification layer for blockchains look like? Some high-level requirements are:

  • Flexible. Instead of expressing a binary distinction between sanctioned vs not-yet-sanctioned, it must be capable of expressing the different identity attributes required by a wide range of decentralized apps.
  • Opt-in. The decision to go through identity verification for an address must reside strictly with the person or persons controlling that address. While we cannot stop existing analytics companies from continuing to surveil all blockchain activity and attempt to deanonymize addresses, we must avoid creating additional incentives or pressure for participants to voluntarily surrender their privacy.
  • Universally accepted. The value of an authentication system increases with the number of applications and services accepting that identity. If each system is only useful for onboarding with a handful of dapps, it is difficult for participants to justify the time and cost of jumping through the hoops to get verified. Government identities such as driver’s licenses are valuable precisely because they are accepted everywhere. Imagine an alternative model where every bar had to run its own age-verification process and issue its own permits— not recognized by any other establishment— in order to enforce drinking-age laws.
  • Privacy respecting. The protocols involved in proving identity must limit the information disclosed to the minimum required to achieve the objective. Since onboarding requirements vary between dapps, there is a risk of disclosing more than is necessary to prove compliance. For example, if a particular dapp is only open to US residents, residency is the only piece of information that must be disclosed— not the exact address where the owner resides. Similarly, proof of accredited investor status does not require disclosing total holdings, and proving that a person is not a minor can be done without revealing their exact date of birth. (This requirement has implications for design. In particular, it rules out simplistic approaches built around issuing publicly readable “identity papers” directly on-chain, for example as a “soul-bound” token attached to the address.)
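The minimal-disclosure idea in the last requirement can be sketched as an attestation scheme, assuming a trusted issuer. Rather than handing the verifier a full identity document, the issuer attests to a single predicate (“over 18”, “US resident”) after checking the underlying data off-band, and the holder presents only that attestation. HMAC with a shared key stands in for a real digital signature here purely to keep the sketch self-contained; production systems would use asymmetric signatures or zero-knowledge proofs, and the predicate names are illustrative.

```python
import hashlib
import hmac

ISSUER_KEY = b"issuer-secret"  # illustrative; a real issuer signs with a private key

def issue_attestation(predicate: str) -> bytes:
    """Issuer-side: sign the predicate after verifying it off-band
    (e.g. checking a date of birth that is never put on-chain)."""
    return hmac.new(ISSUER_KEY, predicate.encode(), hashlib.sha256).digest()

def verify_attestation(predicate: str, tag: bytes) -> bool:
    """Verifier-side: learns only that the predicate holds, not the
    underlying data (exact birth date, street address, holdings)."""
    expected = hmac.new(ISSUER_KEY, predicate.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

tag = issue_attestation("age>=18")
print(verify_attestation("age>=18", tag))      # True
print(verify_attestation("us_resident", tag))  # False: a different claim entirely
```

The key property is that the verifier never sees the date of birth, only the issuer’s vouching for the predicate; each dapp can demand exactly the predicates its regulations require and nothing more.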

Absent such an identity layer, deploying permissioned DeFi apps is challenging. Aave’s dedicated ARC pool is an instructive example. Restricted to KYCed entities vetted by the custodian Fireblocks, it failed to attract even a fraction of the total value locked (TVL) in the main Aave lending pool. While the product faced many headwinds due to timing and the general implosion of cryptocurrency markets in 2022 (here is a good post-mortem thread), the difficulty of scaling a market whose participants must be hand-picked was one of the challenges. While ARC may have been one of the first and most prominent examples, competing projects are likely to face the same odds in bootstrapping their own identity systems. In fact, they do not even stand to benefit from the work done by the ARC team: while participants went through rigorous checks to gain access to that walled garden, no reusable, portable identity resulted from the process. Absent an open and universally recognized KYC standard, each project must engage in a wasteful effort to field its own identity system.

In many ways, the situation is worse than the early days of web authentication. Before identity-federation standards such as SAML and OAuth emerged to allow interoperability, every website resorted to building its own login solution. Not surprisingly, many of these were poorly designed and riddled with security vulnerabilities. Even in the best case, when each system functioned correctly in isolation, collectively they burdened customers with the challenge of managing dozens of independent usernames and passwords. Yet web authentication is a self-contained problem, much simpler than trying to link online identities to real-world ones.

What about participants’ incentives for jumping through the necessary hoops for onboarding? Putting aside ARC, there is a chicken-and-egg problem to bootstrapping any identity system. Without interesting applications that are gated on having that ID, participants have no compelling reason to sign up for one; the value proposition is not there. Meanwhile, if few people have onboarded with that ID system, no developer wants to build an application limited to the customers holding one of those rare IDs— that would be tantamount to choking off your own customer-acquisition pipeline. Typically this vicious cycle is broken in one of two ways:

  1. An existing application with a proprietary identity system, which is already compelling and self-sustaining, sees value in opening up that system such that verified identities can be used elsewhere. (Either because it can monetize those identity services or due to competitive pressure from rivals offering the same flexibility to their customers for free.) If there are multiple such applications with comparable criteria for vetting customers, this can result in an efficient and competitive outcome. Users are free to take their verified identity anywhere and participate in any permissioned application, instead of being held hostage by the specific provider who happened to perform the initial verification. Meanwhile developers can focus on their core competency— building innovative applications— instead of reinventing the wheel to solve an ancillary problem around limiting access to the right audience.
  2. New regulations are introduced, forcing developers to enforce identity verification for their applications. This will often result in an inefficient scramble, with each service provider fielding something quickly to avoid the cost of noncompliance, leaving little room for industry-wide cooperation or standards to emerge. Alternatively, it may result in a highly centralized outcome: one provider specializing in identity verification may be in the right place at the right time when the rules go into effect, poised to become the de facto gatekeeper for all decentralized apps.

In the case of DeFi, this second outcome is looking increasingly more likely.

CP