CFP 2008: Network neutrality and the end of flat pricing models


(Reflections on the recent Computers, Freedom and Privacy conference.)

The event had no coherent theme this year, unlike the relevance of copyright in 2002, electronic voting at Berkeley in 2004, the panopticon of commercial surveillance at Seattle in 2005, and the corresponding questions around intelligence in DC in 2006. Network neutrality and the recent overtures from Comcast, British Telecom and Charter came closest to a shared preoccupation with the crisis-of-the-day.

One welcome development is that the audience on the whole had moved beyond the particulars of Comcast blocking BitTorrent, discussed earlier here. Many people, including Paul Ohm and David Reed (who coined Reed’s law describing the value of collaborative networks), made the point that the purported goal of managing scarce upstream bandwidth could have been achieved by much less intrusive means, including metering usage regardless of the protocol involved. The network neutrality principle rules out any justification for picking on one protocol or application, even if Comcast network engineers determined empirically that a single protocol was responsible for the lion’s share of bandwidth usage. And there is no excuse for injecting bogus network traffic (forged reset packets) in response to perceived usurping of bandwidth. Comcast, to its credit, had a recent moment of clarity and announced a more nuanced, “protocol agnostic” approach for managing its available capacity.
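To make the metering alternative concrete, here is a minimal sketch of what protocol-agnostic accounting might look like: the ISP counts bytes per subscriber and makes throttling or billing decisions only on aggregate volume, never on what the packets carry. The function names and the allowance figure are hypothetical, purely for illustration.

```python
from collections import defaultdict

# Hypothetical per-subscriber usage meter: accounting is based solely on
# byte counts, so BitTorrent, email and web traffic are all treated alike.
MONTHLY_ALLOWANCE_GB = 250  # illustrative cap, not an actual ISP figure

usage_bytes = defaultdict(int)  # subscriber_id -> bytes transferred this month

def record_packet(subscriber_id: str, packet_size: int) -> None:
    """Meter a packet. Note: no inspection of protocol or payload."""
    usage_bytes[subscriber_id] += packet_size

def over_allowance(subscriber_id: str) -> bool:
    """Any throttling or billing decision depends only on aggregate volume."""
    return usage_bytes[subscriber_id] > MONTHLY_ALLOWANCE_GB * 10**9

# A heavy BitTorrent user and a heavy streaming-video user are
# indistinguishable to this meter; only the total byte count matters.
record_packet("subscriber-42", 1500)
print(over_allowance("subscriber-42"))  # False until the cap is exceeded
```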

As the CFP discussion made clear, BitTorrent and its alleged use for sharing copyrighted content is a red herring, a distraction from a core issue that is purely economic: who is paying for bandwidth, and exactly how much. Throughout much of the 1990s residential Internet access remained slow, primitive and uncommon. Dial-up connections were the norm and subscribers paid for the amount of bandwidth they used. In this environment bits were precious, applications were designed to eke the greatest utilization out of the modest bandwidth available, and spam literally cost money by driving up usage charges. Eventually, as capacity expanded everywhere, from the massive amounts of underground fiber bulking up the backbone to upgrades in the so-called last mile to the home, it became possible for ISPs to enter the market with a disruptive business model: a flat monthly fee for unlimited usage. When AOL switched over to this structure in 1996, it was overwhelmed by the response.

During the transition from dial-up to broadband this tradition of all-you-can-eat pricing was inherited. Granted, service tiers still existed and greater bandwidth could be purchased for higher monthly fees. But within a particular tier it made no difference whether the subscriber surfed the web all day long or rarely powered up her computer. This was either the old prediction made about nuclear energy (“electricity too cheap to meter”) realized in the context of bandwidth, or a sign that everyone was on board with an arrangement in which infrequent users subsidize high-demand households. It would not have been the first time: similar subsidies occur all the time in technology, for example different SKUs for software, where enterprises pay far above cost to enable consumer versions to be sold at deep discounts.

Either way, the tacit agreement between subscribers and ISPs has continued. Until now. Just as the post-World War II euphoria over nuclear energy making electricity essentially free gave way to Cold War anxiety once the long-term problems were better understood, visions of exponentially improving bandwidth quickly evaporated. Unlike CPU and memory, bandwidth proved surprisingly resistant to Moore’s law. Broadband access by DSL or cable still costs about what it did several years ago, and while available network speeds have increased gradually, it is a far cry from the doubling-every-18-months rate that other components of the PC experienced.

The major disruption instead was the rise of new bandwidth-hungry applications, particularly those clamoring for upstream bandwidth. Parkinson’s law says that work expands to fill the time available; Internet applications did the same thing for bandwidth. Streaming video may have brought us to an inflection point. All-you-can-eat makes sense when the subsidies are reasonable, in other words when the expected range of consumption lies in a narrow band and the difference between the heaviest users and less demanding ones is small. (That difference is a proxy for the amount of subsidization going on: less frequent users are missing out on that much value, and heavy users get a corresponding free ride.) In the good old days of narrowband, the difference between the Internet addicts and infrequent users may have been insignificant. Today the difference between checking email and streaming a Netflix movie can be two orders of magnitude.
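A rough back-of-the-envelope calculation shows where the two orders of magnitude come from; the rates below are illustrative assumptions, not measured data from any ISP.

```python
# Comparing a light and a heavy subscriber. All figures are assumed.
EMAIL_KBPS = 20      # checking email: brief bursts, ~20 kbps averaged over a session
STREAM_KBPS = 3000   # streaming a standard-definition movie: ~3 Mbps sustained

ratio = STREAM_KBPS / EMAIL_KBPS
print(f"Streaming vs. email bandwidth: {ratio:.0f}x")  # ~150x, two orders of magnitude

# Over a month (two active hours a day) the gap in bytes is just as stark:
seconds = 2 * 30 * 3600
email_gb = EMAIL_KBPS * seconds / 8 / 10**6
stream_gb = STREAM_KBPS * seconds / 8 / 10**6
print(f"Monthly usage: {email_gb:.1f} GB (email) vs {stream_gb:.0f} GB (streaming)")
# -> roughly 0.5 GB vs 81 GB
```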

It’s clear that ISP networks are oversubscribed: there is not enough capacity to deliver 10 Mbps to every user at the same time, even though that is the advertised service level. As long as the average demand stays below some threshold, everyone is happy. That situation calls for a mix of connection profiles: some idling, others engaged in less bandwidth-intensive tasks, and another fraction going full throttle. When more subscribers start maxing out their usage and the disparity in consumption grows, the flat pricing model cannot survive. Not surprisingly for a telco, Comcast tried to solve this problem in the crudest and most heavy-handed way, by trying to “take out” one protocol and suppress demand. Equally predictably, it only dug itself into a deeper hole, sparking a new round of debate on network neutrality and even stirring the government into action.
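The arithmetic behind oversubscription is simple; here is a sketch with assumed figures (the link capacity and subscriber count are invented for illustration):

```python
# Oversubscription on a shared segment serving a neighborhood of subscribers.
ADVERTISED_MBPS = 10     # per-subscriber advertised rate
SHARED_LINK_MBPS = 1000  # assumed actual capacity of the shared segment
SUBSCRIBERS = 500

oversubscription = SUBSCRIBERS * ADVERTISED_MBPS / SHARED_LINK_MBPS
print(f"Oversubscription ratio: {oversubscription:.0f}:1")  # 5:1

# The model only works while average demand stays under capacity:
avg_demand_mbps = SHARED_LINK_MBPS / SUBSCRIBERS
print(f"Sustainable average demand: {avg_demand_mbps} Mbps per subscriber")
# If even a quarter of subscribers run at full rate simultaneously,
# demand (125 x 10 = 1250 Mbps) exceeds the 1000 Mbps link.
```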

Future predictions? Instituting pay-as-you-go may be a challenge, even where it is the most efficient allocation of bandwidth, because customers are used to the flat fee structure. Instead we might expect two things. The first is a global cap on the amount of bandwidth available per month, similar to wireless plans, with overage charges or reduced service levels when the cap is reached. The second is an increasing number of service tiers: for example, a “file-sharing plan” (obviously named something more palatable) may offer higher upstream bandwidth and greater caps. All of these are consistent with network neutrality: the subscriber gets an allotment of bandwidth, in terms of the maximum available rate sustained over a period of time and perhaps a total for the month. The user is free to exercise this bandwidth any way they choose: any protocol, any website, any time, without interference from the ISP. Limitations imposed on exceeding the expected demand level are transparent and fixed in advance. More importantly, the customer can opt for the next service tier when necessary.
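A billing scheme along these lines stays protocol-blind by construction. Here is a minimal sketch; all plan names, caps, and prices are invented for illustration:

```python
from dataclasses import dataclass

# Hypothetical neutral service tiers: each is defined only by rate, cap
# and price, never by which protocols or sites the subscriber uses.
@dataclass
class Tier:
    name: str
    down_mbps: int
    up_mbps: int
    cap_gb: int
    monthly_fee: float
    overage_per_gb: float  # charge past the cap (an ISP could throttle instead)

TIERS = [
    Tier("basic", 5, 1, 100, 30.0, 1.00),
    Tier("heavy-upload", 10, 5, 400, 55.0, 0.50),  # the "file-sharing plan"
]

def monthly_bill(tier: Tier, usage_gb: float) -> float:
    """Bill from aggregate usage alone; limits are transparent and fixed in advance."""
    overage = max(0.0, usage_gb - tier.cap_gb)
    return tier.monthly_fee + overage * tier.overage_per_gb

# A subscriber who outgrows the basic cap can compare tiers and upgrade:
usage = 250
for tier in TIERS:
    print(tier.name, monthly_bill(tier, usage))
# basic: 30 + 150 * 1.00 = 180.0; heavy-upload: 55.0 with no overage
```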
