The recent furor over Android 4.4.2 removing AppOps is a great example of Lawrence Lessig’s idea of regulation-by-code, where design decisions in technology are used to promote the interests of one particular group (in this case application developers) over another, disfavored group, namely users.
First, some background on this debacle. Android implements a relatively coarse-grained permission model controlling what installed applications can do on the device. For example, an app can be prevented from reading the address book, making phone calls, sending text messages, accessing the camera/microphone or using Bluetooth. When this model debuted circa 2007, it represented a significant advance over the security models implemented in off-the-shelf consumer devices at the time.
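Concretely, an application declares these permissions up front in its manifest, and that list is what the user is shown at install time. A minimal sketch (the package name is hypothetical; the permission names come from the standard android.permission namespace):

```xml
<!-- AndroidManifest.xml: permissions are declared statically, up front. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.demoapp">  <!-- hypothetical package -->
    <!-- Each line below becomes an item on the install-time approval screen. -->
    <uses-permission android:name="android.permission.READ_CONTACTS" />
    <uses-permission android:name="android.permission.SEND_SMS" />
    <uses-permission android:name="android.permission.CAMERA" />
    <uses-permission android:name="android.permission.BLUETOOTH" />
</manifest>
```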
Sure, the academic literature did not lack for examples of elaborate fine-grained sandboxing schemes that could control not only whether an application could access local storage but exactly which files it could read or write. Yet the state of the art for PCs and Macs continued to rely on user accounts as the fundamental security boundary. The assumption was that any application a user runs automatically inherits all access rights available to that user. For example, when a Windows application is installed and run, it executes with the full privileges associated with that account. (In fact, since installers typically required administrator privileges, the application had free rein over the machine at least for a limited period of time.) User Account Control in Vista, the first attempt to challenge that assumption, ended in disaster with its often-ridiculed elevation prompt constantly asking users to make security decisions.
Luckily Android took a different route: applications declare the permissions they require to function, and the user reviews and approves them once, at installation time. Beyond that there are very few situations in which consumers are confronted with security prompts. (Authenticating with user credentials saved on the device being the primary exception.)
But there is a difference between mechanism and policy. While Android allowed specifying permissions at a granular level, it did not allow users to assign permissions at will. There was a very simple negotiation model. The author of the application specified exactly which permissions were required. The user either agreed and installed that application, or they disagreed and walked away. In other words: take it or leave it. This was clearly not a case of “meaningful choice,” in the same way that passengers can opt out of TSA security screening if they agree not to board the airplane. (Strictly speaking if one does not board the flight, they are not a passenger.)
Many people, both inside and outside Google, wondered out loud why Android did not give users more control. For example, why not allow an application to be installed but strip some egregious privilege requested by the developer? This provides the best of both worlds for the user: she gets the benefit of running the application and avoids the potential privacy harms caused by granting the dangerous permission.
And that is exactly what AppOps offered. A hidden feature inside the familiar Android Settings application, it allowed users to view and edit permissions granted to each application.
Too powerful for its own good, apparently. Not long after EFF wrote a glowing review calling it “awesome privacy tools” and guiding Android users to third-party applications for unleashing this hidden functionality, a minor system update to Android quietly removed the feature completely. Before, the functionality was present but hidden: not exposed directly through the standard Android user interface, suggesting that perhaps Google did not consider it ready for prime time or only intended it for internal experiments. After the update it was gone for good, with work-arounds available only on rooted devices.
So what was the problem with AppOps? This question can be answered on two levels: technology and business.
Privacy-infringement apologists may point out that allowing applications to be installed with a subset of requested permissions would greatly increase the complexity of writing applications. Instead of a simple model where the application either never gets to run or runs with all requested permissions, the developer has to worry about in-between states where some operation may fail because the user denied that permission. At an implementation level, a denied permission surfaces as an exception in code, so the developer is stuck writing additional code to catch this error condition and respond appropriately. (Typically one of: exit the application in disgust, shame the user into granting the missing permission, or move on without the requested information.)
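The extra burden on the developer can be sketched in plain Java. Everything here is a hypothetical stand-in rather than the Android API — `ContactStore` and `readContacts` are invented for illustration — but the shape matches how Android behaves: a denied permission surfaces as a `SecurityException` that the caller must now anticipate.

```java
import java.util.Collections;
import java.util.List;

public class PermissionDemo {

    // Hypothetical stand-in for a platform contacts API; not an Android class.
    public static class ContactStore {
        private final boolean contactsPermissionGranted;

        public ContactStore(boolean granted) {
            this.contactsPermissionGranted = granted;
        }

        public List<String> readContacts() {
            if (!contactsPermissionGranted) {
                // On Android, calling a protected API without the permission
                // throws a SecurityException; we simulate that here.
                throw new SecurityException("READ_CONTACTS denied");
            }
            return List.of("Alice", "Bob");
        }
    }

    // The additional code the developer is now "stuck writing": catch the
    // denial and pick one of the responses described above. Here we take
    // option three: move on without the requested information.
    public static List<String> contactsOrEmpty(ContactStore store) {
        try {
            return store.readContacts();
        } catch (SecurityException denied) {
            return Collections.emptyList();
        }
    }

    public static void main(String[] args) {
        System.out.println(contactsOrEmpty(new ContactStore(true)));  // [Alice, Bob]
        System.out.println(contactsOrEmpty(new ContactStore(false))); // []
    }
}
```

The point is not that the catch block is hard to write; it is that every permission-gated call site in the application now needs one.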
Why not fabricate data?
There is a simple counter-argument to that objection: if the user declined permission, let’s not trigger a fatal error but instead return bogus or empty data. The application wants to learn the current location but the user does not want to be tracked? Have “GPS” report a random location around the globe. Developer wants a list of all the contacts in the address book? Fabricate “John Doe” friends or report that this user has no contacts. Developer is asking for Bluetooth access? Appear to grant access, but pretend there are no other Bluetooth devices paired to connect to.
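The fabrication approach can be sketched the same way. Again, all names here (`FabricateDemo`, `currentLocation`, `contacts`) are hypothetical illustrations, not a real API: the idea is simply that a denied request returns plausible-but-bogus data instead of throwing, so the application keeps running while learning nothing real.

```java
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class FabricateDemo {
    private static final Random RNG = new Random();

    // When location is denied, "GPS" reports a random point anywhere on
    // the globe instead of raising an error.
    public static double[] currentLocation(boolean locationGranted) {
        if (locationGranted) {
            return new double[] {37.42, -122.08}; // made-up "real" fix
        }
        double lat = RNG.nextDouble() * 180.0 - 90.0;   // range -90..90
        double lon = RNG.nextDouble() * 360.0 - 180.0;  // range -180..180
        return new double[] {lat, lon};
    }

    // When contacts access is denied, report an empty address book
    // rather than failing the call.
    public static List<String> contacts(boolean contactsGranted) {
        return contactsGranted ? List.of("Alice", "Bob")
                               : Collections.emptyList();
    }

    public static void main(String[] args) {
        double[] fake = currentLocation(false);
        System.out.println("fake fix: " + fake[0] + ", " + fake[1]);
        System.out.println("contacts: " + contacts(false)); // contacts: []
    }
}
```

From the application's point of view nothing failed, which is exactly why this design spares developers the error-handling burden described earlier.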