This raises the question of whether it is possible to design systems such that even the manufacturer cannot deliver malicious updates after the fact. “After the fact” is the operative phrase- the manufacturer always has the option of delivering a pre-compromised machine. So a more realistic line to hold is making sure that a vendor that started out honest cannot later change its mind, regardless of the reason. It could be that the vendor itself has gone rogue- the way SourceForge started injecting spyware into binaries after a “change of business model.” Perhaps the HR department hired a corrupt insider- more than a few developers have been caught sneaking Bitcoin-mining software into their company’s applications. Or it could be the situation Apple faces: a legal order compelling access to specific user data. Varied as the motivations are, from a security perspective they are equivalent. The question is properly framed as one of capability- can the vendor do this?- rather than one of intention (“we promise we would never do this to our users!”), political inclination or the creativity of the legal department in pushing back against subpoenas.
From a technology perspective, this problem is non-trivial. Most systems have a “God mode” as part of their security design which grants full control over the system. This role is exempt from the usual security checks that the system diligently applies to all other actions. The answer to the access-control question “is this person allowed to read/write/modify this piece of data?” is an unqualified yes for that role. Unix has root; Windows has administrator. Over time, changes to operating systems have tried to limit the capabilities of these roles. For example, 64-bit versions of Windows prevent even administrator accounts from running arbitrary code in kernel mode by requiring signed drivers. But admins with physical access can still override such restrictions. Meanwhile the introduction of software-update mechanisms brought yet another cook into the kitchen: the operating system vendor. Taking the example of Windows, MSFT can remotely update operating system components, including the kernel itself, on modern versions of the OS. That was not always the case, and users can still opt out, but the history of Windows Update shows a very clear progression: what started out as a convenience feature for the minority of users who cared to pull updates morphed into a powerful large-scale distribution channel for pushing updates to everyone by default. Since MSFT can now silently update Windows with arbitrary code of its choosing, it effectively has administrator access to all machines running recent versions of that OS. (Note that code-signing has no effect on this capability, although it creates a deterrent. Updates have to be signed, but MSFT is just as capable of signing a malicious binary as it is of signing a legitimate OS update intended for public consumption. That signature does, however, provide compelling evidence of culpability if the system is later examined forensically.)
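That point about code-signing can be made concrete with a toy sketch. This is not any real update client’s logic- the names are hypothetical, and an HMAC from Python’s standard library stands in for the vendor’s asymmetric signing key (a real scheme would use RSA or Ed25519). The client check answers only “was this signed by the holder of the vendor key?”- it cannot distinguish a legitimate patch from a malicious payload signed with the same key.

```python
import hmac
import hashlib

# Hypothetical stand-in for the vendor's private signing key (held only by
# the vendor; the device ships with the corresponding verification material).
VENDOR_SIGNING_KEY = b"vendor-private-key"

def sign_update(key: bytes, update_blob: bytes) -> bytes:
    """Vendor side: produce a signature over an update package."""
    return hmac.new(key, update_blob, hashlib.sha256).digest()

def verify_update(key: bytes, update_blob: bytes, signature: bytes) -> bool:
    """Device side: accept any update bearing a valid vendor signature."""
    expected = hmac.new(key, update_blob, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

legit = b"security patch for kernel bug"
malicious = b"targeted backdoor for one user"

# Both packages pass verification, because both were signed with the same
# vendor key: the check enforces *who* signed, not *what* was signed.
print(verify_update(VENDOR_SIGNING_KEY, legit,
                    sign_update(VENDOR_SIGNING_KEY, legit)))
print(verify_update(VENDOR_SIGNING_KEY, malicious,
                    sign_update(VENDOR_SIGNING_KEY, malicious)))
```

What the signature does buy is the forensic trail described above: a malicious update recovered from a compromised device carries a signature only the vendor could have produced.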
In short, while modern OS designs attempt to tame the old-school “root” account in the name of least privilege, they have introduced an even more powerful role with remote access. The situation is worse on mobile devices. Android does not give the user root access by default; you have to “root” your device- the equivalent of jailbreaking an iPhone- to earn that capability. Google, on the other hand, retains the ability to push updates to the operating system running at root privilege. The power dynamics have been inverted: an all-powerful remote entity, a highly constrained local user.
To be clear, this notion of an anything-goes account exempt from the usual access-control restrictions is very useful. Being able to tweak every knob and update every last component in the system is essential for improving functionality over time. Otherwise bugs could not be fixed, and one would have to purchase a brand-new PC each time they ran into a critical bug deep in the operating system code itself. A platform shipped in a permanently “fused” state, stuck with its initial software and no ability to deliver future enhancements, is a non-starter.* But when the OS itself is responsible for enforcing aspects of security policy- such as who gets to read data residing on an iPhone- unchecked update capability translates into an exemption from previously defined security restrictions.
So is there a middle ground? The ideal design would allow delivering new functionality over time (so users are not stuck with the hardware as they purchased it on day one) minus the ability to use the update channel for subverting previously defined security policies. It’s easy to craft theoretical designs, but it is more instructive to look at deployed systems. It turns out that an architecture originally intended for managing smart cards has exactly this characteristic. More surprisingly, the latest iPhone and some Android devices already include a separate piece of hardware, called the embedded secure element, which obeys that architecture. It’s called GlobalPlatform.
[continued in part II]
* Interestingly, that describes the state of many Android devices- not for lack of an auto-update mechanism, which certainly exists in Android, but because of the inability or unwillingness of wireless carriers to leverage that channel for delivering updates.