The highly anticipated panel ended up taking a different turn. My colleague Tavis from Google could not attend, leaving DJ Capelis from UCSD as the only other skeptical voice to point out the risks of virtualization. (Recap: last year Tavis found several problems in the qemu, Virtual PC/Server and VMware virtualization platforms.) Intel was represented, and so was AMD by John Wiederhirn; Tal Garfinkel attended for VMware, completing the viewpoints at the table: hardware, virtualization platform and security research.
Most of the discussion implicitly focused on the server consolidation scenario, without spelling out the other uses of virtualization. Briefly, the consolidation scenario is about replacing multiple physical machines with a single powerful box that runs one VM with the equivalent OS/software configuration for each machine displaced. It sounds like rearranging deck chairs, but in fact this is a major cost-saving opportunity for enterprise IT departments. A single powerful, expensive server hosting N virtual machines is far easier to maintain than N low-end servers each running a different configuration, and in the long run the cost of maintenance dominates the original purchase cost of the hardware. Full machine virtualization creates new opportunities because it allows very clean consolidation of applications that could not otherwise live on the same bare metal: for example, a legacy W2K3 line-of-business app alongside a new W2K8 terminal server, or even Linux and Windows coexisting side by side.
This is the most commercially viable market for virtualization. VMware has been leading the charge, with MSFT giving chase via Virtual Server R2 and the upcoming hypervisor in W2K8. But focusing on it alone skewed the discussion, setting the stage for a predictable debate around trade-offs. Separate hardware is an isolation boundary: it keeps different applications from interfering with each other, accidentally or by malicious logic. Virtualization is another one, as are operating system processes, BSD jails, etc. Each has an assurance level from a security perspective, or equivalently an attack surface. Server consolidation with a VMM changes the isolation boundary and creates new attack surface. There may be new channels for one VM to attack another when both run on the same bare metal, whereas with separate boxes an attack would have been confined to the network, shared storage, etc. Quantifying that incremental risk, and sharing opinions on whether it is a reasonable trade-off, fueled much of the debate.
This is a comparison virtualization cannot win on the single dimension of risk. Considering the extent to which VMs are used for malware research and quarantining untrusted code, it is surprising that other applications were not considered. The flip side of consolidation is sandboxing: moving applications that run inside the same trust boundary into different VMs is a corresponding improvement, although the extent to which it improves security is again debatable and dependent on the quality of the implementation.
As a side note, the moderator raised a point about reduced customer choice: with individual machines, one can buy a network switch from any of several vendors; once the switch's functionality is subsumed into the virtualization software stack, that choice goes away.