The Center for Education and Research in Information Assurance and Security (CERIAS)


Virtualization Is Successful Because Operating Systems Are Weak


It occurred to me that virtual machine monitors (VMMs) provide similar functionality to that of operating systems.  Virtualization supports functions such as these:

  1. Availability
    • Minimized downtime for patching OSes and applications
    • Restart a crashed OS or server

  2. Scalability
    • More or different images as demand changes

  3. Isolation and compartmentalization
  4. Better hardware utilization
  5. Hardware abstraction for OSes
    • Support legacy platforms

Compare it to the list of operating system duties:

  1. Availability
    • Minimized downtime for patching applications
    • Restart crashed applications

  2. Scalability
    • More or different processes as demand changes

  3. Isolation and compartmentalization
    • Protected memory
    • Accounts, capabilities

  4. Better hardware utilization (with processes)
  5. Hardware abstraction for applications
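For instance, the "restart crashed applications" duty above is something any current OS already supports through its process primitives. A minimal supervisor sketch in Python (the child commands are illustrative stand-ins for real services):

```python
import subprocess
import sys

def supervise(cmd, max_restarts=3):
    """Run cmd, restarting it after each non-zero exit, up to
    max_restarts restarts.  Returns the total number of attempts."""
    attempts = 0
    while True:
        attempts += 1
        if subprocess.run(cmd).returncode == 0:
            return attempts          # clean exit: stop supervising
        if attempts > max_restarts:
            return attempts          # give up after the restart budget

# A "crashing" child (exits non-zero) and a healthy one, simulated inline.
crashing = [sys.executable, "-c", "import sys; sys.exit(1)"]
healthy = [sys.executable, "-c", "pass"]

print(supervise(healthy))   # 1 attempt: exited cleanly on the first try
print(supervise(crashing))  # 4 attempts: 1 initial run + 3 restarts
```

This is exactly the service a VMM offers when it restarts a crashed guest, just one layer down.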

The similarity suggests that virtualization solutions compete with operating systems.  I now believe that part of their success stems from operating systems not satisfying these needs well enough, setting aside the capability to run legacy or entirely different operating systems simultaneously.  Typical operating systems fall short on security, reliability and ease of maintenance.  They run drivers in kernel space;  Windows Vista thankfully now runs some in user space, and Linux is moving in that direction.  The complexity is staggering, and it is reflected in the security guidance:  hardening guides and “benchmarks” (essentially evaluations of configuration settings) are long and complex.  The attempt to solve the federal IT maintenance and compliance problem produced the SCAP and XCCDF standards, which are currently ambiguously specified, buggy and very complex.  The result of all this is intensive, stressful and inefficient maintenance in an environment of numerous and unending vulnerability advisories and patches.
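To make the hardening-guide point concrete, here is the kind of single check such a benchmark automates, sketched in Python.  The specific check (world-writable files) is just one illustrative item out of the hundreds a real guide contains:

```python
import os
import stat
import tempfile

def world_writable(path):
    """Return True if any user on the system can modify the file --
    a typical single item on a hardening checklist."""
    return bool(os.stat(path).st_mode & stat.S_IWOTH)

# Demonstrate on a temporary file whose permissions we control.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o644)
print(world_writable(path))  # False: only the owner can write
os.chmod(path, 0o666)
print(world_writable(path))  # True: this file would be flagged
os.unlink(path)
```

Multiply this by every setting a benchmark covers, on every OS image, and the maintenance burden described above follows.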

What it looks like is that we have sinking boats, so we’re putting them inside a bigger, more powerful boat: virtualization.  In reality, virtualization typically depends on another, full-blown operating system.
VMware ESX Server runs its own OS with its own drivers.  Xen and offerings based on it have a full, general-purpose OS in domain 0, in command and control of the VMM (notwithstanding disaggregation).  Microsoft’s “Hyper-V” requires a full-blown Windows operating system to run.  So what we’re really doing is exchanging an untrusted OS for another one that we are supposed to trust more, for some reason.  That other OS also needs patches, configuration and maintenance.  Now we have multiple OSes to maintain!  What did we gain?  We don’t trust OSes, but we trust “virtualization” that depends on more OSes?  At least ESX is “only” 50 MB, simpler and smaller than the others, but the number of defects per MB of binary code, as measured by patches issued, is not convincing.

I’m no longer convinced that a virtualization solution plus a guest OS is significantly more secure or functional than a single well-designed OS could be, in theory.  Defense in depth is good, but the extent of the spread of virtualization may be an admission that we don’t trust operating systems enough to let them stand on their own.  The practice of wiping and reinstalling an OS after an application or account is compromised, or of deploying a new image by default, suggests that there is little trust in the depth provided by current OSes.

As for ease of management and availability versus patching, I don’t see why operating systems could not be managed just as intelligently as ESX is, migrating applications as necessary.  ESX is an operating system anyway…  I believe that all the special things a virtualization solution does for functionality and security, as well as the “new” opportunities being researched, could be done just as well by a trustworthy, properly designed OS;  there may be a thesis or two in figuring out how to implement them back inside an operating system.

What virtualization vendors are really selling is a clever way to smoothly replace one operating system with another.  This may be how an OS monopoly could be dislodged, and it would perhaps explain the virtualization-unfriendly clauses in the licensing options for Vista:  virtualization could become a threat to the dominance of Windows if application developers started coding for the underlying OS instead of the guest.  Of course, even with a better OS we’d still need virtualization for testbeds like ReAssure, and for legacy applications.  Perhaps ReAssure could help test new, better operating systems.

(This text is the essence of my presentation in the panel on virtualization at the 2008 CERIAS symposium.)

Related reading:
Heiser G. et al. (2007) Towards trustworthy computing systems: Taking microkernels to the next level.  ACM SIGOPS Operating Systems Review, 41(4)
Tanenbaum A.S., Herder J.N. and Bos H. (2006) Can we make operating systems reliable and secure?  IEEE Computer, 39(5)


Posted by Segnalazioni sparse
on Wednesday, March 26, 2008 at 12:15 PM

[...] what if the real reason for virtualization were that we don’t trust today’s operating systems? And meanwhile we run them on top of a [...]

Posted by Ryan
on Friday, March 28, 2008 at 04:11 AM

I think you’re spot-on. Something like Microsoft Research’s Singularity OS would be a far better solution than virtualization.

Unfortunately, no real, useful applications run on Singularity, or any other “future” secure and stable OS.

So Virtualization’s dominance is inevitable, because we have a lot of legacy stuff that needs to keep operating. Virtualization lets us do that with lots of flexibility and minimal hardware requirements.

It’s the difference between academic IT and real-world IT “we gotta provide this service to the business or our customers at this cost”.

Posted by Richard Bejtlich
on Tuesday, June 24, 2008 at 12:00 PM

Spaf, I totally agree.

Posted by Rob Lewis
on Thursday, June 26, 2008 at 09:47 AM


IT security has generally realized that it must head towards information-centric security. I do not see any advantages in virtual machines for this purpose. In fact, it is hard enough to achieve granular access controls on a per user level at the data file level as it is, without having users run rogue virtual machines and doing whatever they may, with data.

Any thoughts?

Posted by Ivan
on Thursday, June 26, 2008 at 08:04 PM

You incorrectly state “VMWare ESX Server runs its own OS with drivers.”



Posted by Allen Baranov
on Thursday, July 3, 2008 at 08:29 AM

My answer is on my Security Thoughts blog.

I basically agree with you and take it one step further.

In the 90s we *had* different services running on one box!

Posted by Richard Bejtlich
on Friday, July 18, 2008 at 12:12 PM

Ivan, ESX != Linux.  You don’t know what you’re talking about.  Try talking to one of the VMware developers.

Posted by Ivan
on Friday, July 18, 2008 at 08:54 PM

Allen - did you bother to read the link I provided?

If it wasn’t true, you can bet a commercial company like VMware would have been all over him, and yet the blog stands, and it’s a comprehensive blog. Ask a VMware developer, like this dude who begs for help…

Do VMware know about this? They should: they employ a number of developers who work on the Linux kernel. And they do: witness the following from last August - replying to a post from VMware’s Zachary Amsden to the Linux kernel mailing list, Hellwig wrote to Zachary:

‘Until you stop violating our copyrights with the VMWare ESX support nothing is going to be supported. So could you please stop abusing the Linux code illegally in your project so I don’t have to sue you, or at least piss off and don’t expect us to support you in violating our copyrights.’

Posted by Pascal Meunier
on Friday, July 18, 2008 at 09:36 PM

ESX has its own microkernel, vmkernel, which is in control, and its own drivers, forming the base of a modern microkernel OS.  It uses the Linux kernel mostly for convenience in booting and other tasks, more or less transforming it into a library.  I suggest that the Wikipedia entry, which currently agrees with the gist of what I said, should first benefit from your advocacy.  Thanks for having presented another point of view.

Posted by cebeci
on Wednesday, December 17, 2008 at 09:53 PM

Unfortunately, no real, useful applications run on Singularity, or any other “future” secure and stable OS..

Posted by Allen Baranov
on Friday, December 19, 2008 at 01:52 AM


I think you were referring to Richard’s comment. (If you got confused, I don’t blame you: the “Posted By” lines are pretty confusing in that they seem to belong to the *next* comment, a case of fancy rather than useful.)

My comment backs up the general idea of this article in that we are really just taking multiple services, each running on their own virtual OS and putting them on one box. In the past we’d just run all the services on one box, with one OS, no virtualisation and be done with it.

Posted by Matt
on Tuesday, January 13, 2009 at 09:23 PM

Interesting post.  Virtualization is very important to us as a web host.
