CERIAS Blog

New Record for the Largest CVE Entry

Last week my script that processes and logs daily CVE changes broke.  It truncated inputs larger than 16000 bytes, because I believed that no CVE entry should ever be that large, and that any entry exceeding that size indicated some sort of trouble.  Guess what…  The entry for CVE-2006-4339 reached 16941 bytes, with 352 references.  This is an OpenSSL issue, and it highlights how much we depend on that library.  It’s impressive work from MITRE’s CVE team to locate and keep track of all these references.
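
For illustration only, here is a minimal Python sketch of the kind of sanity check that avoids this failure mode: it warns about oversized entries instead of truncating them.  The 16000-byte threshold and the one-record-per-line format are my assumptions for the sketch, not details of the actual script.

    import sys

    MAX_EXPECTED_BYTES = 16000  # assumed threshold; no CVE entry was expected to exceed it

    def process_record(record):
        """Log a CVE record; warn, rather than truncate, when it is unusually large."""
        size = len(record.encode("utf-8"))
        if size > MAX_EXPECTED_BYTES:
            # Flag for review instead of silently dropping data: oversized entries
            # such as CVE-2006-4339 (16941 bytes, 352 references) can be legitimate.
            print("warning: entry is %d bytes, above the expected maximum" % size,
                  file=sys.stderr)
        sys.stdout.write(record)  # always keep the full record

    if __name__ == "__main__":
        for line in sys.stdin:
            process_record(line)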

Virtualization Is Successful Because Operating Systems Are Weak

It occurred to me that virtual machine monitors (VMMs) provide similar functionality to that of operating systems.  Virtualization supports functions such as these:

  1. Availability
    • Minimized downtime for patching OSes and applications
    • Restart a crashed OS or server

  2. Scalability
    • More or different images as demand changes

  3. Isolation and compartmentalization
  4. Better hardware utilization
  5. Hardware abstraction for OSes
    • Support legacy platforms

Compare it to the list of operating system duties:

  1. Availability
    • Minimized downtime for patching applications
    • Restart crashed applications

  2. Scalability
    • More or different processes as demand changes

  3. Isolation and compartmentalization
    • Protected memory
    • Accounts, capabilities

  4. Better hardware utilization (with processes)
  5. Hardware abstraction for applications

The similarity suggests that virtualization solutions compete with operating systems.  I now believe that part of their success must be that operating systems do not satisfy these needs well enough, even setting aside the capability to run legacy or entirely different operating systems simultaneously.  Typical operating systems lack security, reliability and ease of maintenance.  They have drivers in kernel space;  Windows Vista thankfully now runs some of them in user space, and Linux is moving in that direction.  The complexity is staggering.  This is reflected in the security guidance:  hardening guides and “benchmarks” (essentially evaluations of configuration settings) are long and complex.  The attempt to solve the federal IT maintenance and compliance problem produced the SCAP and XCCDF standards, which are currently ambiguously specified, buggy and very complex.  The result of all this is intensive, stressful and inefficient maintenance in an environment of numerous and unending vulnerability advisories and patches.

What it looks like is that we have sinking boats, so we’re putting them inside a bigger, more powerful boat:  virtualization.  In reality, virtualization typically depends on another full-blown operating system.
VMware ESX Server runs its own OS with drivers.  Xen and offerings based on it have a full, general-purpose OS in domain 0, in command and control of the VMM (notwithstanding disaggregation).  Microsoft’s “Hyper-V” requires a full-blown Windows operating system to run it.  So what we’re really doing is exchanging an untrusted OS for another one that we are supposed to trust more, for some reason.  This other OS also needs patches, configuration and maintenance.  Now we have multiple OSes to maintain!  What did we gain?  We don’t trust OSes, but we trust “virtualization” that depends on more OSes?  At least ESX is “only” 50 MB, simpler and smaller than the others, but its number of defects per MB of binary code, as measured by patches issued, is not convincing.

I’m now not convinced that a virtualization solution plus a guest OS is significantly more secure or functional than a single well-designed OS could be, in theory.  Defense in depth is good, but the extent of the spread of virtualization may be an admission that we don’t trust operating systems enough to let them stand on their own.  The practice of wiping and reinstalling an OS after an application or account is compromised, or of deploying a new image by default, suggests that there is little trust in the depth provided by current OSes.

As for ease of management and availability despite patching, I don’t see why operating systems could not be managed just as intelligently as ESX is, migrating applications as necessary.  ESX is an operating system anyway…  I believe that all the special things a virtualization solution does for functionality and security, as well as the “new” opportunities being researched, could be done just as well by a trustworthy, properly designed OS;  there may be a thesis or two in figuring out how to implement them back in an operating system.

What virtualization vendors are really offering is a clever way to smoothly replace one operating system with another.  This may be how an OS monopoly could be dislodged, and it would perhaps explain the virtualization-unfriendly clauses in the licensing options for Vista:  virtualization could become a threat to the dominance of Windows if application developers started coding for the underlying OS instead of the guest.  Of course, even with a better OS we would still need virtualization for testbeds like ReAssure and for legacy applications.  Perhaps ReAssure could help test new, better operating systems.
(This text is the essence of my presentation in the panel on virtualization at the 2008 CERIAS symposium.)

Related reading:
Heiser G et al. (2007) Towards trustworthy computing systems: Taking microkernels to the next level.  ACM Operating Systems Review, 41
Tanenbaum AS, Herder JN and Bos H (2006) Can we make operating systems reliable and secure?  Computer, 39

Open Source Outclassing Home Router Vendor’s Firmware

I’ve had an interesting new experience these last few months.  I was faced with a choice:  return a home wireless router yet again and try a different model or brand, or try an open source firmware replacement.  If one is to believe the reviews on sites like Amazon and Newegg, all home wireless routers have significant flaws, so the return-and-exchange game could have gone on for a while.  The second Linksys device I bought (the most expensive on the display!) had the QoS features I wanted, but it crashed every day and had to be rebooted, even with the latest vendor-provided firmware.  It was hardly better than the Verizon-provided Westell modem, which sometimes had to be rebooted several times per day despite having simpler firmware.  That was an indication of poor code quality, and quite likely of security problems (beyond the obvious availability issues).

I then heard about DD-WRT, an alternative firmware released under the GPL.  There are other alternative firmwares as well, but I chose this one simply because it supported the Linsys router;  I’m not sure which of the alternatives is the best.  For several months now, not only has the device demonstrated 100% availability with v.24 (RC5), but it supports more advanced security features and is more polished.  I expected difficulties because it is beta software, but had none.  Neither CERIAS or I are endorsing DD-WRT, and I don’t care if my home router is running vendor-provided or open source firmware, as long as it is a trustworthy and reliable implementation of the features I want.  Yet, I am amazed that open source firmware has outclassed firmware for an expensive (for a home router) model of a recognized and trusted brand.  Perhaps home router vendors should give up their proprietary, low-quality development efforts, and fund or contribute somehow to projects like DD-WRT and install that as default.  A similar suggestion can be made if the software development is already outsourced.  I believe that it might save a lot of grief to their customers, and lower the return rates on their products.