Posts tagged virtualization

Thoughts on Virtualization, Security and Singularity

The “VMM Detection Myths and Realities” paper has been heavily reported and discussed before.  It considers whether a theoretical piece of software could detect if it is running inside a Virtual Machine Monitor (VMM).  An undetectable VMM would be “transparent”.  Many arguments are made against the practicality or the commercial viability of a VMM that could provide performance, stealth and reproducible, consistent timings.  The arguments are interesting and reasonably convincing that it is currently infeasible to absolutely guarantee undetectability. 

However, I note that the authors are arguing from essentially the same position as atheists arguing that there is no God.  They argue that the existence of a fully transparent VMM is unlikely, impractical, or would require an absurd amount of resources, both in hardware and in software development effort.  This is reasonable, because the VMM has to fail only once at preventing detection, there are many ways in which it can fail, and preventing each kind of detection is complex.  However, this is not a hermetic, formal proof that a transparent VMM is impossible and cannot exist;  a new breakthrough technology, or some god-like “alien science-fiction” technology, might make it possible. 

Then the authors argue that with the spread of virtualization, it will become a moot point for malware to try to detect whether it is running inside a virtual machine.  One might be tempted to remark that this argument cuts both ways:  doesn’t it also make it a moot point for an operating system or a security tool to try to detect whether it is running inside a malicious VMM? 

McAfee’s “secure virtualization”
The security seminar by George Heron answers some of the questions I was asking at last year’s VMworld conference, and elaborates on what I had in mind then.  The idea is to integrate security functions within the virtual machine monitor.  Malware nowadays prevents the installation of security tools and interferes with them as much as possible.  If malware is successfully confined inside a virtual machine, and the security tools are operating from outside that scope, this could make it impossible for an attacker to disable security tools.  I really like that idea. 
 
The security tools could reasonably expect to run directly on the hardware or with an unvirtualized host OS.  Because of this, VMM detection isn’t a moot point for the defender.  However, the presentation did not discuss whether the McAfee security suite would attempt to detect whether the VMM itself had been virtualized by an attacker.  Also, would it be possible to detect a “bad” VMM if the McAfee security tools themselves run inside a virtualized environment on top of the “good” VMM?  Perhaps it would need more hooks into the VMM to do this.  Many, in fact, to attempt to catch all the possible ways in which a malicious VMM can fail to hide itself properly.  What is the cost of all these detection attempts, which must be executed regularly?  Isn’t it prohibitive, making strong detection of a malicious VMM impractical?  In the end, I believe this may be yet another race that depends on how much effort each side is willing to put into cloaking and detection.  Practical detection is almost as hard as practical hiding, and the detection cost has to be paid on every machine, all the time.
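
To make that cost argument a little more concrete, here is a minimal sketch (in C, for Linux/x86 with GCC) of one such check:  timing the CPUID instruction, which a VMM has to intercept and therefore slows down.  The loop count and threshold are arbitrary illustrative values, and a careful malicious VMM could skew the timestamp counter, so this is just one of the many imperfect checks a detector would have to keep paying for.

    #include <stdint.h>
    #include <stdio.h>

    /* Read the CPU timestamp counter. */
    static inline uint64_t rdtsc(void) {
        uint32_t lo, hi;
        __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
    }

    /* Execute CPUID, which traps to the VMM when virtualized. */
    static inline void cpuid(uint32_t leaf) {
        uint32_t a, b, c, d;
        __asm__ __volatile__("cpuid"
                             : "=a"(a), "=b"(b), "=c"(c), "=d"(d)
                             : "a"(leaf));
    }

    int main(void) {
        const int runs = 100000;
        uint64_t start = rdtsc();
        for (int i = 0; i < runs; i++)
            cpuid(0);
        uint64_t avg = (rdtsc() - start) / runs;

        printf("average cpuid latency: %llu cycles\n", (unsigned long long)avg);
        /* Illustrative threshold only: bare metal is typically hundreds of
           cycles, a trap to a VMM typically thousands. */
        puts(avg > 1000 ? "timing suggests a VMM" : "no obvious VMM overhead");
        return 0;
    }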


Which Singularity?
Microsoft’s Singularity project attempts to create an OS and execution environment that is secure by design and simpler.  What strikes me is how it resembles the “white list” approach I’ve been talking about.  “Singularity” is about constructing secure systems with statements (“manifests”) in a provable manner.  It states what processes do and what may happen, instead of focusing on what must not happen. 

Last year I thought that virtualization and security could provide a revolution;  now I think it’s more of the same “keep building defective systems and defend them vigorously”, just somewhat stronger.  Even though I find the name somewhat arrogant, “Singularity” suggests a future for security that is more attractive and fundamentally more stable than yet another arms race.  In the meantime, though, “secure virtualization” should help, and we can expect lots of marketing about it.

Complexity, virtualization, security, and an old approach

One of the key properties that works against strong security is complexity.  Complexity poses problems in a number of ways.  The more complexity in an operating system, for instance, the more difficult it is for those writing and maintaining it to understand how it will behave under extreme circumstances.  Complexity makes it difficult to understand what is needed, and thus to write fault-free code.  Complex systems are more difficult to test and prove properties about.  Complex systems are more difficult to patch properly when faults are found, usually because of the difficulty of ensuring that there are no side-effects.  Backdoors and trojan code implanted in complex systems are more difficult to find.  Complex operations tend to have more failure modes.  Complex operations may also have longer windows during which race conditions can be exploited.  Complex code also tends to be bigger than simple code, and that means more opportunity for accidents, omissions and manifestations of coding errors.

Put simply, complexity creates problems.

Saltzer and Schroeder identified this in their classic paper, “The Protection of Information in Computer Systems” (1975), which lists “economy of mechanism” first among their design principles for secure systems.

Some of the biggest problems we have now in security (and arguably, computing) are caused by “feature creep” as we continue to expand systems to add new features.  Yes, those new features add new capabilities, but often the additions are foisted off on everyone whether they want them or not.  Thus, everyone has to suffer the consequences of the next expanded release of Linux, Windows (Vista), Oracle, and so on.  Many of the new features are there as legitimate improvements for everyone, but some are of interest to only a minority of users, and others are simply there because the designers thought they might be nifty.  And besides, why would someone upgrade unless there were lots of new features?

Of course, this has secondary effects on complexity in addition to the obvious complexity of a system with new features.  One example has to do with backwards compatibility.  Because customers are unlikely to upgrade to the new, improved product if it means they have to throw out their old applications and data, the software producers need to provide extra code for compatibility with legacy systems.  This is often not straightforward, and it adds new complexity of its own.

Another form of complexity has to do with hardware changes.  The increase in software complexity has long been a motivating factor for hardware designers.  Back in the 1960s, when systems began to support time sharing, virtual memory became a necessity, and the hardware mechanisms for page and segment tables needed to be designed into systems to maintain reasonable performance.  Now we have systems with more and more processes running in the background to support the extra complexity of our systems, so designers are adding extra processing cores and support for process scheduling.

Yet another form of complexity is involved with the user interface.  The typical user (and especially the support personnel) now has to master many new options and features, and understand all of their interactions.  This is increasingly difficult for someone of even above-average ability.  It is no wonder that the average home user has myriad problems using their systems!

Of course, the security implications of all this complexity have been obvious for some time.  Rather than address the problem head-on by reducing the complexity and changing development methods (e.g., using safer tools and systems, with more formal design), we have recently seen a trend towards virtualization.  The idea is that we confine our systems (operating systems, web services, databases, etc.) in a virtual environment supported by an underlying hypervisor.  If the code breaks…or someone breaks it…the virtualization contains the problems.  At least, in theory.  And now we have vendors providing chipsets with even more complicated instruction sets to support the approach.  But this is simply adding yet more complexity.  And that can’t be good in the long run.  Attacks have already been formulated to take advantage of these added “features.”

We lose many things as we make systems more complex.  Besides security and correctness, we also end up paying for resources we don’t use.  And we are also paying for power and cooling for chips that are probably more powerful than we really need.  If our software systems weren’t doing so much, we wouldn’t need quite so much power “under the hood” in the hardware.

Although one example is hardly proof of this general proposition, consider the results presented in 86 Mac Plus Vs. 07 AMD DualCore.  A 21-year-old system beat a current top-of-the-line system on the majority of a set of operations that a typical user might perform during a work session.  On your current system, do a “ps” or run the task manager.  How many of those processes are really contributing to the tasks you want to carry out?  Look at the memory in use:  how much of it is really needed for the tasks you want to carry out?
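
As a crude way to put numbers on that exercise, here is a small sketch in C that walks /proc on Linux and totals the resident memory of everything currently running;  it is the programmatic version of the “ps”/task-manager suggestion above.  The paths and fields are Linux-specific, and the total is approximate (shared pages get counted repeatedly), so treat the output as a rough indication only.

    #include <ctype.h>
    #include <dirent.h>
    #include <stdio.h>

    /* Count processes and total their resident memory by walking /proc. */
    int main(void) {
        DIR *proc = opendir("/proc");
        if (!proc) { perror("/proc"); return 1; }

        long nprocs = 0, rss_kb_total = 0;
        struct dirent *e;
        while ((e = readdir(proc)) != NULL) {
            if (!isdigit((unsigned char)e->d_name[0]))
                continue;                      /* only numeric entries are PIDs */
            nprocs++;

            char path[64];
            snprintf(path, sizeof path, "/proc/%s/status", e->d_name);
            FILE *f = fopen(path, "r");
            if (!f) continue;                  /* the process may have exited */

            char line[256];
            while (fgets(line, sizeof line, f)) {
                long kb;
                if (sscanf(line, "VmRSS: %ld kB", &kb) == 1) {
                    rss_kb_total += kb;
                    break;
                }
            }
            fclose(f);
        }
        closedir(proc);

        printf("%ld processes, about %ld MB resident in total\n",
               nprocs, rss_kb_total / 1024);
        return 0;
    }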

Perhaps I can be accused of being a reactionary (a nice word meaning “old fart”), but I remember running Unix in 32K of memory.  I wrote my first full-fledged operating system, with processes, a file system, network and communication drivers, all in 40K.  I remember the community’s efforts in the 1980s and early 1990s to build microkernels.  I remember the concept of RISC having a profound impact on the field as people saw how much faster a chip could be if it didn’t need to support complexity in the instruction set.  How did we get from there to here?

Perhaps the time is nearly right to have another revolution of minimalism.  We have people developing low-power chips and tiny operating systems for sensor-based applications.  Perhaps they can show the rest of us some old ideas made new.

And for security?  Well, I’ve been trying for several years to build a system (Poly^2) that minimizes the OS to provide increased security.  To date, I haven’t had much luck in getting sufficient funding to construct a proper prototype;  I currently have some funding from NSF to build a minimal version, but the funding won’t allow anything close to a real implementation.  What I’m trying to show is too contrary to conventional wisdom.  It isn’t of interest to the software or hardware vendors because it is so contrary to their business models, and the idea is foreign to most of the reviewers at funding agencies, who are used to building ever more complex systems.

Imagine a system with several dozen (or hundred) processor cores.  Do we need process scheduling and switching support if we have a core for each active process?  Do we need virtual memory support if we have a few gigabytes of memory available per core?  Back in the 1960s we couldn’t imagine such a system, and no nation or company could afford to build one.  But now that wouldn’t even be particularly expensive compared to many modern systems.  How much simpler, faster, and more secure would such a system be?  In 5 years we may be able to buy such a system on a single chip—will we be ready to use it, or will we still be chasing 200 million line operating systems down virtual rat holes?

So, I challenge my (few) readers to think about minimalism.  If we reduce the complexity of our systems what might we accomplish?  What might we achieve if we threw out the current designs and started over from a new beginning and with our current knowledge and capabilities?

[Small typo fixed 6/21—thanks cfr]

Copyright © 2007 by E. H. Spafford

VMworld 2006: How virtualization changes the security equation

This session was very well attended (roughly 280 people), which is encouraging.  In the following, I will mix all the panel responses together without differentiating the sources.

It was said that virtualization can make security more acceptable, by contrast with past security solutions and suggested practices that were hard to deploy or adopt.  Virtual appliances can help security by introducing more boundaries between various data center functions, so that if one is compromised the entire data center hasn’t been compromised.  One panel member argued that virtual appliances (VAs) can leverage the expertise of other people.  So, presumably, if you get a professionally built VA it may be installed better and more securely than an average system administrator could manage, and you could pass liability on to the vendor (interestingly, someone else told me outside this session that liability issues were what stopped them from publishing or selling virtual appliances).

I think you may also inherit problems due to the vendor philosophy of delivering functional systems over secure systems.  As always, the source of the virtual appliances, the processes used to create them, and the requirements they were designed to meet should be considered in evaluating the trust that can be put into them.  Getting virtual appliances doesn’t necessarily solve the hardening problem.  Except that now, instead of having one OS to harden, you have to repeat the process N times, where N is the number of virtual appliances you deploy.

As a member of the panel argued, virtualization doesn’t make things better or worse, it still all depends on the practices, processes, procedures, and policies used in managing the data center and the various data security and recovery plans.  Another pointed out that people shouldn’t assume that virtual appliances or virtualization provide security out-of-the-box.  Out of all malicious software, currently 4-5% check if they are running inside a virtual machine;  this may become more common.

It was said that security is not the reason why people are deploying virtualization now.  Virtualization is not as strong as using several different physical, specialized machines, due to the shared resources and shared communication channels.  Virtualization would be much more useful on the client side than in the data center for improving security.  Nothing else of interest was said.

Unfortunately, there was no time for me to ask what the panel thought of the idea of opening VMware to plugins that could perform various security functions (taint tracking and various attack protection schemes, IDS, auditing, etc…).  After the session one of the panel members mentioned that this was being looked at, and that it raised many problems, but would not elaborate.  In my opinion, it could trump the issue of Microsoft (supposedly) closing Windows to security vendors, but they thought of everything:  Microsoft’s EULA forbids running certain versions of Windows on virtual machines.  I wonder about the wisdom of this, as restricting the choice of security solutions can only hurt Microsoft and its users.  Is this motivated by the fear of people cracking the DRM mechanism(s)?  Surely the EULA alone can’t prevent that;  crackers will do what they want.  Since Windows could simply check whether it is running inside a VM, DRMed content could be protected by refusing to play under those conditions, without making all of Windows unavailable.  The fact that the most expensive version of Windows is allowed to run inside a virtual machine (even though playing DRMed content is still forbidden) hints that it’s mostly marketing greed, but on the whole I am puzzled by those policies.  It certainly won’t help security research and forensic investigations (are forensic examiners exempt from the licensing/EULA restrictions?  I wonder).

VMworld 2006:  ReAssure (CERIAS), VIX and Lab Manager (VMware)

The conference is surprisingly huge (6000 people).  Virtualization is obviously important to IT now.  I am looking forward to the security-related talks (I’ll post about them later).  Here are a few notes from the sessions I attended:

  • Saturday a VMware team shot a video of yours truly talking about ReAssure (of course I became tongue-tied when the camera was turned on!).  It will be presented at the general session Wednesday morning.  I hope it generates interest in ReAssure!
  • The VIX API session on Tuesday morning was very interesting.  VIX will enable the remaining automation functionality of ReAssure.  It makes it possible to automate powering virtual machines on and off, taking snapshots, transferring files (e.g., results) between the host and guest OS, and even starting programs in the guest OS (a rough sketch of this kind of automation follows these notes)!  It was introduced with VMware Server 1.0 last summer, but I hadn’t noticed.  It is still a work in progress, though;  there’s support only for C, Perl and COM (no Python, although I was told that there was a SourceForge project for that).
  • The VMware Lab Manager (introduced last summer) is very much like ReAssure.  Except that ReAssure doesn’t have IP conflicts, and in ReAssure all experiments (“deployed configurations”) are independent and their traffic is isolated with VLANs.  In some respects VMware Lab Manager is more sophisticated, and in others it is more primitive.  For example, all networks in Lab Manager are flat (and apparently all experiments even share the same network), whereas ReAssure supports complex networks.  To resolve IP conflicts, Lab Manager uses “fenced networks”, which is a NAT hack.  Lab Manager is also limited to Fibre Channel storage, and is tied to VMware ESX while disabling most of what makes ESX flexible and interesting (ReAssure uses the free VMware Server).  I’m excited about the VIX API (see above) because it will bring ReAssure beyond Lab Manager, by allowing snapshots, suspend and resume functionality, and so on.  I wonder what I need to do to make ReAssure better known and adopted.  I haven’t found any bugs in it for a while, so I think I’ll officially release the first final (not beta) version very soon (e.g., Friday or next week).
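
For the curious, here is roughly what I expect the ReAssure automation to look like with the VIX C API:  connect to the host, power on a VM, run a program in the guest, and copy a result file back.  This is only a sketch written from my session notes and the VIX documentation;  the function signatures and constants should be double-checked against vix.h, and the VM path, user name, password and file names are placeholders.

    /* Sketch only: check the exact signatures and constants against vix.h.
       Paths, user name and password are placeholders, and real code should
       test the VixError returned by every VixJob_Wait call. */
    #include <stdio.h>
    #include "vix.h"

    int main(void)
    {
        VixHandle host = VIX_INVALID_HANDLE, vm = VIX_INVALID_HANDLE, job;
        VixError err;

        /* Connect to the local VMware Server host. */
        job = VixHost_Connect(VIX_API_VERSION, VIX_SERVICEPROVIDER_VMWARE_SERVER,
                              NULL, 0, NULL, NULL, 0, VIX_INVALID_HANDLE,
                              NULL, NULL);
        err = VixJob_Wait(job, VIX_PROPERTY_JOB_RESULT_HANDLE, &host,
                          VIX_PROPERTY_NONE);
        Vix_ReleaseHandle(job);
        if (err != VIX_OK) { fprintf(stderr, "connect failed\n"); return 1; }

        /* Open and power on the experiment VM (placeholder .vmx path). */
        job = VixVM_Open(host, "/var/lib/vmware/experiment/experiment.vmx",
                         NULL, NULL);
        err = VixJob_Wait(job, VIX_PROPERTY_JOB_RESULT_HANDLE, &vm,
                          VIX_PROPERTY_NONE);
        Vix_ReleaseHandle(job);
        if (err != VIX_OK) { fprintf(stderr, "open failed\n"); return 1; }

        job = VixVM_PowerOn(vm, VIX_VMPOWEROP_NORMAL, VIX_INVALID_HANDLE,
                            NULL, NULL);
        VixJob_Wait(job, VIX_PROPERTY_NONE);
        Vix_ReleaseHandle(job);

        /* Wait for VMware Tools, log in, run the experiment, fetch results. */
        job = VixVM_WaitForToolsInGuest(vm, 300, NULL, NULL);
        VixJob_Wait(job, VIX_PROPERTY_NONE);
        Vix_ReleaseHandle(job);

        job = VixVM_LoginInGuest(vm, "experimenter", "secret", 0, NULL, NULL);
        VixJob_Wait(job, VIX_PROPERTY_NONE);
        Vix_ReleaseHandle(job);

        job = VixVM_RunProgramInGuest(vm, "/usr/local/bin/run_experiment", "",
                                      0, VIX_INVALID_HANDLE, NULL, NULL);
        VixJob_Wait(job, VIX_PROPERTY_NONE);
        Vix_ReleaseHandle(job);

        job = VixVM_CopyFileFromGuestToHost(vm, "/tmp/results.tgz",
                                            "/srv/reassure/results.tgz",
                                            0, VIX_INVALID_HANDLE, NULL, NULL);
        VixJob_Wait(job, VIX_PROPERTY_NONE);
        Vix_ReleaseHandle(job);

        Vix_ReleaseHandle(vm);
        VixHost_Disconnect(host);
        return 0;
    }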