The Center for Education and Research in Information Assurance and Security (CERIAS)

Complexity, virtualization, security, and an old approach

Tags: complexity, security, virtualization, microkernels

One of the key properties that work against strong security is complexity.  Complexity poses problems in a number of ways.  The more complexity in an operating system, for instance, the more difficult it is for those writing and maintaining it to understand how it will behave under extreme circumstances.  Complexity makes it difficult to understand what is needed, and thus to write fault-free code.  Complex systems are more difficult to test and prove properties about.  Complex systems are more difficult to properly patch when faults are found, usually because of the difficulty in ensuring that there are no side-effects.  Complex systems also make it easier to implant backdoors and trojan code, and harder to find them.  Complex operations tend to have more failure modes.  Complex operations may also have longer windows where race conditions can be exploited.  Complex code also tends to be bigger than simple code, and that means more opportunity for accidents, omissions, and the manifestation of coding errors.

Put simply: complexity creates problems.

This is not a new observation.  Saltzer and Schroeder identified it in their classic 1975 paper, “The Protection of Information in Computer Systems.”  They referred to “economy of mechanism” as their #1 design principle for secure systems.

Some of the biggest problems we have now in security (and arguably, computing) are caused by “feature creep” as we continue to expand systems to add new features.  Yes, those new features add new capabilities, but often the additions are foisted off on everyone whether they want them or not.  Thus, everyone has to suffer the consequences of the next expanded release of Linux, Windows (Vista), Oracle, and so on.  Many of the new features are there as legitimate improvements for everyone, but some are of interest to only a minority of users, and others are simply there because the designers thought they might be nifty.  And besides, why would someone upgrade unless there were lots of new features?

Of course, this has secondary effects on complexity in addition to the obvious complexity of a system with new features.  One example has to do with backwards compatibility.  Because customers are unlikely to upgrade to the new, improved product if it means they have to throw out their old applications and data, the software producers need to provide extra code for compatibility with legacy systems.  This is often not straightforward, and it adds yet more complexity.

Another form of complexity has to do with hardware changes.  The increase in software complexity has long been a motivating factor for hardware designers.  Back in the 1960s, when systems began to support time sharing, virtual memory became a necessity, and hardware mechanisms for page and segment tables had to be designed into systems to maintain reasonable performance.  Now we have systems with more and more processes running in the background to support the extra complexity of our systems, so designers are adding extra processing cores and support for process scheduling.

Yet another form of complexity is involved with the user interface.  The typical user (and especially the support personnel) now has to master many new options and features, and understand all of their interactions.  This is increasingly difficult for someone of even above-average ability.  It is no wonder that the average home user has myriad problems using their systems!

Of course, the security implications of all this complexity have been obvious for some time.  Rather than address the problem head-on by reducing the complexity and changing development methods (e.g., using safer tools and systems, with more formal design), we have recently seen a trend towards virtualization.  The idea is that we confine our systems (operating systems, web services, databases, etc.) in a virtual environment supported by an underlying hypervisor.  If the code breaks…or someone breaks it…the virtualization contains the problems.  At least, in theory.  And now we have vendors providing chipsets with even more complicated instruction sets to support the approach.  But this is simply adding yet more complexity.  And that can’t be good in the long run.  Already attacks have been formulated to take advantage of these added “features.”

We lose many things as we make systems more complex.  Besides security and correctness, we also end up paying for resources we don’t use.  And we are also paying for power and cooling for chips that are probably more powerful than we really need.  If our software systems weren’t doing so much, we wouldn’t need quite so much power “under the hood” in the hardware.

Although one example is hardly proof of this general proposition, consider the results presented in “86 Mac Plus Vs. 07 AMD DualCore.”  A 21-year-old system beat a current top-of-the-line system on the majority of a set of operations that a typical user might perform during a work session.  On your current system, do a “ps” or run the task manager.  How many of those processes are really contributing to the tasks you want to carry out?  Look at the memory in use—how much of it is really needed for those tasks?
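
To put rough numbers on those questions, here is a minimal sketch (Node.js/TypeScript, simply wrapping the same “ps” command mentioned above; the exact flags are an assumption and may need adjusting for your platform) that counts the running processes and totals their resident memory:

    import { execSync } from "node:child_process";

    // Ask ps for the resident set size (RSS, in KB) of every process, with no header line.
    const output = execSync("ps -axo rss=", { encoding: "utf8" });

    const rssKb = output
      .trim()
      .split("\n")
      .map((line) => parseInt(line, 10))
      .filter(Number.isFinite);

    // RSS over-counts shared pages, but it is close enough to make the point.
    const totalMb = rssKb.reduce((sum, kb) => sum + kb, 0) / 1024;

    console.log(`${rssKb.length} processes, roughly ${totalMb.toFixed(0)} MB resident`);
    console.log("How many of these are doing something you actually asked for?");

Even an otherwise idle modern desktop typically reports scores of processes, most of them supporting machinery rather than anything the user asked for.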

Perhaps I can be accused of being a reactionary (a nice word meaning “old fart”), but I remember running Unix in 32K of memory.  I wrote my first full-fledged operating system, with processes, a file system, and network and communication drivers, all in 40K.  I remember the community’s efforts in the 1980s and early 1990s to build microkernels.  I remember the concept of RISC having a profound impact on the field as people saw how much faster a chip could be if it didn’t need to support complexity in the instruction set.  How did we get from there to here?

Perhaps the time is nearly right to have another revolution of minimalism.  We have people developing low-power chips and tiny operating systems for sensor-based applications.  Perhaps they can show the rest of us some old ideas made new.

And for security?  Well, I’ve been trying for several years to build a system (Poly^2) that minimizes the OS to provide increased security.  To date, I haven’t had much luck in getting sufficient funding to really construct a proper prototype; I currently have some funding from NSF to build a minimal version, but the funding won’t allow anything close to a real implementation.  What I’m trying to show is too contrary to conventional wisdom.  It isn’t of interest to the software or hardware vendors because it is so contrary to their business models, and the idea is so foreign to most of the reviewers at funding agencies, who are used to building ever more complex systems.

Imagine a system with several dozen (or hundred) processor cores.  Do we need process scheduling and switching support if we have a core for each active process?  Do we need virtual memory support if we have a few gigabytes of memory available per core?  Back in the 1960s we couldn’t imagine such a system, and no nation or company could afford to build one.  But now that wouldn’t even be particularly expensive compared to many modern systems.  How much simpler, faster, and more secure would such a system be?  In 5 years we may be able to buy such a system on a single chip—will we be ready to use it, or will we still be chasing 200 million line operating systems down virtual rat holes?

So, I challenge my (few) readers to think about minimalism.  If we reduce the complexity of our systems what might we accomplish?  What might we achieve if we threw out the current designs and started over from a new beginning and with our current knowledge and capabilities?

[Small typo fixed 6/21—thanks cfr]

Copyright © 2007 by E. H. Spafford
[posted with ecto]

Comments

Posted by crf
on Wednesday, June 20, 2007 at 05:31 PM

It’s 21 years old, rather than 11.

Posted by Spaf
on Thursday, June 21, 2007 at 04:05 AM

Correct, crf.  No idea how the translation from mind to keyboard to text dropped the 10!  Fixed now in the article.

Posted by George Jones
on Monday, June 25, 2007 at 08:41 AM

> So, I challenge my (few) readers to think about minimalism. If we reduce the complexity of our systems what might we accomplish? What might we achieve if we threw out the current designs and started over from a new beginning and with our current knowledge and capabilities?
>
> Copyright © 2007 by E. H. Spafford
>
> [posted with ecto]

So, do you think you could give up your fancy, complex Mac/blogging software and do blog posts with telnet?  *grin*

BTW, I am in full agreement with your basic premise.  Complexity == insecurity.  I just doubt we’ll ever get people to choose security over functionality.

-- George Jones

Posted by Pascal Meunier
on Tuesday, June 26, 2007 at 07:05 AM

I think that virtualization, and the instruction sets in hardware to support it, have the potential to “transfer complexity” (as in “transfer risk”).  It replaces some complexities and isolates others instead of simply making things worse, in contrast to pure “featuritis”.  I don’t know anyone who would argue that protected memory and pre-emptive multitasking decrease security due to their complexity (if you disagree, go back to using MacOS 9).  I believe that virtualization can provide additional or better security guarantees, and I think that’s a plus.  Virtualization is more akin to security in depth than to simply adding more attack surface, although it currently does a mix of both.  It’s a tool—just don’t poke your eye out with it, and don’t think it will solve everything.

Posted by Pascal Meunier
on Tuesday, June 26, 2007 at 07:44 AM

Thinking some more about it, I’d like to point out that you consider a single software security principle in isolation.  Virtualization offers both compartmentalization and defense in depth.  In addition, it enables simplicity and the use of community resources through virtual appliances.  That’s 4 software security principles out of 10 with a single technology—not bad!

I am surprised that your discussion of complexity doesn’t mention web browsers.  I believe that it is an important battle—there seems to be no upper bound to the growth of their complexity and fragility.  Web 2.0 apps resemble, in my mind, a juggler’s act, likely to come crashing down at the slightest tremor.  Google apps are like spinning tops, twirling adamantly but unsteadily.  To answer your question about minimalism, I’d love to see browsers support a “restricted”, “safe” or “simple” JavaScript that wouldn’t be able to invoke ActiveX, for example, but would be able to support basic events and functions, such as changing the class of HTML tags.  Such a simple JavaScript would enable people to use most websites that require JavaScript, but without exposing dangerous APIs and plugins, restricting JavaScript functionality to just what is needed to make the sites work.  Also, current web browsers would greatly benefit from an internal organization (architecture) that would compartmentalize plugins, pages, tabs, cookies, etc…—but I’ve already discussed that in one of my posts.
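
As a concrete (and purely hypothetical) illustration of the kind of thing such a restricted profile would still permit, written here as ordinary TypeScript/DOM code (the element IDs are made up for the example):

    // Stay within "basic events and changing the class of HTML tags,"
    // the subset a restricted JavaScript profile could safely allow.
    // The element IDs below are illustrative only.
    const toggle = document.getElementById("menu-toggle");
    const menu = document.getElementById("menu");

    toggle?.addEventListener("click", () => {
      menu?.classList.toggle("open");
    });

    // A restricted profile would refuse everything outside that subset:
    // no eval(), no plugin or ActiveX instantiation, no injected script tags.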

Posted by Spaf
on Tuesday, June 26, 2007 at 06:32 PM

Thanks for the feedback, Pascal.

Even if the hardware has some support for the virtualization, there is still added complexity that needs to be managed to set up registers, map memory, and fire up processes.  The whole process of virtualization is complex, and putting some of it in hardware makes it safer, but not safe!

And yes, WWW browsers and Web X.0 (for X > 1) are indeed fine examples of both the complexity creep I described and the security/maintenance complications that result.

Posted by Liudvikas Bukys
on Wednesday, June 27, 2007 at 04:28 AM

The IBM Blue Gene supercomputer adopts the “processor per process” strategy, not for security but for raw speed.  (At the high end, speed, packaging and architectural simplicity all intersect.)

Posted by Pascal Meunier
on Thursday, June 28, 2007 at 03:32 AM

The Intel Core 2 Duo bugs show that complexity in the hardware is bad too…  I think I’d rather have virtualization done the VMware way, in software, because it can be patched.  Even though we say patching doesn’t work (well), it’s still better than being stuck with a useless piece of hardware.  It’s not the first time, either: recall the earlier Pentium instruction that could crash the CPU (see Sun Security Bulletin #00161, 1997).  RISC processors may be better for security too.

Posted by Daniel Chien
on Saturday, July 7, 2007 at 05:57 PM

I have a very simple way to enhance Internet security.  On the Internet, everyone has an IP address, which cannot be faked due to Internet routing.  You can hide, but you cannot fake it.  Based on the IP address, lots of things can be done.  For example, a phishing website has its own IP address and cannot use the same IP address as a legitimate financial institution.  So when visiting a website, we can check its IP address and know for sure whether it is a phishing site.  This is just one of many security enhancements we can make based on IP addresses.  It is very simple.

Posted by Spaf
on Monday, July 9, 2007 at 06:01 AM

I’m assuming that you are proposing a revision to the protocols so that IP addresses can’t be faked, because they certainly can be (and are) faked now in many different kinds of attacks.

Assuming we could get everyone to switch to using that protocol—which is extremely unlikely even in the near term—there are still problems with people using “victim” sites as relays and bots.  There is also the problem of using intermediary sites where logs are not kept, and/or where controlling interests (national or commercial) refuse to provide log details to investigators.

Knowing a source IP address might help some, but it is not a complete solution.

Posted by Pascal Meunier
on Tuesday, July 17, 2007 at 03:16 AM

Is it really better to have a multitude of simple items that are possibly proven correct according to their design, but designed separately and interacting together, leading to emergent behavior problems, or a few well-studied complex ones?  Isn’t there a point where a high cardinality of simpler but interacting items isn’t really less complex? 

What if the “simple” items are really difficult to get right?  In your Poly^2 project, you try to assemble a system from “simple” parts with minimalistic functions.  However, getting each part minimized and simplified is quite difficult and time-consuming, requiring highly skilled labor; I hear you even need to make kernel patches.  Many mistakes and bugs can be created while doing so.  Therefore, I submit to you that your approach is paradoxical: it is quite difficult and complex to create the simple parts in your system, so the reduction in complexity may not be as advantageous as it would seem.

How can complexity be assessed and compared?  How can someone know if (and I’m not saying you do, I’m just considering it as an abstract case) they are spending a lot of energy just trading and moving complexity around instead of reducing it?

Posted by Spaf
on Tuesday, July 17, 2007 at 10:23 AM

Pascal,
There is effort involved in creating any system.  The use of existing systems simply “hides” the effort that has been involved in development over the years.

The Poly^2 nodes are small, so the effort is less.  Furthermore, because they are small, they will undergo far fewer patches and upgrades than current systems do, so the lifetime cost is less.

The “big picture” view here is where the value occurs.
