The Center for Education and Research in Information Assurance and Security (CERIAS)

CERIAS Blog

Thoughts on Virtualization, Security and Singularity

The "VMM Detection Myths and Realities" paper has been widely reported and discussed. It considers whether a theoretical piece of software could detect that it is running inside a Virtual Machine Monitor (VMM); an undetectable VMM would be "transparent". Many arguments are made against the practicality or commercial viability of a VMM that could provide performance, stealth, and reproducible, consistent timings. The arguments are interesting and reasonably convincing that it is currently infeasible to absolutely guarantee undetectability. However, I note that the authors are arguing from essentially the same position as atheists arguing that there is no God: they argue that the existence of a fully transparent VMM is unlikely, impractical, or would require an absurd amount of resources, both physical and in software development effort. This is reasonable, because the VMM has to fail only once in preventing detection, there are many ways in which it can fail, and preventing each kind of detection is complex. However, this is not an airtight, formal proof that a transparent VMM is impossible and cannot exist; a breakthrough technology, or an "alien science-fiction" god-like technology, might make it possible. The authors then argue that with the spread of virtualization, it will become a moot point for malware to try to detect whether it is running inside a virtual machine. One might be tempted to remark: doesn't this argument cut both ways, making it equally moot for an operating system or a security tool to try to detect whether it is running inside a malicious VMM?

McAfee's "secure virtualization"

The security seminar by George Heron answers some of the questions I was asking at last year's VMworld conference, and elaborates on what I had in mind then. The idea is to integrate security functions into the virtual machine monitor. Malware nowadays prevents the installation of security tools and interferes with them as much as possible; if malware could be confined inside a virtual machine while the security tools operate from outside that scope, an attacker would find it impossible to disable the security tools. I really like that idea. The security tools could reasonably expect to run directly on the hardware or within an unvirtualized host OS, so VMM detection isn't a moot point for the defender. However, the presentation did not discuss whether the McAfee security suite would attempt to detect whether the VMM itself had been virtualized by an attacker. Would it even be possible to detect a "bad" VMM if the McAfee security tools themselves run in a virtualized environment on top of the "good" VMM? Perhaps it would need more hooks into the VMM; many, in fact, to attempt to catch every way in which a malicious VMM can fail to hide itself properly. And what is the cost of all these detection attempts, which must be executed regularly? If it is prohibitive, strong detection of malicious VMMs becomes impractical. In the end, I believe this may be yet another race that depends on how much effort each side is willing to put into cloaking and detection. Practical detection is almost as hard as practical hiding, and the detection cost has to be paid on every machine, all the time.

Which Singularity?

Microsoft's Singularity project attempts to create an OS and execution environment that is secure by design, and simpler. What strikes me is how much it resembles the "white list" approach I've been talking about: Singularity constructs secure systems from statements ("manifests") in a provable manner, stating what processes do and what may happen, instead of focusing on what must not happen. Last year I thought that virtualization and security could provide a revolution; now I think it's more of the same "keep building defective systems and defend them vigorously", just somewhat stronger. Even if I find the name somewhat arrogant, Singularity suggests a future for security that is more attractive and fundamentally stable than yet another arms race. In the meantime, though, "secure virtualization" should help, and expect lots of marketing about it.

Legit Linux Codecs In the U.S.

As a beginner Linux user, I only recently realized that few people are aware, or care, that they are breaking U.S. law by using unlicensed codecs. Even fewer know which of the codecs they use are unlicensed, or what to do about it. Warning dialogs (e.g., in Ubuntu) provide no practical alternative to installing the codecs and are an unwelcome interruption to workflow, easily forgotten afterwards despite good intentions to correct the situation. Due to software patents in the U.S., codecs for everything from sound to movies, such as H.264, need to be licensed, regardless of how unpalatable the law may be or how unfair this situation is to U.S. and Canadian citizens compared to those of other countries. This affects open source players such as Totem, Amarok, MPlayer and Rhythmbox. The CERIAS security seminars, for example, use H.264. The issue of unlicensed codecs in Linux was brought up by Adrian Kingsley-Hughes, who was heavily criticized for not knowing about, or not mentioning, fluendo.com and other ways of obtaining licensed codecs.

Fluendo Codecs

So, as I like Ubuntu and want to do the legal thing, I went to the Fluendo site and purchased the "mega-bundle" of codecs. After installing them, I tried to play a CERIAS security seminar and was presented with a prompt to install three packs of codecs that require licensing. Then I realized that the Fluendo set didn't include H.264! Using Fluendo software is only a partial solution. When contacted, Fluendo said that support for H.264, AAC and WMS would be released "soon".

Wine

Another suggestion is to use QuickTime for Windows under Wine. I was able to do this after much work; it's far from being as simple as running Synaptic, in part because Apple's web site was uncooperative and the latest version of QuickTime, 7.2, doesn't work under Wine. Even when I got an earlier version of QuickTime to work, it worked only for a short while. Now it just displays "Error -50: an unknown error occurred" when I attempt to play a CERIAS security seminar.

VideoLAN Player vs MPEG LA

The VideoLAN FAQ explains why VideoLAN doesn't license the codecs, and suggests contacting MPEG LA. I did just that, and was told that they were unwilling to let me pay for a personal use license; instead, I should "choose a player from a licensed supplier (or insist that the supplier you use become licensed by paying applicable royalties)". I wish that an "angel" (a charity?) could intercede and obtain licenses for the codecs in their name, perhaps over the objections of the developers, but that's unlikely to happen.

What to do

Essentially, free software users are the ball in a game of ping-pong between free software authors and licensors. Many users are oblivious to this no man's land they somehow live in, but people concerned about legitimacy can easily be put off by it. Businesses in particular will be concerned about liabilities. I conclude that Adrian was right to flag the Linux codec situation: it is a handicap for computer users in the U.S. compared to countries where licensing codecs isn't an issue.

One solution would be to give up Ubuntu (for example) and get a Linux distribution that bundles licensed codecs, such as Linspire (based on Ubuntu), despite the heavily criticized deal Linspire made with Microsoft. This isn't about being anti-Microsoft, but about divided loyalties. Free software, for me, isn't about getting software for free, even though that's convenient; it's about the greater assurance free software provides against divided loyalties and software that is disloyal by design. Linspire may now have, or may later acquire, other interests in mind besides those of its users, and the deal being part of a vague but threatening patent attack on Linux by Microsoft also makes Linspire unappealing. Linspire is cheap, so cost isn't an issue; after all, getting the incomplete set of codecs from Fluendo ($40) cost me almost as much as the full version of Linspire ($49) would have. Regardless, Linspire may be an acceptable compromise for many businesses. Another advantage of Linspire is that it bundles a licensed DVD player as well (note that the DMCA and DVD CCA license compliance are separate issues from licensing codecs such as H.264). Another possibility is to keep around an old Mac, or use lab computers, until Fluendo releases the missing codecs. Even if CERIAS were to switch to Theora just to please me, the problem would surface again later. So there are options, but they aren't optimal.

Hypocritical Security Conference Organizers

Every once in a while, I receive spam for security conferences I've never heard of, much less attended. Typically the organizers of these conferences are faculty members, professors, or government agency employees who should know better than to hire companies to spam on their behalf. I suppose that hiring a third party provides plausible deniability. It's hypocritical. To be fair, I once received an apology for such spamming, which showed that those involved understood integrity. True, it's only a minor annoyance; but if you can't trust someone with small things, should you trust them with important ones?

Disloyal Software

Disloyal software surrounds us: software running on devices or computers you own that serves interests other than yours. Examples are DVD player firmware that forces you to watch the silly FBI warning or prevents you from skipping "splashes" or previews, and popup and popunder advertisement windows in web browsers. When people discuss malware or categories of software, there is usually little consideration of disloyal software (I found this interesting discussion of Trusted Computing). Some of it is perfectly legal; some protects legal rights. At the other extreme, rootkits can subvert entire computers against their owners. The question is, when can you trust possibly disloyal software, and when does it become malware, as the Sony CD copy-prevention rootkit did?

Who's in Control

Loyalty is a question of perspective on ownership vs. control. An employer providing laptops and computers to employees doesn't want them to install things that could be liabilities or compromise the computer; the employee is using software that is restrictive, but justifiably so. From the perspective of someone privately owning a computer, a lesser likelihood of disloyalty is an advantage of free software (as in the FSF free software definition). The developers gain nothing from implementing restrictions or writing software that works against the interests of the user, and if one does, someone somewhere will likely remove that restriction for the benefit of all. Of course, this doesn't address the possibility of cleverly hidden capabilities (such as backdoors) or compromised source code repositories. This leads to questions about control of many other devices, such as game consoles and media players like the iPod. Why does my iPod, using Apple-provided software, not allow me to copy music files to another computer? It shouldn't matter which computer, as long as I'm not violating copyrights: perhaps it's the same computer that ripped the CDs, after the hard drive died or was upgraded, or the new computer I just bought. By using the iPod as a storage device instead of a music player, such copies can be made with Apple software, but music files in the "play" section can't be copied out. This restriction is utterly silly, as it accomplishes nothing but annoying owners, and I'm glad that Ubuntu Linux allows direct access to the music files.

DMCA

Some firmware implements copyright protection measures, and modifying it to remove those protections is made illegal by the DMCA. As modifying consoles ("modding") is often done for that purpose, the act of modding has become suspicious in itself. Someone modding a DVD player simply to bypass annoying splash screens, without affecting copy protection mechanisms, would have a hard time defending herself. This has a chilling effect on the recycling of perfectly good hardware with better software. For example, I think Microsoft would still be selling large quantities of the original Xbox if the compiled XBMC media player software weren't also illegal for most people, due to licensing issues with the Microsoft compiler. The DMCA helps law enforcement and copyright holders, but has negative effects as well (see Wikipedia). Disloyal devices are distasteful, and current law heavily favors copyright owners. Of course, it's not clear-cut, especially for devices with responsibilities toward multiple entities, such as cell phones. I recommend watching Ron Buskey's security seminar about cell phones.

Web Me Up

If you think you're using only free software, you're wrong every time you use the web with scripting enabled. The potentially ultimate disloyal software is what web sites push to your browser. Active content (JavaScript, Flash, etc.) on web pages can glue you in place and restrict what you can do and how, or deploy adversarial behaviors (e.g., pop-unders or browser attacks). Every time you visit a web page nowadays, you download and run software that is not free:

* It is often impractical to access the content of the page, or even basic form functionality, without running the software, so you do not have the freedom to run or not run it as a practical choice (in theory you do, but the penalty for declining can be significant).
* It is difficult to study, given how code can load other active content from other sites in a chain-like fashion, creating a large spaghetti that can change at any time.
* There is no point in redistributing copies, as the copies running from the web sites you need to use won't change.
* Releasing your "improvements" to the public would almost certainly violate copyrights, and even if you made useful improvements, the site owners could change how their site works at any time, foiling your efforts.

Most of the above holds even if the scripts you are made to run were free software from the web developers' point of view; the delivery method tainted them.

Give me some AIR

The Adobe Integrated Runtime ("AIR") is interesting because it has the potential to free web technologies such as HTML, Flash and JavaScript by allowing them to be used in a free, open source way. CERIAS webmaster Ed Finkler developed the "Spaz" application with it and licensed it under the New BSD license. I say "potential" only because AIR can also be used to load software dynamically, with all the problems of web scripting. It's a question of control and trust. I can't trust possibly malicious code that I am forced to run on my machine to access a web page I happen to visit; I may, however, trust static code that is free software not to be disloyal by design. If it is disloyal, it is possible to fix it and redistribute the improved code. AIR could deliver that, as Ed demonstrated.

The problem with AIR is that I will have to trust a web developer with the security of my desktop. AIR has two sandboxes: the Classic Sandbox, which is like a web browser, and the Application Sandbox, which is compared to server-side applications except that they run locally (see the AIR security FAQ). The Application Sandbox allows local file operations that are typically forbidden to web browsers, but omits some of the more dangerous web browser functionality. The technological security model makes sense as a foundation, but its actual security is entirely up to whoever writes the code that runs in the Application Sandbox. People who have no qualms about pushing code to my browser and forcing me to turn on scripting, making me vulnerable to attacks from sites I visit subsequently, to malicious ads, or to code injected into their own site, can't be trusted to care whether my desktop is compromised through their code, or to be competent to prevent it. Even the AIR security FAQ downplays significant risks. For example, it says: "The damage potential from an injection attack in a given website is directly proportional to the value of the website itself. As such, a simple website such as an unauthenticated chat or crossword site does not have to worry much about injection attacks as much as any damage would be annoying at most." This completely ignores scripting-based attacks against the browsers themselves, such as those performed by the well-known MPack and IcePack malware kits. In addition, there will probably be both implementation and design vulnerabilities found in AIR itself. Either way, AIR is a development to watch.

P.S. (10/16): What if AIR attracts the kind of people responsible for flooding the National Vulnerability Database with PHP server application vulnerabilities? Server applications are notoriously difficult to write securely. Code they write for the Application Sandbox could be just as buggy, except that instead of a few compromised servers, there could be a large number of compromised personal computers...

Solving some of the Wrong Problems

[tags]cybersecurity research[/tags]
As I write this, I'm sitting in a review of some university research in cybersecurity. I'm hearing about some wonderful work (and no, I'm not going to identify it further). I also recently received a solicitation for an upcoming workshop to develop “game changing” cyber security research ideas. What strikes me about these efforts -- representative of efforts by hundreds of people over decades, and the expenditure of perhaps hundreds of millions of dollars -- is that the vast majority of these efforts have been applied to problems we already know how to solve.

Let me recast this as an analogy in medicine. We have a crisis of cancer in the population. As a result, we are investing huge amounts of personnel effort and money into how to remove diseased portions of lungs, and administer radiation therapy. We are developing terribly expensive cocktails of drugs to treat the cancer...drugs that sometimes work, but make everyone who takes them really ill. We are also investing in all sorts of research to develop new filters for cigarettes. And some funding agencies are sponsoring workshops to generate new ideas on how to develop radical new therapies such as lung transplants. Meanwhile, nothing is being spent to reduce tobacco use; if anything, the government is one of the largest purchasers of tobacco products! Insane, isn't it? Yes, some of the work is great science, and it might lead to some serendipitous discoveries to treat liver cancer or maybe even heart disease, but it still isn't solving the underlying problems. It is palliative, with an intent to be curative -- but we aren't appropriately engaging prevention!

Oh, and second-hand smoke endangers many of us, too.

We know how to prevent many of our security problems -- least privilege, separation of privilege, minimization, type-safe languages, and the like. We have over 40 years of experience and research about good practice in building trustworthy software, but we aren't using much of it.

Instead of building trustworthy systems (note -- I'm not referring to making existing systems trustworthy, which I don't think can succeed) we are spending our effort on intrusion detection to discover when our systems have been compromised.

We spend huge amounts on detecting botnets and worms, and deploying firewalls to stop them, rather than constructing network-based systems with architectures that don't support such malware.

Instead of switching to languages with intrinsic features that promote safe programming and execution, we spend our efforts on tools that look for buffer overflows and type mismatches in existing code, and merrily continue to produce more software of questionable quality.

And we develop almost mindless loyalty to artifacts (operating systems, browsers, languages, tools) without really understanding where they are best used -- and not used. Then we pound on our selections as the “one, true solution” and justify them based on cost or training or “open vs. closed” arguments that really don't speak to fitness for purpose. As a result, we develop fragile monocultures that have a particular set of vulnerabilities, and then we need to spend a huge amount to protect them. If you are thinking about how to secure Linux or Windows or Apache or C++ (et al), then you aren't thinking in terms of fundamental solutions.

I'm not trying to claim there aren't worthwhile topics for open research -- there are. I'm simply disheartened that we are not using so much of what we already know how to do, and continue to strive for patches and add-ons to make up for it.

In many parts of India, cows are sacred and cannot be harmed. They wander everywhere in villages, with their waste products fouling the streets and creating a public health problem. However, the only solution that local people are able to visualize is to hire more people to shovel effluent. Meanwhile, the cows multiply, the people feed them, and the problem gets worse. People from outside are able to visualize solutions, but the locals don't want to employ them.

Metaphorically speaking, we need to put down our shovels and get rid of our sacred cows -- maybe even get some recipes for meatloaf. :-)

Let's start using what we know instead of continuing to patch the broken, insecure, and dangerous infrastructure that we currently have. Will it be easy? No, but neither is quitting smoking! The results, though, will ultimately provide us some real benefit, if we can exert the requisite willpower.

[Don't forget to check out my tumble log!]
