The Center for Education and Research in Information Assurance and Security (CERIAS)

CERIAS Blog

Spaf giving testimony to US Congress today

Just a quick note that Eugene Spafford, Executive Director of CERIAS, will be giving testimony this morning at 10 a.m. before the House Ways and Means Committee at a "Hearing on Employment Eligibility Verification Systems and the Potential Impacts on SSA's Ability to Serve Retirees, People with Disabilities, and Workers." You can view the broadcast live by visiting the hearing's page and clicking on "Click Here to View Committee Proceedings Live."

New Record for the Largest CVE Entry

Last week, my script that processes and logs daily CVE changes broke. It truncated inputs larger than 16000 bytes, because I believed that no CVE entry would ever grow that large, and that any entry that did was a sign of trouble. Guess what... The entry for CVE-2006-4339 reached 16941 bytes, with 352 references. This is an OpenSSL issue, and it highlights how much we depend on OpenSSL. It's impressive work by MITRE's CVE team to locate and keep track of all these references.
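The original script isn't shown here, so the following is only a sketch of the failure mode, assuming a simple byte cap on each entry: rather than silently truncating oversized input (which hid the problem until the logs broke), an oversized entry is flagged loudly. The function name and the cap constant are illustrative, not from the actual script.

```python
MAX_ENTRY_BYTES = 16000  # the old cap that CVE-2006-4339 (16941 bytes) exceeded

def process_entry(entry: str, max_bytes: int = MAX_ENTRY_BYTES) -> str:
    """Return the entry for logging; flag oversized entries instead of
    silently truncating them, so unusual growth is noticed immediately."""
    size = len(entry.encode("utf-8"))
    if size > max_bytes:
        # Silent truncation hid the problem; raising makes it visible.
        raise ValueError(f"CVE entry is {size} bytes, over the {max_bytes}-byte cap")
    return entry

print(process_entry("CVE-2006-4339: an OpenSSL issue with 352 references"))
```

Whether to raise the cap or drop it entirely is a separate decision; the point is that the script should fail noisily when its assumptions are violated.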

Virtualization Is Successful Because Operating Systems Are Weak

It occurred to me that virtual machine monitors (VMMs) provide similar functionality to that of operating systems. Virtualization supports functions such as these:
  1. Availability
    • Minimized downtime for patching OSes and applications
    • Restart a crashed OS or server
  2. Scalability
    • More or different images as demand changes
  3. Isolation and compartmentalization
  4. Better hardware utilization
  5. Hardware abstraction for OSes
    • Support legacy platforms
Compare it to the list of operating system duties:
  1. Availability
    • Minimized downtime for patching applications
    • Restart crashed applications
  2. Scalability
    • More or different processes as demand changes
  3. Isolation and compartmentalization
    • Protected memory
    • Accounts, capabilities
  4. Better hardware utilization (with processes)
  5. Hardware abstraction for applications
The similarity suggests that virtualization solutions compete with operating systems. I now believe that part of their success must be because operating systems do not satisfy these needs well enough, setting aside the capability to run legacy or entirely different operating systems simultaneously. Typical operating systems lack security, reliability and ease of maintenance. They have drivers in kernel space; Windows Vista thankfully now has them in user space, and Linux is moving in that direction. The complexity is staggering, and it is reflected in the security guidance: hardening guides and "benchmarks" (essentially evaluations of configuration settings) are long and complex. The attempt to solve the federal IT maintenance and compliance problem created the SCAP and XCCDF standards, which are currently ambiguously specified, buggy and very complex. The result of all this is intensive, stressful and inefficient maintenance in an environment of numerous and unending vulnerability advisories and patches.

What it looks like is that we have sinking boats, so we're putting them inside a bigger, more powerful boat: virtualization. In reality, virtualization typically depends on another, full-blown operating system. VMware ESX Server runs its own OS with drivers. Xen and offerings based on it have a full, general-purpose OS in domain 0, in command and control of the VMM (notwithstanding disaggregation). Microsoft's Hyper-V requires a full-blown Windows operating system to run it. So what we're really doing is exchanging an untrusted OS for another one that we are supposed to trust more for some reason. That other OS also needs patches, configuration and maintenance. Now we have multiple OSes to maintain! What did we gain? We don't trust OSes, but we trust "virtualization" that depends on more OSes?
At least ESX is "only" 50 MB, simpler and smaller than the others, but its number of defects per MB of binary code, as measured by patches issued, is not convincing. I am no longer convinced that a virtualization solution plus a guest OS is significantly more secure or functional than a single well-designed OS could be, in theory. Defense in depth is good, but the extent of the spread of virtualization may be an admission that we don't trust operating systems enough to let them stand on their own. The practice of wiping and reinstalling an OS after an application or an account is compromised, or of deploying a new image by default, suggests that there is little trust in the depth provided by current OSes. As for ease of management and availability vs. patching, I don't see why operating systems could not be managed in a smart manner just like ESX is, migrating applications as necessary; ESX is an operating system anyway. I believe that all the special things a virtualization solution does for functionality and security, as well as the "new" opportunities being researched, could be done just as well by a trustworthy, properly designed OS; there may be a thesis or two in figuring out how to implement them back in an operating system.

What virtualization vendors are really offering is a clever way to smoothly replace one operating system with another. This may be how an OS monopoly could be dislodged, and it would perhaps explain the virtualization-unfriendly clauses in the licensing options for Vista: virtualization could become a threat to the dominance of Windows if application developers started coding for the underlying OS instead of the guest. Of course, even with a better OS we'd still need virtualization for testbeds like ReAssure, and for legacy applications. Perhaps ReAssure could help test new, better operating systems.

(This text is the essence of my presentation in the panel on virtualization at the 2008 CERIAS symposium.)

Related reading:
Heiser G et al. (2007) Towards trustworthy computing systems: Taking microkernels to the next level. ACM Operating Systems Review, 41.
Tanenbaum AS, Herder JN and Bos H (2006) Can we make operating systems reliable and secure? Computer, 39.

Open Source Outclassing Home Router Vendor’s Firmware

I've had an interesting new experience these last few months. I was faced with either returning a home wireless router yet again and trying a different model or brand, or trying an open source firmware replacement. If one is to believe the reviews on sites like Amazon and Newegg, all home wireless routers have significant flaws, so the return-and-exchange game could have gone on for a while. The second Linksys device I bought (the most expensive on the display!) had the QoS features I wanted, but it crashed every day and had to be rebooted, even with the latest vendor-provided firmware. It was hardly better than the Verizon-provided Westell modem, which sometimes had to be rebooted several times per day despite having simpler firmware. That was an indication of poor code quality, and quite likely security problems (beyond the obvious availability issues).

I then heard about DD-WRT, an alternative firmware released under the GPL. There are other alternative firmwares as well, but I chose this one simply because it supported the Linksys router; I'm not sure which of the alternatives is the best. For several months now, not only has the device demonstrated 100% availability with v.24 (RC5), but it also supports more advanced security features and is more polished. I expected difficulties because it is beta software, but had none.

Neither CERIAS nor I are endorsing DD-WRT, and I don't care whether my home router runs vendor-provided or open source firmware, as long as it is a trustworthy and reliable implementation of the features I want. Yet I am amazed that open source firmware has outclassed the firmware of an expensive (for a home router) model from a recognized and trusted brand. Perhaps home router vendors should give up their proprietary, low-quality development efforts and instead fund or otherwise contribute to projects like DD-WRT and install that as the default; a similar suggestion applies if software development is already outsourced. I believe it might save their customers a lot of grief and lower the return rates on their products.

Firefox’s Super Cookies

Given all the noise that was made about cookies and programs that look for "spy cookies", the silence about DOM storage is a little surprising. DOM storage allows web sites to store all kinds of information in a persistent manner on your computer, much like cookies but with greater capacity and efficiency. Another way web sites store information about you is Adobe's Flash local storage; this seems to be a highly popular option (e.g., YouTube stores statistics about you that way), and it's better known. Web applications such as pandora.com will even deny you access if you turn it off at the Flash management page. If you're curious, look at the contents of "~/.macromedia/Flash_Player/#SharedObjects/", but most of it is not human readable.

I wonder why DOM storage isn't used much after being available for a whole year; I haven't been able to find any web site or web application making use of it so far, besides a proof of concept for taking notes. Yet it probably will be (ab)used, given enough time. There is no user interface in Firefox for viewing this information, deleting it, or managing it in a meaningful way. All you can do is turn it on or off by going to the "about:config" URL, typing "storage" in the filter, and setting the value to true or false. Compare this to what you can do about cookies... I'm not suggesting that anyone worry about it, but I think that we should have more control over what is stored and how, and the curious or paranoid should be able to view and audit the contents without needing the tricks below. Flash local storage should also be auditable, but I haven't found a way to do it easily.

Auditing DOM storage. To find out what information web sites store on your computer using DOM storage (if any), you need to find where your Firefox profile is stored; in Linux, this would be "~/.mozilla/firefox/". There you should find a file named "webappsstore.sqlite".
To view the contents in human-readable form, install sqlite3; in Ubuntu you can use Synaptic to search for sqlite3 and install it. Then the command

echo 'select * from webappsstore;' | sqlite3 webappsstore.sqlite

will print contents such as (warning: there could potentially be a lot of data stored):

cerias.purdue.edu|test|asdfasdf|0|homes.cerias.purdue.edu

Other SQL commands can be used to delete specific entries, change them, or even add new ones. If you are a programmer, you should know better than to trust these values! They are no more secure than cookies.
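The same audit can be scripted with Python's standard-library sqlite3 module instead of the command-line tool, which makes it easy to filter or post-process the rows. This is a sketch: the table name comes from the command above, but anything beyond `SELECT *` (the number and meaning of the columns) is inferred from the sample row and may differ between Firefox versions.

```python
import sqlite3

def dump_dom_storage(db_path: str) -> list:
    """Return every row of Firefox's DOM storage database as tuples.

    Only the table name 'webappsstore' is taken from the sqlite3
    one-liner above; the column layout is whatever the file contains.
    """
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute("SELECT * FROM webappsstore").fetchall()
    finally:
        conn.close()

# Usage (profile path is an example; find yours under ~/.mozilla/firefox/):
# for row in dump_dom_storage("webappsstore.sqlite"):
#     print("|".join(str(col) for col in row))
```

Because sqlite3 ships with Python, this runs anywhere Python does, without installing the sqlite3 command-line package.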
