The Center for Education and Research in Information Assurance and Security (CERIAS)


CERIAS Blog


Cassandra Vulnerability Updates

The Cassandra system has been much more successful and long-lasting than I first imagined. Being inexperienced at the time, there were some things I got wrong, such as deleting inactive accounts (I stopped that very quickly, as it made many people unhappy or unwilling to use the service), or deleting accounts that bounced several emails (several years ago this was changed to simply invalidating the email address). Recently I improved it by adding GPG signatures: email notifications from Cassandra are now cryptographically signed. The public key is available on the standard public key servers, such as the MIT server.

Things can still be improved

I initially envisioned profiles as being updated regularly, perhaps with automated tools listing the applications installed on a system. I also thought that there were many applications without vulnerability entries in MITRE's CVE, the National Vulnerability Database (NVD, formerly named ICAT), or Secunia, so I needed to let people enter product and vendor names that weren't linked to any vulnerabilities. However, I found that there was little correlation between the names of products in these sources, as well as between those provided by scanning tools or entered manually by users. ICAT in particular used to be quite bad about using inconsistent or misspelled names. Secunia does not separate vendor names from products and uses different names than the NVD, so Cassandra has to guess which is which based on already-known vendor and product names. Because of this, Secunia entries may need reparsing when new names are learned. So users could get a false sense of security by entering the names of the products they use, but never be notified because of a mismatch! On top of that, bad names are listed by the autocomplete feature, so users can be misled by someone else's mistakes or misfortune.

A Cassandra feature that helped somewhat with this problem was the notion of canonical and variant names. All variants point to a single canonical name for a vendor or a product. However, these need to be entered manually and maintained over time, so I didn't enter many.

It gets worse. Profiles are quite static in practice, and this leads to other problems. Companies merge, get bought, or otherwise change names. Sometimes companies also decide to change the names of their products for other reasons, or names are changed in the NVD. So profiles can silently drift off-course and not give the alerts needed. All these factors result in product or vendor names in Cassandra that don't point to any vulnerability entries. I call these names "orphans"; I recently realized that Cassandra contained hundreds of orphaned names.

And they will be improved

I am planning on implementing two new features in Cassandra: profile auto-correction and product name vetting.
  • Auto-correction: If Cassandra recognizes a name change in the NVD or Secunia, or if it changes the way it recognizes vendor names from products in Secunia, it will attempt to change matching entries in your profiles.
  • Vetting: all the product names in Cassandra will be verified to point to at least one entry in the NVD or Secunia; those that don't and can't be updated will get deleted. This means that when you create a new profile, Cassandra won't suggest an "orphaned" name. If your profile contains an orphaned name that gets deleted, you should receive an email if you have email notifications turned on.
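The vetting pass amounts to a reverse-index check: every product name in a profile should map to at least one vulnerability entry, and any name that maps to none is an orphan. A minimal sketch of that idea in Python (the function and data structures here are illustrative assumptions, not Cassandra's actual code):

```python
def find_orphans(profile_names, vuln_index):
    """Return profile product names with no known vulnerability entries.

    profile_names: iterable of product names taken from user profiles.
    vuln_index: dict mapping product name -> set of vulnerability IDs
                (e.g., CVE identifiers) known for that product.
    A name that is missing from the index, or maps to an empty set,
    is an "orphan" and should be flagged for correction or deletion.
    """
    return sorted(
        name for name in profile_names
        if not vuln_index.get(name)  # missing key or empty set -> orphan
    )

# Hypothetical example data: "acme widget" has no entries, so it is flagged.
index = {
    "apache http_server": {"CVE-2007-1863"},
    "openssh": {"CVE-2006-5051"},
    "acme widget": set(),
}
orphans = find_orphans(["openssh", "acme widget"], index)
```

A real implementation would run this against all three data sources (CVE, NVD, Secunia) before deleting anything, since a name absent from one source may still appear in another.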
Note that you should still verify your profiles periodically, because Cassandra will not detect all name changes -- this is difficult because a name change may look just like a new product. If you have a product name that isn't in Cassandra, I suggest using the keywords feature. Cassandra will then search the title and description of the entries to find matches (note to self: perhaps keywords should search product and vendor names as well -- that would help catch all variants of a name. Also, consider using string matching algorithms to recognize names).
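The string-matching idea in that note can be sketched with Python's standard difflib module, which provides approximate matching out of the box. This is only an illustration of the approach, not Cassandra's implementation, and the canonical-name table below is invented:

```python
import difflib

# Hypothetical variant -> canonical mappings, of the kind Cassandra
# maintains manually for vendors and products.
CANONICAL = {
    "ms windows": "microsoft windows",
    "microsoft windows": "microsoft windows",
    "apache httpd": "apache http_server",
    "apache http_server": "apache http_server",
}

def canonicalize(name, cutoff=0.8):
    """Map a user-entered name to a canonical name, tolerating typos.

    Returns None when nothing is close enough, so the caller can fall
    back to a keyword search instead of silently matching the wrong
    product (the false-sense-of-security problem described above).
    """
    key = name.lower().strip()
    if key in CANONICAL:
        return CANONICAL[key]
    close = difflib.get_close_matches(key, CANONICAL.keys(), n=1, cutoff=cutoff)
    return CANONICAL[close[0]] if close else None
```

For example, "apache http server" (space instead of underscore) would still resolve to the canonical "apache http_server", while an unrelated string returns None. The cutoff is the key design choice: too low and users get matched to the wrong product, too high and minor spelling variants are missed.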

Fun video

[tags]the Internet[/tags]
Satire is sometimes a great way to get a point across. Or multiple points. I think this little clip is incredibly funny and probably insightful.

Items in the news

[tags]news, cell phones, reports, security vulnerabilities, hacking, computer crime, research priorities, forensics, wiretaps[/tags]
The Greek Cell Phone Incident
A great story involving computers and software, even though the main hack was against cell phones:
IEEE Spectrum: The Athens Affair. From this we can learn all sorts of lessons about how to conduct a forensic investigation, retention of logs, wiretapping of phones, and more.

Now, imagine VoIP and 802.11 networking and vulnerabilities in routers and.... -- the possibilities get even more interesting. I suspect that there's a lot more eavesdropping going on than most of us imagine, and certainly more than we discover.

NRC Report Released
Last week, the National Research Council announced the release of a new report: Toward a Safer and More Secure Cyberspace. The report is notable in a number of ways, and should be read carefully by anyone interested in cyber security. I think the authors did a great job with the material, and they listened to input from many sources.

There are 2 items I specifically wish to note:

  1. I really dislike the “Cyber Security Bill of Rights” listed in the report. It isn't that I dislike the goals they represent -- those are great. The problem is that I dislike the “bill of rights” notion attached to them. After all, who says they are “rights”? By what provenance are they granted? And to what extremes do we go to enforce them? I believe the goals are sound, and we should definitely work towards them, but let's not call them “rights.”
  2. Check out Appendix B. Note all the other studies that have been done in recent years pointing out that we are in for greater and greater problems unless we start making some changes. I've been involved with several of those efforts as an author -- including the PITAC report, the Infosec Research Council Hard Problems list, and the CRA Grand Challenges. Maybe the fact that I had no hand in authoring this report means it will be taken seriously, unlike all the rest. :-) More to the point, people who put off the pain and expense of trying to fix things because “Nothing really terrible has happened yet” do not understand history, human nature, or the increasing drag on the economy and privacy from current problems. The trends are fairly clear in this report: things are not getting better.

Evolution of Computer Crime
Speaking of my alleged expertise at augury, I noted something in the news recently that confirmed a prediction I made nearly 8 years ago at a couple of invited talks: that online criminals would begin to compete for “turf.” The evolution of online crime is such that the “neighborhood” where criminals operate overlaps with others. If you want the exclusive racket on phishing, DDOS extortion, and other such criminal behavior, you need to eliminate (or absorb) the competition in your neighborhood. But what does that imply when your “turf” is the world-wide Internet?

The next step is seeing some of this spill over into the physical world. Some of the criminal element online is backed up by more traditional organized crime in “meat space.” They will have no compunction about threatening -- or disabling -- the competition if they locate them in the real world. And they may well do that because they also have developed sources inside law enforcement agencies and they have financial resources at their disposal. I haven't seen this reported in the news (yet), but I imagine it happening within the next 2-3 years.

Of course, 8 years ago, most of my audiences didn't believe that we'd see significant crime on the net -- they didn't see the possibility. They were more worried about casual hacking and virus writing. As I said above, however, one only needs to study human nature and history, and the inevitability of some things becomes clear, even if the mechanisms aren't yet apparent.

The Irony Department
GAO reported a little over a week ago that DHS had over 800 attacks on their computers in two years. I note that the report is of detected attacks. I had one top person in DC (who will remain nameless) refer to DHS as “A train wreck crossed with a nightmare, run by inexperienced political hacks” when referring to things like TSA, the DHS cyber operations, and other notable problems. For years I (and many others) have been telling people in government that they need to set an example for the rest of the country when it comes to cyber security. It seems they've been listening, and we've been negligent. From now on, we need to stress that they need to set a good example.

[posted with ecto]

Diversity

[tags]diversity, complexity, monocultures[/tags]
In my last post, I wrote about the problems brought about by complexity. Clearly, one should not take the mantra of “simplification” too far, and end up with a situation where everything is uniform, simple, and (perhaps) inefficient. In particular, simplification shouldn't be taken to the point where diversity is sacrificed for simple uniformity.


Nature penalizes monocultures in biological systems. Monocultures are devastated by disease and predators because they have insufficient diversity to resist. The Irish potato famine, the emerald ash borer, and the decimation of the Aztecs by smallpox are all examples of what happens when diversity is not present. Nature promotes diversity to ensure a robust population.

We all practice diversity in our everyday lives. Diversity of motor vehicles, for instance, supports fitness for purpose -- a Camaro is not useful for hauling dozens of large boxes of materials. For that, we use a truck. However, for one person to get from point A to point B in an economical fashion, a truck is not the best choice. It might be cheaper and require less training to use the same vehicle for everything, but there are advantages to diversity. Or tableware -- we have (perhaps) too many fork and spoon types in a formal place setting, but try eating soup with a fork and you discover that some differentiation is useful!

In computing, competition has resulted in advances in hardware and software design. Choice among products has kept different approaches moving forward. Competition for research awards from DARPA and NSF has encouraged deeper thought and more focused proposals (and resultant development). Diversity in operating systems and programming languages brought many advancements in the era 1950-2000. However, expenses and attempts to cut staff have led to widespread homogenization of OS, applications, and languages over approximately the last decade.

Despite the many clear benefits of promoting diversity, too many organizations have adopted practices that prevent diversity of software and computing platforms. The OMB/DoD Common Desktop initiative, for example, steers personnel towards a monoculture that is more maintainable day-to-day, but which is probably more vulnerable to zero-day attacks and malware.

Disadvantages of homogeneity:

  • greater susceptibility to zero-day vulnerabilities and attacks
  • “box canyon” effect of being locked into a vendor for future releases
  • reduced competition to improve quality
  • reduced competition to reduce price and/or improve services
  • reduced number of algorithms and approaches that may be explored
  • reduced fitness for particular tasks
  • simpler for adversaries to map and understand networks and computer use
  • increased likelihood that users will install unauthorized software/hardware from outside sources


Advantages of homogeneity:

  • larger volume for purchases
  • only one form of tool, training, etc needed for support
  • better chance of compatible upgrade path
  • interchangeability of users and admins
  • more opportunities for reuse of systems

Disadvantages of heterogeneity:

  • more complexity so possibly more vulnerabilities
  • may not be as interoperable
  • may require more training to administer
  • may not be reusable to the same extent as homogeneous systems


Advantages of heterogeneity:

  • when at a sufficient level, greater resistance to malware
  • highly unlikely that all systems will be vulnerable to a single new attack
  • increased competition among vendors to improve price, quality and performance
  • greater choice of algorithms and tools for particular tasks
  • more emphasis on standards for interoperability
  • greater likelihood of customization and optimization for particular tasking
  • greater capability for replacement systems if a vendor discontinues a product or support


Reviewing the above lists makes clear that entities concerned with self-continuation and operation will promote diversity, despite some extra expense and effort. The potential disadvantages of diversity are all things that can be countered with planning or budget. The downsides of monocultures, however, cannot be so easily addressed.

Dan Geer recently wrote an interesting article about diversity for Queue magazine. It is worth a read.

The simplified conclusion: diversity is good to have.

Complexity, virtualization, security, and an old approach

[tags]complexity,security,virtualization,microkernels[/tags]
One of the key properties that works against strong security is complexity. Complexity poses problems in a number of ways. The more complexity in an operating system, for instance, the more difficult it is for those writing and maintaining it to understand how it will behave under extreme circumstances. Complexity makes it difficult to understand what is needed, and thus to write fault-free code. Complex systems are more difficult to test and prove properties about. Complex systems are more difficult to properly patch when faults are found, usually because of the difficulty in ensuring that there are no side-effects. Complex systems can have backdoors and trojan code implanted that are more difficult to find. Complex operations tend to have more failure modes. Complex operations may also have longer windows where race conditions can be exploited. Complex code also tends to be bigger than simple code, and that means more opportunity for accidents, omissions, and manifestation of code errors.

Simply put, complexity creates problems.

Saltzer and Schroeder identified this in their 1975 paper, “The Protection of Information in Computer Systems.” They listed “economy of mechanism” as their first design principle for secure systems.

Some of the biggest problems we have now in security (and arguably, computing) are caused by “feature creep” as we continue to expand systems to add new features. Yes, those new features add new capabilities, but often the additions are foisted off on everyone whether they want them or not. Thus, everyone has to suffer the consequences of the next expanded release of Linux, Windows (Vista), Oracle, and so on. Many of the new features are there as legitimate improvements for everyone, but some are of interest to only a minority of users, and others are simply there because the designers thought they might be nifty. And besides, why would someone upgrade unless there were lots of new features?

Of course, this has secondary effects on complexity in addition to the obvious complexity of a system with new features. One example has to do with backwards compatibility. Because customers are unlikely to upgrade to the new, improved product if it means they have to throw out their old applications and data, the software producers need to provide extra code for compatibility with legacy systems. This is not often straight-forward -- it adds new complexity.

Another form of complexity has to do with hardware changes. The increase in software complexity has been one motivating factor for hardware designers, and has been for quite some time. Back in the 1960s when systems began to support time sharing, virtual memory became a necessity, and the hardware mechanisms for page and segment tables needed to be designed into systems to maintain reasonable performance. Now we have systems with more and more processes running in the background to support the extra complexity of our systems, so designers are adding extra processing cores and support for process scheduling.

Yet another form of complexity is involved with the user interface. The typical user (and especially the support personnel) now have to master many new options and features, and understand all of their interactions. This is increasingly difficult for someone of even above-average ability. It is no wonder that the average home user has myriad problems using their systems!

Of course, the security implications of all this complexity have been obvious for some time. Rather than address the problem head-on by reducing the complexity and changing development methods (e.g., use safer tools and systems, with more formal design), we have recently seen a trend towards virtualization. The idea is that we confine our systems (operating systems, web services, database, etc) in a virtual environment supported by an underlying hypervisor. If the code breaks...or someone breaks it...the virtualization contains the problems. At least, in theory. And now we have vendors providing chipsets with even more complicated instruction sets to support the approach. But this is simply adding yet more complexity. And that can't be good in the long run. Already attacks have been formulated to take advantage of these added “features.”

We lose many things as we make systems more complex. Besides security and correctness, we also end up paying for resources we don't use. And we are also paying for power and cooling for chips that are probably more powerful than we really need. If our software systems weren't doing so much, we wouldn't need quite so much power “under the hood” in the hardware.

Although one example is hardly proof of this general proposition, consider the results presented in 86 Mac Plus Vs. 07 AMD DualCore. A 21-year old system beat a current top-of-the-line system on the majority of a set of operations that a typical user might perform during a work session. On your current system, do a “ps” or run the task manager. How many of those processes are really contributing to the tasks you want to carry out? Look at the memory in use -- how much of what is in use is really needed for the tasks you want to carry out?

Perhaps I can be accused of being a reactionary (a nice word meaning “old fart”), but I remember running Unix in 32K of memory. I wrote my first full-fledged operating system with processes, a file system, network and communication drivers, all in 40K. I remember the community's efforts in the 1980s and early 1990s to build microkernels. I remember the concept of RISC having a profound impact on the field as people saw how much faster a chip could be if it didn't need to support complexity in the instruction set. How did we get from there to here?

Perhaps the time is nearly right to have another revolution of minimalism. We have people developing low-power chips and tiny operating systems for sensor-based applications. Perhaps they can show the rest of us some old ideas made new.

And for security? Well, I've been trying for several years to build a system (Poly^2) that minimizes the OS to provide increased security. To date, I haven't had much luck in getting sufficient funding to really construct a proper prototype; I currently have some funding from NSF to build a minimal version, but the funding won't allow anything close to a real implementation. What I'm trying to show is too contrary to conventional wisdom. It isn't of interest to the software or hardware vendors because it is so contrary to their business models, and the idea is so foreign to most of the reviewers at funding agencies, who are used to building ever more complex systems.

Imagine a system with several dozen (or hundred) processor cores. Do we need process scheduling and switching support if we have a core for each active process? Do we need virtual memory support if we have a few gigabytes of memory available per core? Back in the 1960s we couldn't imagine such a system, and no nation or company could afford to build one. But now that wouldn't even be particularly expensive compared to many modern systems. How much simpler, faster, and more secure would such a system be? In 5 years we may be able to buy such a system on a single chip -- will we be ready to use it, or will we still be chasing 200 million line operating systems down virtual rat holes?

So, I challenge my (few) readers to think about minimalism. If we reduce the complexity of our systems what might we accomplish? What might we achieve if we threw out the current designs and started over from a new beginning and with our current knowledge and capabilities?

[Small typo fixed 6/21 -- thanks cfr]

Copyright © 2007 by E. H. Spafford
[posted with ecto]
