One of the key properties that works against strong security is complexity, and it poses problems in a number of ways. The more complex an operating system, for instance, the more difficult it is for those writing and maintaining it to understand how it will behave under extreme circumstances. Complexity makes it difficult to understand what is needed, and thus to write fault-free code. Complex systems are harder to test and to prove properties about, and harder to patch properly when faults are found, usually because of the difficulty of ensuring that a patch has no side-effects. Backdoors and trojan code are easier to implant in them and harder to find. Complex operations tend to have more failure modes, and may have longer windows during which race conditions can be exploited. Complex code also tends to be bigger than simple code, and that means more opportunity for accidents, omissions, and manifestations of coding errors.
Simply put, complexity creates problems.
Saltzer and Schroeder identified this in their classic 1975 paper in the Proceedings of the IEEE, where “economy of mechanism” is listed first among their design principles for secure systems.
Some of the biggest problems we have now in security (and arguably, computing) are caused by “feature creep” as we continue to expand systems to add new features. Yes, those new features add new capabilities, but often the additions are foisted off on everyone whether they want them or not. Thus, everyone has to suffer the consequences of the next expanded release of Linux, Windows (Vista), Oracle, and so on. Many of the new features are there as legitimate improvements for everyone, but some are of interest to only a minority of users, and others are simply there because the designers thought they might be nifty. And besides, why would someone upgrade unless there were lots of new features?
Of course, this has secondary effects on complexity in addition to the obvious complexity of a system with new features. One example has to do with backwards compatibility. Because customers are unlikely to upgrade to the new, improved product if it means they have to throw out their old applications and data, the software producers need to provide extra code for compatibility with legacy systems. This is not often straightforward—it adds new complexity.
Another form of complexity has to do with hardware changes. Increasing software complexity has long been a motivating factor for hardware designers. Back in the 1960s, when systems began to support time sharing, virtual memory became a necessity, and hardware mechanisms for page and segment tables had to be designed into systems to maintain reasonable performance. Now we have systems with more and more processes running in the background to support the extra complexity of our software, so designers are adding extra processing cores and support for process scheduling.
Yet another form of complexity is involved with the user interface. The typical user (and especially the support personnel) now has to master many new options and features, and understand all of their interactions. This is increasingly difficult even for someone of above-average ability. It is no wonder that the average home user has myriad problems using their systems!
Of course, the security implications of all this complexity have been obvious for some time. Rather than address the problem head-on by reducing the complexity and changing development methods (e.g., using safer tools and systems, with more formal design), we have recently seen a trend towards virtualization. The idea is that we confine our systems (operating systems, web services, databases, etc.) in a virtual environment supported by an underlying hypervisor. If the code breaks…or someone breaks it…the virtualization contains the problems. At least, in theory. And now we have vendors providing chipsets with even more complicated instruction sets to support the approach. But this is simply adding yet more complexity. And that can’t be good in the long run. Already attacks have been formulated to take advantage of these added “features.”
We lose many things as we make systems more complex. Besides security and correctness, we also end up paying for resources we don’t use. And we are also paying for power and cooling for chips that are probably more powerful than we really need. If our software systems weren’t doing so much, we wouldn’t need quite so much power “under the hood” in the hardware.
Although one example is hardly proof of this general proposition, consider the results presented in “86 Mac Plus Vs. 07 AMD DualCore.” A 21-year-old system beat a current top-of-the-line system on the majority of a set of operations that a typical user might perform during a work session. On your current system, do a “ps” or run the task manager. How many of those processes are really contributing to the tasks you want to carry out? Look at the memory in use—how much of what is in use is really needed for the tasks you want to carry out?
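The question above can be asked programmatically. Here is a rough, Unix-only sketch (the function name is my own; `ps -eo rss=` prints each process’s resident set size in KiB, with no header, on Linux and most BSD-derived systems):

```python
import subprocess

def process_summary():
    """Count running processes and sum their resident memory (KiB) from ps output."""
    # -e: every process; -o rss=: resident set size in KiB, header suppressed
    out = subprocess.run(["ps", "-eo", "rss="],
                         capture_output=True, text=True, check=True).stdout
    sizes = [int(tok) for tok in out.split() if tok.isdigit()]
    return len(sizes), sum(sizes)

if __name__ == "__main__":
    count, rss_kib = process_summary()
    print(f"{count} processes using roughly {rss_kib / 1024:.0f} MiB resident")
```

Run it on an otherwise idle machine and ask the same question: how many of those processes are actually doing something you asked for?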
Perhaps I can be accused of being a reactionary (a nice word meaning “old fart”), but I remember running Unix in 32K of memory. I wrote my first full-fledged operating system with processes, a file system, network and communication drivers, all in 40K. I remember the community’s efforts in the 1980s and early 1990s to build microkernels. I remember the concept of RISC having a profound impact on the field as people saw how much faster a chip could be if it didn’t need to support complexity in the instruction set. How did we get from there to here?
Perhaps the time is nearly right to have another revolution of minimalism. We have people developing low-power chips and tiny operating systems for sensor-based applications. Perhaps they can show the rest of us some old ideas made new.
And for security? Well, I’ve been trying for several years to build a system (Poly^2) that minimizes the OS to provide increased security. To date, I haven’t had much luck in getting sufficient funding to really construct a proper prototype; I currently have some funding from NSF to build a minimal version, but the funding won’t allow anything close to a real implementation. What I’m trying to show is too contrary to conventional wisdom. It isn’t of interest to the software or hardware vendors because it is so contrary to their business models, and the idea is so foreign to most of the reviewers at funding agencies, who are used to building ever more complex systems.
Imagine a system with several dozen (or hundred) processor cores. Do we need process scheduling and switching support if we have a core for each active process? Do we need virtual memory support if we have a few gigabytes of memory available per core? Back in the 1960s we couldn’t imagine such a system, and no nation or company could afford to build one. But now that wouldn’t even be particularly expensive compared to many modern systems. How much simpler, faster, and more secure would such a system be? In 5 years we may be able to buy such a system on a single chip—will we be ready to use it, or will we still be chasing 200 million line operating systems down virtual rat holes?
So, I challenge my (few) readers to think about minimalism. If we reduce the complexity of our systems, what might we accomplish? What might we achieve if we threw out the current designs and started over from scratch with our current knowledge and capabilities?
Copyright © 2007 by E. H. Spafford
[posted with ecto]
[tags]phishing, web redirection[/tags]
Jim Horning suggested a topic to me a few weeks ago as a result of some email I sent him.
First, as background, consider that phishing and related frauds are increasingly frequent criminal activities on the WWW. The basic mechanism is to fool someone into visiting a WWW page that looks like it belongs to a legitimate organization with which the user does business. The page has fields requesting sensitive information from the user, which is then used by the criminals to commit credit card fraud, bank fraud or identity theft.
Increasingly, we have seen that phishing email and sites are also set up to insert malware into susceptible hosts. IE on Windows is the prime target, but attacks are out there for many different browsers and systems. The malware that is dropped can be bot clients, screen scrapers (to capture keystrokes at legitimate pages), and HTML injectors (to modify legitimate pages to ask for additional information). It is important to try to keep from getting any of this malware onto your system. One aspect of this is to be careful clicking on URLs in your email, even if they seem to come from trusted sources, because email can be spoofed and mail can be sent by bots on known machines.
How do you check a URL? Well, there are some programs that help, but the low-tech way is to look at the raw text of a URL before you visit it, to ensure that it references the site and domain you expected.
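That low-tech check can be automated in a few lines. A minimal sketch (the function name and the label-boundary rule are my own illustration, not a complete anti-phishing test):

```python
from urllib.parse import urlparse

def matches_expected_domain(url, expected_domain):
    """Return True if the URL's hostname is the expected domain or a subdomain of it."""
    host = (urlparse(url).hostname or "").lower()
    expected = expected_domain.lower()
    # "paypal.com.evil.example" must NOT match "paypal.com", so require either an
    # exact match or a suffix match on a dot (label) boundary.
    return host == expected or host.endswith("." + expected)
```

For example, `matches_expected_domain("http://www.example.com/login", "example.com")` is True, while the classic phishing trick `matches_expected_domain("http://example.com.evil.net/login", "example.com")` is False. It checks only the hostname; it says nothing about what the page will do once you get there.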
But consider the case of short-cut URLs. There are many sites out there offering variations on this concept, with the two I have seen used most often being “TinyURL” and “SnipURL”. The idea is that if you have a very long URL that may get broken when sent in email, or that is simply too difficult to remember, you submit it to one of these services and you get a shortened URL back. With some services, you can even suggest a nickname. So, for example, short links to the top level of my blog are <http://tinyurl.com/2geym5>, <http://snipurl.com/1ms17> and <http://snurl.com/spafblog>.
So far, this is really helpful. As someone who has had URLs mangled in email, I like this functionality.
But now, let’s look at the dark side. If Jim gets email that looks like it is from me, with a message that says “Hey Jim, get a load of this!” and one of these short URLs, he cannot tell by looking at the URL whether it points somewhere safe or not. If he visits it, it could be a site that is dangerous to visit (well, most URLs I send out are dangerous in one way or another, but I mean dangerous to his computer). The folks at TinyURL have tried to address this by adding a feature so that if you visit <http://preview.tinyurl.com/2geym5> you will get a preview of what the URL resolves to; you can set this (with cookies) as your default. That helps some.
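A preview of this sort can also be done locally: ask the shortener where it redirects, without following the redirect. A standard-library sketch (the function name is mine; it assumes the service answers a HEAD request with a 3xx status and a Location header, which not every shortener does, and it ignores query strings and redirect chains):

```python
import http.client
from urllib.parse import urlparse

def peek_redirect(short_url):
    """Return the Location header a short URL redirects to, without visiting the target."""
    parts = urlparse(short_url)
    conn_cls = (http.client.HTTPSConnection if parts.scheme == "https"
                else http.client.HTTPConnection)
    conn = conn_cls(parts.netloc, timeout=10)
    try:
        conn.request("HEAD", parts.path or "/")
        resp = conn.getresponse()
        if 300 <= resp.status < 400:
            return resp.getheader("Location")
        return None  # not a redirect
    finally:
        conn.close()
```

Note that this only reveals the first hop; a hostile redirector could still behave differently for a HEAD probe than for a real browser visit, which is rather the point of this post.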
But now step deeper into paranoia. Suppose one of these sites was founded by fraudsters with the intent of luring people into using it. Or the site gets acquired by fraudsters, or hijacked. The code could be changed so that every time someone visits one of these URLs, some code at the redirect site determines the requesting browser, captures some information about the end system, then injects some malicious JavaScript or ActiveX before passing the connection to the “real” site. Done correctly, this would result in largely transparent compromise of the user system. According to the SnipURL statistics page, as of midnight on May 30th there have been nearly a billion clicks on their shortened URLs. That’s a lot of potential compromises!
Of course, one of the factors to make this kind of attack work is for the victim to be running a vulnerable browser. Unfortunately, there have been many vulnerabilities found for both IE and Firefox, as well as some of the less well-known browsers. With users seeking more functionality in their browsers, and web designers seeking more latitude in what they deliver, we are likely to continue to see browser exploits. Thus, there is likely to be enough of a vulnerable population to make this worthwhile. (And what browser are you using to read this with?)
I should make it clear that I am not suggesting that any of these services really are being used maliciously or for purposes of fraud. I am a happy and frequent user of both TinyURL and SnipURL myself. I have no reason to suspect anything untoward from those sites, and I certainly don’t mean to suggest anything sinister. (But note that neither can I offer any assurances about their motives, coding, or conduct.) Caveat emptor.
This post is simply intended as commentary on security practices. Thinking about security means looking more deeply into possible attack vectors. And one of the best ways to commit such attacks is to habituate people into believing something is safe, then exploiting that implicit trust relationship for bad purposes.
Hmm, reminds me of a woman I used to date. She wasn’t what she appeared, either…. But that’s a story for a different post.
In my last post, I ranted about a government site making documents available only in Word. A few people have said to me “Get over it—use OpenOffice instead of the Microsoft products.” The problem is that those are potentially dangerous too—there is too much functionality (some of it may be undocumented, too) in Word (and Office) documents.
Now, we have a virus specific to OpenOffice. We’ve had viruses that run in emulators, too. Trying to be compatible with something fundamentally flawed is not a security solution. That’s also my objection to virtualization as a “solution” to malware.
I don’t mean to be unduly pejorative, but as the saying goes, even if you put lipstick on a pig, it is still a pig.
Word and the other Office components are useful programs, but if MS really cared about security, they would include a transport encoding that didn’t include macros and potentially executable attachments—and encourage its use! RTF is probably that encoding for text documents, but it is not obvious to the average user that it should be used instead of .doc format for exchanging files. And what is there for Excel, Powerpoint, etc?
Earlier, I wrote about the security risks of using Microsoft Word documents as a presentation and encoding format for sending files via email (see posts here and here). Files in “.doc” format contain macros, among other things, that could be executable. They also have metadata fields that might give away sensitive information, and a lot of undocumented cruft that may be used in the process of exploiting security. It is no wonder that exotic exploits are showing up for Word documents. And only today it was revealed that the latest version of Office 2007 may not have even gotten the most recent patch set.
Want to find some vulnerabilities in Word? Then take a look at the list of US-CERT alerts on that software; my search returns almost 400 hits. Some of these are not yet patched, and there are likely many as-yet unpatched flaws still in there.
Clearly, the use of Word as a document exchange medium is Bad (that’s with a definite capital B). People who understand good security practices do not exchange Word files unless they are doing collaborative editing, and even then it is better to use RTF (if one continues to be beholden to Microsoft formats). Good security hygiene means warning others, and setting a good example.
Now, consider that DHS has released BAA07-09 to solicit research and prototypes to get fixes for current cyber infrastructure vulnerabilities. I could rant about how they claim it is for R&D but is really a BAA for further product development for fundamentally flawed software that cannot be fixed. But that isn’t the worst part. No, the BAA is only available as Word documents!
Update: A response from Dr. Douglas Maughan at DHS points out that the site I indicated for the BAA is actually FedBizOpps rather than DHS. The DHS posting site actually has it in PDF…although the FedBizOpps link is the one I’ve seen in several articles (and in a posting in SANS NewsBites).
Of course, it would be great if DHS could get the folks at FedBizOpps to clean up their act, but at least in this case, DHS—or rather, HSARPA—got it right. I stand corrected.
One of our students who works in biometrics passed along two interesting article links. This article describes how a password-protected, supposedly very secure USB memory stick was almost trivially hacked. This second article by the same author describes how a USB stick protected by a biometric was also trivially hacked. I’m not in a position to recreate the procedure described on those pages, so I can’t say for certain that the reality is as presented. (NB: simply because something is on the WWW doesn’t mean it is true, accurate, or complete. The rumor earlier this week about a delay in the iPhone release is a good example.) However, the details certainly ring true.
We have a lot of people who are “security experts” or who are marketing security-related products who really don’t understand what security is all about. Security is about reducing risk of untoward events in a given system. To make this work, one needs to actually understand all the risks, the likelihood of them occurring, and the resultant losses. Securing one component against obvious attacks is not sufficient. Furthermore, failing to think about relatively trivial physical attacks is a huge loophole—theft, loss or damage of devices is simple, and the skills to disassemble something to get at the components inside are certainly not a restricted “black art.” Consider the rash of losses and thefts of disks (and enclosing laptops) we have seen over the last year or two, with this one being one of the most recent.
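That likelihood-times-loss framing is simple arithmetic, and a toy calculation shows why it matters: a mundane physical threat with a high rate can dominate an exotic technical one. Every figure below is invented purely for illustration:

```python
# Hypothetical annualized-loss sketch: all rates and dollar figures are made up.
threats = {
    # threat: (expected events per year, loss per event in dollars)
    "remote break of the encryption":   (0.01, 500_000),
    "laptop or USB stick lost/stolen":  (0.50, 100_000),
    "device disassembled for its data": (0.10, 100_000),
}

def annualized_loss(table):
    """Expected yearly loss: sum of rate * impact over all threats."""
    return sum(rate * impact for rate, impact in table.values())

for name, (rate, impact) in sorted(threats.items(),
                                   key=lambda kv: -kv[1][0] * kv[1][1]):
    print(f"{name}: ${rate * impact:,.0f}/yr expected loss")
print(f"total: ${annualized_loss(threats):,.0f}/yr")
```

With these (fabricated) numbers, the lost laptop outweighs the cryptographic attack by a factor of ten, which is exactly the kind of imbalance the vendors in the linked articles ignored.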
Good security takes into account people, events, environment, and the physical world. Poor security is usually easy to circumvent by attacking one of those avenues. Despite publicity to the contrary, not all security problems are caused by weak encryption and buffer overflows!
[tags]Google, spam, 419[/tags]
I recently blogged about some unsolicited email I received from a recruiter at Google. Much to my surprise, I was shortly thereafter contacted by two senior executives at Google (both of whom I know). Each apologized for the contact I had received; one assured me he would put in a positive recommendation if I wanted that sys admin position.
I have been assured that there will be some re-examination made of how these contacts are made. So, score one for my blog changing the world! Or something like it.
Today I received email from a google.com address. The sender said he had found me by doing a search on the WWW. He indicated he hoped I wasn’t offended by his sending unsolicited email. However, he had a great offer for me, one that I was uniquely qualified for, and then offered a couple of URLs.
Does that sound familiar?
My first thought was that it was a 419 scam (the usual “I am the son of the crown prince of Nigeria…” letters). However, after checking out the mail headers and the enclosed URLs, it appears to be a (semi) legit letter from a Google recruiter. He was asking if I was open to considering a new, exciting position with Google.
And what exciting new position does the Google recruiter think I’m ideally suited for? Starting system administrator….
And by the way, sending email to “email@example.com” gets an automated response that states, in no uncertain terms, that Google never sends spam and that I should take my complaints elsewhere.
Gee, think this is a new career possibility for me?
[tags]cyber security research, PITAC[/tags]
I strongly urge you to read Jim Horning’s blog entry about a recent Congressional hearing on cyber security research—his blog is Nothing is as simple as we hope it will be. (Jim posts lots of interesting items—you should add his blog to your list.)
I have been visiting Federal offices and speaking before Congress for almost 20 years, trying to raise awareness of the importance of addressing information security research. More recently, I was a member of the President’s Information Technology Advisory Committee (PITAC). We studied the current funding of cybersecurity research and the magnitude of the problem. Not only was our report largely ignored by both Congress and the President, but the PITAC itself was disbanded. For whatever reason, the current Administration is markedly unsupportive of cyber security research, and might even be classed as hostile to those who draw attention to this lack of support.
Of course, there are many other such reports from other august groups that state basically the same as the PITAC report. No matter who has issued the reports, Congress and the Executive Branch have largely failed to address the issues.
Thus, it is heartening to read of Chairman Langevin’s comments. However, I’m not going to get my hopes up.
Be sure to also read Dan Geer’s written testimony. It touches on many of the same themes he has spoken about in recent years, including his closing keynote at our annual CERIAS Security Symposium (save the dates—March 19 & 20, 2008—for the next symposium).
[tags]Windows, MacOS, security flaws, patches, press coverage[/tags]
There’s been a lot of froth in the press about a vulnerability discovered in a “Hack the Mac” contest conducted recently. (Example stories here and here.) I’m not really sure where this mini-hysteria is coming from—there isn’t really anything shocking here.
First of all, people shouldn’t be surprised that there are security flaws in Apple products. After all, those are complex software artifacts, and the more code and functionality present, the more likely it is that flaws will be present—including serious flaws leading to security problems. Unless special care is taken in design and construction (not evident in any widely-used system), vulnerabilities are likely to be present.
Given that, the discovery of one serious flaw doesn’t necessarily mean there are hundreds more lurking beneath the surface and that MacOS X is as bad (or worse) than some other systems. Those bloggers and journalists who have some vulture genomes seem particularly prone to making sweeping announcements after each Apple-based flaw (and each Linux bug) is disclosed or a story about vulnerabilities is published. Yes, there are some problems, and there are undoubtedly more yet to be found. That doesn’t mean that those systems are inherently dangerous or even as buggy and difficult to protect as, for example, Windows XP. Drawing such conclusions based on one or two data points is not appropriate; these same people should likewise conclude that eating at restaurants anywhere in the US is dangerous because someone got food poisoning at a roadside stand in Mexico last year!
To date, there appear to be fewer flaws in Apple products than we have seen in some other software. Apple MacOS X is built on a sturdy base (BSD Unix) and doesn’t have a huge number of backwards-compatibility features, which are often a source of flaws in other vendors’ products. Apple engineers, too, seem to be a little more careful and savvy about software quality issues than other vendors, at least as evidenced by the relative number of crashes and “blue screen” events in their products. The result is that MacOS X is pretty good right out of the box.
Of course, this particular flaw is not with MacOS X, but with Java code that is part of the Quicktime package for WWW browsers. The good news is that it is not really a MacOS problem; the bad news is that it is a serious bug that got widely distributed; and the worse news is that it potentially affects other browsers and operating systems.
I have been troubled by the fact that we (CERIAS, and before that COAST) have been rebuffed on every attempt over the last dozen years to make any contact with security personnel inside Apple. I haven’t seen evidence that they are really focused on information security in the way that other major companies such as Sun, HP and Microsoft are, although the steady patching of flaws that have not yet been widely reported outside the company does seem to indicate some expertise and activity somewhere inside Apple. Problems such as this Quicktime flaw don’t give warm fuzzy feelings about that, however.
Apple users should not be complacent. There are flaws yet to be discovered, and users are often the weakest link. Malware, including viruses, can get into MacOS X and cause problems, although it is unlikely to ever reach the number and magnitude that bedevil Windows boxes (one recent article noted that vendors are getting around 125 new malware signatures a day—the majority are undoubtedly for Windows platforms). And, of course, Mac machines (and Linux and….) also host browsers and other software that execute scripts and enable attacks. Those who use MS Word have yet more concerns.
The bottom line. No system is immune to attacks. All users should be cautious and informed. Apple systems still appear to be safer than their counterparts running Windows XP (the jury is out on Vista so far), and are definitely easier to maintain and use than similarly secured systems running Linux. You should continue to use the system that is most appropriate for your needs and abilities, and that includes your abilities to understand and configure security features to meet your security needs. For now, my personal systems continue to be a MacBook Pro (with XP and Vista running under Parallels) and a Sun Solaris machine. Your own mileage should—and probably will—vary.