It is well-known that I am a long-time user of Apple Macintosh computers, and I am very leery of Microsoft Windows and Linux because of the many security problems that continue to plague them. (However, I use Windows, and Linux, and Solaris, and a number of other systems for some things—I believe in using the right tool for each task.) Thus, it is perhaps no surprise that a few people have written to me with a “Nyah, nyah” message after reading a recent article claiming that Windows is the most secure OS over the last six months. However, any such attitude evidences a certain lack of knowledge of statistics, history, and the underlying Symantec report itself. It is possible to lie with statistics—or, at the least, be significantly misled, if one is not careful.
First of all, the news article reported that, in the reporting period, Microsoft software had 12 serious vulnerabilities plus another 27 less serious vulnerabilities. This was compared with 1 serious vulnerability in Apple software out of a total of 43 vulnerabilities. To claim that the smaller total count in MS software (39 vs. 43) settles the question, without noting the difference in severity, is clearly misleading. After all, there were 12 times as many severe vulnerabilities in MS software as in Apple software (and more than in some or all of the other systems, too; see the full report).
Imagine reading a report in the newspaper on crime statistics. The report says that Portland saw one killing and 42 instances of littering, while Detroit had 27 instances of jaywalking and 12 instances of rape and homicide. If the reporter concluded that Detroit was the safer place to live and work, would you agree? Where do you think you would feel safer? Where would you be safer (assuming the population sizes were similar; in reality, Portland is about 2/3 the population of Detroit)?
From a stochastic point of view, if we assume that the identification of flaws is more or less a random process with some independence, then it is not surprising that there are intervals where the relative performance in that period does not match the overall behavior. So we should not jump to overall conclusions when one or two observation periods show one system dominating another, in contrast to previous behavior. Any critical decisions we might wish to make about quality and safety should be based on a longer baseline; over that baseline, the Microsoft products continue to compare poorly with some other systems, including Apple's. We might also want to factor in the size of the exposed population, the actual amount of damage, and other such issues.
By analogy, imagine you are betting on horses. One horse you have been tracking, named Redmond, has not been performing well. In nearly every race that horse has come in at or below the middle of the pack, and often comes in last, despite being a crowd favorite. The horse looks good, and lots of people bet on it, but it never wins. Then, one day, in a close heat, Redmond wins! In a solid but unexciting race, Redmond comes in ahead of multiple-race winner #2 (Cupertino) by a stride. Some long-time bettors crow about the victory, and say they knew that Redmond was the champ. So, you have money to gamble with. Are you going to bet on Redmond to win or place in each of the next 5 races?
Last of all, I could not find a spot in the actual Symantec report where it was stated that any one system is more secure than another—that is something stated by the reporter (Andy Patrizio) who wrote the article. Any claim that ANY system with critical flaws is “secure” or “more secure” is an abuse of the term. That is akin to saying that a cocktail with only one poison is more “healthful” than a cocktail with six poisons. Both are lethal, and neither is healthful under any sane interpretation of the words.
So, in conclusion, let me note that any serious flaws reported are not a good thing, and none of the vendors listed (and there are more than simply Microsoft and Apple) should take pride in the stated results. I also want to note that although I would not necessarily pick a MS platform for an application environment where I have a strong need for security, neither would I automatically veto it. Properly configured and protected, almost any system may be a good candidate in a medium- or low-threat environment. As well, the people at Microsoft are certainly devoting lots of resources to try to make their products better (although I think they are trapped by some very poor choices made in the past).
Dr. Dan Geer made a riveting and thought-provoking presentation on cyber security trends and statistics as the closing keynote address of this year’s annual CERIAS Security Symposium. His presentation materials will shortly be linked into the symposium WWW site, and a video of his talk is here. I recommend that you check that out as additional material, if you are interested in the topic.
[tags]security marketplace, firewalls, IDS, security practices, RSA conference[/tags]
As I’ve written here before, I believe that most of what is being marketed for system security is misguided and less than sufficient. This has been the theme of several of my invited lectures over the last couple of years, too. Unless we come to realize that current “defenses” are really attempts to patch fundamentally faulty designs, we will continue to fail and suffer losses. Unfortunately, the business community is too fixated on the idea that there are quick fixes to really investigate (or support) the kinds of long-term, systemic R&D that is needed to really address the problems.
Thus, I found the RSA conference and exhibition earlier this month to be (again) discouraging this year. The speakers basically kept to a theme that (their) current solutions would work if they were consistently applied. The exhibition had hundreds of companies displaying wares that were often indistinguishable except for the color of their T-shirts: anti-virus, firewalls (wireless or wired), authentication and access control, IDS/IPS, and vulnerability scanning. Only three companies were showing software testing tools, and none were marketing suites of software engineering tools. A few companies had more novel solutions; I was particularly impressed by a few that I saw, such as the policy and measurement-based offerings by CoreTrace, ProofSpace, and SignaCert. (In the interest of full disclosure, SignaCert is based around one of my research ideas and I am an advisor to the company.) There were also a few companies with some slick packaging of older ideas (Yoggie being one such example) that still don't fix underlying problems, but that make it simpler to apply some of the older, known technologies.
I wasn’t the only one who felt that RSA didn’t have much new to offer this year, either.
When there is a vendor-oriented conference that has several companies marketing secure software development suites that other companies are using (not merely programs to find flaws in C and Java code), when there are booths dedicated to secured mini-OS systems for dedicated tasks, and when there are talks scheduled about how to think about limiting functionality of future offerings so as to minimize new threats, then I will have a sense that the market is beginning to move in the direction of maturity. Until then, there are too many companies selling snake oil and talismans—and too many consumers who will continue to buy those solutions because they don’t want to give up their comfortable but dangerous behaviors. And any “security” conference that has Bill Gates as keynote speaker—renowned security expert that he is—should be a clue about what is more important for the conference attendees: real security, or marketing.
Think I am too cynical? Watch the rush into VoIP technologies continue, and a few years from now look at the amount of phishing, fraud, extortion and voice-spam we will have over VoIP, and how the market will support VoIP-enabled versions of some of the same solutions that were in Moscone Center this year. Or count the number of people who will continue to mail around Word documents, despite the growing number of zero-day and unpatched exploits in Word. Or any of several dozen current and predictable dangers that aren't "glitches"; they are the norm. If you really pay attention to what happens, then maybe you'll become cynical, too.
If not, there’s always next year’s RSA Conference.
A look at the National Vulnerability Database statistics will reveal that the number of vulnerabilities found yearly has greatly increased since 2003:
Year | Vulnerabilities | %Increase |
---|---|---|
2002 | 1959 | N/A |
2003 | 1281 | -35% |
2004 | 2367 | 85% |
2005 | 4876 | 106% |
2006 | 6605 | 35% |
Average yearly increase (including the 2002-2003 decline): 48%
6605 × 1.48 ≈ 9775
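The arithmetic above can be checked with a short script (a sketch using the table's numbers; note that carrying the unrounded average of about 47.9% gives a projection of roughly 9769, while the rounded 48% figure yields the 9775 quoted above):

```python
# Yearly vulnerability counts from the National Vulnerability Database,
# as quoted in the table above.
counts = {2002: 1959, 2003: 1281, 2004: 2367, 2005: 4876, 2006: 6605}

years = sorted(counts)
# Year-over-year percentage change, including the 2002-2003 decline.
changes = [
    (counts[y] - counts[y - 1]) / counts[y - 1] * 100
    for y in years[1:]
]
avg = sum(changes) / len(changes)        # about 48%
projection = counts[2006] * 1.48         # using the rounded 48% figure

print([round(c) for c in changes])       # [-35, 85, 106, 35]
print(round(avg))                        # 48
print(round(projection))                 # 9775
```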
So, that’s not quite 9999, but fairly close. There’s enough variance that hitting 9999 in 2007 seems a plausible event. If not in 2007, then it seems likely that we’ll hit 9999 in 2008. So, what does it matter?
MITRE’s CVE effort uses a numbering scheme that can accommodate only 9999 vulnerabilities per year: CVE-YEAR-XXXX. Many products and vulnerability databases that are CVE-compatible (e.g., my own Cassandra service, the CIRDB, etc.) use a field of fixed size, just big enough for that format. We are facing a problem similar to the year-2000 overflow, although much smaller in scope. When the CVE board of editors was formed, the total number of vulnerabilities known, not the number found yearly, was in the hundreds. A yearly total of 9999 seemed astronomical; I’m sure that anyone who had raised it as a concern back then would have been laughed at. I felt at the time that it would take a security apocalypse to reach that number. Yet here we are; consider this fair warning to everyone using or developing CVE-compatible products.
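The fixed-field problem can be illustrated with a minimal sketch. The regular expression and function below are my own illustration, not code from any actual CVE-compatible tool; they simply show what happens when software assumes exactly four digits in the sequence field:

```python
import re

# Hypothetical parser that, like many CVE-compatible tools, assumes
# exactly four digits in the sequence field (CVE-YEAR-XXXX).
FIXED_CVE = re.compile(r"^CVE-(\d{4})-(\d{4})$")

def parse_fixed_cve(cve_id):
    """Return (year, sequence) for an ID in the fixed CVE-YEAR-XXXX form."""
    m = FIXED_CVE.match(cve_id)
    if m is None:
        raise ValueError("does not fit CVE-YEAR-XXXX: " + cve_id)
    return int(m.group(1)), int(m.group(2))

print(parse_fixed_cve("CVE-2006-6605"))  # accepted: (2006, 6605)

# The 10,000th vulnerability of a year needs five digits and is rejected:
try:
    parse_fixed_cve("CVE-2007-10000")
except ValueError as err:
    print(err)
```

Every database column or struct field sized to this 13-character format has the same failure mode, which is why the warning applies to consumers of CVE data and not just to MITRE.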
Kudos to the National Vulnerability Database and the MITRE CVE teams for keeping up under the onslaught. I’m impressed.