Posts in R&D

Cassandra Vulnerability Updates

The Cassandra system has been much more successful and long-lasting than I first imagined.  Being inexperienced at the time, I got some things wrong, such as deleting inactive accounts (I stopped that very quickly, as it made many people unhappy or unwilling to use the service) or deleting accounts that bounced several emails (several years ago this was changed to simply invalidating the email address).  Recently I improved it by adding GPG signatures:  email notifications from Cassandra are now cryptographically signed.  The public key is available on the standard public key servers, such as the MIT server.


Things can still be improved


I initially envisioned profiles as being updated regularly, perhaps with automated tools listing the applications installed on a system.  I also thought that many applications had no vulnerability entries in MITRE’s CVE, the National Vulnerability Database (NVD, formerly named ICAT), or Secunia, so I needed to let people enter product and vendor names that weren’t linked to any vulnerabilities.  However, I found that there was little correlation between the names of products in these sources, as well as between those provided by scanning tools or entered manually by users.  ICAT in particular used to be quite bad about using inconsistent or misspelled names.  Secunia does not separate vendor names from product names and uses different names than the NVD, so Cassandra has to guess which is which based on already-known vendor and product names.  Because of this, Secunia entries may need reparsing when new names are learned.  So, users could get a false sense of security by entering the names of the products they use, but never get notified because of a mismatch!  On top of that, bad names are listed by the autocomplete feature, so users can be misled by someone else’s mistakes or misfortune.  A Cassandra feature that helped somewhat with this problem is the notion of canonical and variant names:  all variants point to a single canonical name for a vendor or a product.  However, these need to be entered manually and maintained over time, so I didn’t enter many.
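The canonical/variant mechanism amounts to a simple lookup table.  As an illustration only (Cassandra’s actual implementation may differ, and the names below are made up), a sketch in Python:

```python
# Hypothetical sketch of canonical/variant name resolution.
# All variant spellings map to a single canonical name.
CANONICAL = {
    "microsft": "microsoft",        # misspelling seen in a feed (made up)
    "ms": "microsoft",              # abbreviation (made up)
    "sun microsystems": "sun",      # longer variant (made up)
}

def canonicalize(name):
    """Return the canonical form of a vendor or product name."""
    key = name.strip().lower()
    return CANONICAL.get(key, key)
```

Matching profiles against vulnerability feeds through something like canonicalize() lets a variant spelling land on the same entry as the canonical name.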

It gets worse.  Profiles are quite static in practice;  this leads to other problems.  Companies merge, get bought or otherwise change names. Sometimes companies also decide to change the names of their products for other reasons, or names are changed in the NVD. So, profiles can silently drift off-course and not give the alerts needed. All these factors result in product or vendor names in Cassandra that don’t point to any vulnerability entries.  I call these names “orphans”;  I recently realized that Cassandra contained hundreds of orphaned names.


And they will be improved


I am planning on implementing two new features in Cassandra:  Profile auto-correction and product name vetting.

  • Auto-correction: If Cassandra recognizes a name change in the NVD or Secunia, or if it changes the way it recognizes vendor names from products in Secunia, it will attempt to change matching entries in your profiles.
  • Vetting: all the product names in Cassandra will be verified to point to at least one entry in the NVD or Secunia; those that don’t and can’t be updated will get deleted. This means that when you create a new profile, Cassandra won’t suggest an “orphaned” name. If your profile contains an orphaned name that gets deleted, you should receive an email if you have email notifications turned on.
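The vetting step boils down to a set-membership check.  A minimal sketch, assuming the vendor and product names from each source are available as lists (the function and names are illustrative, not Cassandra’s actual code):

```python
def find_orphans(profile_names, nvd_names, secunia_names):
    """Return profile names that point to no entry in either source."""
    known = set(nvd_names) | set(secunia_names)
    return [name for name in profile_names if name not in known]

# Example: "oldproduct" would be flagged as an orphan
orphans = find_orphans(["apache", "oldproduct"], ["apache"], [])
```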

Note that you should still verify your profiles periodically, because Cassandra will not detect all name changes;  this is difficult because a name change may look just like a new product.  If you have a product name that isn’t in Cassandra, I suggest using the keywords feature.  Cassandra will then search the titles and descriptions of the entries to find matches (note to self: perhaps keywords should search product and vendor names as well; that would help catch all variants of a name.  Also, consider using string matching algorithms to recognize names).
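For the string-matching idea in that note to self, Python’s standard library already provides an approximate matcher; a sketch (the 0.8 cutoff is an arbitrary assumption):

```python
import difflib

def close_names(name, known_names, cutoff=0.8):
    """Suggest known names that approximately match a user-entered name."""
    return difflib.get_close_matches(name, known_names, n=5, cutoff=cutoff)
```

For example, close_names("mircosoft", ["microsoft", "oracle"]) suggests "microsoft" despite the transposed letters.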

This Week at CERIAS

CERIAS Reports & Papers

CERIAS Weblogs

The Vulnerability Protection Racket

TippingPoint’s Zero Day Initiative (ZDI) provides interesting data.  ZDI made its “disclosure pipeline” public on August 28, 2006.  As of today, it lists 49 vulnerabilities from independent researchers, which have been waiting on average 114 days for a fix, plus 12 vulnerabilities from TippingPoint’s own researchers.  With those included, the average waiting time for a fix is 122 days, or about 4 months!  Moreover, 56 out of 61 are high-severity vulnerabilities.  These are from high-profile vendors: Microsoft, HP, Novell, Apple, IBM Tivoli, Symantec, Computer Associates, Oracle…  Some high-severity issues have been languishing for more than 9 months.

Hum.  ZDI is supposed to be a “best-of-breed model for rewarding security researchers for responsibly disclosing discovered vulnerabilities.”  How is it responsible to take 9 months to fix a known but secret high-severity vulnerability?  It’s not directly ZDI’s fault that the vendors are taking so long, but it’s not providing much incentive to the vendors either.  This suggests that programs like ZDI’s have a pernicious effect.  They buy the information from researchers, who are then forbidden from disclosing the vulnerabilities.  More vulnerabilities are found due to the monetary incentive, but only people paying for protection services have any peace of mind.  The software vendors don’t care much, as the vulnerabilities remain secret.  The rest of us are worse off than before, because more vulnerabilities remain secret for an unreasonable length of time.

Interestingly, this is what was predicted several years ago in “Market for Software Vulnerabilities?  Think Again”, Kannan, K. and Telang, R., Management Science 51 (2005), pp. 726-740.  The model predicted worse social consequences from these programs than from no vulnerability handling at all, due to races with crackers, increased vulnerability volume, and unequal protection of targets.  This makes another conclusion of the paper interesting and likely valid:  CERT/CC offering rewards to vulnerability discoverers should provide the best outcomes, because information would be shared systematically and equally.  I would add that CERT/CC is also in a good position to find out whether a vulnerability is being exploited in the wild, in which case it can release an advisory and make vulnerability information public sooner.  A vendor like TippingPoint has a conflict of interest in doing so, because it decreases the value of their protection services.

I tip my hat to TippingPoint for making their pipeline information public.  However, because they provide no deadlines to vendors or incentives for responsibly patching the vulnerabilities, the very existence of their services, and of similar ones from other vendors, is hurting those who don’t subscribe.  That’s what makes vulnerability protection services a racket.

 

PHPSecInfo v0.2 now available

PHPSecInfo Screenshot

The newest version of PHPSecInfo, version 0.2, is now available.  Here are the major changes:

  • Added link to “more info” in output.  These lead to pages on the phpsec.org site giving more details on the test and what to do if you have a problem
  • Modified CSS to improve readability and avoid license issue with PHP (the old CSS was derived from the output of phpinfo())
  • New test: PhpSecInfo_Test_Session_Save_Path
  • Added display of “current” and “recommended” settings in test result output
  • Various minor changes and bug fixes; see the CHANGELOG for details

  • Download now
  • Join the mailing list

 

What’s New at CERIAS

I haven’t posted an update lately of new content on our site, so here’s a bit of a make-up post:

CERIAS Reports & Papers

CERIAS Hotlist

CERIAS News

CERIAS Security Seminar Podcast

2007: The year of the 9,999 vulnerabilities?

A look at the National Vulnerability Database statistics reveals that the number of vulnerabilities found yearly has greatly increased since 2003:

Year    Vulnerabilities    % Increase
2002    1959               N/A
2003    1281               -35%
2004    2367               85%
2005    4876               106%
2006    6605               35%



Average yearly increase (including the 2002-2003 decline): 48%

6605 × 1.48 ≈ 9775

So, that’s not quite 9999, but fairly close.  There’s enough variance that hitting 9999 in 2007 seems a plausible event.  If not in 2007, then it seems likely that we’ll hit 9999 in 2008.  So, what does it matter?
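The extrapolation is easy to reproduce; in Python:

```python
# Year-over-year percentage increases from the table (2003 through 2006)
increases = [-35, 85, 106, 35]
avg_pct = round(sum(increases) / len(increases))      # 191/4 = 47.75 -> 48
projection_2007 = round(6605 * (1 + avg_pct / 100))   # 6605 * 1.48 -> 9775
```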



MITRE’s CVE effort uses a numbering scheme for vulnerabilities that can accommodate only 9999 vulnerabilities per year:  CVE-YEAR-XXXX.  Many products and vulnerability databases that are CVE-compatible (e.g., my own Cassandra service, the CIRDB, etc.) use a field of fixed size, just big enough for that format.  We’re facing a problem similar to the year-2000 overflow, although much smaller in scope.  When the CVE board of editors was formed, the total number of known vulnerabilities, not the number found yearly, was in the hundreds.  A yearly number of 9999 seemed astronomical;  I’m sure that anyone who had brought that up as a concern back then would have been laughed at.  I felt at the time that it would take a security apocalypse to reach that number.  Yet here we are, and a fair warning to everyone using or developing CVE-compatible products.
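A CVE-compatible product that validates names against the fixed-width format would reject a five-digit sequence number.  A sketch of the check (the function name is mine, for illustration):

```python
import re

# CVE-YEAR-XXXX: exactly four digits for the sequence number
CVE_FIELD = re.compile(r"^CVE-\d{4}-\d{4}$")

def fits_cve_field(name):
    """True if the name fits the fixed-width CVE-YEAR-XXXX format."""
    return bool(CVE_FIELD.match(name))
```

So "CVE-2006-9999" is accepted, but the 10,000th vulnerability of a year, "CVE-2007-10000", would not fit.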



Kudos to the National Vulnerability Database and the MITRE CVE teams for keeping up under the onslaught.  I’m impressed.

Vulnerability disclosure grace period needs to be short, too short for patches

One of the most convincing arguments for full disclosure is that while the polite security researcher is waiting for the vendor to issue a patch, that vulnerability MAY have been sold and used to exploit systems, so all individuals in charge of administering a system have a right to know ALL the details so that they can protect themselves, and that right trumps all other rights.

That argument rests upon the premise that if one person found the vulnerability, it is possible for others to find it as well.  The key word here is “possible”, not “likely”, or so I thought when I started writing this post.  After all, vulnerabilities can be hard to find, which is a reason why products are released with vulnerabilities.  How likely is it that two security researchers will find the same vulnerability? 

Mathematically speaking, the chance that two successful security researchers (malicious or not) will find the same flaw is similar to the birthday problem.  Let’s assume that there are X security researchers, each finding one vulnerability out of N vulnerabilities to be found.  In 2006, 6560 vulnerabilities were found, and 4876 in 2005 (according to the National Vulnerability Database).  Let’s assume that the number of vulnerabilities available to be found in a year is about 10,000;  this is almost surely an underestimate.  I’ll assume that all of these are equally likely to be found.  An additional twist on the birthday problem is that people are entering and leaving the room;  not all X are present at the same time.  This is because we worry about the same vulnerability being found twice within the grace period given to a vendor.

If there are more successful researchers in the room than vulnerabilities, then there has necessarily been a collision.  Let’s say that the grace period given to a vendor is one month, so Y = X/12.  Then there would need to be 120,000 successful security researchers for a collision to be guaranteed.  For fewer researchers, the likelihood of two vulnerabilities being the same is 1 - exp(-Y(Y-1)/(2N)) (cf. the birthday problem).  Let’s assume that there are 5000 successful researchers in a given year, to match the average number of vulnerabilities reported in 2005 and 2006.  The probability that two researchers find the same vulnerability over a given grace period is:

Grace Period    Probability
1 month         0.9998
1 week          0.37
1 day           0.01
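These numbers can be reproduced directly from the formula, taking Y as the ceiling of the number of researchers active during one grace period:

```python
import math

def p_collision(researchers_per_year, periods_per_year, n_vulns=10_000):
    """Birthday-problem approximation: P = 1 - exp(-Y(Y-1)/(2N))."""
    y = math.ceil(researchers_per_year / periods_per_year)
    return 1 - math.exp(-y * (y - 1) / (2 * n_vulns))

for label, periods in [("1 month", 12), ("1 week", 52), ("1 day", 365)]:
    print(f"{label}: {p_collision(5000, periods):.4f}")
```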


In other words, nowadays the grace period given to a vendor should be on the order of one or two days, if we only take this risk into account.  Has it always been like this?

Let’s assume that in any given year, there are twice as many vulnerabilities to be found as there are reported vulnerabilities.  If we set N = 2X and fix the grace period at one week, what was the probability of collision in different years?  The formula becomes 1 - exp(-Y(Y-1)/(4X)), where Y is the ceiling of X/52.

Year         Vulnerabilities Reported    Probability
1988-1996    0                           0
1997         252                         0.02
1998         246                         0.02
1999         918                         0.08
2000         1018                        0.09
2001         1672                        0.15
2002         1959                        0.16
2003         1281                        0.11
2004         2363                        0.20
2005         4876                        0.36
2006         6560                        0.46
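Again, the table follows from the formula, with N = 2X and Y the ceiling of X/52:

```python
import math

def p_week_collision(reported):
    """Collision probability over a one-week grace period, with N = 2X."""
    y = math.ceil(reported / 52)          # researchers active in one week
    return 1 - math.exp(-y * (y - 1) / (4 * reported))

# e.g., for 2006: p_week_collision(6560) is about 0.46
```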

So, according to this table, a grace period of one week would have seemed an acceptable policy before 2000, perhaps fair in 2000-2003, but is now unacceptably long.  These calculations are of course very approximate, but they should be useful enough to serve as guidelines.  They show, much to my chagrin, that people arguing for the full and immediate disclosure of vulnerabilities may have a point.



In any case, we can’t afford, as a matter of national and international cyber-security, to let vendors idly waste time before producing patches;  vendors need to take responsibility, even if the vulnerability is not publicly known.  This exercise also illustrates why a patch-it-later attitude could have seemed almost excusable years ago, but not now.  These figures are a serious problem for managing security with patches, as opposed to secure coding from the start:  I believe that it is not feasible anymore for traditional software development processes to issue patches before the threat of malicious disclosure and exploits becomes significant.  Finally, the grace period that we can afford to give vendors may be too short for them to issue patches, but that doesn’t mean it should be zero.

Note:  the astute reader will remark that the above statistic is for any two vulnerabilities to match, whereas for patching we are talking about one specific vulnerability being discovered independently.  The odds of that specific occurrence are much smaller.  However, in systematic management by patches we need to consider all vulnerabilities, which brings us back to the above calculations.

 

Security Vigilantes Becoming Small-Time Terrorists

Vulnerability disclosure is such a painful issue.  However, some people are trying to make it as painful as possible.  They slap and kick people with the release of 0-day vulnerabilities, and tell them it’s for their own good.  In their fantasies, sometime in the future, we’ll be thanking them.  In reality, they make me feel sympathy for the vendors. 

They cite disillusionment with the “responsible disclosure” process.  They believe that this process somehow forces them to wait indefinitely at the pleasure of the vendor.  While it is true that many vendors won’t and don’t fix known issues unless those are public or they are threatened with public disclosure, it bemuses me that these people are unwilling to give the vendor a chance and wait a few weeks.  They use the excuse of a few bad vendors, a few occurrences of delayed fixes, or even “user smugness” to systematically treat vendors and their clients badly.  This shows recklessness, impatience, intransigence, bad judgment and a lack of discernment.

I agree that reporting vulnerabilities correctly is a thankless task.  Besides my previous adventure with a web application, when reporting a few vulnerabilities to CERT/CC, I never received any replies, not even an automated receipt.  It was like sending messages into a black hole.  Some vendors can become defensive and unpleasant instead.  However, that doesn’t justify skipping the courtesy of first giving the other side the opportunity to behave badly.  If you don’t do at least that, then you are part of the problem.  As in many real-life problems, the first one to use his fists is the loser.

What these security vigilantes are really doing is using as hostages the vendor’s clients, just to make an ideological point.  That is, they use the threat of security exploits to coerce or intimidate vendors and society for the sake of their objectives.  They believe that the ends justify the means.  Blackmail is done for personal gain, so what they are doing doesn’t fit the blackmail category, and it’s more than simple bullying.  Whereas the word “terrorism” has been overused and brandished too often as a scarecrow, compare the above to the definition of terrorism.  I realize that using this word, even correctly, can raise a lot of objections.  If you accept that a weaker form of terrorism is the replacement of physical violence with other threats, then it would be correct to call these people “small-time terrorists” (0-day pun intended).  Whatever you want to call them, in my opinion they are no longer just vigilantes, and certainly not heroes.  The only thing that can be said for them is, at least they didn’t try to profit directly from the disclosures.

Finally, let me make clear that I want to be informed, and I want disclosures to happen.  However, I’m certain that uncivil 0-day disclosures aren’t part of the answer.  There is an interesting coverage of this and related issues at C/NET.

PHPSecInfo: New release (0.1.2), new plans

First off, a new build of PHPSecInfo is out: Version 0.1.2, build 20061218. Here’s what’s new:

  • Code is now licensed under “New BSD” license. See LICENSE

  • Added PhpSecInfo_Test_Core_Allow_Url_Include to test for allow_url_include in PHP5.2 and above

  • Fixed a bug in the post_max_size check where the upload_max_size value was being checked instead

  • Changed the cURL file_support test to recommend upgrading to the newest version of PHP rather than disabling support for the ‘file://’ protocol in cURL

  • Removed =& calls that forced pass-by-reference in PHP4, so as not to throw PHP5 STRICT notices. This means passing objects by value in PHP4, but it seems acceptable for our purposes (memory usage isn’t terribly high).

  • Fixed bug in PhpSecInfo_Test_Session_Use_Trans_Sid where wrong ini key was requested (Thanks Mark Wallert)

  • New, detailed README file with explanations and basic usage instructions

  • Now providing an md5 hash for releases

Here’s what I’m planning to do in the next few releases:

  1. More detailed test results, including the current and recommended settings
  2. A web-based “glossary” with more details on each test & how to fix problems
  3. More tests!!! I especially need your help with this one!

I’m also going to look into options to reformat the test result structure, so it plays more nicely with templating systems. No promises on how this will go, but we’ll see.

 

VMworld 2006:  ReAssure (CERIAS), VIX and Lab Manager (VMware)

The conference is surprisingly huge (6000 people).  Virtualization is obviously important to IT now.  I am looking forward to the security-related talks (I’ll post about them later).  Here are a few notes from the sessions I attended:

  • Saturday a VMware team shot a video of yours truly talking about ReAssure (of course I became tongue-tied when the camera was turned on!).  It will be presented at the general session Wednesday morning.  I hope it generates interest in ReAssure!
  • The VIX API session on Tuesday morning was very interesting.  The API will enable the remaining automation functionality of ReAssure.  It allows automating the powering of virtual machines on and off, taking snapshots, transferring files (e.g., results) between the host and guest OS, and even starting programs in the guest OS!  It was introduced with VMware Server 1.0 last summer, but I hadn’t noticed.  It is still a work in progress, though;  there’s support only for C, Perl and COM (no Python, although I was told that there was a SourceForge project for that).
  • The VMware Lab Manager (introduced last summer) is very much like ReAssure, except that ReAssure doesn’t have IP conflicts, and in ReAssure all experiments (“deployed configurations”) are independent and their traffic is isolated with VLANs.  In some respects VMware Lab Manager is more sophisticated, and in others it is more primitive.  For example, all networks in Lab Manager are flat (and apparently all experiments even share the same network), whereas ReAssure supports complex networks.  To resolve IP conflicts, Lab Manager uses “fenced networks”, which is a NAT hack.  Lab Manager is also limited to fibre channel storage, and is tied to VMware ESX while disabling most of what makes ESX flexible and interesting (ReAssure uses the free VMware Server).  I’m excited about the VIX API (see above) because it will bring ReAssure beyond Lab Manager by allowing snapshots, suspend and resume functionality, etc.  I wonder what I need to do to make ReAssure better known and adopted.  I haven’t found any bugs in it for a while, so I think I’ll officially release the first final (not beta) version very soon (e.g., Friday or next week).