The Center for Education and Research in Information Assurance and Security (CERIAS)


Vulnerability disclosure grace period needs to be short, too short for patches


One of the most convincing arguments for full disclosure is that while the polite security researcher is waiting for the vendor to issue a patch, the vulnerability MAY have been sold and used to exploit systems.  By this argument, all individuals in charge of administering a system have a right to know ALL the details so that they can protect themselves, and that right trumps all other rights.

That argument rests upon the premise that if one person found the vulnerability, it is possible for others to find it as well.  The key word here is “possible”, not “likely”, or so I thought when I started writing this post.  After all, vulnerabilities can be hard to find, which is a reason why products are released with vulnerabilities.  How likely is it that two security researchers will find the same vulnerability? 

Mathematically speaking, the chance that two successful security researchers (malicious or not) will find the same flaw is similar to the birthday problem.  Let’s assume that there are X security researchers, each finding one vulnerability out of N vulnerabilities available to be found.  In 2006, 6560 vulnerabilities were found, and 4876 in 2005 (according to the National Vulnerability Database).  Let’s assume that the number of vulnerabilities available to be found in a year is about 10,000;  this is most surely an underestimation.  I’ll assume that all of these are equally likely to be found.  An additional twist on the birthday problem is that people are entering and leaving the room;  not all X are present at the same time.  This is because we worry about two vulnerabilities being found within the grace period given to a vendor.

If there are more successful researchers in the room than vulnerabilities, then necessarily there has been a collision.  Let’s say that the grace period given to a vendor is one month, so Y = X/12.  Then, there would need to be 120,000 successful security researchers for collisions to be guaranteed.  For fewer researchers, the likelihood of two researchers finding the same vulnerability is approximately 1 − exp(−Y(Y−1)/(2N)) (cf. the birthday problem entry on Wikipedia).  Let’s assume that there are 5000 successful researchers in a given year, to match the average number of vulnerabilities reported in 2005 and 2006.  The probability that two researchers find the same vulnerability over a given time period is then:

Grace Period   Probability
1 month        0.9998
1 week         0.37
1 day          0.01
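
For the curious, here is a minimal Python sketch of this calculation;  the 5000 researchers and 10,000 available vulnerabilities are the assumptions stated above, not measured values:

```python
from math import exp

# Birthday-problem approximation used above: with Y researchers active
# during the grace period and N vulnerabilities available to be found,
# P(collision) ~ 1 - exp(-Y*(Y-1) / (2*N)).
def collision_probability(y, n):
    return 1 - exp(-y * (y - 1) / (2 * n))

X = 5000    # successful researchers per year (assumption from the post)
N = 10000   # vulnerabilities available to be found in a year (assumption)

for label, periods_per_year in [("1 month", 12), ("1 week", 52), ("1 day", 365)]:
    print(label, round(collision_probability(X / periods_per_year, N), 4))
# Prints 0.9998, 0.3671 and 0.0087, which round to the table values above.
```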


In other words, nowadays the grace period given to a vendor should be on the order of one or two days, if we only take this risk into account.  Has it always been like this?

Let’s assume that in any given year, there are twice as many vulnerabilities to be found as there are reported vulnerabilities.  If we set N = 2X and fix the grace period to one week, what was the probability of collision in different years?  The formula becomes 1 − exp(−Y(Y−1)/(4X)), where Y = ⌈X/52⌉, the ceiling of X/52.

Year        Vulnerabilities Reported   Probability
1988-1996   —                          0
1997        252                        0.02
1998        246                        0.02
1999        918                        0.08
2000        1018                       0.09
2001        1672                       0.15
2002        1959                       0.16
2003        1281                       0.11
2004        2363                       0.20
2005        4876                       0.36
2006        6560                       0.46
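
The same sketch, extended to reproduce this table (the per-year counts are the NVD figures listed above;  everything else is the same approximation):

```python
from math import ceil, exp

# With N = 2X vulnerabilities to be found and a one-week grace period,
# Y = ceil(X/52) researchers are "in the room" at a time, so
# P(collision) ~ 1 - exp(-Y*(Y-1) / (4*X)).
reported = {1997: 252, 1998: 246, 1999: 918, 2000: 1018, 2001: 1672,
            2002: 1959, 2003: 1281, 2004: 2363, 2005: 4876, 2006: 6560}

for year, x in sorted(reported.items()):
    y = ceil(x / 52)
    print(year, round(1 - exp(-y * (y - 1) / (4 * x)), 2))
# Reproduces the probability column above, from 0.02 in 1997 to 0.46 in 2006.
```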

So, according to this table, a grace period of one week would have seemed an acceptable policy before 2000, perhaps fair in 2000-2003, but is now unacceptably long.  These calculations are of course very approximate, but they should be useful enough to serve as guidelines.  They show, much to my chagrin, that people arguing for the full and immediate disclosure of vulnerabilities may have a point.



In any case, as a matter of national and international cyber-security, we can’t afford to let vendors idly waste time before producing patches;  vendors need to take responsibility, even if the vulnerability is not publicly known.  This exercise also illustrates why a patch-it-later attitude could have seemed almost excusable years ago, but not now.  These figures are a serious problem for managing security with patches, as opposed to secure coding from the start:  I believe that it is no longer feasible for traditional software development processes to issue patches before the threat of malicious disclosure and exploits becomes significant.  Finally, the grace period that we can afford to give vendors may be too short for them to issue patches, but that doesn’t mean it should be zero.

Note:  the astute reader will remark that the above statistics are for any two vulnerabilities to match, whereas for patching we are talking about a specific vulnerability being discovered independently.  The odds of that specific occurrence are much smaller.  However, systematic management by patches requires considering all vulnerabilities, which brings us back to the above calculations.
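
To put a number on “much smaller”, here is a small sketch under the same assumptions as above (5000 researchers per year, 10,000 vulnerabilities, a one-week grace period);  the formula for a specific rediscovery, 1 − (1 − 1/N)^Y, is my own back-of-the-envelope addition:

```python
from math import exp

X, N = 5000, 10000   # assumptions from the post
Y = X / 52           # researchers active during a one-week grace period

# Chance that some other researcher independently finds one *specific*
# vulnerability, if all N are equally likely to be found:
specific = 1 - (1 - 1 / N) ** Y
# Chance that *any* two researchers collide (the birthday figure):
any_pair = 1 - exp(-Y * (Y - 1) / (2 * N))

print(round(specific, 4), round(any_pair, 4))   # ~0.0096 versus ~0.3671
```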

 

Comments

Posted by Pascal Meunier
on Friday, January 5, 2007 at 09:31 AM

P.S.:  There are other factors that should come into play in deciding how long the grace period should be.  Please don’t take my post as a proposal for the grace period before public disclosure to be only one or two days.  Right now my thoughts are only that it should be short, much shorter than proposed earlier, but I’m not comfortable with a specific number.

Posted by Pascal Meunier
on Friday, January 5, 2007 at 11:49 AM

P.P.S.: The above model leads to a paradox:  the more security researchers there are, the less secure we are.  This is because the model does not take into account the ratio of “good” researchers to malicious ones.  A more complex model should show that the higher the ratio, the less simultaneous discoveries matter, because they occur mostly between good researchers.  In addition, the more good researchers there are, the quicker they will find vulnerabilities before the malicious researchers do, which will increase the time that vendors have to fix them and issue patches.  That would also increase the validity of the responsible disclosure process.  If malicious researchers were to become insignificant in number compared to good ones, we would not only be more secure, but also less worried about attacks.  In conclusion, vendors and government should hire and pay for as many vulnerability discoverers as possible (pardon me for stating the obvious).  The point is, we have a choice:  deal with more and more vulnerabilities that remain unpatched longer, and with more compromises, or hire more secure-programming and vulnerability-discovery talent.
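
To make that intuition concrete, a toy illustration (my own simplifying assumption, not a worked-out model):  if a fraction g of researchers are “good” and discoveries are independent of affiliation, a random colliding pair involves at least one malicious researcher with probability 1 − g².

```python
# Toy illustration (an assumption, not derived from the model above):
# if a fraction g of researchers are "good", a random colliding pair
# includes at least one malicious party with probability 1 - g**2,
# so collisions grow harmless as g approaches 1.
for g in (0.5, 0.9, 0.99):
    print(g, round(1 - g ** 2, 3))
# 0.5 -> 0.75, 0.9 -> 0.19, 0.99 -> 0.02
```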

Posted by Claudio Telmon
on Saturday, January 6, 2007 at 08:04 AM

I don’t understand your mathematics. You say that 10000 vulnerabilities is most surely an underestimation. So you’re not conservative: with 20000 vulnerabilities (and the same number of researchers) you would have a longer grace period.
Moreover, vulnerabilities are now reported for applications that were not considered some years ago. Only basic tools, OSes, and applications were considered worth the effort. Many refused to discuss things like “users_adm/start1.php in IMGallery 2.5”.

However, this may be marginal. Suppose that your model is totally wrong. Vulnerabilities are in a product from the beginning, and are discovered during the product’s lifetime. If half of the vulnerabilities were found in the first year, not many products would still have vulnerabilities to be found after a few years. Office would need some 1000 vulnerabilities if we still find Office 97 vulnerabilities, and 500 of them should have been found in the first year. The last Office 97 critical vulnerability in the SecurityFocus archive is http://www.securityfocus.com/bid/21869
Now, suppose that there are *many more* vulnerabilities, so that the number of vulnerabilities found is not relevant to the total number… then you would not have any “birthday effect”, and collisions would have totally different reasons, e.g. researchers concentrate on the most interesting products at the same time, and find the most obvious vulnerabilities…

Posted by Pascal Meunier
on Monday, January 8, 2007 at 04:11 AM

Claudio,
“So you’re not conservative: with 20000 vulnerabilities (and the same number of researchers) you would have a longer grace period.”

It’s true that the smaller the ratio of the number of researchers to the number of vulnerabilities to be found, the longer the grace period can be.  However, I initially thought that 10000 was a conservative estimate of the number of vulnerabilities.  I think I get your point, though:  a more conservative *prediction* would be obtained from the model by assuming more vulnerabilities.  Certainly, the fact that this number has to be guessed is a significant weakness.  However, see below my explanation of why this number is not the total number of vulnerabilities in existence (which I’m sorry for not explaining when I posted).

“Suppose that your model is totally wrong. Vulnerabilities are in a product from the beginning, and are discovered during the product lifetime. “
I believe you describe an exponential decay model for vulnerabilities in a single product.  While it’s compatible with the calculations I did (and may seem to be implied by them), I don’t require it.  I think the exponential decay model doesn’t apply because people’s skills change with time.  For example, before people understood format string vulnerabilities, none of them could be found.  So, even though I did not specify it above (sorry), in this model the “number of vulnerabilities to be found” is not simply the number of vulnerabilities that exist, but rather the number of vulnerabilities that researchers are likely to find given the tools and knowledge available.  New products, and new versions of existing products, also enter the marketplace, and the usage of some others becomes small enough that a vulnerability in one of them is no longer a concern.  By considering each year independently, I tried to divorce the discussion of collisions from exactly how many vulnerabilities there are in a given product year after year.


“Now, suppose that there are *many more* vulnerabilities, so that the number of vulnerabilities found is not relevant to the total number…”
I assume you mean negligible compared to the total number.  Then it seems that the main source of collisions could come from factors other than a “birthday effect”, such as which application/OS is a popular target.  But then how do we answer the question “how likely is it that two researchers find the same vulnerability in the same time period?”  In reality, by limiting the research subjects to the most interesting products, you narrow back down the number of vulnerabilities to be found in that time period.  So, given that smaller pool, how often are collisions going to happen?  I think the birthday mathematics apply again.  For example, if concentrating on popular targets narrows the pool to, say, N = 1000 for a given week while roughly 96 researchers (5000/52) are active, the approximation gives 1 − exp(−96×95/2000) ≈ 0.99, so collisions become nearly certain.  The vagueness in the definition of “the vulnerabilities to be found” is what makes the above calculations relevant no matter what the exact reasons are for a vulnerability being in that group or not.  This is why I tied that number so closely to the vulnerabilities historically found each year.

It is true though that the model doesn’t take into account how easy it is to find a given vulnerability.

Thank you for the very interesting and thoughtful comments, and giving me the opportunity to explain! 

Posted by john at nist.org
on Wednesday, January 10, 2007 at 07:54 PM

This is simply a mathematical model and doesn’t take into account a lot of variables.

For example, with all the ‘fuzzing’ tools being released it is much more likely that two researchers will discover the same vulnerability shortly after a new feature is added to one of these tools.

Discoveries also tend to run in trends. So if a buffer overflow is found in a certain routine in IE7, a lot of people start looking for similar bugs in the same areas.

Nor does it take into account challenges such as the $8,000 iDefense bounty on Vista and IE7 (see: http://www.nist.org/news.php?extend.199). This gives targeted incentive, which will probably increase the odds of discoveries within shorter windows of time.

But the basic presumption is sound: we need to shorten the window of time from vendor notification to full disclosure.

Posted by Pascal Meunier
on Thursday, January 11, 2007 at 10:41 AM

John,
I agree with the points you bring up.  The model shouldn’t be used to derive a precise figure for the window of time given to a vendor (what I call the grace period), but it does suggest that it should be decreased.
Thanks for your comments!
