Posts in Policies & Law
Vulnerability disclosure grace period needs to be short, too short for patches
One of the most convincing arguments for full disclosure is that while the polite security researcher is waiting for the vendor to issue a patch, the vulnerability MAY already have been sold and used to exploit systems. Therefore, everyone in charge of administering a system has a right to know ALL the details so that they can protect themselves, and that right trumps all other rights.
That argument rests upon the premise that if one person found the vulnerability, it is possible for others to find it as well. The key word here is "possible", not "likely", or so I thought when I started writing this post. After all, vulnerabilities can be hard to find, which is a reason why products are released with vulnerabilities. How likely is it that two security researchers will find the same vulnerability?
Mathematically speaking, estimating the chance that two successful security researchers (malicious or not) will find the same flaw is similar to the birthday problem. Let's assume that there are X security researchers, each finding one vulnerability out of N vulnerabilities available to be found. In 2006, 6560 vulnerabilities were found, and 4876 in 2005 (according to the National Vulnerability Database). Let's assume that the number of vulnerabilities available to be found in a year is about 10,000; this is almost surely an underestimate. I'll assume that all of these are equally likely to be found. An additional twist on the birthday problem is that people are entering and leaving the room; not all X are present at the same time. This is because we only worry about the same vulnerability being found twice within the grace period given to the vendor.
If there are more successful researchers in the room than vulnerabilities, then necessarily there has been a collision. Let's say that the grace period given to a vendor is one month, so Y = X/12. Then, there would need to be 120,000 successful security researchers for collisions to be guaranteed. For fewer researchers, the likelihood of two researchers finding the same vulnerability is then 1 - exp(-Y(Y-1)/(2N)) (cf. the birthday problem approximation on Wikipedia). Let's assume that there are 5000 successful researchers in a given year, roughly matching the number of vulnerabilities reported annually in 2005 and 2006. The probability that two researchers find the same vulnerability within a given grace period is:
| Grace Period | Probability |
|---|---|
| 1 month | 0.9998 |
| 1 week | 0.37 |
| 1 day | 0.01 |
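The table can be reproduced with a short script. This is a minimal sketch rather than code from the original post; it assumes the approximation 1 - exp(-Y(Y-1)/(2N)) above, a pool of N = 10,000 findable vulnerabilities, 5000 successful researchers per year, and rounds the number of researchers in a grace period up to a whole number.

```python
import math

def collision_probability(researchers_per_year, periods_per_year, n_vulns):
    # Number of researchers "in the room" during one grace period (rounded up).
    y = math.ceil(researchers_per_year / periods_per_year)
    # Birthday-problem approximation for the chance of at least one collision.
    return 1 - math.exp(-y * (y - 1) / (2 * n_vulns))

N = 10_000  # assumed pool of vulnerabilities available to be found in a year
X = 5_000   # assumed number of successful researchers per year

for label, periods_per_year in [("1 month", 12), ("1 week", 52), ("1 day", 365)]:
    print(f"{label:>8}: {collision_probability(X, periods_per_year, N):.4f}")
```

With these assumptions the script prints roughly 0.9998, 0.37 and 0.01, matching the table above.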
In other words, nowadays the grace period given to a vendor should be on the order of one or two days, if we take only this risk into account. Has it always been like this?

Let's assume that in any given year, there are twice as many vulnerabilities available to be found as there are reported vulnerabilities. If we make N = 2X and fix the grace period at one week, what was the probability of collision in different years? The formula becomes 1 - exp(-Y(Y-1)/(4X)), where Y is the ceiling of X/52.
| Year | Vulnerabilities Reported | Probability |
|---|---|---|
| 1988-1996 | 0 | 0 |
| 1997 | 252 | 0.02 |
| 1998 | 246 | 0.02 |
| 1999 | 918 | 0.08 |
| 2000 | 1018 | 0.09 |
| 2001 | 1672 | 0.15 |
| 2002 | 1959 | 0.16 |
| 2003 | 1281 | 0.11 |
| 2004 | 2363 | 0.20 |
| 2005 | 4876 | 0.36 |
| 2006 | 6560 | 0.46 |
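Again, a minimal sketch rather than the original calculation: it assumes N = 2X, a one-week grace period, and the ceiling of X/52 as described above, using the reported counts from the table.

```python
import math

def weekly_collision_probability(reported):
    # Assumes N = 2 * reported vulnerabilities were available to be found that
    # year, and a one-week grace period, as in the text above.
    if reported == 0:
        return 0.0
    y = math.ceil(reported / 52)   # successful researchers active during one week
    n = 2 * reported               # assumed pool of findable vulnerabilities
    return 1 - math.exp(-y * (y - 1) / (2 * n))

reported_by_year = {1997: 252, 1998: 246, 1999: 918, 2000: 1018, 2001: 1672,
                    2002: 1959, 2003: 1281, 2004: 2363, 2005: 4876, 2006: 6560}

for year, x in reported_by_year.items():
    print(year, f"{weekly_collision_probability(x):.2f}")
```

Under these assumptions the printed values agree with the table to two decimal places.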
So, according to this table, a grace period of one week would have seemed an acceptable policy before 2000, perhaps fair in 2000-2003, but is now unacceptably long. These calculations are of course very approximate, but they should be useful enough to serve as guidelines. They show, much to my chagrin, that people arguing for the full and immediate disclosure of vulnerabilities may have a point.

In any case, we can't afford, as a matter of national and international cyber-security, to let vendors idly waste time before producing patches; vendors need to take responsibility, even if the vulnerability is not publicly known. This exercise also illustrates why a patch-it-later attitude could have seemed almost excusable years ago, but not now. These figures are a serious problem for managing security with patches, as opposed to secure coding from the start: I believe that it is no longer feasible for traditional software development processes to issue patches before the threat of malicious disclosure and exploits becomes significant. Finally, the grace period that we can afford to give vendors may be too short for them to issue patches, but that doesn't mean it should be zero.

Note: the astute reader will remark that the above statistic is for any two discoveries to collide, whereas for patching a specific vulnerability we are talking about that particular vulnerability being discovered independently. The odds of that specific occurrence are much smaller. However, systematic management by patches needs to consider all vulnerabilities, which brings us back to the above calculations.
Security Vigilantes Becoming Small-Time Terrorists
Vulnerability disclosure is such a painful issue. However, some people are trying to make it as painful as possible. They slap and kick people with the release of 0-day vulnerabilities, and tell them it's for their own good. In their fantasies, sometime in the future, we'll be thanking them. In reality, they make me feel sympathy for the vendors.
They cite disillusionment with the "responsible disclosure" process. They believe that this process forces them somehow to wait indefinitely on the pleasure of the vendor. Whereas it is true that many vendors won't and don't fix known issues unless they are known publicly or are threatened with a public disclosure, it bemuses me that these people are unwilling to give the vendor a chance and wait a few weeks. They use the excuse of a few bad vendors, or a few occurrences of delays in fixes, even "user smugness", to systematically treat vendors and their clients badly. This shows recklessness, impatience, intransigence, bad judgment and lack of discernment.
I agree that reporting vulnerabilities correctly is a thankless task. Besides my previous adventure with a web application, when reporting a few vulnerabilities to CERT/CC, I never received any replies, not even an automated receipt. It was like sending messages into a black hole. Some vendors become defensive and unpleasant instead. However, that is no justification for abandoning civility; give the other side the opportunity to be the one that behaves badly first. If you don't do at least that, then you are part of the problem. As in many real-life problems, the first one to use his fists is the loser.
What these security vigilantes are really doing is taking the vendor's clients hostage just to make an ideological point. That is, they use the threat of security exploits to coerce or intimidate vendors and society for the sake of their objectives. They believe that the ends justify the means. Blackmail is done for personal gain, so what they are doing doesn't fit the blackmail category, and it's more than simple bullying. Whereas the word "terrorism" has been overused and brandished too often as a scarecrow, compare the above to the definition of terrorism. I realize that using this word, even correctly, can raise a lot of objections. If you accept that a weaker form of terrorism is the replacement of physical violence with other threats, then it would be correct to call these people "small-time terrorists" (0-day pun intended). Whatever you want to call them, in my opinion they are no longer just vigilantes, and certainly not heroes. The only thing that can be said for them is that at least they didn't try to profit directly from the disclosures.
Finally, let me make clear that I want to be informed, and I want disclosures to happen. However, I'm certain that uncivil 0-day disclosures aren't part of the answer. There is interesting coverage of this and related issues at CNET.
A few comments on errors
I wrote a post for Dave Farber's IP list on the use of lie detectors by the government. My basic point was that some uses of imperfect technology are acceptable, if we understand the kinds of risks and errors involved. I continue to see people who don't understand the difference between Type I and Type II errors, and the faulty judgments made as a result.
What follows is a (slightly edited) version of that post:
The following is a general discussion. I am in no sense an expert on lie detectors, but this is how it was explained to me by some who are very familiar with the issue.
Lie detectors have a non-zero rate of error. As with many real-world systems, these errors manifest as Type I errors (alpha error, false positive), Type II errors (beta error, false negative), and instances of "can't tell." It's important to understand the distinction because the errors and ambiguities in any system may not be equally likely, and the consequences may be very different. An example I give my students comes after going over a proof that writing a computer virus checker that accurately detects all computer viruses is equivalent to solving the halting problem. I then tell them that I can provide them with code that identifies every program infected with a computer virus in constant running time. They think this contradicts the proof. I then write on the board the equivalent of "Begin; print "Infected!"; End." -- identify every program as infected. There are no Type II errors. However, there are many Type I errors, and thus this is not a useful program. But I digress (slightly)...
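For concreteness, here is a hypothetical Python rendering of that blackboard one-liner (not code from the original post): it flags everything as infected, so it can never produce a Type II error, yet nearly every answer it gives is a Type I error.

```python
def is_infected(program: bytes) -> bool:
    # Constant-time "virus detector": declare every program infected.
    # Zero Type II errors (no infection is ever missed), but almost every
    # positive is a Type I error, which makes the detector useless.
    return True

print(is_infected(b"print('hello world')"))  # True -- a false positive (Type I error)
print(is_infected(b"actual-malware-bytes"))  # True -- correct, but only by construction
```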
I have been told that lie detectors more frequently exhibit Type I errors because subjects may be nervous or have a medical condition, and that Type II errors generally result from training or drugs (or both) by the subject, although some psychological disorders allow psychopaths to lie undetectably. Asking foil questions (asking the subject to lie, and asking a surprise question to get a reaction) helps to identify individuals with potential for Type II errors. Proper administration (e.g., reviewing the questions with the subject prior to the exam, and revising them as necessary to prevent ambiguity), helps to minimize Type I errors. [Example. When granting a security clearance, you want to weed out people who might be more likely to commit major crimes, or who have committed them already and not yet been discovered (they may be more prone to blackmail, or future offenses). Thus, you might ask "Have you committed any crimes you haven't disclosed on your application?" Someone very literal-minded might think back to speeding down the Interstate this morning, lying to buy beers at age 20, and so on, and thus trigger a reaction. Instead, the examiner should explain before the exam that the question is meant to expose felonies, and not traffic violations and misdemeanors.] "Can't tell" situations are resolved by giving the exam again at a later time, or by reporting the results as ambiguous.
In a criminal investigation, any error can be a problem if the results are used as evidence.*** For instance, if I ask "Did you commit the robbery?" and there is a Type I error, I would erroneously conclude you were guilty. Our legal system does not allow this kind of measurement as evidence, although the police may use a negative result to clear you of suspicion. (This is not generally a good step to take in some crimes, because some psychopaths are able to lie so as to generate Type II errors.) If I asked "Are you going to hijack this plane?" then you might be afraid of the consequences of a false reading, or have a fear of flying, and there would be a high Type I error rate. Thus, this probably won't be a good mechanism to screen passengers, either. (However, current TSA practice in other areas is to have lots of Type I errors in hopes of pushing Type II errors to zero. An example is not letting *any* liquids or gels on planes, even if harmless, so as to keep any liquid explosives from getting on board.)
When the US government uses lie detectors in security clearances, they aren't seeking to identify EXACTLY which individuals are likely to be a problem. Instead, they are trying to reduce the likelihood that people with clearances will pose a risk, and it is judged acceptable to have some Type I errors in the process of minimizing Type II errors. So, as with many aspects of the adjudication process, they simply fail to grant a clearance to people who score too highly in some aspect of the test -- including a lie detector test. They may also deny clearances to people who have had too many close contacts with foreign nationals, or who have a bad history of going into debt, or any of several other factors based on prior experience and analysis. Does that mean that those people are traitors? No, and the system is set up not to draw that specific conclusion. If you fail a lie detector test for a clearance, you aren't arrested for treason! * The same holds if your evaluation score on the background investigation rates too high for lifestyle issues. What it DOES mean is that there are heightened indications of risk, and unless you are "special" for a particular reason,** the government chooses not to issue a clearance so as to reduce the risk. Undoubtedly, there are some highly qualified, talented individuals who are therefore not given clearances. However, they aren't charged with anything or deprived of fundamental rights or liberties. Instead, they are simply not extended a special classification. The end result is that people who pass through the process to the end are less likely to be security risks than an equal sample drawn from the regular population.
The same screening logic is used in other places. For instance, consider blood donation. One of the questions that will keep a donation from being used in transfusions is if the donor is a male and indicates that he has had sex with another male since (I think) 1980. Does that mean that everyone in that category has hepatitis or HIV? No! It only means that those individuals, as a class, are at higher risk, and they are a small enough subset that it is worth excluding them from the donor pool to make the donations safer in aggregate. Other examples include insurance premiums (whether to grant insurance to high risk individuals), and even personal decisions ("I won't date women with multiple facial piercings and more than 2 cats -- too crazy.") These are general exclusions that almost certainly include some Type I errors, but the end result is (in aggregate) less risk.
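To make the aggregate-risk argument concrete, here is a toy calculation with entirely made-up numbers (the fractions and risk rates below are hypothetical illustrations, not actual blood-supply statistics): excluding a small, higher-risk class produces many Type I errors but still lowers the expected number of unsafe donations.

```python
# Hypothetical numbers, purely to illustrate the aggregate-risk argument.
population = 1_000_000          # candidate donors
high_risk_fraction = 0.02       # small subset excluded by the screening question
risk_general = 0.0001           # chance a general-population donation is unsafe
risk_high = 0.01                # chance a donation from the excluded class is unsafe

unsafe_without_screening = population * (
    (1 - high_risk_fraction) * risk_general + high_risk_fraction * risk_high
)
unsafe_with_screening = population * (1 - high_risk_fraction) * risk_general

print(f"Expected unsafe donations without screening: {unsafe_without_screening:.0f}")
print(f"Expected unsafe donations with screening:    {unsafe_with_screening:.0f}")
# Almost everyone excluded is a Type I error, yet with these made-up rates the
# expected number of unsafe donations drops from about 298 to about 98.
```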
Back to lie detectors. There are two cases other than initial screening that are of some concern. The first is periodic rechecks (standard procedure); the second is instituting new screening on existing employees, as was done at some of the national labs a few years ago. In the case of periodic rechecks, the assumption is that the subject passed the exam before, and a positive reading now is either an indication that something has happened, or is a false positive. Examiners in this situation often err on the side of assuming a false positive rather than triggering a more in-depth exam; Aldrich Ames was one such case (unfortunately), and he wasn't identified until more damage was done. If one has a clearance and goes back for a re-exam and "flunks" it, that person is often given multiple opportunities to retake it. A continued inability to pass the exam should trigger a more in-depth investigation, and may result in a reassignment or a reduction in access until a further determination is made. In some cases (which I have been told are rare) someone's clearance may be downgraded or revoked. However, even in those cases, no statement of guilt is made -- it is simply the case that a privilege is revoked. That may be traumatic to the individual subject, certainly, but so long as it is rare it is sound risk management in aggregate for the government.
The second case, that of screening people for the first time after they have had access already and are "trusted" in place, is similar in nature except it may be viewed as unduly stressful or insulting by the subjects -- as was the case when national lab employees were required to pass a polygraph for the first time, even though some had had DoE clearances for decades. I have no idea how this set of exams went.
Notes to the above
* -- Although people flunking the test may not be charged with treason, they may be charged with falsifying data on the security questionnaire! I was told of one case where someone applying for a cleared position had a history of drug use that he thought would disqualify him, so he entered false information on his clearance application. When he learned of the lie detector test, he panicked and convinced his brother to go take the test for him. The first question was "Are you Mr. X?" and the stand-in blew the pens off the charts. Both individuals were charged and pled guilty to some kind of charges; neither will ever get a security clearance! :-)
** -- Difficult and special cases are when someone, by reason of office or designation, is required to be cleared. So, for instance, you are nominated to be the next Secretary of Defense, but you can't pass the polygraph as part of the standard clearance process. This goes through a special investigation and adjudication process that was not explained to me, and I don't know how this is handled.
*** -- In most Western world legal systems. In some countries, fail the polygraph and get a bullet to the head. The population is interchangeable, so no need to get hung up on quibbles over individual rights and error rates. Sociopaths who can lie and get away with it (Type II errors) tend to rise to the top of the political structures in these countries, so think of it as evolution in action....
So, bottom line: yes, lie detector tests generate errors, but the process can be managed to reduce the errors and serve as a risk reduction tool when used in concert with other processes. That's how it is used by the government.
Reporting Vulnerabilities is for the Brave
I was involved in disclosing a vulnerability found by a student to a production web site using custom software (i.e., we didn't have access to the source code or configuration information). As luck would have it, the web site got hacked. I had to talk to a detective in the resulting police investigation. Nothing bad happened to me, but it could have, for two reasons.
The first reason is that whenever you do something "unnecessary", such as reporting a vulnerability, police wonder why, and how you found out. The police also wonder: if you found one vulnerability, could you have found more and not reported them? Who did you disclose that information to? Did you get into the web site, and do anything there that you shouldn't have? It's normal for the police to think that way. They have to. Unfortunately, it makes reporting any problems very unappealing.
A typical difficulty encountered by vulnerability researchers is that administrators or programmers often deny that a problem is exploitable or is of any consequence, and request a proof. This got Eric McCarty in trouble -- the proof is automatically a proof that you breached the law, and can be used to prosecute you! Thankfully, the administrators of the web site believed our report without trapping us by requesting a proof in the form of an exploit and fixed it in record time. We could have been in trouble if we had believed that a request for a proof was an authorization to perform penetration testing. I believe that I would have requested a signed authorization before doing it, but it is easy to imagine a well-meaning student being not as cautious (or I could have forgotten to request the written authorization, or they could have refused to provide it...). Because the vulnerability was fixed in record time, it also protected us from being accused of the subsequent break-in, which happened after the vulnerability was fixed, and therefore had to use some other means. If there had been an overlap in time, we could have become suspects.
The second reason that bad things could have happened to me is that I'm stubborn and believe that in a university setting, it should be acceptable for students who stumble across a problem to report vulnerabilities anonymously through an approved person (e.g., a staff member or faculty) and mechanism. Why anonymously? Because student vulnerability reporters are akin to whistleblowers. They are quite vulnerable to retaliation from the administrators of web sites (especially if it's a faculty web site that is used for grading). In addition, student vulnerability reporters need to be protected from the previously described situation, where they can become suspects and possibly unjustly accused simply because someone else exploited the web site around the same time that they reported the problem. Unlike security professionals, they do not understand the risks they take by reporting vulnerabilities (several security professionals don't yet either). They may try to confirm that a web site is actually vulnerable by creating an exploit, without ill intentions. Students can be guided to avoid those mistakes by having a resource person to help them report vulnerabilities.
So, as a stubborn idealist I clashed with the detective by refusing to identify the student who had originally found the problem. I knew the student enough to vouch for him, and I knew that the vulnerability we found could not have been the one that was exploited. I was quickly threatened with the possibility of court orders, and the number of felony counts in the incident was brandished as justification for revealing the name of the student. My superiors also requested that I cooperate with the detective. Was this worth losing my job? Was this worth the hassle of responding to court orders, subpoenas, and possibly having my computers (work and personal) seized? Thankfully, the student bravely decided to step forward and defused the situation.
As a consequence of that experience, I intend to provide the following instructions to students (until something changes):
- If you find strange behaviors that may indicate that a web site is vulnerable, don't try to confirm if it's actually vulnerable.
- Try to avoid using that system as much as is reasonable.
- Don't tell anyone (including me), don't try to impress anyone, don't brag that you're smart because you found an issue, and don't make innuendos. However much I wish I could, I can't preserve your anonymity or protect you from police questioning (where you may incriminate yourself), a police investigation gone awry, or miscarriages of justice. We all want to do the right thing, and help people we perceive as in danger. However, you shouldn't help when it puts you at the same or greater risk. The risk of being accused of felonies and having to defend yourself in court (as if you had the money to hire a lawyer -- you're a student!) is just too high. Moreover, this is a web site, an application; real people are not in physical danger. Forget about it.
- Delete any evidence that you knew about this problem. You are not responsible for that web site, it's not your problem -- you have no reason to keep any such evidence. Go on with your life.
- If you decide to report it against my advice, don't tell or ask me anything about it. I've exhausted my limited pool of bravery -- as other people would put it, I've experienced a chilling effect. Despite the possible benefits to the university and society at large, I'm intimidated by the possible consequences to my career, bank account and sanity. I agree with HD Moore, as far as production web sites are concerned: "There is no way to report a vulnerability safely".
Edit (5/24/06): Most of the comments below are interesting, and I'm glad you took the time to respond. After an email exchange with CERT/CC, I believe that they can genuinely help by shielding you from having to answer questions from and directly deal with law enforcement, as well as from the pressures of an employer. There is a limit to the protection that they can provide, and past that limit you may be in trouble, but it is a valuable service.
Illinois WiFi piggybacker busted
Ars Technica's Eric Bangeman posted a pointer and commentary about a case in Illinois where a WiFi piggybacker got caught and fined. This is apparently the third conviction in the US (two in Florida and this one) in the last 9 months. The Rockford Register reports:
> In a prepared statement, Winnebago County State's Attorney Paul Logli said, "With the increasing use of wireless computer equipment, the people of Winnebago County need to know that their computer systems are at risk. They need to use encryption or what are known as firewalls to protect their data, much the same way locks protect their homes."

Firewall? I guess they didn't prepare the statement enough, but the intent is clear. Still, it seems that the focus is on the consumer's responsibility to lock down their network, ignoring the fact that the equipment that's churned out by manufacturers is far too difficult to secure in the best of circumstances, let alone when you have legacy gear that won't support WPA. Eric seems to agree:
> Personally, I keep my home network locked down, and with consumer-grade WAPs so easy to administer, there's really no excuse for leaving them running with the default (open) settings.

"Easy" is very relative. It's "easy" for guys like us, and probably a lot of the Ars audience, but try standing in the networking hardware aisle at Best Buy for about 15 minutes and listen to the questions most customers ask. As I've touched on before, expecting them to secure their setups is just asking for trouble.


