Every once in a while, I receive spam for security conferences I’ve never heard of, much less attended. Typically the organizers of these conferences are faculty members, professors, or government agency employees who should know better than to hire companies to spam for them. I suppose that hiring a third party provides plausible deniability. It’s hypocritical. To be fair, I once received an apology for such spamming, which demonstrated that those involved understood integrity.
It’s true that it’s only a minor annoyance. But, if you can’t trust someone for small things, should you trust them for important ones?
Disloyal software surrounds us. This is software running on devices or computers you own, but serving interests other than yours. Examples include DVD player firmware that forces you to watch the silly FBI warning or prevents you from skipping “splash” screens and previews, and popup or popunder advertisement windows in web browsers. When people discuss malware or categories of software, there is usually little consideration for disloyal software (I found this interesting discussion of Trusted Computing). Some of it is perfectly legal; some protects legal rights. At the other extreme, rootkits can subvert entire computers against their owners. The question is: when can you trust possibly disloyal software, and when does it become malware, such as the Sony CD copy prevention rootkit?
Who’s in Control
Loyalty is a question of perspective: ownership versus control. An employer providing laptops and computers to employees doesn’t want them to install things that could be liabilities or compromise the computer. The employee is then using software that is restrictive, but justifiably so. From the perspective of someone privately owning a computer, a lower likelihood of disloyalty is an advantage of free software (as in the FSF free software definition). The developers gain nothing from implementing restrictions or features that run counter to the interests of the user. If one does, someone somewhere will likely remove that restriction for the benefit of all. Of course, this doesn’t address the possibility of cleverly hidden capabilities (such as backdoors) or compromised source code repositories.
This leads to questions about the control of many other devices, such as game consoles and media players like the iPod. Why does my iPod, using Apple-provided software, not allow me to copy music files to another computer? It doesn’t matter which computer, as long as I’m not violating copyrights; perhaps it’s the same computer that ripped the CDs, after the hard drive died or was upgraded, or perhaps it’s the new computer I just bought. When the iPod is used as a storage device instead of a music player, such copies can be made with Apple software, but music files in the “play” section can’t be copied out. This restriction is utterly silly, as it accomplishes nothing but annoying owners, and I’m glad that Ubuntu Linux allows direct access to the music files.
DMCA
Some firmware implements copyright protection measures, and modifying it to remove those protections is made illegal by the DMCA. As modifying consoles (“modding”) is often done for that purpose, the act of modding has become suspicious in itself. Someone modding a DVD player simply to bypass annoying splash screens, without affecting copy protection mechanisms, would have a hard time defending herself. This has a chilling effect on the recycling of perfectly good hardware with better software. For example, I think Microsoft would still be selling large quantities of the original Xbox if the compiled XBMC media player software weren’t also effectively illegal for most people, due to licensing issues with the Microsoft compiler. The DMCA helps law enforcement and copyright holders, but it has negative effects as well (see Wikipedia). Disloyal devices are distasteful, and the current law heavily favors copyright owners. Of course, it’s not clear-cut, especially for devices that have responsibilities towards multiple entities, such as cell phones. I recommend watching Ron Buskey’s security seminar about cell phones.
Web Me Up
If you think you’re using only free software, you’re wrong every time you use the web and allow scripting. Potentially the ultimate disloyal software is what web sites push to your browser. Active content (JavaScript, Flash, etc.) on web pages can glue you in place and restrict what you can do and how, or deploy adversarial behaviors (e.g., pop-unders or browser attacks). Every time you visit a web page nowadays, you download and run software that is not free:
* It is often impractical to access the content of the page, or even basic form functionality, without running the software, so you do not have the freedom to run or not run it as a practical choice (in theory you do have a choice, but the penalties for choosing the alternative can be significant).
* It is difficult to study, given how some code can load other active content from other sites in a chain-like fashion, creating a large tangle of spaghetti that can change at any time.
* There is no point in redistributing copies, as the copies running from the web sites you need to use won’t change.
* Releasing your “improvements” to the public would almost certainly violate copyrights. Even if you made useful improvements, the web site owners could change how their site works regularly, foiling your efforts.
Most of the above holds even if the scripts you are made to run in a browser were free software from the web developers’ point of view; the delivery method taints them.
Give me some AIR
The Adobe Integrated Runtime (“AIR”) is interesting because it has the potential to free web technologies such as HTML, Flash, and JavaScript, by allowing them to be used in a free, open source way. CERIAS webmaster Ed Finkler developed the “Spaz” application with it, and licensed it under the New BSD license. I say “potential” only because AIR can also be used to dynamically load software, with all the problems of web scripting. It’s a question of control and trust. I can’t trust possibly malicious code that I am forced to run on my machine just to access a web page I happen to visit. However, I may trust static code that is free software not to be disloyal by design. If it is disloyal, it is possible to fix it and redistribute the improved code. AIR could deliver that, as Ed demonstrated.
The problem with AIR is that I will have to trust a web developer with the security of my desktop. AIR has two sandboxes: the Classic Sandbox, which is like a web browser, and the Application Sandbox, which is compared to server-side applications except that the code runs locally (see the AIR security FAQ). The Application Sandbox allows local file operations that are typically forbidden to web browsers, but without some of the more dangerous web browser functionality. Whereas the technological security model makes sense as a foundation, its actual security is entirely up to whoever writes the code that runs in the Application Sandbox. People who have no qualms about pushing code to my browser and forcing me to turn on scripting, thus making me vulnerable to attacks from sites I visit subsequently, to malicious ads, or to code injected into their site, can’t be trusted to care whether my desktop is compromised through their code, or to be competent enough to prevent it.
Even the security FAQ for AIR downplays significant risks. For example, it says “The damage potential from an injection attack in a given website is directly proportional to the value of the website itself. As such, a simple website such as an unauthenticated chat or crossword site does not have to worry much about injection attacks as much as any damage would be annoying at most.” This completely ignores scripting-based attacks against the browsers themselves, such as those performed by the well-known malware kits MPack and IcePack. In addition, there will probably be both implementation and design vulnerabilities found in AIR itself.
Either way, AIR is a development to watch.
P.S. (10/16): What if AIR attracts the kind of people who are responsible for flooding the National Vulnerability Database with PHP server application vulnerabilities? Server applications are notoriously difficult to write securely. Code that they would write for the Application Sandbox could be just as buggy, except that instead of a few compromised servers, there could be a large number of compromised personal computers…
The role of diversity in helping computer security received attention when Dan Geer was fired from @stake for his politically inconvenient considerations on the subject. Recently, I tried to “increase diversity” by buying an Ubuntu system, that is, a system that would come with Ubuntu pre-loaded. I have used Ubuntu for quite a while now and it has become my favorite for the desktop, for many reasons that I don’t want to expand upon here, and despite limitations in the manageability of multiple-monitor support. I wanted a system that would come with it pre-loaded so as not to pay for an OS I won’t use, not to support companies that didn’t deserve that money, and to be even less of a target than if I had used MacOS X. I wanted a system with a pre-tested, supported Ubuntu installation. I still can’t install 7.04 on a recent Sun machine (dual Opteron) because of some problems with the SATA drivers on an AMD64 platform (the computer won’t boot after the upgrade from 6.10). I don’t want another system with only half-supported hardware, or hardware that is sometimes supported and sometimes not as versions change. I suppose that I could pay the $250 that Canonical wants for one year of professional support, but there is no guarantee that they would be able to get the hardware to play nicely with 7.04. With a pre-tested system, there is no such risk and there are economies of scale. Trying to get software to play nicely after buying the hardware feels very much to me like putting the cart before the horse; it’s a reactive approach that conflicts with best practices.
So, encouraged by the news of Dell selling Ubuntu machines, I priced out a machine and monitor. When I requested a quote, I was told that this machine was available only for individual purchase, and that I needed to go to the institutional purchase site if I wanted to buy it with one of my grants. Unfortunately, there wasn’t, and still isn’t, an Ubuntu machine available for educational purchase on that site. No amount of begging changed Dell’s bizarre business practices. Dell’s representative for Purdue stated that this was due to “supply problems” and that Ubuntu machines might be available for purchase in a few months. Perhaps. The other suggestion was to buy a Dell Precision machine, but those only come with Red Hat Linux (see my point about supporting companies who deserve it), and they use ATI video hardware (ATI has a history of bad Linux drivers).
I then looked for desktops from other companies. System76, and apparently nobody else (based on internet searches), had what I wanted, except that they sold monitors only up to 20”. When I contacted them, they kindly and efficiently offered a 24” monitor for purchase, and sent me a quote. I forwarded the quote for purchasing.
After a while, I was notified that System76 wasn’t a registered vendor with Purdue University, and that it costs too much to add a vendor that “is not likely to be much of a repeat vendor” and that Purdue is “unwilling to spend the time/money required to set them up as a new vendor in the purchasing system.” I was also offered the possibility to buy the desktop and monitor separately, and because then the purchase would be done under different purchasing rules and with a credit card, I could buy them from System76 if I wanted… but I would have to pay a 50% surcharge imposed by Purdue (don’t ask, it doesn’t make sense to me).
Whereas Purdue may have good reasons to do this from an accounting point of view, I note that educational, institutional purchases are subject to rules and restrictions that limit computing diversity or make it less practical, assuming this practice is widespread. This negatively impacts computing “macro-security” (security considered on a state-wide scale or larger). I’m not pretending that the policies are new or that buying a non-mainstream computer has not been problematic in the past. However, the scale of computer security problems has increased over the years, and these policies have an effect on security that they don’t have on other items purchased by Purdue or other institutions. We could benefit from being aware of the unfortunate effects of those purchasing policies; I believe that exemptions for computers would be a good thing.
Edit: I wrote the wrong version numbers for Ubuntu in the original.
Edit (9/14/07): Changed the title from “Ubuntu Linux Computers 50% More Expensive: a Barrier to Computing Diversity” to “Purchasing Policies That Create a Barrier to Computing Diversity”, as it is the policies that are the problem, and the barriers are present against many products, not just Ubuntu Linux.
So, you watch for advisories, deploy countermeasures (e.g., change firewall and IDS rules) or shut down vulnerable services, patch applications, restore services. You detect compromises, limit damages, assess the damage, repair, recover, and attempt to prevent them again. Tomorrow you start again, and again, and again. Is it worth it? What difference does it make? Who cares anymore?
If you’re sick of it, you may just be getting fatigued.
If you don’t bother defending anymore because you think there’s no point to this endless treadmill, you may be suffering from learned helplessness. Some people even consider that if you only passively wait for patches to be delivered and applied by software update mechanisms, you’re already in the “learned helplessness” category. On the other hand, tracking every vulnerability in the software you use by reading BugTraq, Full Disclosure, and so on, the moment they are announced, and running proof-of-concept code on your systems to test them, isn’t for everyone; there are diminishing returns, and one has to balance risk against energy expenditure, especially when that energy could produce better returns elsewhere. Of course I believe that using Cassandra is an OK middle ground for many, but I’m biased.
The picture may certainly look bleak, with talk of “perpetual zero-days”. However, there are things you can do (of course, as with all such lists, not every item applies to everyone):
Use the CIS benchmarks, and if evaluation tools are available for your platform, run them. These tools give you a score, and as silly as some people may think this score is (a ship with 10 holes instead of 100 may still sink!), it gives you positive feedback as you improve the security stance of your computers. It’s encouraging, and may lift the feeling that you are sinking into helplessness. If you are a Purdue employee, you have access to CIS Scoring Tools with specialized features (see this news release). Ask whether your organization also has access, and if not, consider asking for it (note that this is not necessary to use the benchmarks).
Use the NIST security checklists (hardening guides and templates). NIST’s Information Technology Laboratory site has many other interesting security papers to read as well.
Consider using Thunderbird and the Enigmail plugin for GPG, which make handling signed or encrypted email almost painless. Do turn on the SSL/TLS-only options to connect to your server (both SMTP and either IMAP or POP) if it supports them. If not, request these features from your provider. Remember, learned helplessness is not making any requests or any attempts because you believe it’s never going to change anything. If you can log in to the server, you also have the option of SSH tunneling, but it’s more hassle (a sketch appears at the end of this list).
Watch CERIAS security seminars on subjects that interest you.
If you’re a software developer or someone who needs to test software, consider using the ReAssure system as a test facility with configurable network environments and collections of VMware images (disclosure: ReAssure is my baby, with lots of help from other CERIAS people like Ed Cates).
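As an aside on the mail item above: SSH tunneling is usually done with the ssh command’s -L option, but it can also be scripted. Below is a minimal sketch using the net-ssh Ruby gem, assuming key-based authentication or an SSH agent; the host name, user name, and port numbers are only placeholders, not a recommendation for any particular setup.

require 'net/ssh'   # the net-ssh gem; not part of a default Ruby install

# Forward local port 1143 to the IMAP port (143) on the mail server, so a mail
# client pointed at localhost:1143 has its traffic carried inside the SSH session.
Net::SSH.start('mail.example.org', 'username') do |ssh|
  ssh.forward.local(1143, 'localhost', 143)
  ssh.loop { true }   # keep the tunnel open until the script is interrupted
end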
Good luck! Feel free to add more ideas as comments.
*A small rant about privacy, which tends to be another area of learned helplessness: Why do they need to know? I tend to consider all information that people gather about me, that they don’t need to know for tasks I want them to do for me, a (perhaps very minor) violation of my privacy, even if it has no measurable effect on my life that I know about (that’s part of the problem—how do I know what effect it has on me?). I like the “on a need to know basis” principle, because you don’t know which selected (and possibly out of context) or outdated information is going to be used against you later. It’s one of the lessons of life that knowledge about you isn’t always used in legal ways, and even if it’s legal, not everything that’s legal is “Good” or ethical, and not all agents of good or legal causes are ethical and impartial or have integrity. I find the “you’ve got nothing to hide, do you?” argument extremely stupid and irritating—and it’s not something that can be explained in a sentence or two to someone saying that to you. I’m not against volunteering information for a good cause, though, and I have done so in the past, but it’s rude to just take it from me without asking and without any explanation, or to subvert my software and computer to do so.
The Cassandra system has been much more successful and long-lasting than I first imagined. Being inexperienced at the time, I got some things wrong, such as deleting inactive accounts (I stopped that very quickly, as it made many people unhappy or unwilling to use the service), or deleting accounts that bounced several emails (several years ago this was changed to simply invalidating the email address). Recently I improved it by adding GPG signatures: email notifications from Cassandra are now cryptographically signed. The public key is available on the standard public key servers, such as the MIT server.
Things can still be improved
I initially envisioned profiles as being updated regularly, perhaps with automated tools listing the applications installed on a system. I also thought that there were many applications without vulnerability entries in MITRE’s CVE, the National Vulnerability Database (NVD, formerly named ICAT), or Secunia, so I needed to let people enter product and vendor names that weren’t linked to any vulnerabilities. However, I found that there was little correlation between the names of products in these sources, as well as between those provided by scanning tools or entered manually by users. ICAT in particular used to be quite bad about using inconsistent or misspelled names. Secunia does not separate vendor names from product names and uses different names than the NVD, so Cassandra has to guess which is which based on already known vendor and product names. Because of this, Secunia entries may need reparsing when new names are learned. So, users could get a false sense of security by entering the names of the products they use, but never get notified because of a mismatch! On top of that, bad names are listed by the autocomplete feature, so users can be misled by someone else’s mistakes or misfortune. A Cassandra feature that helped somewhat with this problem was the notion of canonical and variant names: all variants point to a single canonical name for a vendor or a product. However, these need to be entered manually and maintained over time, so I didn’t enter many.
It gets worse. Profiles are quite static in practice; this leads to other problems. Companies merge, get bought or otherwise change names. Sometimes companies also decide to change the names of their products for other reasons, or names are changed in the NVD. So, profiles can silently drift off-course and not give the alerts needed. All these factors result in product or vendor names in Cassandra that don’t point to any vulnerability entries. I call these names “orphans”; I recently realized that Cassandra contained hundreds of orphaned names.
And they will be improved
I am planning on implementing two new features in Cassandra: Profile auto-correction and product name vetting.
Note that you should still verify your profiles periodically, because Cassandra will not detect all name changes; this is difficult because a name change may look just like a new product. If you have a product name that isn’t in Cassandra, I suggest using the keywords feature. Cassandra will then search the title and description of the entries to find matches (note to self: perhaps keywords should search product and vendor names as well, which would help catch all variants of a name; also, consider using string matching algorithms to recognize names).
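To illustrate that last note to self, here is a minimal sketch of edit-distance name matching; this is not Cassandra code, and the function names, threshold, and example names are made up for illustration. The idea is that a small edit distance, relative to the length of a known canonical name, suggests a misspelling or variant rather than a genuinely new product:

# Minimal sketch of edit-distance name matching; not Cassandra code.
def levenshtein(a, b)
  a = a.downcase
  b = b.downcase
  prev = (0..b.length).to_a
  a.chars.each_with_index do |ca, i|
    cur = [i + 1]
    b.chars.each_with_index do |cb, j|
      cost = (ca == cb) ? 0 : 1
      cur << [cur[j] + 1, prev[j + 1] + 1, prev[j] + cost].min
    end
    prev = cur
  end
  prev.last
end

# Map an entered name to the closest known canonical name, if the distance is
# small relative to the canonical name's length; otherwise return nil.
def canonicalize(name, canonical_names, max_ratio = 0.2)
  best = canonical_names.min_by { |c| levenshtein(name, c) }
  return nil if best.nil?
  levenshtein(name, best) <= (best.length * max_ratio).ceil ? best : nil
end

puts canonicalize("Mozila Firefox", ["mozilla firefox", "microsoft windows"])
# prints "mozilla firefox"

Of course, automatically adding a close match as a variant would still require a vetting step, since two legitimately different products can have very similar names.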
It is common practice to make forms more user-friendly by giving immediate feedback on the inputs with client-side scripting. Everyone with a bit of secure programming knowledge knows, however, that the server side needs to do the final input validation. If the two validations are not equivalent, an input that passes client-side validation may be rejected later, confusing and annoying the customer, or the client-side validation may be needlessly restrictive. Another problem arises when the form stops working entirely if JavaScript is disabled, because of the way input validation was implemented.
I was delighted to discover that the regular expression syntaxes of JavaScript and Ruby match, and that matching differs only in greedy vs. non-greedy behavior, not in whether a match is possible. This means that a regular expression describing a whitelist of correct inputs can be used for both (this probably works for Perl, Python, and PHP as well, but I haven’t checked).
In the code for ReAssure, all inputs are defined by classes that create the HTML for forms and also perform input validation. This means that the regular expression can be defined in a single place, when the class is instantiated:
def initialize(...)
  (...)
  @regexp = Regexp.new(/^\d+$/) # positive integer
end
This regular expression can be used to perform the initial server-side input validation:
def validate(input)
  # A nil input means the field was absent; fall back to the default value.
  if input == nil
    unescaped = default()
  else
    unescaped = CGI.unescapeHTML(input.to_s.strip)
  end
  # Accept only input matching the whitelist regular expression.
  unescaped.scan(@regexp) { |match|
    return @value = match.untaint
  }
  if input != ''
    raise 'Input "' + @ui_name + '" is not valid'
  end
end
To perform client-side input validation, the onblur event is used to trigger validation when focus is lost. The idea is to make the input red and bold (for color-blind people) when validation fails, and green when it passes. The onfocus event is used to restore the input to a neutral state while editing (this is the Ruby code that generates the form HTML):
def form
  $cgi.input('NAME'=>@name, 'VALUE'=>to_html(), 'onblur' => onblur(),
             'onfocus' => onfocus())
end

def onblur()
  return "if (this.value.search(/" + @regexp.source + "/) < 0)
    {this.className = 'bad'} else {this.className = 'good'};"
end

def onfocus()
  return "this.className = 'normal';"
end
where the classes “bad”, “good” and “normal” are specified in a style sheet (CSS).
There are cases when more validation may happen later on the server side, e.g., if an integer must match an existing key in a database that the user may be allowed to reference. Could the extra validation create a mismatch? Perhaps. However, in these cases the client-side interface should probably be a pre-screened list and not a free-form input, so the client would have to be malicious to fail server-side validation. It is also possible to add range (for numbers) and size (for strings) constraints in the “onblur” JavaScript. In the case of a password field, the JavaScript contains several checks matching the rules on the server side. So, a lone regular expression may not be sufficient for complete input validation, but it is a good starting point.
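The same pattern extends to those extra constraints. Here is a hypothetical sketch, not the actual ReAssure code (the constant names and bounds are invented), pairing a server-side range check with the equivalent rule in the generated onblur JavaScript so that the two validations stay consistent:

# Hypothetical sketch of paired range checks; not the actual ReAssure code.
MIN_VALUE = 1      # illustrative bounds
MAX_VALUE = 100

def validate_range(value)
  n = Integer(value)   # raises ArgumentError if the input is not an integer
  raise 'Input out of range' unless (MIN_VALUE..MAX_VALUE).include?(n)
  n
end

def onblur_with_range(regexp_source)
  # The same bounds, expressed in JavaScript for immediate feedback in the browser.
  "var v = parseInt(this.value, 10);" +
  "if (this.value.search(/#{regexp_source}/) < 0 || v < #{MIN_VALUE} || v > #{MAX_VALUE})" +
  " {this.className = 'bad'} else {this.className = 'good'};"
end

In a class-based design like the one above, these checks would naturally live in validate() and onblur() overrides.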
Note that the form still works even if JavaScript is disabled! As you can see, it is easy to perform client-side validation without forcing everyone to turn on JavaScript.
Hear, see and speak no Evil—but pretend JavaScript is safe and force your customers to turn on JavaScript in their browsers to make your site sparkle. It’s not your problem, is it? It’s the developers of browsers that should fix their code!
Meanwhile the parade of JavaScript-based attacks continues. When even the organization responsible for CISSPs, (ISC)2, makes it impossible to update your CISSP credits without JavaScript turned on, what hope is there for shopping, banking, credit card security sites (e.g., verified by VISA) and investment sites (e.g., Fidelity) to adopt careful and responsible stances? I didn’t even get a reply from the (ISC)2 web site developers when I pointed out JavaScript issues. It’s a slick click interface party! Woohoo! Ooh, shiny!
It’s a party for attackers, that is. JavaScript is not the only problem, when any browser extension can take down the browser (or take control of it…). When will we see browsers architected like operating systems, so that a plug-in can crash without taking the browser with it? When will plugins have configurable security policies and limited privileges, so that a bug in a plugin doesn’t compromise our computer’s security? It seems that browser architecture isn’t more advanced than Windows 95 and is about as secure, yet we poke puddles of pus with them and then prepare food, and don’t even worry about getting infected. Basic browser hygiene is provided by the NoScript Firefox extension, but when every site forces you to enable JavaScript, what’s the use? One thing is sure: I don’t see many people taking this seriously.
The premise of the “Verified by VISA” program seems fine: request a password to allow the use of a credit card online, to lower credit card fraud (besides the problem of having to manage yet another password). However, there were several problems with how I was introduced to the program:
That appeared to me more like a phishing attempt, exploiting a XSS vulnerability, than anything else. After contacting my bank, I was assured that the program was legitimate. Visa actually has a web site where you can register your card for the program:
https://usa.visa.com/personal/security/vbv/index.html
On that site, you will find that most links to explanations are broken. I get a “Sorry! The page you’ve requested cannot be found.” when clicking almost all of them (I found out later that it works if you activate JavaScript). Another issue is that you need to activate JavaScript in order to provide that sensitive information, thereby exposing your browser to exploits against the browser itself and to any XSS exploits (I’m not worried about the VISA site, which doesn’t have user-submitted content, so much as about the shopping sites). If you are not using NoScript or forget to disable JavaScript afterwards, you expose yourself to exploits from all the sites you visit in the future. It’s irresponsible and unnecessary: there was nothing in the JavaScript-activated forms (or in the explanations) that couldn’t have been done with regular HTML. It’s all in the name of security…
A fundamental issue I have with this process is that the commands (the registration) to reach a higher level of security are issued in-band, using the very medium and means (the browser) that are semi-trusted and part of the problem we’re trying to solve (I realize that this program addresses other threats, such as the vulnerability of CC numbers stored by merchants). Moreover, doing this exposes more sensitive credentials. It is almost like hiring a thief as a courier for the new keys to the building, while also giving him the key to the safe where all the keys are stored.
The Visa program also enables a new kind of attack against credit cards. If criminals get their hands on the last 4 digits of your SSN (or guess them; that’s at most 10,000 brute-force attempts) and your credit card number, they could register it themselves, denying you its use! The motivation for this attack wouldn’t necessarily be financial gain, but causing you grief. I also bet that you would have a harder time proving that fraud occurred, and may get stuck with any charges made by the criminals.
The correct way of registering for this program would be by using a trusted channel, such as showing up at your bank in person to choose a password for your credit card, or through registered mail with signatures. However, these are not available options for me (I wonder if some banks offer this service, and if so, whether they are not simply using the above web site). There should also be a way to decline participation in the program, and block the future registration of the card.
In conclusion, this poorly executed program had a reverse effect on me: I now distrust my Visa card, and Visa itself, a little bit more.
Update: There doesn’t seem to be a limit on the number of times you can try to register a card, enabling the brute force finding of someone’s last 4 SSN digits (I tried 20 times. At the end I entered the correct number and it worked, proving that it still accepted attempts after 20 times). An attacker can then use the last 4 digits of your SSN elsewhere! Let’s say, your retirement accounts with Fidelity and others that accept SSNs as user IDs.
For more fun, I attempted to register my credit card again. I received a message stating that the card was already registered, but I was offered the chance to re-register it anyway and erase my previously entered password simply by entering my name, the complete SSN and phone number. Isn’t that great, now attackers could validate my entire SSN!
It gets worse. I entered an incorrect SSN, and the system accepted it. I was then prompted to enter new passwords. The system accepted the new passwords without blinking… Not only is the design flawed, but the implementation fails to properly perform the checks!
I was surprised to learn a few weeks ago that Vista added symlink support to Windows. Whereas I found people rejoicing at the new feature, I anticipate with dread a number of vulnerability announcements in products that worked fine under XP but are now insecure in the presence of symlinks in the file system. This should continue for some time still, as Windows programmers may take time to become familiar with the security issues that symlinks pose. For example, in the CreateFile function call, “If FILE_FLAG_OPEN_REPARSE_POINT is not specified and:
* If an existing file is opened and it is a symbolic link, the handle returned is a handle to the target.
* If CREATE_ALWAYS, TRUNCATE_EXISTING, or FILE_FLAG_DELETE_ON_CLOSE are specified, the file affected is the target.”
(reference: MSDN, Symbolic link effects on File system functions, at: http://msdn2.microsoft.com/en-au/library/aa365682.aspx)
So, unless developers update their code to use that flag, their applications may suddenly operate on unintended files. Granted, the intent of symbolic links is to be transparent to applications, and being aware of symbolic links is not something every application needs. However, secure Windows applications (such as software installers and administrative tools) will now need to be ever more careful about race conditions that could enable an attacker to unexpectedly create symlinks. They will also need to be more careful about relinquishing elevated privileges as often as possible.
In addition, it is easy to imagine security problems due to traps planted for administrators and special users, to trick them into overwriting unintended files. UNIX administrators will be familiar with these issues, but now Windows administrators may learn painful lessons as well.
Hopefully, this will be just a temporary problem that will mostly disappear as developers and administrators adjust to this new attack vector. The questions are how quickly that will happen, and how many vulnerabilities and incidents will occur in the meantime. One thing seems certain to me: MITRE’s CWE will have to add a category for this under “Windows Path Link problems”, ID 63.
TippingPoint’s Zero Day Initiative (ZDI) provides interesting data. ZDI made its “disclosure pipeline” public on August 28, 2006. As of today, it lists 49 vulnerabilities from independent researchers, which have been waiting on average 114 days for a fix, plus 12 more from TippingPoint’s own researchers. With those included, the average waiting time for a fix is 122 days, or about 4 months! Moreover, 56 out of the 61 are high severity vulnerabilities. These are from high-profile vendors: Microsoft, HP, Novell, Apple, IBM Tivoli, Symantec, Computer Associates, Oracle… Some high severity issues have been languishing for more than 9 months.
Hmm. ZDI is supposed to be a “best-of-breed model for rewarding security researchers for responsibly disclosing discovered vulnerabilities.” How is it responsible to take 9 months to fix a known but secret high severity vulnerability? It’s not directly ZDI’s fault that the vendors are taking so long, but then it’s not providing the vendors much incentive either. This suggests that programs like ZDI’s have a pernicious effect. They buy the information from researchers, who are then forbidden from disclosing the vulnerabilities. More vulnerabilities are found due to the monetary incentive, but only people paying for protection services have any peace of mind. The software vendors don’t care much, as the vulnerabilities remain secret. The rest of us are worse off than before, because more vulnerabilities remain secret for an unreasonable length of time.
Interestingly, this is what was predicted several years ago in “Market for Software Vulnerabilities? Think Again” by Kannan, K. and Telang, R., Management Science 51, pp. 726-740 (2005). The model predicted worse social consequences from these programs than from no vulnerability handling at all, due to races with crackers, increased vulnerability volume, and unequal protection of targets. This makes another conclusion of the paper interesting and likely valid: CERT/CC offering rewards to vulnerability discoverers should provide the best outcomes, because information would be shared systematically and equally. I would add that CERT/CC is also in a good position to find out if a vulnerability is being exploited in the wild, in which case it can release an advisory and make vulnerability information public sooner. A vendor like TippingPoint has a conflict of interest in doing so, because it decreases the value of their protection services.
I tip my hat to TippingPoint for making their pipeline information public. However, because they provide no deadlines to vendors or incentives for responsibly patching the vulnerabilities, the very existence of their service, and of similar ones from other vendors, hurts those who don’t subscribe. That’s what makes vulnerability protection services a racket.