I was involved in disclosing a vulnerability, found by a student, to the operators of a production web site running custom software (i.e., we didn’t have access to the source code or configuration information). As luck would have it, the web site got hacked, and I had to talk to a detective in the resulting police investigation. Nothing bad happened to me, but it could have, for two reasons.
The first reason is that whenever you do something “unnecessary”, such as reporting a vulnerability, police wonder why, and how you found out. Police also wonder: if you found one vulnerability, could you have found more and not reported them? Who did you disclose that information to? Did you get into the web site, and do anything there that you shouldn’t have? It’s normal for the police to think that way. They have to. Unfortunately, it makes reporting any problems very unappealing.
A typical difficulty encountered by vulnerability researchers is that administrators or programmers often deny that a problem is exploitable or of any consequence, and request a proof. This is what got Eric McCarty in trouble: the proof is automatically proof that you broke the law, and can be used to prosecute you! Thankfully, the administrators of the web site believed our report without trapping us by requesting a proof in the form of an exploit, and fixed it in record time. We could have been in trouble if we had believed that a request for a proof was an authorization to perform penetration testing. I believe that I would have requested a signed authorization before doing it, but it is easy to imagine a well-meaning student not being as cautious (or I could have forgotten to request the written authorization, or they could have refused to provide it…). The quick fix also protected us from being accused of the subsequent break-in, which happened after the vulnerability was fixed and therefore had to use some other means. If there had been an overlap in time, we could have become suspects.
The second reason that bad things could have happened to me is that I’m stubborn and believe that in a university setting, it should be acceptable for students who stumble across a problem to report vulnerabilities anonymously, through an approved person (e.g., a staff or faculty member) and mechanism. Why anonymously? Because student vulnerability reporters are akin to whistleblowers. They are quite vulnerable to retaliation from the administrators of web sites (especially if it’s a faculty web site that is used for grading). In addition, student vulnerability reporters need to be protected from the situation described above, where they can become suspects, and possibly be unjustly accused, simply because someone else exploited the web site around the same time that they reported the problem. Unlike security professionals, they do not understand the risks they take by reporting vulnerabilities (then again, quite a few security professionals don’t yet either). They may try to confirm that a web site is actually vulnerable by creating an exploit, without ill intentions. Students can be guided to avoid those mistakes by having a resource person to help them report vulnerabilities.
So, as a stubborn idealist, I clashed with the detective by refusing to identify the student who had originally found the problem. I knew the student well enough to vouch for him, and I knew that the vulnerability we found could not have been the one that was exploited. I was quickly threatened with the possibility of court orders, and the number of felony counts in the incident was brandished as justification for revealing the name of the student. My superiors also requested that I cooperate with the detective. Was this worth losing my job? Was this worth the hassle of responding to court orders and subpoenas, and possibly having my computers (work and personal) seized? Thankfully, the student bravely decided to step forward and defused the situation.
As a consequence of that experience, I intend to provide the following instructions to students (until something changes):
Edit (5/24/06): Most of the comments below are interesting, and I’m glad you took the time to respond. After an email exchange with CERT/CC, I believe that they can genuinely help by shielding you from having to deal directly with law enforcement and answer its questions, as well as from the pressures of an employer. There is a limit to the protection they can provide, and past that limit you may be in trouble, but it is a valuable service.
A popular saying is that “Reliable software does what it is supposed to do. Secure software does that and nothing else” (Ivan Arce). However, how do we get there, and can we claim that we have achieved the practice of an engineering science? The plethora of vulnerabilities found every year (thousands, and that’s just in software that matters or is publicly known) suggests not. Does that mean that we don’t know how, or that it is just not put into practice for reasons of ignorance, education, costs, market pressures, or something else?
The distinction between artisanal work and engineering work is well expressed in the SEI (Software Engineering Institute) work on capability maturity models, in which levels of maturity range from 1 (initial, ad hoc processes) to 5 (optimizing, continuously improving processes).
Artisanal work is individual work, entirely dependent on the (unique) skills of the individual and that person’s level of organization. Engineering work aims to be objective, independent of any one individual’s perception, and not reliant on unique skills. It should be reproducible, predictable and systematic.
In this context, it occurred to me that the security community often suggests methods that have artisanal characteristics. We are also somewhat hypocritical (in the academic sense of the term: not deceitful, just not thinking things through critically enough). The methods that are suggested to increase security actually rely on practices we decry. What am I talking about? I am talking about black lists.
A common design error is to create a list of “bad” inputs, bad characters, or other undesirable things. This is a black list; it often fails because the enumeration is incomplete, or because the removal of bad characters from the input can produce another bad input that is not caught (and so on, recursively). More often than not, there turns out to be a way to circumvent or fool the black list mechanism. Black lists also fail because they are based on previous experience, and only enumerate *known* bad input. The recommended practice is the creation of white lists, which enumerate known good input. Everything else is rejected.
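To make the recursive-removal problem concrete, here is a minimal Python sketch (my illustration, not something from the original post; the function names and the regular expression are assumptions): the black-list sanitizer strips one known-bad substring and can be defeated by an input crafted so that the removal itself reproduces the bad substring, while the white-list validator accepts only input matching a known-good pattern and rejects everything else.

```python
import re

def blacklist_sanitize(value):
    """Naive black-list approach: strip one known-bad substring.

    A single, non-recursive removal can be defeated by input crafted so
    that deleting the bad substring *produces* the bad substring again.
    """
    return value.replace("<script>", "")

def whitelist_validate(value):
    """White-list approach: accept only input matching a known-good pattern
    (here, 1-32 ASCII letters, digits, underscores or hyphens); reject
    everything else.
    """
    if re.fullmatch(r"[A-Za-z0-9_-]{1,32}", value):
        return value
    raise ValueError("input rejected: not on the white list")

# The black list is fooled: removing "<script>" once leaves "<script>" behind.
print(blacklist_sanitize("<scr<script>ipt>"))   # -> "<script>"

# The white list simply rejects anything it does not recognize as good.
print(whitelist_validate("alice_01"))           # -> "alice_01"
# whitelist_validate("<scr<script>ipt>") would raise ValueError
```

The point is not the specific pattern, but the direction of the decision: the white list states what is allowed and denies by default.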
When I teach secure programming, I go through often-repeated mistakes and show students how to avoid them. Books on secure programming show lists upon lists of “sins” and errors to avoid. Those are black lists that we are, in effect, creating in the minds of readers and students! It doesn’t stop there. Recommended development methods (solutions for repeated mistakes) also often take the form of black lists. For example, risk assessment and threat modeling require expert artisans to imagine, based on past experience, the likely avenues of attack, the possible damage, and other consequences. The results of those activities depend on unique skill sets, are irreproducible (ask different people and you will get different answers), and attempt to enumerate known bad things. They build black lists into the design of software development projects.
Risk assessment and threat modeling are appropriate for insurance purposes in the physical world, because the laws of physics and gravity on earth aren’t going to change tomorrow. That experience is collected at geographical, geological and national levels, and has been tabulated and analyzed for decades. In software engineering, however, black lists are doomed to failure: they are based on past experience, yet they must face intelligent attackers inventing new attacks. How good can that be for the future of secure software engineering?
Precious few people emphasize development and software configuration methods that result (with guarantees) in the creation of provably correct code. This naturally leads to formal methods (and languages like SPARK and the correctness-by-construction approach), but not necessarily so. For example, I recently learned of a software solution called AppArmor (Suse Linux, Crispin Cowan et al.). It is based on fairly fine-grained capabilities, and on granting an application only the capabilities it is known to require; all others are denied. This amounts to building a white list of what an application is allowed to do; the developers even say that it can contain and limit a process running as root. It may still be possible for some malicious activity to take place within the limits of the granted capabilities (if an application was compromised), but its scope is greatly limited. The white list can be developed simply by exercising an application throughout its normal states and functions, in normal usage. The list of capabilities is then frozen and provides protection against unexpected conditions.
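The learn-then-freeze idea can be modeled in a few lines of Python. This is only a toy sketch of the concept, not AppArmor itself or its profile syntax; the class, the operation strings and the paths are invented for illustration: record the operations the application performs during normal use, freeze that set, then deny anything outside it.

```python
class CapabilityProfile:
    """Toy model of learn-then-enforce confinement (illustrative only):
    learn the set of operations an application performs during normal use,
    freeze it, then deny everything outside that set.
    """

    def __init__(self):
        self.allowed = set()
        self.learning = True

    def record_or_check(self, operation):
        if self.learning:
            self.allowed.add(operation)   # learning mode: build the white list
            return True
        return operation in self.allowed  # enforce mode: deny by default

    def freeze(self):
        self.learning = False             # the profile no longer grows


profile = CapabilityProfile()

# Exercise the application through its normal states and functions.
for op in ("read:/etc/app.conf", "write:/var/log/app.log", "connect:tcp/443"):
    profile.record_or_check(op)

profile.freeze()

# A compromised process trying something unexpected is denied.
print(profile.record_or_check("write:/var/log/app.log"))  # True  (on the list)
print(profile.record_or_check("exec:/bin/sh"))             # False (not on the list)
```

In the real tool the profile lives outside the application and is enforced by the kernel, but the deny-by-default direction is the same.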
We need to come up with more white list methods for both development and configuration, and move away from black lists. This is the only way that secure software development will become secure software engineering.
Edit (4/16/06): Someone pointed out the site http://blogs.msdn.com/threatmodeling/ to me. It is interesting because it shows awareness of the challenge of getting from an art to a science. It also attempts to abstract the expert knowledge into an “attack library”, which makes its black list nature explicit. However, they don’t openly acknowledge the limitations of black lists. While we don’t currently have a white list design methodology that can replace threat modeling (which is useful!), it’s regrettable that the best anyone can come up with is a black list.
Also, it has occurred to me since writing this post that AppArmor isn’t quite a pure white list methodology, strictly speaking. Instead of being a list of known *safe* capabilities, it is a list of *required* capabilities. The difference is that the list of required capabilities, due to the granularity of capabilities and the complexity that emerges from composing different capabilities, is a superset of what it is actually safe for the application to do. What to call it, then? I am thinking of “permissive white list” for a white list that allows more than necessary, versus a “restrictive white list” for a white list that possibly prevents some safe actions, and an “exact white list” when the white list matches exactly what is safe to do, no more and no less.
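Stated in terms of sets, the three names boil down to simple comparisons. Here is a tiny Python illustration (the operation names are made up, not taken from any real profile):

```python
# Hypothetical sets of fine-grained actions for one application.
safe    = {"read_config", "write_log", "open_tcp_out"}               # everything actually safe
granted = {"read_config", "write_log", "open_tcp_out", "bind_port"}  # what the profile allows

# The terminology above, expressed as set comparisons:
permissive  = granted > safe    # allows more than is safe (a required-capability list)
restrictive = granted < safe    # would block some safe actions
exact       = granted == safe   # matches exactly what is safe, no more and no less

print(permissive, restrictive, exact)   # True False False
```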
The results are in from the EDUCAUSE Security Task Force’s Computer Security Awareness Video Contest. Topics covered include spyware, phishing, and patching. The winning video, Superhighway Safety, uses a simple running metaphor, a steady beat, and stark visual effects to concisely convey the dangers of online computing as well as the steps one can take to protect his or her computer and personal information.
The videos are available for educational, noncommercial use, provided that each is identified as being a winning entry in the contest. In addition to being great educational/awareness tools, they should serve as inspiration for K-12 schools as well as colleges and universities.
Well, we’re all pretty beat from this year’s Symposium, but things went off well. Along with lots of running around to make sure posters showed up and such, I was able to give a presentation called Web Application Security - The New Battlefront. People must like ridiculous titles like that, because turnout was quite good. Anyway, I covered the current trend away from OS attacks and vandalism and toward application attacks for financial gain, which includes web apps. We went over the major types of attacks, and I gave a brief summary of what I feel needs to be done in education, tool development, and application auditing to improve the rather poor state of affairs. I’ll expand on these topics more in the future, but you can see my slides and watch the video for now:
This story at the NYT web site (registration might be required; it seems kind of random to me) is about the prevalence of “piggybacking” on open wireless networks. Most of the article deals with the theft of bandwidth, although there are a couple of quotes from David Cole of Symantec about other dangers of people getting into your LAN and accessing the Internet through it. Something that really struck me, though, was the following passage about a woman who approached a man with a laptop camped outside her condo building:
When Ms. Ramirez asked the man what he was doing, he said he was stealing a wireless Internet connection because he did not have one at home. She was amused but later had an unsettling thought: “Oh my God. He could be stealing my signal.”
Yet some six months later, Ms. Ramirez still has not secured her network.
There are two problems highlighted here, I think: first, even people who know that an open network puts them at risk often never get around to securing it; second, the burden of securing the equipment is placed entirely on the consumer, rather than on the manufacturer shipping it secure by default.
Think about it: if you purchased a car that came with non-functioning locks and keys, and it was your responsibility to get the keys cut and the locks programmed, would you be satisfied with the purchase? Would it be realistic to expect most consumers to do this on their own? I think not. But that’s what the manufacturers of consumer wireless equipment (and related products, like operating systems) ask of the average consumer. With expectations like that, is it really a surprise that most users choose not to bother, even when they know better?
More: Hey Neighbor, Stop Piggybacking on My Wireless - New York Times »