The Center for Education and Research in Information Assurance and Security (CERIAS)

CERIAS Blog

Drone “Flaw” Known Since 1990s Was a Vulnerability

"The U.S. government has known about the flaw since the U.S. campaign in Bosnia in the 1990s, current and former officials said. But the Pentagon assumed local adversaries wouldn't know how to exploit it, the officials said." Call it what it is: it's a vulnerability that was misclassified (some might argue that it's an exposure, but there is clearly a violation of implicit confidentiality policies). This fiasco is the result of the thinking that there is no vulnerability if there is no threat agent with the capability to exploit a flaw. I argued against Spaf regarding this thinking previously; it is also widespread in the military and industry. I say that people using this operational definition are taking a huge risk if there's a chance that they misunderstood either the flaw, the capabilities of threat agents, present or future, or if their own software is ever updated. I believe that for software that is this important, an academic definition of vulnerability should be used: if it is possible that a flaw could conceptually be exploited, it's not just a flaw, it's a vulnerability, regardless of the (assumed) capabilities of the current threat agents. I maintain that (assuming he exists for the sake of this analogy) Superman is vulnerable to kryptonite, regardless of an (assumed) absence of kryptonite on earth.

The problem is that it is logically impossible to prove a negative, e.g., that there is no kryptonite (or that there is no God, etc.). Likewise, it is logically impossible to prove that there does not exist a threat agent with the capabilities to exploit a given flaw in your software. The counter-argument is that delivering the software then becomes impractical, as the costs and time required escalate to remove risks that are extremely unlikely. However, this argument is mostly security by obscurity: if you know that something might be exploitable, and you don't fix it because you think no adversary will have the capability to exploit it, then in reality you're hoping that they won't find out or be told how (for the sake of this argument, I'm ignoring brute-force computational capabilities). In addition, exploitability is a thorny problem. It is very difficult to be certain that a flaw in a complex system is not exploitable. Moreover, it may not be exploitable now, but may become so when a software update is performed! I wrote about this in "Classes of vulnerabilities and attacks", where I discussed the concept of latent, potential, and exploitable vulnerabilities. This is important enough to quote:

"A latent vulnerability consists of vulnerable code that is present in a software unit and would usually result in an exploitable vulnerability if the unit was re-used in another software artifact. However, it is not currently exploitable due to the circumstances of the unit’s use in the software artifact; that is, it is a vulnerability for which there are no known exploit paths. A latent vulnerability can be exposed by adding features or during the maintenance in other units of code, or at any time by the discovery of an exploit path. Coders sometimes attempt to block exploit paths instead of fixing the core vulnerability, and in this manner only downgrade the vulnerability to latent status. This is why the same vulnerability may be found several times in a product or still be present after a patch that supposedly fixed it.

A potential vulnerability is caused by a bad programming practice recognized to lead to the creation of vulnerabilities; however the specifics of its use do not constitute a (full) vulnerability. A potential vulnerability can become exploitable only if changes are made to the unit containing it. It is not affected by changes made in other units of code. For example, a (potential) vulnerability could be contained in the private method of an object. It is not exploitable because all the object’s public methods call it safely. As long as the object’s code is not changed, this vulnerability will remain a potential vulnerability only.

Vendors often claim that vulnerabilities discovered by researchers are not exploitable in normal use. However, they are often proved wrong by proof of concept exploits and automated attack scripts. Exploits can be difficult and expensive to create, even if they are only proof-of-concept exploits. Claiming unexploitability can sometimes be a way for vendors to minimize bad press coverage, delay fixing vulnerabilities and at the same time discredit and discourage vulnerability reports. "
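
The distinction between potential and exploitable is easier to see in code. Below is a minimal sketch (my own hypothetical illustration in Java, not code from the quoted paper) of a potential vulnerability: the unsafe logic lives in a private method, and it stays unexploitable only as long as every public caller keeps validating its input.

    // Hypothetical illustration (mine, not from the quoted paper) of a "potential vulnerability".
    public class RecordStore {
        private final String[] records = new String[16];

        // Potential vulnerability: no bounds check here. It is safe only because
        // every current caller validates the index first.
        private String fetch(int index) {
            return records[index];
        }

        // Public entry point: validates input before calling the unsafe private
        // method, so there is currently no exploit path.
        public String getRecord(int index) {
            if (index < 0 || index >= records.length) {
                throw new IllegalArgumentException("index out of range");
            }
            return fetch(index);
        }

        // If a later change adds another public method that forgets the check, the
        // potential vulnerability becomes exploitable even though fetch() never changed.
    }

Note that, per the quoted definition, changes in other units of code cannot expose this flaw; only a change to this class can, which is what distinguishes a potential vulnerability from a latent one.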

Discounting or underestimating the capabilities, current and future, of threat agents is similar to vendors' claims that a vulnerability is not really exploitable. We know that such claims have been proven wrong ad nauseam. Add configuration problems to the use of the "operational definition" of a vulnerability in the military and among its contractors, and you get an endemic potential for military catastrophes.

An old canard reappears (sort of)

I have a set of keywords registered with Google Alerts that result in a notification whenever they show up in a new posting. This helps me keep track of some particular topics of interest.

One of them popped up recently with a link to a review and some comments about a book I co-authored (Practical Unix & Internet Security, 3rd Edition). The latest revision is over 6 years old but still seems to be popular with many security professionals; some of the specific material is out of date, but much of the general material is still applicable and is likely to remain so for many years to come. At the time we wrote the first edition there were only one or two books on computer security, so we included a broad range of material to make it a useful text and reference.

In general, I don't respond to reviews of my work unless there is an error of fact, and not always even then. If people like the book, great. If they don't, well, they're entitled to their opinions -- no matter how ignorant and ill-informed they may be. grin   

This particular posting included reviews from Amazon that must have been posted about the 2nd edition of the book, nearly a decade old, although their dates as listed on this site make it look like they are recent. I don't recall seeing all of the reviews before this.

One of the responses in this case was somewhat critical of me rather than the book: the text by James Rothschadl. I'm not bothered by his criticism of my knowledge of security issues. Generally, hackers who specialize in the latest attacks dismiss anyone not versed in their tools as ignorant, so I have heard this kind of criticism before. It is still the case that the "elite" hackers who specialize in the latest penetration tools think that they are the most informed about all things security. Sadly, some decision-makers believe this too, much to their later regret, usually because they depend on penetration analysis as their primary security mechanism.

What triggered this blog posting was when I read the comments that included the repetition of erroneous information originally in the book Underground by Suelette Dreyfus. In that book, Ms. Dreyfus recounted the exploits of various hackers and miscreants -- according to them. One such claim, made by a couple of hackers, was that they had broken into my account circa 1990. I do not think Ms. Dreyfus sought independent verification of this, because the story is not completely correct. Despite this, some people have gleefully pointed this out as "Spaf got hacked."

There are two problems with this tale. First, the computer account they broke into was on the CS department machines at Purdue. It was not a machine I administered (nor one on which I had administrator rights) -- it was a shared faculty machine. Thus, the perps succeeded in getting into a machine run by university staff that happened to have my account name on it but which I did not maintain. That particular instance came about because of a machine crash, after which the staff restored the system from an older backup tape. A security patch had been applied between the backup and the crash, and the staff didn't realize that the patch needed to be reapplied after the restore.

But that isn't the main problem with this story: rather, the account they broke into wasn't my real account! My real account was on another machine that they didn't find. Instead, the account they penetrated was a public "decoy" account that was instrumented to detect such behavior, and that contained "bait" files. For instance, the perps downloaded a copy of what they thought was the Internet Worm source code. It was actually a copy of the code with key parts missing, and some key variables and algorithms changed such that it would partially compile but not run correctly. No big deal.

Actually, I got log information on the whole event. It was duly provided to law enforcement authorities, and I seem to recall that it helped lead to the arrest of one of them (but I don't recall the details about whether there was a prosecution -- it was 20 years ago, after all).

At least three penetrations of the decoy account in the early 1990s provided information to law enforcement agencies, and also inspired my design of Tripwire. I ran decoys for several years (and may be doing so to this day grin). I always had a separate, locked-down account for personal use, and even now keep certain sensitive files encrypted on removable media that is only mounted when the underlying host is offline. I understand the use of defense-in-depth, and the use of different levels of protection for different kinds of information. I have great confidence in the skills of our current system admins. Still, I administer a second set of controls on some systems. But I also realize that those defenses may not be enough against really determined, well-resourced attacks. So, if someone wants to spend the time and effort to get in, fine, but they won't find much of interest -- and they may be providing data for my own research in the process!
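
For readers unfamiliar with the idea behind Tripwire, the gist is to record cryptographic checksums of important files while the system is in a known-good state, then periodically recompute and compare them to detect tampering. Here is a minimal sketch of that idea in Java (my own simplified illustration; the baseline file format and paths are assumptions, and this is not Tripwire's actual implementation):

    // A simplified integrity-check sketch in the spirit of Tripwire (not its actual code).
    // Assumed baseline file format: one "<sha-256 hex digest> <path>" entry per line.
    import java.nio.file.*;
    import java.security.MessageDigest;

    public class IntegrityCheck {

        // Hex-encoded SHA-256 digest of a file's contents.
        static String digest(Path p) throws Exception {
            byte[] hash = MessageDigest.getInstance("SHA-256").digest(Files.readAllBytes(p));
            StringBuilder sb = new StringBuilder();
            for (byte b : hash) sb.append(String.format("%02x", b & 0xff));
            return sb.toString();
        }

        public static void main(String[] args) throws Exception {
            // args[0]: path to a baseline recorded while the system was in a known-good
            // state, ideally stored on read-only or offline media.
            Path baseline = Paths.get(args[0]);
            for (String line : Files.readAllLines(baseline)) {
                String[] parts = line.split("\\s+", 2);
                Path target = Paths.get(parts[1]);
                String current = Files.exists(target) ? digest(target) : "<missing>";
                if (!current.equals(parts[0])) {
                    System.out.println("CHANGED: " + target);
                }
            }
        }
    }

The baseline itself must be protected from the same attackers, which is the same defense-in-depth reasoning as keeping sensitive files on media that is mounted only while the host is offline.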

Talking to the Police All the Time

I started writing this entry while thinking about the "if you have nothing to hide, then you have nothing to fear" fallacy. What do you say to someone who says that they have nothing to hide, or that some information about them is worthless anyway, so they don't care about some violation of their privacy? What do you say to a police officer who says that if you have nothing to hide then you have nothing to fear by answering questions? It implies that if you refuse to answer then you're probably "not innocent". That "pleading the 5th" is now used in light banter as a joking admission of guilt is a sign of how pervasive the fallacy has become. It is followed closely by "where there's smoke there's probably fire" when discussing someone's arrest, trial, or refusal to answer questions. However, in the field of information security, it is an everyday occurrence to encounter people who don't realize the risks to which they are exposed. So why is this fallacy so irritating?

Those kinds of statements expose naïveté or, if intended as manipulation, perversity. It takes a long time to explain the risks, to convince others that they are real and that they are really exposed to them, and to lay out what the consequences might be. Even if you could somehow manage to explain it convincingly on the spot, chances are that you'll be dismissed as a "privacy nut" before you're done. In addition, you rarely have that kind of time to make a point in a normal discussion. So, the fallacy is often a successful gambit simply because it discourages anyone from trying to explain why it's so silly.

You may buy some time by mentioning anecdotes such as the man falsely accused of arson because, by coincidence, he bought certain things in a store at a certain time (betrayed by his grocery loyalty card) [1]. Or, there's the Indiana woman who bought for her sick family just a little too much medication containing pseudoephedrine, an ingredient used in the manufacture of crystal meth [2]. The possibilities for misinterpretation of data or inappropriate enforcement of bad laws are multiplied by the ways in which the data can be obtained. Police can stick a GPS-tracking device on anyone they want without getting a search warrant [3] or routinely use your own phone's GPS [4]. Visiting a web page, whether you used an automated spider, clicked on a link manually, were perhaps tricked into doing it, or were framed by a malicious or compromised web site, can trigger an FBI raid [5] (remember goatse? Except it's worse, with a criminal record for you). There are also the dumb things people post themselves, for example on Facebook, causing them to lose jobs, opportunities for jobs, or even get arrested [6].

Regardless, people always think that this happens only to others, that "they were dumb and I'm not," or that these are isolated incidents. This is why I was delighted to find a video of a law professor explaining why talking to police can be a bad idea [7]. Even though I knew that "everything you say can be used against you," I was surprised to learn that nothing you say can be used in your defense. This asymmetry is a rather convincing argument for exercising 5th Amendment rights. Then there are the chances that, even though you are innocent, the stress or excitement will lead you to exaggerate or say something stupid. For example, you might say you've never touched a gun in your life -- except that you did once, long ago as a teen perhaps, and forgot about it, and there's a photo proving that you lied (apparently, that you didn't mean to lie matters little). People say stupid things in far less stressful circumstances. Why take the chance? There are also coincidences that can look rather damning. Police sometimes make mistakes as well. The presentation is well made and very convincing; I recommend viewing it.

There are so many ways in which private information can be misinterpreted and used against you or to your disadvantage, and not just by police. Note that I agree that we need effective police; however, there's a line between that and a surveillance society that makes you afraid to speak your mind in private, afraid to buy certain things at the grocery store, afraid to go somewhere or visit a web site, or afraid of chatting online with your friends, because you never know who will use something you said or did against you and put it in the wrong context. In effect, you may be speaking to the police all the time without realizing it. Even though, considering each method separately, it can be argued that technically there is no violation of the 5th Amendment, the cumulative effect may violate its intent.

Then, after I wrote most of this entry, Google CEO Eric Schmidt declared that "If you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place" [8]. I'm afraid that's a realistic assessment, even for lawful activities, given the "spying guides" published by the likes of Yahoo!, Verizon, Sprint, Cox, SBC, Cingular, Nextel, GTE, and Voicestream for law enforcement, available at Cryptome [9]. The problem is that you'll then live a sad life devoid of personal liberties. The alternative, a shrug and ignorance of the risks, is bliss until it happens to you.

[1] Brandon Sprague (2004) Fireman attempted to set fire to house, charges say. Times Snohomish County Bureau, Seattle Times. Accessed at http://seattletimes.nwsource.com/html/localnews/2002055245_arson06m.html

[2] Mark Nestmann (2009) Yes, You ARE a Criminal…You Just Don't Know it Yet. In "Preserving your privacy and more", November 23 2009. Accessed at http://nestmannblog.sovereignsociety.com/2009/11/yes-you-are-a-criminalyou-just-dont-know-it-yet.html

[3] Chris Matyszczyk (2009) Court says police can use GPS to track anyone. Accessed at http://news.cnet.com/8301-17852_3-10237353-71.html

[4] Christopher Soghoian (2009) 8 Million Reasons for Real Surveillance Oversight. Accessed at http://paranoia.dubfire.net/2009/12/8-million-reasons-for-real-surveillance.html

[5] Declan McCullagh (2009) FBI posts fake hyperlinks to snare child porn suspects. Accessed at: http://news.cnet.com/8301-13578_3-9899151-38.html

[6] Mark Nestmann (2009) Stupid Facebook Tricks. In "Preserving your privacy and more", November 27 2009. Accessed at http://nestmannblog.sovereignsociety.com/2009/11/stupid-facebook-tricks.html

[7] James Duane (2008) Don't Talk to Police. Accessed at http://www.youtube.com/watch?v=i8z7NC5sgik

[8] Ryan Tate (2009) Google CEO: Secrets Are for Filthy People. Accessed at http://gawker.com/5419271/google-ceo-secrets-are-for-filthy-people

[9] Cryptome. Accessed at http://cryptome.org/
Last edited Jan 25, as per emumbert1's suggestion (see comments).