The Center for Education and Research in Information Assurance and Security (CERIAS)


Drone “Flaw” Known Since 1990s Was a Vulnerability

"The U.S. government has known about the flaw since the U.S. campaign in Bosnia in the 1990s, current and former officials said. But the Pentagon assumed local adversaries wouldn't know how to exploit it, the officials said." Call it what it is: it's a vulnerability that was misclassified (some might argue that it's an exposure, but there is clearly a violation of implicit confidentiality policies). This fiasco is the result of the thinking that there is no vulnerability if there is no threat agent with the capability to exploit a flaw. I argued against Spaf regarding this thinking previously; it is also widespread in the military and industry. I say that people using this operational definition are taking a huge risk if there's a chance that they misunderstood either the flaw, the capabilities of threat agents, present or future, or if their own software is ever updated. I believe that for software that is this important, an academic definition of vulnerability should be used: if it is possible that a flaw could conceptually be exploited, it's not just a flaw, it's a vulnerability, regardless of the (assumed) capabilities of the current threat agents. I maintain that (assuming he exists for the sake of this analogy) Superman is vulnerable to kryptonite, regardless of an (assumed) absence of kryptonite on earth.

The problem is that it is logically impossible to prove a negative, e.g., that there is no kryptonite (or that there is no God, etc.). Likewise, it is logically impossible to prove that there does not exist a threat agent with the capabilities to exploit a given flaw in your software. The counter-argument is that delivering the software then becomes impractical, as the costs and time required escalate to remove risks that are extremely unlikely. However, this argument mostly amounts to security by obscurity: if you know that something might be exploitable, and you don't fix it because you think no adversary will have the capability to exploit it, then in reality you're hoping that they won't find out or be told how (for the sake of this argument, I'm ignoring brute force computational capabilities).

In addition, exploitability is a thorny problem. It is very difficult to be certain that a flaw in a complex system is not exploitable. Moreover, it may not be exploitable now, but may become so when a software update is performed! I wrote about this in "Classes of vulnerabilities and attacks", where I discussed the concept of latent, potential, or exploitable vulnerabilities. This is important enough to quote:

"A latent vulnerability consists of vulnerable code that is present in a software unit and would usually result in an exploitable vulnerability if the unit was re-used in another software artifact. However, it is not currently exploitable due to the circumstances of the unit’s use in the software artifact; that is, it is a vulnerability for which there are no known exploit paths. A latent vulnerability can be exposed by adding features or during the maintenance in other units of code, or at any time by the discovery of an exploit path. Coders sometimes attempt to block exploit paths instead of fixing the core vulnerability, and in this manner only downgrade the vulnerability to latent status. This is why the same vulnerability may be found several times in a product or still be present after a patch that supposedly fixed it.

A potential vulnerability is caused by a bad programming practice recognized to lead to the creation of vulnerabilities; however the specifics of its use do not constitute a (full) vulnerability. A potential vulnerability can become exploitable only if changes are made to the unit containing it. It is not affected by changes made in other units of code. For example, a (potential) vulnerability could be contained in the private method of an object. It is not exploitable because all the object’s public methods call it safely. As long as the object’s code is not changed, this vulnerability will remain a potential vulnerability only.

Vendors often claim that vulnerabilities discovered by researchers are not exploitable in normal use. However, they are often proved wrong by proof of concept exploits and automated attack scripts. Exploits can be difficult and expensive to create, even if they are only proof-of-concept exploits. Claiming unexploitability can sometimes be a way for vendors to minimize bad press coverage, delay fixing vulnerabilities and at the same time discredit and discourage vulnerability reports."
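To make the distinction concrete, here is a minimal, hypothetical sketch (the class and method names are mine, not from the paper) of a potential vulnerability: the private helper does no validation, and only the careful use by its sole public caller keeps it from being exploitable.

```python
class LogStore:
    """Hypothetical illustration of a *potential* vulnerability."""

    def __init__(self):
        self._entries = ["boot", "login", "logout"]

    def _read_entry(self, index):
        # Potential vulnerability: no bounds or type checking.  An
        # out-of-range or attacker-controlled index would leak data or
        # crash if this helper were ever reached with untrusted input.
        return self._entries[index]

    def last_entry(self):
        # The only public caller supplies a safe index, so the flaw is
        # not currently exploitable.  Re-using _read_entry() in another
        # class, or adding a public method that forwards a user-supplied
        # index, would expose it as a real (exploitable) vulnerability.
        return self._read_entry(len(self._entries) - 1)
```

Blocking the exploit path in the public method while leaving the helper unchecked is exactly the "downgrade to latent status" the quoted passage warns about: the vulnerable code is still there, waiting for a new exploit path.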

Discounting or underestimating the capabilities, current and future, of threat agents is similar to the claims from vendors that a vulnerability is not really exploitable. We know that this has been proven wrong ad nauseam. Add configuration problems to the use of the "operational definition" of a vulnerability in the military and their contractors, and you get an endemic potential for military catastrophes.

Comments

Posted by Scott
on Monday, January 4, 2010 at 03:04 PM

It is very easy to sit on the sidelines and make these comments. After all, you are thousands of miles away from the incident and do not know the operations environment where these systems are used.

First, it was published that the command and control (C2) aspects of the systems are run on encrypted communications channels. Because C2 is encrypted, anyone intercepting the signals will not be able to command, control, destroy, or otherwise abuse the drones. Anyone who is intercepting the signal only sees what the camera is transmitting.

These drones can be used for different purposes. They can be in the intelligence-gathering business; they can fly visual cover for ground forces; they can find bombing targets or locate areas where people are, to prevent bombs from hitting civilian targets; or they can perform any number of other tasks requiring visual assistance. The person intercepting the camera feed does not know why the image is being transmitted, just that it is transmitted.

If the reports are correct in saying that the enemy is moving away from where the drones are scanning, don’t you think that this is better for the troops? Attackers are moved away from where they can fire on convoys, thus keeping soldiers safer.

Looking deeper into the reports, CENTCOM admits to ordering these drones to fly randomly. Originally it was to keep the enemy guessing when seeing them from the ground. With the worry about intercepted video signals, the randomness of the flights becomes more important. Again, the enemy does not know why the drone is in the area, only that it is there. Ever think of the possibility of using the drones to herd the enemy by flying in certain patterns to convince them to move to certain areas? While I do not know if this is what the military is doing, it sounds like a way to concentrate as many of them as possible in one place before using the necessary armaments.

The one thing that has not made big front page headlines is that the US did not have a soldier killed in Iraq during the month of December. This is wonderful news. I wonder if changing tactics helped in that effort. I hope the military can keep this going!

It is also very easy to sit in the ivory tower of academia and preach updates when the view at the bottom of the tower is not as pretty. All systems used by the military have to be compatible with the lowest common denominator of equipment. This means that 10-year-old humvees shipped from the US, owned by a National Guard post, have to be able to read the drone signals. These systems cannot be updated because of their age. New systems have to be installed.

Now multiply that by the number of humvees in a unit, times the number of units in a division, times the number of divisions in the force. Thousands of humvees, tanks, and other reception facilities have to be able to receive this signal in real time. These facilities also include foreign forces with whom the US is sharing this intelligence. There are a lot of reception stations that would have to be upgraded in both hardware and software in order to fix one issue.

The cost of replacing all of the hardware and updating all of the software will run into the hundreds of millions of dollars, and will have to include additional money for training technicians and the soldiers in the field to use the new system. Maintenance procedures will have to be updated (another cost), and the viability of using the new equipment in the current environments will have to be studied—probably while in the field.

Remember, the theater of operations is the desert. There is a lot of sand in the desert and the last I heard, the sand tends to not play well with electronic circuitry. And don’t forget about the harsh winters that will wreak havoc on everyone and everything in the mountains between Pakistan and Afghanistan!

When the wars started, the US did not have enough money to up-armor the humvees or even the soldiers. Seven years and quite a few billions of dollars later, you want the military to spend more money to upgrade systems whose risk has the potential to be mitigated in other ways?

With all due respect, the academic view has its place. But in the “Real World,” with issues that reach far beyond software update policies and into my taxpaying pocket, the issue has a lot of sizzle, but the steak you crave may be less satisfying than expected.

Posted by Pascal Meunier
on Monday, January 4, 2010 at 03:46 PM

Scott,
Thanks for the interesting reply.  I think you’re missing the point I was trying to make.  I did not suggest any software updates.  My point was that we wouldn’t be having this discussion if vulnerabilities were viewed differently.  There might not have been any talk of updates, which, as you point out, are very difficult to make, if the known “flaw” had been called a vulnerability from the start.  If anything, what you say emphasizes the need to avoid underestimating the capabilities of threat agents.  Using threat models for software that needs to stay compatible in the field for a very long time, as you point out, is risky because software security is full of surprises.  IMO, predicting the capabilities of threat agents is a little like trying to predict the stock market for the next 10-15 years (or the equivalent period).

BTW, unencrypted video may be a game of “you know that they know”, but at the very least it provides the enemy with information they wouldn’t have otherwise.  I didn’t bring up C2 channels, although those may also have issues (I’m not in a position to know).

Posted by Landforms
on Monday, January 11, 2010 at 03:54 PM

Any flaw is kryptonite when dealing with the armed services.  This should have been taken care of when it was found.  There are smart people out there, even on the side we consider evil. When there is a will, there is a way.

Posted by Kathy Nguyen
on Wednesday, January 13, 2010 at 06:03 AM

We can speculate on all the different reasons why this was not dealt with earlier.

To “hope” that a security vulnerability will not be discovered is really bad security practice. A vulnerability of this magnitude was bound to be taken advantage of by the enemy.

Regarding tax money, the cost of fixing this now will be much greater than if it had been dealt with at an earlier stage.

Posted by A
on Monday, January 18, 2010 at 03:13 PM

Schneier had a good post on the topic:

http://www.wired.com/politics/security/commentary/securitymatters/2009/12/securitymatters_1223

Posted by Pascal Meunier
on Tuesday, January 19, 2010 at 07:28 AM

“A”, thanks for the link.  The difficulty of managing strong encryption in a multi-national force and the absence of a “light”, more manageable encryption that would be acceptable to NSA are interesting points in Bruce Schneier’s commentary.

Posted by Cerebus
on Tuesday, January 19, 2010 at 10:22 AM

To understand the design decisions in any system you have to understand both the system requirements and the system constraints. 

One requirement of this system is that it must be able to provide video feeds to virtually any legitimate user—usually called the “unanticipated user.”  I.e., the drone must service an audience whose members are not known ahead of time.

One constraint of this system is that some receivers cannot transmit.  I.e., the drone must service an audience where some members cannot communicate.  SOCOM operates like this all the time.

Another constraint of this system is that some receivers operate in the field for long stretches of time—sometimes months.  I.e., re-keying opportunities for some receivers are severely limited.

With these in mind, building a key management system to support delivering encrypted drone video isn’t exactly a walk in the park.  You can’t do key agreement because some receivers can’t transmit.  You can’t do key transport because you don’t know who the receivers are ahead of time, and those that can’t transmit can’t ID themselves.  You could do pre-placed symmetric keys, but with some receivers having severely limited opportunities to re-key, establishing a working schedule is impossible; a key change while a team is in the field renders that team unable to receive, and perhaps unable to complete their mission.
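To illustrate the trade-off, here is a generic sketch under my own assumptions (the cipher, key sizes, and framing are illustrative and say nothing about the actual system): a receiver holding a pre-placed symmetric key can decrypt a broadcast feed without ever transmitting, but as soon as the sender rotates that key, any receiver that could not be re-keyed goes blind.

```python
# Sketch only: broadcast video frames protected with a pre-placed symmetric key.
# Assumes the third-party 'cryptography' package (pip install cryptography).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

pre_placed_key = AESGCM.generate_key(bit_length=256)  # loaded onto receivers before deployment

def broadcast_frame(key: bytes, frame: bytes) -> bytes:
    """Sender side: encrypt one frame; no per-receiver handshake is needed."""
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, frame, None)

def receive_frame(key: bytes, packet: bytes) -> bytes:
    """Receiver side: decrypt using only the pre-placed key; never transmits."""
    nonce, ciphertext = packet[:12], packet[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

# A fielded, receive-only station can follow the feed...
packet = broadcast_frame(pre_placed_key, b"frame 0042")
assert receive_frame(pre_placed_key, packet) == b"frame 0042"

# ...but a key rotation at the ground station leaves any receiver that
# missed the re-key unable to decrypt anything, i.e., mission-blind.
rotated_key = AESGCM.generate_key(bit_length=256)
packet = broadcast_frame(rotated_key, b"frame 0043")
try:
    receive_frame(pre_placed_key, packet)
except Exception:
    print("receiver still holding the old pre-placed key is now blind")
```

Key agreement or key transport would avoid that problem, but both require the receiver to transmit or to be identified ahead of time, which is exactly what the constraints above rule out.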

Systems engineers routinely practice risk management.  This starts with simple questions:  What’s the intelligence value of the drone video?  What’s the risk to an operation if a team in the field can’t get the video?  What’s the risk to an operation if the adversary does get access to the video?

The truth is, while the instantaneous value may be high, the *half-life* of that value is very short—minutes, maybe hours, infrequently days, rarely longer.  The mission impact of *not* getting the video may be very high—measured in casualties.  The mission impact of the adversary getting the video is actually pretty low—they may get a short warning, but not much else. 

Taken together, the decision *not* to encrypt the video and instead mitigate the problem with operational changes makes good sense, counter-intuitive as it may seem.

Posted by Pascal Meunier
on Tuesday, January 19, 2010 at 11:05 AM

Thanks Cerebus, that was enlightening.  It makes sense explained that way.

Posted by john
on Wednesday, January 20, 2010 at 01:48 PM

These are all excellent points, and obviously the decision not to encrypt is not a simple one.

Pascal, you bring up an excellent point that a different mindset probably would have prevented this type of exploit.

I realize, as Cerebus pointed out, developing a system to provide encrypted video is not a walk in the park. However, having video encryption in place seems like a very basic construct of a “good” remote video system.

I cannot see the argument for not putting something scalable in place initially, let alone over time as the world becomes more computer literate.

I realize this means a massive update, but ultimately, that’s what we will have to do next time anyway. Except it will be without the benefit of years of real-life field experience.

Posted by Alex
on Tuesday, February 2, 2010 at 02:37 PM

First off, great discussion. Secondly, I agree that developing a system to provide encrypted video is not as easy as it sounds, but it is a necessity nowadays in my opinion.

Also, Scott is right. Drones can be used for different purposes, for good purposes. Yes, they can also be used in ways that cause ‘damage’ (to call it that), but we must realize that using them for good provides much more benefit than using them for evil. At least that’s what I think.

They can fulfill any number of tasks that humans lack the ability to complete (finding bombs is a good example).

Posted by Franke UK
on Sunday, February 7, 2010 at 03:59 PM

Interesting, so is one of the comments intimating that the use of randomly flying drones assisted in the reduction of fatalities in Iraq during December?
That’s an extremely interesting correlation.

Posted by Reco
on Sunday, February 7, 2010 at 11:51 PM

Security through obscurity never works.  Leave it to the government to adopt an outdated methodology.
