[tags]security failures, infosecurity statistics, cybercrime, best practices[/tags]
Back in May, I commented here on a blog posting about the failings of current information security practices. Well, after several months, the author, Noam Eppel, has written a comprehensive and thoughtful response based on the feedback and comments he received on that first article. The response is a bit long, but worth reading.
Basically, Noam’s essays capture some of what I (and others) have been saying for a while—many people are in denial about how bad things are, in part because they may not really be seeing the “big picture.” I talk with hundreds of people in government, academia, and industry around the world every few months, and the picture that emerges is as bad as—or worse than—what Noam has outlined.
Underneath it all, people seem to believe that putting up barriers and patches on fundamentally bad designs will lead to secure systems. It has been shown again and again (and not only in IT) that this is mistaken. Getting close to secure operation requires rigorous design and testing, careful constraints on features and operation, and planned segregation and limitation of services. You can’t depend on best practices and people doing the right thing all the time. You can’t stay ahead of the bad guys by deploying patches to yesterday’s problems. Unfortunately, managers don’t want to make the hard decisions and pay the costs necessary to really get secure operations, and it is in the interests of almost all the vendors to encourage them down the path of third-party patching.
I may expand on some of those issues in later blog postings, depending on how worked up I get, and how the arthritis/RSI in my hands is doing (which is why I don’t write much for journals & magazines, either). In the meantime, go take a look at Noam’s response piece. And if you’re in the US, have a happy Thanksgiving.
[posted with ecto]
[tags]cryptography, information security, side-channel attacks, timing attacks, security architecture[/tags]
There is a long history of researchers finding differential attacks against cryptographic algorithms. Timing and power attacks are two of the most commonly used, and they go back a very long time. One of the older, “classic” examples in computing was the old Tenex password-on-a-page-boundary attack. Many accounts of this can be found in various places online, such as here and here (page 25). These are varieties of side-channel attacks—they don’t attack the underlying algorithm but rather take advantage of some side-effect of the implementation to get the key. A search of the WWW turns up lots of pages describing these.
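The vulnerable pattern behind the Tenex attack—and behind many timing attacks since—is an early-exit comparison. Here is a minimal sketch (the function names are mine, for illustration; `hmac.compare_digest` is Python’s real constant-time alternative):

```python
import hmac

def insecure_compare(guess: str, secret: str) -> bool:
    # Returns at the FIRST mismatching character, so the running time
    # (or, in the Tenex case, whether a page fault occurred mid-comparison)
    # leaks how many leading characters of the guess are correct.
    for g, s in zip(guess, secret):
        if g != s:
            return False
    return len(guess) == len(secret)

def constant_time_compare(guess: str, secret: str) -> bool:
    # Examines every byte regardless of where a mismatch occurs,
    # removing the length-of-matching-prefix signal.
    return hmac.compare_digest(guess.encode(), secret.encode())
```

An attacker who can measure the early exit recovers the secret one character at a time, turning an exponential search into a linear one.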
So, it isn’t necessarily a surprise to see a news report of a new timing attack of this sort. However, the article doesn’t give much detail, nor does it entirely make sense. Branch prediction has been built into chips for at least twenty years, and it yields a significant speed increase when done correctly. It requires some care in cache design and corresponding compiler construction, but the overall benefit is significant. The majority of code run on these chips has nothing to do with cryptography, so it isn’t a case of “Security has been sacrificed for the benefit of performance,” as Seifert is quoted as saying. Rather, the problem is that the underlying manipulation of the cache and branch predictor is invisible to the software and the programmer; thus, there is no way to shut off those features or create adequate masking alternatives. Of course, too many people writing security-critical software don’t understand how their code maps to the underlying hardware, so they might not shut off the prediction features even if they had a means to do so.
We’ll undoubtedly hear more details of the attack next year, when the researchers disclose what they have found. However, this story should serve to simply reinforce two basic concepts of security: (1) strong encryption does not guarantee strong security; and (2) security architects need to understand—and have some control of—the implementation, from high level code to low level hardware. Security is not collecting a bunch of point solutions together in a box…it is an engineering task that requires a system-oriented approach.
[posted with ecto]
[tags]malicious code, wikipedia, trojan horse, spyware[/tags]
Frankly, I am surprised it has taken this long for something like this to happen: Malicious code planted in Wikipedia.
The malicious advertisement on MySpace a while back was similar. Heck, the trojan archives posted to the Usenet binary groups over 20 years ago also bring this to mind—I recall an instance of a file-damaging program being posted as an anti-virus update in the early 1980s!
Basically, anyone seeking “victims” for spyware, trojans, or other nastiness wants effective propagation of code. So, find a high-volume venue with a trusting and/or naive user population, and find a way to embed code there such that others will download or execute it. Voilà!
Next up: viruses on YouTube?
[posted with ecto]
Once again, Scott Adams cuts to the heart of the matter. Here’s a great explanation of what’s what with electronic voting machines.
In my earlier posts on passwords, I noted that I approach on-line password “vaults” with caution. I have no reason to doubt that the many password services, secure email services, and other encrypted network services are legitimate. However, I am unable to adequately verify that this is the case for anything I would truly want to protect. It is also possible that an employee has compromised the software, or that a rootkit has been installed, so even if the service was designed to be legitimate, it may nonetheless be compromised without the rightful owners’ knowledge.
For a similar reason, I don’t use the same password at multiple sites—I use a different password for each, so if one site is “dishonest” (or compromised) I don’t lose security at all my sites.
For items that I don’t value very much, the convenience of an online vault service might outweigh my paranoia—but that hasn’t happened yet.
Today I ran across this:
MyBlackBook [ver 1.85 live] - Internet’s First Secure & Confidential Online Sex Log!
My first thought is “Wow! What a way to datamine information on potential hot dates!”
That quickly led to the realization that this is an *incredible* tool for collecting blackmail information. Even if the people operating it are legit (and I have no reason to believe they are anything but honest), this site will be a prime target for criminals.
It may also be a prime target for lawyers seeking information on personal damages, divorce actions, and more.
My bottom line: don’t store things remotely online, even in “secure” storage, unless you wouldn’t mind seeing them published in a blog somewhere—or worse. Of course, storing them locally with poor security is not really that much better….
See this account of how someone modified some roadside signs that were password protected. Oops! Not the way to protect a password. Even the aliens know that.
I wrote a post for Dave Farber’s IP list on the use of lie detectors by the government. My basic point was that some uses of imperfect technology are okay if we understand the kinds of risks and errors involved. I continue to see people who don’t understand the difference between Type I and Type II errors, and the faulty judgments made as a result.
What follows is a (slightly edited) version of that post:
The following is a general discussion. I am in no sense an expert on lie detectors, but this is how it was explained to me by some who are very familiar with the issue.
Lie detectors have a non-zero rate of error. As with many real-world systems, these errors manifest as Type I errors (alpha errors, false positives), Type II errors (beta errors, false negatives), and instances of “can’t tell.” It’s important to understand the distinction because the errors and ambiguities in any system may not be equally likely, and their consequences may be very different. An example I give my students comes after a proof that writing a virus checker that accurately detects all computer viruses is equivalent to solving the halting problem. I then tell them that I can provide them with code that identifies every program infected with a computer virus in constant running time. They think this contradicts the proof. Then I write on the board the equivalent of “Begin; print ‘Infected!’; End”—identifying every program as infected. There are no Type II errors. However, there are many Type I errors, and thus this is not a useful program. But I digress (slightly)...
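The classroom example can be made concrete in a few lines (the tiny sample set below is mine, purely for illustration):

```python
def detect_virus(program_bytes: bytes) -> bool:
    # Flags EVERY program as infected: zero Type II errors (no virus
    # is ever missed), but a Type I error for every clean program.
    return True

# Toy corpus: name -> whether it is actually infected.
programs = {"clean_app": False, "worm": True, "trojan": True}

false_negatives = sum(1 for name, infected in programs.items()
                      if infected and not detect_virus(name.encode()))
false_positives = sum(1 for name, infected in programs.items()
                      if not infected and detect_virus(name.encode()))
# false_negatives is 0, but every clean program is a false positive.
```

The “detector” runs in constant time and never misses a virus, yet it is useless—exactly the point about error types not being interchangeable.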
I have been told that lie detectors more frequently exhibit Type I errors because subjects may be nervous or have a medical condition, and that Type II errors generally result from training or drugs (or both) on the part of the subject, although some psychological disorders also allow psychopaths to lie undetectably. Asking foil questions (asking the subject to lie, or asking a surprise question to get a reaction) helps to identify individuals with potential for Type II errors. Proper administration (e.g., reviewing the questions with the subject prior to the exam and revising them as necessary to prevent ambiguity) helps to minimize Type I errors. [Example: when granting a security clearance, you want to weed out people who might be more likely to commit major crimes, or who have committed them already and not yet been discovered (they may be more prone to blackmail, or to future offenses). Thus, you might ask “Have you committed any crimes you haven’t disclosed on your application?” Someone very literal-minded might think back to speeding down the Interstate this morning, or lying to buy beer at age 20, and thus trigger a reaction. Instead, the examiner should explain before the exam that the question is meant to expose felonies, not traffic violations and misdemeanors.] “Can’t tell” situations are resolved by giving the exam again at a later time, or by reporting the results as ambiguous.
In a criminal investigation, any error can be a problem if the results are used as evidence.*** For instance, if I ask “Did you commit the robbery?” and there is a Type I error, I would erroneously conclude you were guilty. Our legal system does not allow this kind of measurement as evidence, although the police may use a negative result to clear you of suspicion. (This is not generally a good step to take in some crimes, because some psychopaths are able to lie so as to generate Type II errors.) If I asked “Are you going to hijack this plane?” you might be afraid of the consequences of a false reading, or have a fear of flying, and there would be a high Type I error rate. Thus, this probably won’t be a good mechanism to screen passengers, either. (However, current TSA practice in other areas is to accept lots of Type I errors in hopes of pushing Type II errors to zero. An example is not letting *any* liquids or gels on planes, even harmless ones, so as to keep any liquid explosives from getting on board.)
When the US government uses lie detectors in security clearances, it isn’t seeking to identify EXACTLY which individuals are likely to be a problem. Instead, it is trying to reduce the likelihood that people with clearances will pose a risk, and it is judged acceptable to have some Type I errors in the process of minimizing Type II errors. So, as with many aspects of the adjudication process, the government simply fails to grant a clearance to people who score too high in some aspect of the screening—including a lie detector test. It may also deny clearances to people who have had too many close contacts with foreign nationals, or who have a bad history of going into debt, or who show any of several other factors based on prior experience and analysis. Does that mean that those people are traitors? No, and the system is set up not to draw that specific conclusion. If you fail a lie detector test for a clearance, you aren’t arrested for treason!* The same holds if your score on the background investigation rates too high for lifestyle issues. What it DOES mean is that there are heightened indications of risk, and unless you are “special” for a particular reason,** the government chooses not to issue a clearance, so as to reduce the risk. Undoubtedly, some highly qualified, talented individuals are therefore not given clearances. However, they aren’t charged with anything or deprived of fundamental rights or liberties. Instead, they are simply not extended a special classification. The end result is that people who pass through the process to the end are less likely to be security risks than an equal sample drawn from the general population.
The same screening logic is used in other places. For instance, consider blood donation. One of the questions that will keep a donation from being used in transfusions is whether the donor is a male who has had sex with another male since (I think) 1980. Does that mean that everyone in that category has hepatitis or HIV? No! It only means that those individuals, as a class, are at higher risk, and they are a small enough subset that it is worth excluding them from the donor pool to make the donations safer in aggregate. Other examples include insurance premiums (whether to grant insurance to high-risk individuals) and even personal decisions (“I won’t date women with multiple facial piercings and more than 2 cats—too crazy.”). These are general exclusions that almost certainly include some Type I errors, but the end result is (in aggregate) less risk.
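To see why accepting Type I errors can still be sound risk management in aggregate, here is a back-of-the-envelope calculation (every number below is invented purely for illustration, not an actual error rate for any screening process):

```python
def screening_outcomes(population, base_rate, type1_rate, type2_rate):
    """Expected counts for a screening test applied to a population."""
    risks = population * base_rate        # genuinely risky applicants
    innocents = population - risks
    true_positives = risks * (1 - type2_rate)    # risks correctly flagged
    false_negatives = risks * type2_rate         # risks who slip through
    false_positives = innocents * type1_rate     # innocents wrongly denied
    true_negatives = innocents * (1 - type1_rate)
    return true_positives, false_negatives, false_positives, true_negatives

# Made-up numbers: 100,000 applicants, 1 in 1,000 is a genuine risk,
# and the screen has 10% Type I and 10% Type II error rates.
tp, fn, fp, tn = screening_outcomes(100_000, 0.001, 0.10, 0.10)

# Risk among those who PASS the screen: fn / (fn + tn).
residual_risk = fn / (fn + tn)
# About 90 of the 100 risks are caught, at the cost of ~9,990 innocents
# denied; the risk in the cleared pool drops well below the 1-in-1,000
# base rate, which is the whole point of screening in aggregate.
```

The thousands of Type I errors are the price paid; the payoff is a cleared pool markedly safer than an unscreened sample.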
Back to lie detectors. There are two cases other than initial screening that are of some concern. The first is periodic rechecks (standard procedure); the second is instituting new screening on existing employees, as was done at some of the national labs a few years ago. In the case of periodic rechecks, the assumption is that the subject passed the exam before, so a positive reading now is either an indication that something has happened or a false positive. Examiners in this situation often assume it is a false positive rather than triggering a more in-depth exam; Aldrich Ames was one such case (unfortunately), and he wasn’t identified until more damage was done. Someone with a clearance who goes back for a re-exam and “flunks” it is often given multiple opportunities to retake it. A continued inability to pass the exam should trigger a more in-depth investigation, and may result in a reassignment or a reduction in access until a further determination is made. In some cases (which I have been told are rare) someone’s clearance may be downgraded or revoked. However, even in those cases, no statement of guilt is made—it is simply the case that a privilege is revoked. That may be traumatic to the individual subject, certainly, but so long as it is rare, it is sound risk management in aggregate for the government.
The second case, that of screening people for the first time after they have had access already and are “trusted” in place, is similar in nature except it may be viewed as unduly stressful or insulting by the subjects—as was the case when national lab employees were required to pass a polygraph for the first time, even though some had had DoE clearances for decades. I have no idea how this set of exams went.
Notes to the above
*—Although people flunking the test may not be charged with treason, they may be charged with falsifying data on the security questionnaire! I was told of one case where someone applying for a cleared position had a history of drug use that he thought would disqualify him, so he entered false information on his clearance application. When he learned of the lie detector test, he panicked and convinced his brother to take the test for him. The first question was “Are you Mr. X?” and the stand-in blew the pens off the charts. Both individuals were charged and pled guilty; neither will ever get a security clearance!
**—Difficult and special cases arise when someone, by reason of office or designation, is required to be cleared. Suppose, for instance, you are nominated to be the next Secretary of Defense, but you can’t pass the polygraph given as part of the standard clearance process. Such cases go through a special investigation and adjudication process that was not explained to me, so I don’t know how they are handled.
***—In most Western world legal systems. In some countries, fail the polygraph and get a bullet to the head. The population is interchangeable, so no need to get hung up on quibbles over individual rights and error rates. Sociopaths who can lie and get away with it (Type II errors) tend to rise to the top of the political structures in these countries, so think of it as evolution in action….
So, bottom line: yes, lie detector tests generate errors, but the process can be managed to reduce the errors and serve as a risk reduction tool when used in concert with other processes. That’s how it is used by the government.
This is a great blog posting: Security Absurdity: The Complete, Unquestionable, And Total Failure of Information Security. The data and links are comprehensive, and the message is right on. There is a tone of rant to the message, but it is justified.
I was thinking of writing something like this, but Noam has done it first, and maybe more completely in some areas than I would have. I probably would have also said something about the terrible state of Federal support for infosec research, however, and also mentioned the PITAC report on cyber security.
[posted with ecto]