Posts by spaf

On Opinion, Jihad, and E-voting

[tags]Florida recount, e-voting, voting machines, Yasinsac, scientific bias[/tags]

As many of us were enjoying Thanksgiving with our families, we heard news of the largest single-day casualty toll from sectarian violence in Iraq. The UN reports a growing number of kidnappings and executions, often with bodies left unidentified.  In reprisal for the bombings on November 23rd, people were executed in front of their families, and individuals were doused in kerosene and immolated.

Many of us no doubt spent a few moments wondering how it was possible for presumably civilized, well-educated people to have such deep-seated hatred that they would attack someone simply because he or she had a Sunni-like name, or lived in a Shiite neighborhood.  We have wondered the same thing when hearing stories of the Tutsi massacres in Rwanda in 1994, of the millions killed by the Khmer Rouge in Cambodia in the 1970s, of the “ethnic cleansing” in the former Yugoslavia, and on and on (including the current problems in Darfur).  Of course, the ignorant fear of differences continues to show up in the news, whether it is genocide around the world or an angry rant by an out-of-control comedian.

So, it comes as an unpleasant surprise to see prejudice based on the expression of legitimate opinion directed against a friend and colleague, and on the pages and WWW site of the NY Times, no less.  On November 24th, an editorial by Paul Krugman described some of the problems with the count of the votes cast in Sarasota, Florida in the most recent elections.  There appears to have been some sort of failure, most likely in the electronic voting machines used in the race.  The result is an undervote (ballots recording no vote) of about 18,000 in the race for the US House—a race decided by under 400 votes.  The candidates and some voter groups are challenging the election result through the courts, and the State of Florida is conducting an independent study to determine what happened.  Mr. Krugman implied that Professor Alec Yasinsac of Florida State University, chosen to lead the independent study, would not produce a valid report because of his apparent support for some Republican candidates in recent elections.

I’ve known Alec for nearly a decade.  I have never had any doubt about his integrity as a scientist or as a person.  Those who know Alec and have worked with him generally hold him in high regard (cf. Avi Rubin’s comments).  Alec has built his academic career pursuing scientific truths.  He knows all too well that producing a biased report would end that career; not that the idea of providing a cover-up would even cross his mind.  In fact, Alec has reached out to many of us in the CS/security community, privately, for advice and counsel as he prepares his group at SAIT (and it is a university group—not simply Alec) to do this task.  He’s doing all this for the right reasons—he’s concerned about the accuracy and fairness of electronic voting machines, and he sees this as a chance to rend the veil of secrecy that vendors and state agencies have traditionally drawn around these systems.  As with many of us, he is deeply concerned about the impact on our Republic unless we can regain and keep public confidence in the fairness of our voting technologies.

(Note added 11/27:  I am not implying that criticism by Mr. Krugman is in any sense equivalent to the genocide practiced by others.  Instead, I am trying to illustrate that both are based on the same underlying premise: denigrating others because of their beliefs without actually considering them as individuals.  That is the point of similarity, and one that seemed quite clear to me as I considered both news items—Iraq and Krugman’s editorial—at the same time.)

Having Opinions vs. Bias

First of all, it is important to understand that having opinions does not mean that one is unalterably biased, or incapable of producing valid results.  In fact, everyone has opinions of some sort, although possibly not on any particular topic.  It may be possible to find people who truly have no opinions of any kind about voting equipment or about who won the elections in question, but those people are likely to be uneducated or poorly motivated to perform an evaluation of the technology.  That would not be a good result.

Why is it wrong for someone to have expressed support for a particular candidate?  That is one of the freedoms we cherish in this country—freedom of expression.  Why should anyone be less capable or trustworthy because of what may be an expression of support for a particular candidate, or even a particular political party?  Does that mean that Mr. Krugman and others believe that we can’t get a fair trial if we didn’t support a particular judge?  That we can’t expect equal treatment from a doctor who suspects that we voted for someone she didn’t?  That the police and firefighters we call to our aid shouldn’t help us because of the signs in our front yard supporting someone of a different political party?  Mr. Krugman’s (and others’) accusation of bias isn’t conceptually any different from these examples ... or from burning the home of someone who happens to go to a different mosque or church. If someone is incapable of carrying out his or her professional duties because of expressions of opinion, then only the most ignorant and apathetic would still be employed.

I have consulted with government officials in both the Clinton and Bush administrations.  I am not registered with any political party, and I have never voted a straight party ticket in any election during the 32 years I’ve been voting.  Does that mean I have no opinion?  Hardly—I’ve had an opinion about every candidate I voted for, and usually a definite opinion about those I didn’t vote for.  But having an opinion is very different from allowing bias to color one’s professional conduct, for me or for anyone else working in information assurance.  As you can infer, I find it personally offensive to impugn someone’s professional honesty simply because of the exercise of freedom of expression.

Bias is when one is unable or unwilling to consider all the alternatives when formulating a theory, or when experiments to validate or refute that theory are arbitrarily manipulated and selectively disclosed.  If that were to happen in this study of the Florida voting machines, it would require that all the study participants collaborate in the deception.  Furthermore, it would require that the presentation of the results be done in a way that obfuscates the deception.  Given the professional and personal credentials of some of the people involved, this seems extraordinarily unlikely—and they know how closely their report will be scrutinized.  Instead, it is likely that this effort will provide us all with additional ammunition in our efforts to get more reliable voting technology.  I know Alec is seeking as much transparency and peer review as he can get for this effort—and those are the methods by which all of science is judged for accuracy.  True bias would be far more likely if the study were conducted by the vendor of the systems in question, or funded and conducted by the staff of one of the campaigns.  The SAIT personnel making up the study team are neither of these.

Alec has a Constitutional right to vote for—and support—whomever he wishes. There is no reason he should stifle what he believes so long as he keeps it separate from his professional efforts, as he has done to date:  His academic career has underscored his integrity and ability as a scientist.  His prior 20 years as a decorated Marine officer attest to his patriotism and self-sacrifice. He is a concerned professional, a talented scholar, a resident of Florida, a veteran who has sworn a solemn oath to uphold and protect the US Constitution against all enemies foreign and domestic, and someone who votes. Alec is very qualified to lead this examination for the citizens of the state of Florida.  We should all be thankful to have someone with his qualifications taking the lead.

As a closing thought on this topic, let me ask whether Mr. Krugman and others would be equally vocal if the person chosen as the lead scientist for this effort were supportive of candidates aligned with the Democratic Party, the Green Party, or the Libertarians.  Or is it possible that these people’s own biases—believing that apparent supporters of Republicans (or perhaps only Florida Republicans) are intrinsically untrustworthy—are producing clearly questionable conclusions?

A Comment about Paper

I have seen reference to a comment (which I can no longer find to link to) claiming that another reason Alec is unsuitable for this review task is that he believes paperless voting machines can be used in a fair vote.  I have no idea if Alec has stated or believes precisely this.  However, anyone applying rigorous logic would have to agree that it IS possible to have a fair vote using paperless voting machines.  It IS also possible to corrupt a vote using paper ballots.  However, what is possible is not necessarily feasible to apply on a national scale on a recurring basis.

The key to voting technology is to minimize error and the potential for fraud while also meeting other constraints: ensuring voter confidence, allowing independent voting access for the disabled, supporting transparency, and doing all this with reasonably affordable, fault-tolerant procedures that can be carried out by average citizens.

The majority of scientists and technologists who have looked at the problem, and who understand all the constraints, view a combination of some computing technology coupled with voter-verified paper audit trails (VVPAT) as a reasonable approach to satisfying all the constraints.  A totally paperless approach would be too costly (because of the extraordinary engineering required for assurance), and would be unlikely to be believed fair by the overwhelming majority of voters (because cryptographic methods are too difficult for the lay person to understand).  Meanwhile, a completely paper-based system is prone to errors in counting, to spoiled ballots from voters who don’t understand or who make mistakes, and is not independently accessible to all disabled voters.  As with any engineering problem, there is no perfect solution.  Instead, we need to fully understand the risks and tradeoffs, and seek to optimize the solution given the constraints.

Closing Thoughts

The ACM has adopted a position that endorses the use of VVPAT or equivalent technologies, and has been actively involved in voting machine technology issues for many years.  I chair USACM, ACM’s US Public Policy Committee; that doesn’t make me biased, but it definitely means I have a basis for my professional opinions.

Let’s all seek the truth with open minds, and strive to see each other as fellow citizens with valid opinions rather than as enemies whose ideology makes them targets for vilification.  It is our diversity and tolerance that make us strong, and we should celebrate that rather than use it as an excuse to attack others.

Good luck, Alec.

[posted with ecto]

Community Comments & Feedback to Security Absurdity Article

[tags]security failures, infosecurity statistics, cybercrime, best practices[/tags]
Back in May, I commented here on a blog posting about the failings of current information security practices.  Well, after several months, the author, Noam Eppel, has written a comprehensive and thoughtful response based on all the feedback and comments he received on that first article.  That response is a bit long, but worth reading.

Basically, Noam’s essays capture some of what I (and others) have been saying for a while—many people are in denial about how bad things are, in part because they may not really be seeing the “big picture.”  I talk with hundreds of people in government, academia, and industry around the world every few months, and the picture that emerges is as bad as—or worse than—what Noam has outlined.

Underneath it all, people seem to believe that putting up barriers and patches on fundamentally bad designs will lead to secure systems.  It has been shown again and again (and not only in IT) that this is mistaken.  It requires rigorous design and testing, careful constraints on features and operation, and planned segregation and limitation of services to get close to secure operation.  You can’t depend on best practices and people doing the right thing all the time.  You can’t stay ahead of the bad guys by deploying patches to yesterday’s problems.  Unfortunately, managers don’t want to make the hard decisions and pay the costs necessary to really get secure operations, and it is in the interests of almost all the vendors to encourage them down the path of third-party patching.

I may expand on some of those issues in later blog postings, depending on how worked up I get, and how the arthritis/RSI in my hands is doing (which is why I don’t write much for journals & magazines, either).  In the meantime, go take a look at Noam’s response piece.  And if you’re in the US, have a happy Thanksgiving.

[posted with ecto]

Yet another timing attack

[tags]cryptography, information security, side-channel attacks, timing attacks,security architecture[/tags]
There is a history of researchers finding differential attacks against cryptographic algorithms.  Timing and power attacks are two of the most commonly used, and they go back a very long time.  One of the older, “classic” examples in computing was the old Tenex password-on-a-page-boundary attack. Many accounts of this can be found in various places online, such as here and here (page 25).  These are varieties of what are known as side-channel attacks—they don’t attack the underlying algorithm but rather take advantage of some side-effect of the implementation to get the key.  A search of the WWW finds lots of pages describing these.
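
To make the flavor of such attacks concrete, here is a minimal Python sketch of how an early-exit, character-by-character comparison leaks a secret.  All of the names (SECRET, naive_check, recover_secret) are invented for illustration, and an artificial per-character delay stands in for the page fault that Tenex leaked:

    import string
    import time

    SECRET = "s3cret"   # what the attacker wants to recover

    def naive_check(guess):
        # Early-exit comparison: the work done (here, a sleep) is
        # proportional to the length of the matching prefix, and that
        # is the side channel.
        if len(guess) != len(SECRET):
            return False
        for g, s in zip(guess, SECRET):
            if g != s:
                return False
            time.sleep(0.001)   # stand-in for a page fault or per-char cost
        return True

    def recover_secret(length):
        # Extend the known prefix one character at a time, keeping
        # whichever candidate makes the check run longest.
        known = ""
        for _ in range(length):
            timings = {}
            for c in string.ascii_lowercase + string.digits:
                start = time.perf_counter()
                naive_check((known + c).ljust(length, "*"))
                timings[c] = time.perf_counter() - start
            known += max(timings, key=timings.get)
        return known

    print(recover_secret(len(SECRET)))   # prints "s3cret"

The Tenex attack recovered the same prefix information from page faults rather than a clock, but the principle is identical: a side-effect of the implementation leaks the secret.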

So, it isn’t necessarily a surprise to see a news report of a new such timing attack.  However, the article doesn’t really give much detail, nor does it entirely make sense.  Putting branch prediction into chips is something that has been done for more than twenty years, and it results in a significant speed increase when done correctly.  It requires some care in cache design and corresponding compiler construction, but the overall benefit is significant.  The majority of code run on these chips has nothing to do with cryptography, so it isn’t a case of “Security has been sacrificed for the benefit of performance,” as Seifert is quoted as saying.  Rather, the problem is that the underlying manipulation of cache and branch prediction is invisible to the software and the programmer. Thus, there is no way to shut off those features or create adequate masking alternatives.  Of course, too many people writing security-critical software don’t understand the mapping of code to the underlying hardware, so they might not shut off the prediction features even if they had a means to do so.
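
At the software level, the usual masking alternative for timing leaks is a comparison with no data-dependent branches.  Here is a minimal sketch in Python (constant_time_equal is an invented name for illustration); note that this only masks timing in code the programmer can see, while microarchitectural channels like the branch-prediction attack above sit below that level, which is exactly the problem:

    def constant_time_equal(a: bytes, b: bytes) -> bool:
        # Accumulate differences with XOR/OR instead of returning at the
        # first mismatch, so the running time does not depend on where
        # (or whether) the inputs differ.
        if len(a) != len(b):
            return False
        diff = 0
        for x, y in zip(a, b):
            diff |= x ^ y
        return diff == 0

    # In production code, prefer the standard library's vetted equivalent:
    # hmac.compare_digest(a, b)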

We’ll undoubtedly hear more details of the attack next year, when the researchers disclose what they have found.  However, this story should serve to simply reinforce two basic concepts of security: (1) strong encryption does not guarantee strong security; and (2) security architects need to understand—and have some control of—the implementation, from high level code to low level hardware.  Security is not collecting a bunch of point solutions together in a box…it is an engineering task that requires a system-oriented approach.
[posted with ecto]

Irony: See Wikipedia

[tags]malicious code, wikipedia, trojan horse,spyware[/tags]
Frankly, I am surprised it has taken this long for something like this to happen: Malicious code planted in Wikipedia.
The malicious advertisement on MySpace from a while back was a little similar.  Heck, there were trojan archives posted on the Usenet binary groups over 20 years ago that also bring this to mind—I recall an instance of a file-damaging program being posted as an anti-virus update in the early 1980s!

Basically, anyone seeking “victims” for spyware, trojans, or other nastiness wants effective propagation of code.  So, find a high-volume venue that has a trusting and/or naive user population, and find a way to embed code there such that others will download or execute it.  Voila!

Next up: viruses on YouTube?

[posted with ecto]

The Dilbert Blog: Electronic Voting Machines

Once again, Scott Adams cuts to the heart of the matter.  Here’s a great explanation of what’s what with electronic voting machines.

The Dilbert Blog: Electronic Voting Machines

Now THIS is how to have secure passwords!

Someone sent the following to me as an example of how to ensure secure passwords:

Microsoft claims this message is an error.  However, I think we all can see this is simply a form of extreme password security of the sort I wrote about in this post.

Who do you trust?

In my earlier posts on passwords, I noted that I approach on-line password “vaults” with caution.  I have no reason to doubt that the many password services, secure email services, and other encrypted network services are legitimate.  However, I am unable to adequately verify that such is the case for anything I would truly want to protect.  It is also possible that some employee has compromised the software, or that a rootkit has been installed, so even if the service was designed to be legitimate, it may nonetheless be compromised without the rightful owner’s knowledge.

For a similar reason, I don’t use the same password at multiple sites—I use a different password for each, so if one site is “dishonest” (or compromised) I don’t lose security at all my sites.
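
The posts don’t say how to generate all those distinct passwords; purely as an illustration of one common approach, each site’s password can be derived from a single memorized secret (site_password and its parameters are invented for this sketch):

    import base64
    import hashlib

    def site_password(master: str, site: str) -> str:
        # Derive a distinct password per site from one memorized secret,
        # so a breach at one site reveals nothing directly about the
        # passwords used elsewhere.  (Illustrative parameters only.)
        key = hashlib.pbkdf2_hmac("sha256", master.encode(), site.encode(), 200_000)
        return base64.urlsafe_b64encode(key)[:16].decode()

    print(site_password("a long memorized phrase", "example.com"))

Of course, this concentrates all the risk in one master secret (much the same tradeoff the online vaults make), so independently chosen random passwords are stronger still.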

For items that I don’t value very much, the convenience of an online vault service might outweigh my paranoia—but that hasn’t happened yet.

Today I ran across this:
MyBlackBook [ver 1.85 live] - Internet’s First Secure & Confidential Online Sex Log!

My first thought is “Wow!  What a way to datamine information on potential hot dates!” grin 

That quickly led to the realization that this is an *incredible* tool for collecting blackmail information.  Even if the people operating it are legit (and I have no reason to believe they are anything but honest), this site will be a prime target for criminals.

It may also be a prime target for lawyers seeking information on personal damages, divorce actions, and more.

My bottom line: don’t store things remotely online, even in “secure” storage, unless you wouldn’t mind them being published in a blog somewhere—or worse.  Of course, storing things locally with poor security is not really that much better…

A great example of how NOT to save passwords

See this account of how someone modified some roadside signs that were password protected.  Oops!  Not the way to protect a password.  Even the aliens know that.

ZUG: Comedy Articles: Electronic Road Signs and Me:

A few comments on errors

I wrote a post for Dave Farber’s IP list on the use of lie detectors by the government.  My basic point was that some uses of imperfect technology are OK, if we understand the kinds of risks and errors we are encountering.  I continue to see people who do not understand the difference between Type I and Type II errors, and the faulty judgements they make as a result.

What follows is a (slightly edited) version of that post:


The following is a general discussion.  I am in no sense an expert on lie detectors, but this is how it was explained to me by some who are very familiar with the issue.

Lie detectors have a non-zero rate of error.  As with many real-world systems, these errors manifest as Type I errors (alpha errors, false positives), Type II errors (beta errors, false negatives), and instances of “can’t tell.”  It’s important to understand the distinction because the errors and ambiguities in any system may not be equally likely, and the consequences may be very different.  An example I give my students comes after a proof that writing a computer virus checker that accurately detects all computer viruses is equivalent to solving the halting problem.  I then tell them that I can provide them with code that identifies every program infected with a computer virus in constant running time.  They think this contradicts the proof.  I then write on the board the equivalent of “Begin; print “Infected!”; End.”—a program that identifies every input as infected.  There are no Type II errors.  However, there are many Type I errors, and thus it is not a useful program.  But I digress (slightly)...
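
Here is that classroom example as a runnable sketch in Python (all names are invented for illustration), with a helper that makes the two error types explicit:

    def always_infected(program):
        # The classroom "detector": it never misses a real virus (zero
        # Type II / false-negative errors) but flags every clean program
        # (maximal Type I / false-positive errors), which makes it useless.
        return True

    def error_rates(detector, samples):
        # samples is a list of (program, truly_infected) pairs; returns
        # the (Type I rate, Type II rate) for the given detector.
        fp = fn = clean = infected = 0
        for program, truly_infected in samples:
            flagged = detector(program)
            if truly_infected:
                infected += 1
                fn += not flagged
            else:
                clean += 1
                fp += flagged
        return fp / max(clean, 1), fn / max(infected, 1)

    samples = [(b"harmless app", False), (b"clean utility", False), (b"real virus", True)]
    print(error_rates(always_infected, samples))   # (1.0, 0.0)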

I have been told that lie detectors more frequently exhibit Type I errors because subjects may be nervous or have a medical condition, and that Type II errors generally result from training or drugs (or both) by the subject, although some psychological disorders allow psychopaths to lie undetectably.  Asking foil questions (asking the subject to lie, and asking a surprise question to get a reaction) helps to identify individuals with potential for Type II errors.  Proper administration (e.g., reviewing the questions with the subject prior to the exam, and revising them as necessary to prevent ambiguity), helps to minimize Type I errors.  [Example.  When granting a security clearance, you want to weed out people who might be more likely to commit major crimes, or who have committed them already and not yet been discovered (they may be more prone to blackmail, or future offenses).  Thus, you might ask “Have you committed any crimes you haven’t disclosed on your application?”  Someone very literal-minded might think back to speeding down the Interstate this morning, lying to buy beers at age 20, and so on, and thus trigger a reaction.  Instead, the examiner should explain before the exam that the question is meant to expose felonies, and not traffic violations and misdemeanors.]  “Can’t tell” situations are resolved by giving the exam again at a later time, or by reporting the results as ambiguous.

In a criminal investigation, any error can be a problem if the results are used as evidence.***  For instance, if I ask “Did you commit the robbery?” and there is a Type I error, I would erroneously conclude you were guilty.  Our legal system does not allow this kind of measurement as evidence, although the police may use a negative result to clear you of suspicion.  (This is not generally a good step to take in some crimes, because some psychopaths are able to lie so as to generate Type II errors.)  If I asked “Are you going to hijack this plane?” then you might be afraid of the consequences of a false reading, or have a fear of flying, and there would be a high Type I error rate.  Thus, this probably won’t be a good mechanism to screen passengers, either.  (However, current TSA practice in other areas is to accept lots of Type I errors in hopes of pushing Type II errors to zero.  An example is not letting *any* liquids or gels on planes, even harmless ones, so as to keep any liquid explosives from getting on board.)

When the US government uses lie detectors in security clearances, they aren’t seeking to identify EXACTLY which individuals are likely to be a problem.  Instead, they are trying to reduce the likelihood that people with clearances will pose a risk, and it is judged acceptable to have some Type I errors in the process of minimizing Type II errors.  So, as with many aspects of the adjudication process, they simply fail to grant a clearance to people who score too highly on some aspect of the screening—including the lie detector test.  They may also deny clearances to people who have had too many close contacts with foreign nationals, or who have a bad history of going into debt, or any of several other factors based on prior experience and analysis.  Does that mean that those people are traitors?  No, and the system is set up not to draw that specific conclusion.  If you fail a lie detector test for a clearance, you aren’t arrested for treason! *  The same holds if your evaluation score on the background investigation rates too high for lifestyle issues.  What it DOES mean is that there are heightened indications of risk, and unless you are “special” for a particular reason,** the government chooses not to issue a clearance, so as to reduce the risk.  Undoubtedly, there are some highly qualified, talented individuals who are therefore not given clearances.  However, they aren’t charged with anything or deprived of fundamental rights or liberties.  Instead, they are simply not extended a special classification.  The end result is that people who pass through the process to the end are less likely to be security risks than an equal sample drawn from the regular population.
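
The arithmetic behind that tradeoff is worth working through once.  With hypothetical numbers (a base rate of 1 risky applicant in 1,000, a screen that catches 90% of them, and a 10% false-positive rate), almost everyone flagged is a Type I error, yet the pool that passes is markedly safer:

    def screen_stats(base_rate, sensitivity, false_positive_rate):
        # Bayes' theorem with illustrative, made-up numbers.
        flagged_risky = base_rate * sensitivity
        flagged_clean = (1 - base_rate) * false_positive_rate
        # P(actually risky | flagged): how many flagged people are real risks
        ppv = flagged_risky / (flagged_risky + flagged_clean)
        # P(risky | passed): residual risk in the pool that is cleared
        passed_risky = base_rate * (1 - sensitivity)
        passed_clean = (1 - base_rate) * (1 - false_positive_rate)
        residual = passed_risky / (passed_risky + passed_clean)
        return ppv, residual

    ppv, residual = screen_stats(0.001, 0.90, 0.10)
    print(ppv)        # ~0.009: about 99% of those flagged are Type I errors
    print(residual)   # ~0.00011: roughly 9x below the 0.001 base rate

That is the point: the screen doesn’t identify traitors; it shifts the aggregate risk of the cleared population downward.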

The same screening logic is used in other places.  For instance, consider blood donation.  One of the questions that will keep a donation from being used in transfusions is if the donor is a male and indicates that he has had sex with another male since (I think) 1980.  Does that mean that everyone in that category has hepatitis or HIV?  No!  It only means that those individuals, as a class, are at higher risk, and they are a small enough subset that it is worth excluding them from the donor pool to make the donations safer in aggregate.  Other examples include insurance premiums (whether to grant insurance to high risk individuals), and even personal decisions (“I won’t date women with multiple facial piercings and more than 2 cats—too crazy.”)  These are general exclusions that almost certainly include some Type I errors, but the end result is (in aggregate) less risk.

Back to lie detectors.  There are two cases other than initial screening that are of some concern.  The first is periodic rechecks (standard procedure); the second is instituting new screening of existing employees, as was done at some of the national labs a few years ago.  In the case of periodic rechecks, the assumption is that the subject passed the exam before, so a positive reading now is either an indication that something has happened, or a false positive.  Examiners often err on the side of assuming a false positive in this case rather than triggering a more in-depth exam;  Aldrich Ames was one such case (unfortunately), and he wasn’t identified until more damage was done.  If someone with a clearance goes back for a re-exam and “flunks” it, that person is often given multiple opportunities to retake it.  A continued inability to pass the exam should trigger a more in-depth investigation, and may result in a reassignment or a reduction in access until a further determination is made.  In some cases (which I have been told are rare) someone’s clearance may be downgraded or revoked.  However, even in those cases, no statement of guilt is made—it is simply the case that a privilege is revoked.  That may be traumatic to the individual, certainly, but so long as it is rare it is sound risk management in aggregate for the government.

The second case, that of screening people for the first time after they have had access already and are “trusted” in place, is similar in nature except it may be viewed as unduly stressful or insulting by the subjects—as was the case when national lab employees were required to pass a polygraph for the first time, even though some had had DoE clearances for decades.  I have no idea how this set of exams went.

Notes to the above

*—Although people flunking the test may not be charged with treason, they may be charged with falsifying data on the security questionnaire!  I was told of one case where someone applying for a cleared position had a history of drug use that he thought would disqualify him, so he entered false information on his clearance application.  When he learned of the lie detector test, he panicked and convinced his brother to take the test for him.  The first question was “Are you Mr. X?” and the stand-in blew the pens off the charts.  Both individuals were charged and pled guilty to some offense; neither will ever get a security clearance! grin

**—Difficult and special cases are when someone, by reason of office or designation, is required to be cleared.  So, for instance, you are nominated to be the next Secretary of Defense, but you can’t pass the polygraph as part of the standard clearance process.  This goes through a special investigation and adjudication process that was not explained to me, and I don’t know how this is handled.

***—In most Western world legal systems.  In some countries, fail the polygraph and get a bullet to the head.  The population is interchangeable, so no need to get hung up on quibbles over individual rights and error rates.  Sociopaths who can lie and get away with it (Type II errors) tend to rise to the top of the political structures in these countries, so think of it as evolution in action….


So, bottom line:  yes, lie detector tests generate errors, but the process can be managed to reduce the errors and serve as a risk reduction tool when used in concert with other processes.  That’s how it is used by the government.

Security expert recommends ‘Net diversity - Network World

I recently did an interview with Network World magazine.  The topics discussed might well be of interest to readers of this blog.
[tags]network security,risk management,diversity,security trends[/tags]

[posted with ecto]