The Center for Education and Research in Information Assurance and Security (CERIAS)

CERIAS Blog

On Opinion, Jihad, and E-voting

[tags]Florida recount, e-voting, voting machines, Yasinsac, scientific bias[/tags]

As many of us were enjoying Thanksgiving with our families, we heard news of the largest single-day casualties of sectarian violence in Iraq. The UN reports a growing number of kidnappings and executions, often with bodies left unidentified.  In reprisals for the bombings on November 23rd, people were executed in front of their families, and others were doused in kerosene and burned alive.

Many of us no doubt spent a few moments wondering how it was possible for presumably civilized, well-educated people to have such deep-seated hatred that they would attack someone simply because he or she had a Sunni-like name, or lived in a Shiite neighborhood.  We have wondered the same thing when hearing stories of Tutsi massacres in Rwanda in 1994, of the millions killed by the Khmer Rouge in Cambodia in the 1970s, the “ethnic cleansing” in the former Yugoslavia, and on and on (including the current problems in Darfur).  Of course, the ignorant fear of differences continues to show up in the news, whether it is genocide around the world, or an angry rant by an out-of-control comedian.

So, it comes as an unpleasant surprise to see prejudice based on the appearance of holding a legitimate opinion directed against a friend and colleague, and on the pages and WWW site of the NY Times, no less.  On November 24th, an editorial by Paul Krugman described some of the problems with the count of the votes cast in Sarasota, Florida in the most recent elections.  There appears to be a clear instance of some sort of failure, most likely with the electronic voting machines used in the race.  The result is an undervote (ballots recording no choice) of about 18,000 in the race for the US House—a race decided by under 400 votes.  The candidates and some voter groups are challenging the election result through the courts, and the State of Florida is conducting an independent study to determine the causes of what happened.  Mr. Krugman implied that Professor Alec Yasinsac of Florida State, chosen to lead the independent study, would not provide a valid report because of his apparent support for some Republican candidates for office in recent elections.

I’ve known Alec for nearly a decade.  I have never had any doubt about his integrity as a scientist or as a person.  Those who know Alec and have worked with him generally hold him in high regard (cf. Avi Rubin’s comments).  Alec has built his academic career pursuing scientific truths.  He knows all too well that producing a biased report would end that career, not that the idea of providing a cover-up would even cross his mind.  In fact, Alec has reached out to many of us, privately, in the CS/security community, for advice and counsel as he prepares his group at SAIT (and it is a university group—not simply Alec) to do this task.  He’s doing all this for the right reasons—he’s concerned about the accuracy and fairness of electronic voting machines, and he sees this as a chance to rend the veil of secrecy that vendors and state agencies have traditionally drawn around these systems.  As with many of us, he is deeply concerned about the impact on our Republic unless we can regain and keep public confidence in the fairness of our voting technologies.

(Note added 11/27:  I am not implying that criticism by Mr. Krugman is in any sense equivalent to the genocide practiced by others.  Instead, I am trying to illustrate that both rest on the same underlying premise: denigrating others because of their beliefs without actually considering them as individuals.  That is the point of similarity, and one that seemed quite clear to me as I considered both news items—Iraq and Krugman’s editorial—at the same time.)

Having Opinions vs. Bias

First of all, it is important to understand that having opinions does not mean that one is unalterably biased, or cannot produce valid results.  In fact, everyone has opinions of some sort, although possibly not on any particular topic.  It may be possible to find people who really have no opinions of any kind about voting equipment or about who won the elections in question, but those people are likely to be uneducated or poorly motivated to perform an evaluation of the technology.  That would not be a good outcome.

Why is it wrong for someone to have expressed support for a particular candidate?  That is one of the freedoms we cherish in this country—freedom of expression.  Why should anyone be less capable or trustworthy because of what may be an expression of support for a particular candidate, or even a particular political party?  Does that mean that Mr. Krugman and others believe that we can’t get a fair trial if we didn’t support a particular judge?  That we can’t expect equal treatment from a doctor who suspects that we voted for someone she didn’t?  That the police and firefighters we call to our aid shouldn’t help us because of the signs in our front yard supporting someone of a different political party?  Mr. Krugman’s (and others’) accusation of bias isn’t conceptually any different from these examples ... or from burning the home of someone who happens to go to a different mosque or church. If expressing an opinion rendered someone incapable of carrying out his or her professional duties, then only the most ignorant and apathetic would still be employed.

I have consulted with government officials in both the Clinton and Bush administrations.  I am not registered with any political party, and I have never voted a straight party ticket in any election during the 32 years I’ve been voting.  Does that mean I have no opinion?  Hardly—I’ve had an opinion about every candidate I voted for, and usually I had a definite opinion about those I didn’t vote for.  But having an opinion is very different from allowing bias to color one’s professional conduct, for me or for anyone else working in information assurance.  As you can infer, I find it personally offensive to impugn someone’s professional honesty simply because of the exercise of freedom of expression.

Bias is when one is unable or unwilling to consider all the alternatives when formulating a theory, and when experiments to validate or refute that theory are arbitrarily manipulated and selectively disclosed.  If that were to happen in this study of the Florida voting machines, it would require that all the study participants collaborate in the deception.  Furthermore, it would require that the results be presented in a way that obfuscates the deception.  Given the professional and personal credentials of some of the people involved, this seems extraordinarily unlikely—and they know how closely their report will be scrutinized.  Instead, it is likely that this effort will provide us all with additional ammunition in our efforts to get more reliable voting technology.  I know Alec is seeking as much transparency and peer review as he can get for this effort—and those are the methods by which all of science is judged for accuracy.  True bias would be more likely to be present if the study were conducted by the vendor of the systems in question, or funded and conducted by staff of one of the campaigns.  The SAIT personnel making up the study team are neither of these.

Alec has a Constitutional right to vote for—and support—whomever he wishes. There is no reason he should stifle what he believes so long as he keeps it separate from his professional efforts, as he has done to date:  His academic career has underscored his integrity and ability as a scientist.  His prior 20 years as a decorated Marine officer attest to his patriotism and self-sacrifice. He is a concerned professional, a talented scholar, a resident of Florida, a veteran who has sworn a solemn oath to uphold and protect the US Constitution against all enemies foreign and domestic, and someone who votes. Alec is very qualified to lead this examination for the citizens of the state of Florida.  We should all be thankful to have someone with his qualifications taking the lead.

As a closing thought on this topic, let me ask whether Mr. Krugman and others would be equally vocal if the person chosen as the lead scientist for this effort had supported candidates aligned with the Democratic Party, or the Green Party, or the Libertarians.  Or is it possible that these people’s own biases—believing that apparent supporters of Republicans (or perhaps only Florida Republicans) are intrinsically untrustworthy—are producing clearly questionable conclusions?

A Comment about Paper

I have seen reference to a comment (for which I can no longer find a link) that another reason Alec is unsuitable for this review task is that he believes paperless voting machines can be used in a fair vote.  I have no idea whether Alec has stated this or believes precisely this.  However, anyone applying rigorous logic would have to agree that it IS possible to have a fair vote using paperless voting machines.  It IS also possible to corrupt a vote using paper ballots.  What is possible, though, is not necessarily something that is feasible to apply on a national scale on a recurring basis.

The key to voting technology is to minimize error and the potential for fraud while also meeting other constraints such as ensuring voter confidence, allowing independent voting access for the disabled, supporting transparency, and doing all this with reasonably affordable, fault-tolerant procedures that can be carried out by average citizens.

The majority of scientists and technologists who have looked at the problem, and who understand all the constraints, view a combination of some computing technology coupled with voter-verified paper audit trails (VVPAT) as a reasonable approach to satisfying all the variables.  A totally paperless approach would be too costly (because of the extraordinary engineering required for assurance), and would be unlikely to be believed fair by the overwhelming majority of voters (because cryptographic methods are too difficult for the lay person to understand).  Meanwhile, a completely paper-based system is prone to counting errors and to spoiled ballots from voters who misunderstand the ballot or make mistakes, and it is not independently accessible to all disabled voters.  As with any engineering problem, there is no perfect solution.  Instead, we need to fully understand the risks and tradeoffs, and seek to optimize the solution given the constraints.

Closing Thoughts

The ACM has adopted a position that endorses the use of VVPAT or equivalent technologies, and has been actively involved in voting machine technology issues for many years.  I chair USACM, ACM’s US Public Policy Committee; that doesn’t make me biased, but it definitely means I have a basis for professional opinions.

Let’s all seek the truth with open minds,  and strive to see each other as fellow citizens with valid opinions rather than as enemies whose ideology makes them targets for vilification.  It is our diversity and tolerance that make us strong, and we should celebrate that rather than use it as an excuse to attack others.

Good luck, Alec.

[posted with ecto]

Community Comments & Feedback to Security Absurdity Article

[tags]security failures, infosecurity statistics, cybercrime, best practices[/tags]
Back in May, I commented here on a blog posting about the failings of current information security practices.  Well, after several months, the author, Noam Eppel, has written a comprehensive and thoughtful response based on all the feedback and comments he received to that first article.  That response is a bit long, but worth reading.

Basically, Noam’s essays capture some of what I (and others) have been saying for a while—many people are in denial about how bad things are, in part because they may not really be seeing the “big picture.”  I talk with hundreds of people in government, academia, and industry around the world every few months, and the picture that emerges is as bad as—or worse than—what Noam has outlined.

Underneath it all, people seem to believe that putting up barriers and patches on fundamentally bad designs will lead to secure systems.  It has been shown again and again (and not only in IT) that this is mistaken.  Getting close to secure operation requires rigorous design and testing, careful constraints on features and operation, and planned segregation and limitation of services.  You can’t depend on best practices and people doing the right thing all the time.  You can’t stay ahead of the bad guys by deploying patches to yesterday’s problems.  Unfortunately, managers don’t want to make the hard decisions and pay the costs necessary to really get secure operations, and it is in the interests of almost all the vendors to encourage them down the path of third-party patching.

I may expand on some of those issues in later blog postings, depending on how worked up I get, and how the arthritis/RSI in my hands is doing (which is why I don’t write much for journals & magazines, either).  In the meantime, go take a look at Noam’s response piece.  And if you’re in the US, have a happy Thanksgiving.

[posted with ecto]

Yet another timing attack

[tags]cryptography, information security, side-channel attacks, timing attacks, security architecture[/tags]
There is a long history of researchers finding differential attacks against cryptographic algorithms.  Timing and power attacks are two of the most commonly used, and they go back a very long time.  One of the older, “classic” examples in computing was the old Tenex password-on-a-page-boundary attack. Many accounts of this can be found in various places online, such as here and here (page 25).  These are varieties of what are known as side-channel attacks—they don’t attack the underlying algorithm but rather take advantage of some side-effect of the implementation to recover the key.  A search of the WWW finds lots of pages describing these.
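
To make the class of flaw concrete, here is a minimal C sketch of the kind of check such attacks exploit; it is a generic illustration of my own, not code from Tenex or any particular product.  The first routine stops at the first mismatching byte, so an attacker who can observe how long the check takes (or, in the Tenex case, whether a page fault occurred past a carefully placed page boundary) learns how much of the guess was correct; the second routine examines every byte regardless.

#include <stddef.h>

/* Vulnerable: returns at the first mismatching byte, so timing (or a
 * page fault beyond a page boundary, as in the Tenex attack) reveals
 * how long the correct prefix of the guess is. */
int check_secret_leaky(const char *guess, const char *secret, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        if (guess[i] != secret[i])
            return 0;              /* early exit leaks the mismatch position */
    }
    return 1;
}

/* Safer: always touches every byte and accumulates the differences, so
 * the time taken does not depend on where the guess first diverges. */
int check_secret_constant_time(const char *guess, const char *secret, size_t len)
{
    unsigned char diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= (unsigned char)(guess[i] ^ secret[i]);
    return diff == 0;
}

Comparing attacker-supplied data against a secret with an early-exit loop is exactly the sort of implementation side-effect that is invisible in a description of the algorithm itself.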

So, it isn’t necessarily a surprise to see a news report of another such timing attack.  However, the article doesn’t really give much detail, nor does it necessarily make complete sense.  Putting branch prediction into chips is something that has been done for more than twenty years (at least), and it results in a significant speed increase when done correctly.  It requires some care in cache design and corresponding compiler construction, but the overall benefit is significant.  The majority of code run on these chips has nothing to do with cryptography, so it isn’t a case of “Security has been sacrificed for the benefit of performance,” as Seifert is quoted as saying.  Rather, the problem is more that the underlying manipulation of cache and branch prediction is invisible to the software and the programmer. Thus, there is no way to shut off those features or create adequate masking alternatives.  Of course, too many people who are writing security-critical software don’t understand the mapping of code to the underlying hardware, so they might not shut off the prediction features even if they had a means to do so.
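
At the source level, the usual defensive idiom is to keep control flow independent of secret data so that the branch predictor has nothing secret to leak.  Here is a small, hypothetical C sketch of that idea—a generic illustration, not the researchers’ countermeasure—and note that an optimizing compiler can still reintroduce branches, which is part of the point about programmers lacking control over the hardware:

#include <stdint.h>

/* Secret-dependent branch: which path executes, and how the branch
 * predictor behaves, depends on the secret bit. */
uint32_t select_branchy(int secret_bit, uint32_t a, uint32_t b)
{
    if (secret_bit)
        return a;
    return b;
}

/* Branch-free alternative: expand the secret bit into an all-ones or
 * all-zeros mask and combine the values arithmetically, so the same
 * instruction sequence executes regardless of the secret. */
uint32_t select_branchless(int secret_bit, uint32_t a, uint32_t b)
{
    uint32_t mask = (uint32_t)0 - (uint32_t)(secret_bit & 1); /* 0 or 0xFFFFFFFF */
    return (a & mask) | (b & ~mask);
}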

We’ll undoubtedly hear more details of the attack next year, when the researchers disclose what they have found.  However, this story should serve to simply reinforce two basic concepts of security: (1) strong encryption does not guarantee strong security; and (2) security architects need to understand—and have some control of—the implementation, from high level code to low level hardware.  Security is not collecting a bunch of point solutions together in a box…it is an engineering task that requires a system-oriented approach.
[posted with ecto]

VMworld 2006: How virtualization changes the security equation

This session was very well attended (roughly 280 people), which is encouraging.  In the following, I will mix all the panel responses together without differentiating the sources.

It was said that virtualization can make security more acceptable, in contrast to past security solutions and suggested practices that used to be hard to deploy or adopt.  Virtual appliances can help security by introducing more boundaries between various data center functions, so if one is compromised the entire data center hasn’t been compromised.  One panel member argued that virtual appliances (VAs) can leverage the expertise of other people.  So, presumably, if you get a professionally built VA it may be configured better and more securely than an average system administrator could manage, and you could pass liability on to the supplier (interestingly, someone else told me outside this session that liability issues were what stopped them from publishing or selling virtual appliances).

I think you may also inherit problems due to the vendor philosophy of delivering functional systems over secure systems.  As always, the source of the virtual appliances, the processes used to create them, and the requirements they were designed to meet should be considered in evaluating the trust that can be put into them.  Getting virtual appliances doesn’t necessarily solve the hardening problem.  Instead of having one OS to harden, you now have to repeat the process N times, where N is the number of virtual appliances you deploy.

As a member of the panel argued, virtualization doesn’t make things better or worse; it still all depends on the practices, processes, procedures, and policies used in managing the data center and the various data security and recovery plans.  Another pointed out that people shouldn’t assume that virtual appliances or virtualization provide security out of the box.  Currently, about 4-5% of malicious software checks whether it is running inside a virtual machine; this may become more common.
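
As a rough illustration of what such a check can look like, here is a short C sketch using the x86 CPUID “hypervisor present” bit.  This is just one common heuristic of my own choosing, not a technique cited by the panel, and malware also uses other tricks; the function name is mine.

#include <stdio.h>
#include <cpuid.h>   /* GCC/Clang helper for the x86 CPUID instruction */

/* CPUID leaf 1 sets bit 31 of ECX when running under a hypervisor;
 * bare hardware leaves it clear. */
int running_in_vm(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 0;                 /* CPUID level not available */
    return (ecx >> 31) & 1;       /* hypervisor-present bit */
}

int main(void)
{
    printf("hypervisor detected: %s\n", running_in_vm() ? "yes" : "no");
    return 0;
}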

It was said that security is not the reason why people are deploying virtualization now.  Virtualization is not as strong as using several different physical, specialized machines, due to the shared resources and shared communication channels.  For improving security, virtualization would be much more useful on the client side than in the data center.  Nothing else of interest was said.

Unfortunately, there was no time for me to ask what the panel thought of the idea of opening VMware to plugins that could perform various security functions (taint tracking and various attack protection schemes, IDS, auditing, etc…).  After the session one of the panel members mentioned that this was being looked at, and that it raised many problems, but would not elaborate.  In my opinion, it could trump the issue of Microsoft (supposedly) closing Windows to security vendors, but Microsoft seems to have thought of everything: its EULA forbids running certain versions of Windows on virtual machines.  I wonder about the wisdom of this, as restricting the choices of security solutions can only hurt Microsoft and its users.  Is this motivated by the fear of people cracking the DRM mechanism(s)?  Surely the EULA alone can’t prevent that—crackers will do what they want.  Since Windows could simply check whether it is running inside a VM, DRMed content could be protected by refusing to play it under those conditions, without making all of Windows unavailable.  The fact that the most expensive version of Windows is allowed to run inside a virtual machine (even though playing DRMed content is still forbidden) hints that it’s mostly marketing greed, but on the whole I am puzzled by those policies.  It certainly won’t help security research and forensic investigations (are forensic examiners exempt from the licensing/EULA restrictions?  I wonder).

VMworld 2006:  Teaching (security) using virtual labs

This talk by Marcus MacNeill (Surgient) discussed the Surgient Virtual Training Lab used by CERT-US to train military personnel in security best practices, etc…  I was disappointed because the talk didn’t discuss the challenges of teaching security and the lessons learned by CERT in doing so, but instead focused on how the product could be used in a teaching environment.  Not surprisingly, the Surgient product resembles both VMware’s lab manager and ReAssure.  However, the Surgient product doesn’t support sharing images or stopping and restarting work (e.g., development work by users), at least from what I saw—if it does, it wasn’t mentioned.  They mentioned that they had patented technologies involved, which is disturbing (raise your hand if you like software patents).  ReAssure meets (or will soon, thanks to the VIX API) all of the requirements he discussed for teaching, except for student shadowing (seeing what a student is attempting to do).  So, I would be very interested in seeing teaching labs using ReAssure as a support infrastructure.  There are of course other teaching labs using virtualization that have been developed at other universities and colleges; the challenge is to design courses and exercises that are portable and reusable.  We can all gain by sharing these, but for that we need a common infrastructure where all these exercises would be valid.