[tags]Vista, Windows, security, flaws, Microsoft[/tags]
Update: additions made 4/19 and 4/24, at the end.
Back in 2002, Microsoft performed a “security standdown” that Bill Gates publicly stated cost the company over $100 million. That extreme measure was taken because of numerous security flaws popping up in Microsoft products, steadily chipping away at MS’s reputation, customer safety, and internal resources. (I was told by one MS staffer that response to major security flaws often cost close to $1 million each for staff time, product changes, customer response, etc. I don’t know if that is true, but the reality certainly was/is a substantial number.)
Without a doubt, people inside Microsoft took the issue seriously. They put all their personnel through a security course, invested heavily in new testing technologies, and even went so far as to convene an advisory board of outside experts (the TCAAB)—including some who have not always been favorably disposed towards MS security efforts. Security of the Microsoft code base suddenly became a Very Big Deal.
Fast forward 5 years: When Vista was released a few months ago, we saw lots of announcements that it was the most secure version of Windows ever, but that metric was not otherwise qualified; a cynic might comment that such an achievement would not be difficult. The user population has become habituated to the monthly release of security patches for existing products, with the occasional emergency patch. Bundling all the patches together undoubtedly helps reduce the overhead in producing them, but also serves to obscure how many different flaws are contained inside each patch set. The number of flaws may not have decreased much from years past.
Meanwhile, reports from inside MS indicate that there was no comprehensive testing of personnel to see how the security training worked and no follow-on training. The code base for new products has continued to grow, thus opening new possibilities for flaws and misconfiguration. The academic advisory board may still exist, but I can’t find a recent mention of it on the Microsoft web pages, and some of the people I know who were on it (myself included) were dismissed over a year ago. The external research program at MSR that connected with academic institutions doing information security research seems to have largely evaporated—the WWW page for the effort lists John Spencer as contact, and he retired from Microsoft last year. The upcoming Microsoft Research Faculty Summit has 9 research tracks, and none of them are in security.
Microsoft seems to project the attitude that they have solved the security problem.
If that’s so, why are we still seeing significant security flaws appear, not only in their old software, but in new software written under the new, extra special security regime, such as Vista and Longhorn? The ANI flaw and the recent DNS flaw are both glaring examples of major problems that shouldn’t have been in the current code: the ANI flaw is very similar to a years-old flaw that was already known inside Microsoft, and the DNS flaw is another buffer overflow!! There are even reports that there may be dozens (or hundreds) of patches awaiting distribution for Vista.
Undoubtedly, the $100 million spent back in 2002 was worth something—the code quality has definitely improved. There is greater awareness inside Microsoft about security and privacy issues. I also know for a fact that there are a lot of bright, talented and very motivated people inside Microsoft who care about these issues. But questions remain: did Microsoft get its money’s worth? Did it invest wisely, and if so, why are we still seeing so many (and so many silly) security flaws? Why does it seem that security is no longer a priority? What does that portend for Vista, Longhorn, and Office 2007? (And if you read the “standdown” article, one wonders also about Mr. Nash’s posterior.)
I have great respect for many of the things Microsoft has done, and admiration for many of the people who work there. I simply wish they had some upper management who would realize that security (and privacy) are ongoing process needs, not one-time problems to overcome with a “campaign.”
What do you think?
[posted with ecto]
Update 4/19: The TCAAB does still continue to exist, apparently, but with a greater focus on privacy issues than security. I do not know who the current members might be.
Update 4/24: I have heard informally from someone inside Microsoft in response to this post. He pointed out several issues that I think are valid and deserve airing here.
Many of my questions still remain unanswered, including Mr. Nash’s condition….
[tags]monocultures, compliance, standard configurations, desktops, OMB[/tags]
Another set of news items, and another set of “nyah nyah” emails to me. This time, the press has been covering a memo out of the OMB directing all Federal agencies to adopt a mandatory baseline configuration for Windows machines. My correspondents have misinterpreted the import of this announcement to mean that the government is mandating a standard implementation of Windows on all Federal machines. To the contrary, it is mandating a baseline security configuration for only those machines that are running Windows. Other systems can still be used (and should be).
What’s the difference? Quite a bit. The OMB memo is about ensuring that a standard, secure baseline is the norm on any machine running Windows. Because there are so many possible configuration options that can be set (and set poorly for secure operation), and because there are so many security add-ons, it has not been uncommon for attacks to succeed against weak configurations. As noted in the memo, the Air Force pioneered this work by decreeing security baseline configurations. By requiring that certain minimum security settings be in place on every Windows machine, it achieved a reduction in incidents.
From this, and other studies, including some great work at NIST to articulate useful policies, we get the OMB memo.
This is actually an excellent idea. Unfortunately, the minimum is perhaps a bit too “minimum.” For instance, replacing IE 6 under XP with Firefox would probably be a step up in security. However, to support common applications and uses, the mandated configuration can only go so far without requiring lots of extra (costly) work or simply breaking things. And if too many things get broken, people will find ways around the secure configuration—after all, they need to get their work done! (This is often overlooked by novice managers focused on “fixing” security.)
Considering the historical problems with Linux and some other systems, and the complexity of their configuration, minimum configurations for those platforms might not be a bad idea, either. However, they are not yet used in large enough numbers to prompt such a policy. Any mechanism or configuration where the complexity is beyond the ken of the average user should have a set, minimum, safe configuration.
Note my use of the term “minimum” repeatedly. If the people in charge of enforcing this new policy prevent clueful people from setting stronger configurations, then that is a huge problem. Furthermore, if there are no provisions for understanding when the minimum configuration might lead to weakness or problems and needs to be changed, that would also be awful. As with any policy, implementation can be good or be terrible.
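To make the “baseline, not ceiling” idea concrete, here is a minimal sketch of what a baseline audit amounts to. The setting names and values are made up for illustration (a real OMB/NIST baseline has hundreds of entries); the point is that numeric settings are minimums, so stronger configurations pass.

```python
# Illustrative baseline audit; setting names and values are hypothetical.
BASELINE = {
    "autorun_disabled": True,       # booleans must match exactly
    "guest_account_enabled": False,
    "min_password_length": 8,       # numbers are minimums, not ceilings
}

def audit(actual):
    """Return the names of settings that fall short of the baseline.
    Stronger-than-minimum numeric settings pass, per the 'minimum' intent."""
    failures = []
    for name, required in BASELINE.items():
        value = actual.get(name)
        if isinstance(required, bool):
            if value is not required:
                failures.append(name)
        elif value is None or value < required:
            failures.append(name)
    return failures

print(audit({"autorun_disabled": True,
             "guest_account_enabled": True,   # fails: must be disabled
             "min_password_length": 12}))     # passes: stronger than minimum
# → ['guest_account_enabled']
```

An audit of this shape flags only shortfalls; it does not prevent a clueful administrator from setting anything stronger, which is exactly the property a sane policy implementation needs.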
Of course, mandating the use of Windows (2000, XP, Vista or otherwise) on all desktops would not be a good idea for anyone other than Microsoft and those who know no other system. In fact, mandating the use of ANY OS would be a bad idea. Promoting diversity and heterogeneity is valuable for many reasons, not least of which are resistance to monoculture-wide attacks (a single flaw cannot compromise every machine at once) and the competitive pressure that multiple platforms place on vendors.
These advantages are not offset by savings in training or bulk purchasing, as some people would claim. They are second-order effects and difficult to measure directly, but their absence is noted, usually too late.
But what about interoperability? That is where standards and market pressure come to bear. If we have a heterogeneous environment, then the market should help ensure that standards are developed and adhered to so as to support different solutions. That supports competition, which is good for the consumer and the marketplace.
And security with innovation and choice should really be the minimum configuration we all seek.
[posted with ecto]
It is well-known that I am a long-time user of Apple Macintosh computers, and I am very leery of Microsoft Windows and Linux because of the many security problems that continue to plague them. (However, I use Windows, and Linux, and Solaris, and a number of other systems for some things—I believe in using the right tool for each task.) Thus, it is perhaps no surprise that a few people have written to me with a “Nyah, nyah” message after reading a recent article claiming that Windows is the most secure OS over the last six months. However, any such attitude evidences a certain lack of knowledge of statistics, history, and the underlying Symantec report itself. It is possible to lie with statistics—or, at the least, be significantly misled, if one is not careful.
First of all, the news article reported that —in the reporting period—Microsoft software had 12 serious vulnerabilities plus another 27 less serious vulnerabilities. This was compared with 1 serious vulnerability in Apple software out of a total of 43 vulnerabilities. To say that this confirms the point because there were fewer vulnerabilities reported in MS software (39 vs. 43) without noting the difference in severity is clearly misleading. After all, there were 12 times as many severe vulnerabilities in MS software as in Apple software (and more than in some or all of the other systems, too—see the full report).
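One way to see the severity point is to weight the counts. The 10-to-1 weighting below is purely an illustrative assumption of mine (it appears nowhere in the Symantec report); the counts are the ones quoted above.

```python
# Severity-weighted comparison of the reported vulnerability counts.
# The 10:1 weighting is an illustrative assumption, not from the report.
def weighted_score(severe, minor, severe_weight=10, minor_weight=1):
    """Weight severe flaws more heavily than minor ones."""
    return severe * severe_weight + minor * minor_weight

ms_score    = weighted_score(severe=12, minor=27)  # 39 reported flaws total
apple_score = weighted_score(severe=1,  minor=42)  # 43 reported flaws total

# Fewer total flaws, yet a far worse severity-weighted picture:
print(ms_score, apple_score)  # → 147 52
```

Under any weighting that counts a severe flaw as worse than a handful of minor ones, the raw-count comparison reverses.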
Imagine reading a report in the newspaper on crime statistics. The report says that Portland saw one killing and 42 instances of littering, while Detroit had 27 instances of jaywalking and 12 instances of rape and homicide. If the reporter concluded that Detroit was the safer place to live and work, would you agree? Where do you think you would feel safer? Where would you be safer (assuming the population sizes were similar; in reality, Portland is about 2/3 the population of Detroit)?
More from a stochastic point of view, if we assume that the identification of flaws is more or less a random process with some independence, then it is not surprising if there are intervals where the relative performance in that period does not match the overall behavior. So, we should not jump to overall conclusions when there are one or two observational periods where one system dominates another in contrast to previous behavior. Any critical decisions we might wish to make about quality and safety should be based on a longer baseline; in this case, the Microsoft products continue to be poor compared to some other systems, including Apple. We might also want to factor in the size of the exposed population, the actual amount of damages and other such issues.
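The short-window point can be illustrated with a small simulation. Everything here is assumed for illustration only: flaw discoveries are modeled as independent daily coin flips, with product A having twice the long-run discovery rate of product B.

```python
import random

def fraction_a_looks_better(rate_a, rate_b, days, trials, seed=1):
    """Fraction of short observation windows in which the product with the
    higher long-run flaw-discovery rate (A) nevertheless shows fewer flaws."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        flaws_a = sum(rng.random() < rate_a for _ in range(days))
        flaws_b = sum(rng.random() < rate_b for _ in range(days))
        if flaws_a < flaws_b:
            wins += 1
    return wins / trials

# A is twice as flaw-prone as B in the long run, yet in a nontrivial
# fraction of 30-day windows it still comes out looking better:
print(fraction_a_looks_better(rate_a=0.10, rate_b=0.05, days=30, trials=10_000))
</```

A single reporting period in which the historically worse product “wins” is therefore entirely consistent with its being worse overall, which is why decisions should rest on a longer baseline.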
By analogy, imagine you are betting on horses. One horse you have been tracking, named Redmond, has not been performing well. In nearly every race that horse has come in at or below the middle of the pack, and often comes in last, despite being a crowd favorite. The horse looks good, and lots of people bet on it, but it never wins. Then, one day, in a close heat, Redmond wins! In a solid but unexciting race, Redmond comes in ahead of multiple-race winner #2 (Cupertino) by a stride. Some long-time bettors crow about the victory, and say they knew that Redmond was the champ. So, you have money to gamble with. Are you going to bet on Redmond to win or place in each of the next 5 races?
Last of all, I could not find a spot in the actual Symantec report where it was stated that any one system is more secure than another—that is something stated by the reporter (Andy Patrizio) who wrote the article. Any claim that ANY system with critical flaws is “secure” or “more secure” is an abuse of the term. That is akin to saying that a cocktail with only one poison is more “healthful” than a cocktail with six poisons. Both are lethal, and neither is healthful under any sane interpretation of the words.
So, in conclusion, let me note that any serious flaws reported are not a good thing, and none of the vendors listed (and there are more than simply Microsoft and Apple) should take pride in the stated results. I also want to note that although I would not necessarily pick a MS platform for an application environment where I have a strong need for security, neither would I automatically veto it. Properly configure and protect any system and it may be a good candidate in a medium or low threat environment. As well, the people at Microsoft are certainly devoting lots of resources to try to make their products better (although I think they are trapped by some very poor choices made in the past).
Dr. Dan Geer made a riveting and thought-provoking presentation on cyber security trends and statistics as the closing keynote address of this year’s annual CERIAS Security Symposium. His presentation materials will shortly be linked into the symposium WWW site, and a video of his talk is here. I recommend that you check that out as additional material, if you are interested in the topic.
[tags]security marketplace, firewalls, IDS, security practices, RSA conference[/tags]
As I’ve written here before, I believe that most of what is being marketed for system security is misguided and less than sufficient. This has been the theme of several of my invited lectures over the last couple of years, too. Unless we come to realize that current “defenses” are really attempts to patch fundamentally faulty designs, we will continue to fail and suffer losses. Unfortunately, the business community is too fixated on the idea that there are quick fixes to really investigate (or support) the kinds of long-term, systemic R&D that is needed to really address the problems.
Thus, I found the RSA conference and exhibition earlier this month to be (again) discouraging this year. The speakers basically kept to a theme that (their) current solutions would work if they were consistently applied. The exhibition had hundreds of companies displaying wares that were often indistinguishable except for the color of their T-shirts—anti-virus, firewalls (wireless or wired), authentication and access control, IDS/IPS, and vulnerability scanning. There were a couple of companies that had software testing tools, but only 3 of those, and none marketing suites of software engineering tools. A few companies had more novel solutions—I was particularly impressed by a few that I saw, such as the policy and measurement-based offerings by CoreTrace, ProofSpace, and SignaCert. (In the interest of full disclosure, SignaCert is based around one of my research ideas and I am an advisor to the company.) There were also a few companies with some slick packaging of older ideas (Yoggie being one such example) that still don’t fix underlying problems, but that make it simpler to apply some of the older, known technologies.
I wasn’t the only one who felt that RSA didn’t have much new to offer this year, either.
When there is a vendor-oriented conference that has several companies marketing secure software development suites that other companies are using (not merely programs to find flaws in C and Java code), when there are booths dedicated to secured mini-OS systems for dedicated tasks, and when there are talks scheduled about how to think about limiting functionality of future offerings so as to minimize new threats, then I will have a sense that the market is beginning to move in the direction of maturity. Until then, there are too many companies selling snake oil and talismans—and too many consumers who will continue to buy those solutions because they don’t want to give up their comfortable but dangerous behaviors. And any “security” conference that has Bill Gates as keynote speaker—renowned security expert that he is—should be a clue about what is more important for the conference attendees: real security, or marketing.
Think I am too cynical? Watch the rush into VoIP technologies continue, and a few years from now look at the amount of phishing, fraud, extortion and voice-spam we will have over VoIP, and how the market will support VoIP-enabled versions of some of the same solutions that were in Moscone Center this year. Or count the number of people who will continue to mail around Word documents, despite the growing number of zero-day and unpatched exploits in Word. Or any of several dozen current and predictable dangers that aren’t “glitches”—they are the norm. If you really pay attention to what happens, then maybe you’ll become cynical, too.
If not, there’s always next year’s RSA Conference.
[tags]Microsoft Vista, DRM[/tags]
Peter Gutmann, a scientist at the University of Auckland, has recently written an essay about DRM (Digital Rights Management) in the new Windows Vista OS. The essay is quite interesting, and is certainly thought-provoking. His “Executive Executive Summary” is very quotable:
The Vista Content Protection specification could very well constitute the longest suicide note in history.
Well worth reading and thinking about—I suggest you take a look.
[tags]vulnerabilities, Microsoft Word, email attachments[/tags]
So far this year, a number of vulnerabilities in Microsoft’s Word have been discovered. Three critical (“zero day”) vulnerabilities have been discovered this month alone, and all remain unpatched. (Vulnerability 1, Vulnerability 2, and Vulnerability 3.) These are hardly the first vulnerabilities reported for Word. There has been quite a history of problems associated with Word documents containing malformed (or maliciously formed) content.
For years now, I have had my mailer configured to reject Word documents when they are sent to me in email and also send back an explanatory “bounce” message. In part, this is because I have not had Word installed on my system, nor do I normally use it. As such, Word documents sent to me in email have largely been so much binary noise. Yes, I could install some converters that do a halfway reasonable job of converting Word documents, or I could install something like OpenOffice to read Word files without installing Word itself, but that would continue to (tacitly) encourage dangerous behavior by my correspondents.
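For the curious, the detection side of such filtering is straightforward. This is a minimal sketch, assuming mail arrives as raw bytes (say, piped in from the mail delivery agent); a real setup would also generate the explanatory bounce. The MIME types and the extension check are the obvious ones for classic Word documents.

```python
import email
from email import policy

# Content types commonly used for classic Word attachments.
WORD_TYPES = {"application/msword", "application/vnd.ms-word"}

def has_word_attachment(raw_message: bytes) -> bool:
    """Return True if any MIME part of the message looks like a Word document."""
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    for part in msg.walk():
        ctype = part.get_content_type()
        fname = (part.get_filename() or "").lower()
        if ctype in WORD_TYPES or fname.endswith(".doc"):
            return True
    return False
```

A delivery rule would then divert any message for which this returns True into the bounce path instead of the inbox.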
People who send me Word documents tend to get a bounce message that points out the many problems with Word as an email format.
If you want more details on this, including links to other essays, see my explanatory bounce text, as cited above.
The US-CERT has warned that people shouldn’t open unexpected Word documents in email. As general policy, they actually warn not to open email with attachments such as Word documents appearing to be from people you know. This is because malicious software may have infected an acquaintance’s machine and is sending you something infected, or the return address is faked—it may not be from the user you think!
If there were a mad bomber sending out explosives in packages, and you got a box with your Aunt Sally’s name on it, would you possibly pause before opening it? Most people would, but inexplicably, those same people exhibit no hesitation in opening Word documents (and other executable content), thereby endangering their own machines—and often everyone in the same enterprise.
There is almost no reason to email Word documents!! They certainly should be used in email FAR LESS than they currently are.
If you need to send a simple memo or note in email, use plain text (or RichText or even HTML). It is more likely to be readable on most kinds of platform, is compact, and is not capable of carrying a malicious payload.
If you need to send something out that has special formatting or images, consider PDF. It may not be 100% safe (although I know of no current vulnerabilities), but it is historically far safer than Word is or has been. Putting it as an image or PDF on a local WWW site and mailing the URL is also reasonable.
If you must send Word documents back and forth (and there are other word processing systems than Word, btw), then consider sending plain RTF. Or arrange a protocol so all parties know what is being sent and received, and be sure to use an up-to-date antivirus scanner! (See the CERT recommendations.)
Word 2007 uses XML for its file encoding, and this promises to be safer than the current format. That remains to be seen, of course. And it may be quite some time before it is installed and commonplace on enough machines to make a difference.
You can help make the community safer—stop sending Word messages in email, and consider bouncing back any email sent to you in Word! If enough of us do it, we might actually be able to make the Internet a little bit safer.
An additional note
So, what do I use for word processing? For years, I have used TeX/LaTeX for papers. Before that I also used troff on Unix. I have used FrameMaker on both Mac and Unix, and wrote several books (including all three editions of Practical Unix Security et al.) with it. I used ClarisWorks on the Mac for some years, and now use Apple’s Pages for many of my papers and documents.
I have installed and used Word under two extraordinary circumstances. Once was for a large project proposal I was leading across 5 universities where there was no other good common alternative that we could all use—or that everyone was willing to use. The second case was when I was on the PITAC and was heavily involved in producing the Cyber Security report.
However, I am back to using Pages on the Mac (which can import RTF and, I am told, Word), and LaTeX. I’ve written over 100 professional articles, 5 books, and I don’t know how many memos and letters, and I have avoided Word. It can be done.
Note that I have nothing against Microsoft, per se. However, I am against getting locked into any single solution, and I am especially troubled at the long history of vulnerabilities in Word…which are continuing to occur after years and years of problems. That is not a good record for the future.
[posted with ecto]
[tags]Florida recount, e-voting, voting machines, Yasinsac, scientific bias[/tags]
As many of us were enjoying Thanksgiving with our families, we heard news of the largest single-day casualties of sectarian violence in Iraq. The UN reports a growing number of kidnappings and executions, often with bodies left unidentified. As a result of the bombings on November 23rd, reprisals included executing people in front of their families, and individuals being doused in kerosene and immolated.
Many of us no doubt spent a few moments wondering how it was possible for presumably civilized, well-educated people to have such deep-seated hatred that they would attack someone simply because he or she had a Sunni-like name, or lived in a Shiite neighborhood. We have wondered the same thing when hearing stories of Tutsi massacres in Rwanda in 1994, of the millions killed by the Khmer Rouge in Cambodia in the 1970s, the “ethnic cleansing” in the former Yugoslavia, and on and on (including the current problems in Darfur). Of course, the ignorant fear of differences continues to show up in the news, whether it is genocide around the world, or an angry rant by an out-of-control comedian.
So, it comes as an unpleasant surprise to see prejudice, based on the expression of legitimate opinion, directed against a friend and colleague, and on the pages and WWW site of the NY Times, no less. On November 24th, an editorial by Paul Krugman described some of the problems with the count of the votes cast in Sarasota, Florida in the most recent elections. There appears to be a clear instance of some sort of failure, most likely with the electronic voting machines used in the race. The result is an undervote (no votes cast) of about 18,000 in the race for US House—a race decided by under 400 votes. The candidates and some voter groups are challenging the election decision through the courts, and the State of Florida is conducting an independent study to determine the causes of what happened. Mr. Krugman implied that Professor Alec Yasinsac, of Florida State, chosen to lead the independent study, would not provide a valid report because of his apparent support for some Republican candidates for office in recent elections.
I’ve known Alec for nearly a decade. I have never had any doubt about his integrity as a scientist or as a person. Those who know Alec and have worked with him generally hold him in high regard (cf. Avi Rubin’s comments). Alec has built his academic career pursuing scientific truths. He knows all too well that producing a biased report would end that career, as if the idea of providing a cover-up would even cross his mind. In fact, Alec has reached out to many of us, privately, in the CS/security community, for advice and counsel as he prepares his group at SAIT (and it is a university group—not simply Alec) to do this task. He’s doing all this for the right reasons—he’s concerned about the accuracy and fairness of electronic voting machines, and he sees this as a chance to rend the veil of secrecy that vendors and state agencies have traditionally drawn around these methods. As with many of us, he is deeply concerned about the impact on our Republic unless we can regain and keep public confidence in the fairness of our voting technologies.
(Note added 11/27: I am not implying that criticism by Mr. Krugman is in any sense equivalent to genocide practiced by others. Instead, I am trying to illustrate that they are both based on the same underlying premise, that of denigrating others because of their beliefs without actually considering them as individuals. That is the point of similarity, and one that seemed quite clear to me as I considered both news items—Iraq and Krugman’s editorial—at the same time.)
Having Opinions vs. Bias
First of all, it is important to understand that having opinions does not mean that one is unalterably biased, or cannot produce valid results. In fact, everyone has opinions of some sort, although possibly not on any particular topic. It may be possible to find people who really have no opinions of any kind about voting equipment as well as who won the elections in question, but those people are likely to be uneducated or poorly motivated to perform an evaluation of the technology. That would not be a good result.
Why is it wrong for someone to have expressed support for a particular candidate? That is one of the freedoms we cherish in this country—to have freedom of expression. Why should anyone be less capable or trustworthy because of what may be an expression of support for a particular candidate, or even a particular political party? Does that mean that Mr. Krugman and others believe that we can’t get a fair trial if we didn’t support a particular judge? That we can’t expect equal treatment from a doctor who suspects that we voted for someone she didn’t? That the police and firefighters we call to our aid shouldn’t help us because of the signs in our front yard supporting someone of a different political party? Mr. Krugman’s (and others’) accusation of bias isn’t conceptually any different than these examples ... or burning the home of someone who happens to go to a different mosque or church. If someone is incapable of carrying out his or her professional duties because of expressions of opinion, then only the most ignorant and apathetic would still be employed.
I have consulted with government officials in both the Clinton and Bush administrations. I am not registered with any political party, and I have never voted a straight party ticket in any election during the 32 years I’ve been voting. Does that mean I have no opinion? Hardly—I’ve had an opinion about every candidate I voted for, and usually I had a definite opinion about those I didn’t vote for. But having an opinion is very different from allowing bias to color one’s professional conduct, for me or for anyone else working in information assurance. As you can infer, I find it personally offensive to impugn someone’s professional honesty simply because of exercise of freedom of expression.
Bias is when one is unable or unwilling to consider all the alternatives when formulating a theory, and when experiments to validate or refute that theory are arbitrarily manipulated and selectively disclosed. If that were to happen in this study of the Florida voting machines, then it would require that all the study participants collaborate in that deception. Furthermore, it would require that the presentation of the results be done in a way that obfuscates the deception. Given the professional and personal credentials of some of the people involved, this seems extraordinarily unlikely—and they know how closely their report will be scrutinized. Instead, it is likely that this effort will provide us all with additional ammunition in our efforts to get more reliable voting technology. I know Alec is seeking as much transparency and peer review as he can get for this effort—and those are the methods by which all of science is judged for accuracy. True bias would more likely be present if the study were conducted by the vendor of the systems in question, or funded and conducted by staff of one of the campaigns. The SAIT personnel making up the study team are neither of these.
Alec has a Constitutional right to vote for—and support—whomever he wishes. There is no reason he should stifle what he believes so long as he keeps it separate from his professional efforts, as he has done to date: His academic career has underscored his integrity and ability as a scientist. His prior 20 years as a decorated Marine officer attest to his patriotism and self-sacrifice. He is a concerned professional, a talented scholar, a resident of Florida, a veteran who has sworn a solemn oath to uphold and protect the US Constitution against all enemies foreign and domestic, and someone who votes. Alec is very qualified to lead this examination for the citizens of the state of Florida. We should all be thankful to have someone with his qualifications taking the lead.
As a closing thought on this topic, let me question whether Mr. Krugman and others would be equally vocal if the person chosen as the lead scientist for this effort was supportive of candidates aligned with the Democratic Party, or the Green Party, or the Libertarians? Or is it possible that these people’s own biases—believing that apparent supporters of Republicans (or perhaps only Florida Republicans) are intrinsically untrustworthy—are producing clearly questionable conclusions?
A Comment about Paper
I have seen reference to a comment (that I can no longer find for a link) that another reason Alec is unsuitable for this review task is because he believes that paperless voting machines can be used in a fair vote. I have no idea if Alec has stated this or believes precisely this. However, anyone applying rigorous logic would have to agree that it IS possible to have a fair vote using paperless voting machines. It IS also possible to corrupt a vote using paper ballots. However, what is possible is not necessarily something that is feasible to apply on a national scale on a recurring basis.
Key to voting technology is to minimize error and the potential of fraud while also meeting other constraints such as ensuring voter confidence, allowing independent voting access for the disabled, supporting transparency, and doing all this with reasonably affordable, fault-tolerant procedures that can be carried out by average citizens.
The majority of scientists and technologists who have looked at the problem, and who understand all the constraints, view a combination of some computing technology coupled with voter-verified paper audit trails (VVPAT) as a reasonable approach to satisfying all the variables. A totally paperless approach would be too costly (because of the extraordinary engineering required for assurance), and would be unlikely to be believed as fair by the overwhelming majority of voters (because cryptographic methods are too difficult for the lay person to understand). Meanwhile, a completely paper-based system is prone to errors in counting, prone to spoiled ballots from voters who don’t understand or who make mistakes, and not independently accessible to all disabled voters. As with any engineering problem, there is no perfect solution. Instead, we need to fully understand the risks and tradeoffs, and seek to optimize the solution given the constraints.
The ACM has adopted a position endorsing the use of VVPAT or equivalent technologies, and it has been actively involved in voting machine technology issues for many years. I chair USACM, ACM’s US Public Policy committee; that doesn’t make me biased, but it definitely means I have a basis for professional opinions.
Let’s all seek the truth with open minds, and strive to see each other as fellow citizens with valid opinions rather than as enemies whose ideology makes them targets for vilification. It is our diversity and tolerance that make us strong, and we should celebrate that rather than use it as an excuse to attack others.
Good luck, Alec.
[posted with ecto]
[tags]security failures, infosecurity statistics, cybercrime, best practices[/tags]
Back in May, I commented here on a blog posting about the failings of current information security practices. Well, after several months, the author, Noam Eppel, has written a comprehensive and thoughtful response based on all the feedback and comments he received to that first article. That response is a bit long, but worth reading.
Basically, Noam’s essays capture some of what I (and others) have been saying for a while: many people are in denial about how bad things are, in part because they may not really be seeing the “big picture.” I talk with hundreds of people in government, academia, and industry around the world every few months, and the picture that emerges is as bad as, or worse than, what Noam has outlined.
Underneath it all, people seem to believe that putting up barriers and patches on fundamentally bad designs will lead to secure systems. It has been shown again and again (and not only in IT) that this belief is mistaken. Getting close to secure operation requires rigorous design and testing, careful constraints on features and operation, and planned segregation and limitation of services. You can’t depend on best practices and on people doing the right thing all the time. You can’t stay ahead of the bad guys by deploying patches to yesterday’s problems. Unfortunately, managers don’t want to make the hard decisions and pay the costs necessary to really get secure operations, and it is in the interest of almost all the vendors to encourage them down the path of third-party patching.
I may expand on some of those issues in later blog postings, depending on how worked up I get, and how the arthritis/RSI in my hands is doing (which is why I don’t write much for journals & magazines, either). In the meantime, go take a look at Noam’s response piece. And if you’re in the US, have a happy Thanksgiving.
[posted with ecto]
[tags]cryptography, information security, side-channel attacks, timing attacks, security architecture[/tags]
There is a history of researchers finding differential attacks against cryptographic algorithms. Timing and power attacks are two of the most commonly used, and they go back a very long time. One of the older, “classic” examples in computing was the old Tenex password-on-a-page-boundary attack. Many accounts of it can be found in various places online, such as here and here (page 25). These are varieties of what are known as side-channel attacks: they don’t attack the underlying algorithm, but instead take advantage of some side effect of the implementation to recover the key. A search of the WWW finds many pages describing these.
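The general idea behind these timing attacks can be illustrated with a toy model. The sketch below is hypothetical (it is not the Tenex code): a comparison routine that stops at the first wrong character does more work, and so takes longer, when more of the guess is correct. Here a `work` counter stands in for elapsed time or page faults, which makes the leak deterministic; an attacker can then recover the secret one character at a time, in linear rather than exponential effort.

```python
# Toy model of a prefix-leaking comparison (illustrative names only).

def naive_check(secret: str, guess: str) -> tuple[bool, int]:
    """Compare character by character; return (match, work done).

    The work counter models elapsed time or page faults: it grows
    with the length of the correct prefix of the guess.
    """
    work = 0
    for s, g in zip(secret, guess):
        if s != g:
            return False, work
        work += 1
    return secret == guess, work

def recover(secret: str, alphabet: str = "abcdefghijklmnopqrstuvwxyz") -> str:
    """Recover the secret one character at a time by maximizing work."""
    prefix = ""
    while not naive_check(secret, prefix)[0]:
        # The correct next character is the one that lets the
        # comparison proceed one step further than the others.
        prefix += max(alphabet,
                      key=lambda c: naive_check(secret, prefix + c)[1])
    return prefix

print(recover("sesame"))  # prints "sesame"
```

With a 6-character lowercase secret this needs on the order of 26 × 6 probes instead of 26⁶, which is why leaking even one bit of timing per guess is so damaging.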
So, it isn’t necessarily a surprise to see a news report of a new such timing attack. However, the article doesn’t really give much detail, nor does it necessarily make complete sense. Putting branch prediction into chips is something that has been done for more than twenty years (at least), and it results in a significant speed increase when done correctly. It requires some care in cache design and corresponding compiler construction, but the overall benefit is significant. The majority of code run on these chips has nothing to do with cryptography, so it isn’t a case of “Security has been sacrificed for the benefit of performance,” as Seifert is quoted as saying. Rather, the problem is that the underlying manipulation of cache and branch prediction is invisible to the software and the programmer. Thus, there is no way to shut off those features or create adequate masking alternatives. Of course, too many people writing security-critical software don’t understand the mapping of code to the underlying hardware, so they might not shut off the prediction features even if they had a means to do so.
We’ll undoubtedly hear more details of the attack next year, when the researchers disclose what they have found. However, this story should serve to reinforce two basic concepts of security: (1) strong encryption does not guarantee strong security; and (2) security architects need to understand, and have some control of, the implementation, from high-level code to low-level hardware. Security is not a matter of collecting a bunch of point solutions together in a box; it is an engineering task that requires a system-oriented approach.
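At the software level, the standard defense against comparison-timing leaks is a constant-time comparison, one whose running time does not depend on where the first mismatch occurs. A minimal sketch using Python’s standard-library `hmac.compare_digest` (the token values here are made up for illustration); note that this addresses only the comparison channel, not hardware-level leaks like the branch-prediction channel described above:

```python
import hmac

def check_token(expected: bytes, supplied: bytes) -> bool:
    # hmac.compare_digest examines every byte regardless of where a
    # mismatch occurs, so elapsed time does not reveal a correct prefix.
    return hmac.compare_digest(expected, supplied)

print(check_token(b"s3cret-token", b"s3cret-token"))  # prints True
print(check_token(b"s3cret-token", b"wrong-guess!"))  # prints False
```

This is exactly the kind of point where the architect needs visibility into the implementation: an innocent-looking `==` on secret data reintroduces the leak.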
[posted with ecto]
[tags]malicious code, wikipedia, trojan horse, spyware[/tags]
Frankly, I am surprised it has taken this long for something like this to happen: Malicious code planted in Wikipedia.
The malicious advertisement on MySpace from a while back was somewhat similar. Heck, there were trojan archives posted to the Usenet binary groups over 20 years ago that also come to mind; I recall a file-damaging program being posted as an anti-virus update in the early 1980s!
Basically, anyone seeking “victims” for spyware, trojans, or other nastiness wants effective propagation of code. So, find a high-volume venue with a trusting and/or naive user population, and find a way to embed code there such that others will download or execute it. Voila!
Next up: viruses on YouTube?
[posted with ecto]