[tags]security failures, infosecurity statistics, cybercrime, best practices[/tags]
Back in May, I commented here on a blog posting about the failings of current information security practices. Well, after several months, the author, Noam Eppel, has written a comprehensive and thoughtful response based on all the feedback and comments he received on that first article. That response is a bit long, but worth reading.
Basically, Noam’s essays capture some of what I (and others) have been saying for a while—many people are in denial about how bad things are, in part because they may not really be seeing the “big picture.” I talk with hundreds of people in government, academia, and industry around the world every few months, and the picture that emerges is as bad as, or worse than, what Noam has outlined.
Underneath it all, people seem to believe that putting up barriers and patches on fundamentally bad designs will lead to secure systems. It has been shown again and again (and not only in IT) that this is mistaken. Getting close to secure operation requires rigorous design and testing, careful constraints on features and operation, and planned segregation and limitation of services. You can’t depend on best practices and people doing the right thing all the time. You can’t stay ahead of the bad guys by deploying patches to yesterday’s problems. Unfortunately, managers don’t want to make the hard decisions and pay the costs necessary to really get secure operations, and it is in the interest of almost all vendors to encourage them down the path of third-party patching.
I may expand on some of those issues in later blog postings, depending on how worked up I get, and how the arthritis/RSI in my hands is doing (which is why I don’t write much for journals & magazines, either). In the meantime, go take a look at Noam’s response piece. And if you’re in the US, have a happy Thanksgiving.
[posted with ecto]
[tags]cryptography, information security, side-channel attacks, timing attacks, security architecture[/tags]
There is a long history of researchers finding differential attacks against cryptographic algorithms. Timing and power attacks are two of the most commonly used, and they go back a very long time. One of the older, “classic” examples in computing was the Tenex password-on-a-page-boundary attack. Many accounts of it can be found in various places online, such as here and here (page 25). These are varieties of what are known as side-channel attacks—they don’t attack the underlying algorithm but rather take advantage of some side effect of the implementation to get the key. A search of the WWW turns up lots of pages describing these.
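To make the side-channel idea concrete, here is a small hypothetical sketch (mine, not taken from either linked account) of the kind of early-exit check these attacks exploit. The Tenex attack observed the early exit through page faults; the same behavior also leaks through timing alone, because the time to reject a guess reveals how many leading characters were correct.

```c
/* Hypothetical sketch of a leaky check, for illustration only.
 * The loop returns as soon as a character differs, so the running
 * time (or, in the Tenex case, whether a page fault occurred)
 * depends on how much of the guess matches the secret. */
#include <stddef.h>

int check_password_leaky(const char *guess, const char *secret) {
    size_t i;
    for (i = 0; guess[i] != '\0' && secret[i] != '\0'; i++) {
        if (guess[i] != secret[i])
            return 0;   /* early exit: leaks the position of the first mismatch */
    }
    return guess[i] == '\0' && secret[i] == '\0';
}
```

An attacker who can measure that difference (or, as in Tenex, place the guess across a page boundary and watch for the fault) can recover the secret one character at a time instead of searching the whole space.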
So, it isn’t necessarily a surprise to see a news report of a new such timing attack. However, the article doesn’t really give much detail, nor does it necessarily make complete sense. Putting branch prediction into chips is something that has been done for more than twenty years (at least), and it results in a significant speed increase when done correctly. It requires some care in cache design and corresponding compiler construction, but the overall benefit is significant. The majority of code run on these chips has nothing to do with cryptography, so it isn’t a case of “Security has been sacrificed for the benefit of performance,” as Seifert is quoted as saying. Rather, the problem is that the underlying manipulation of cache and branch prediction is invisible to the software and the programmer. Thus, there is no way to shut off those features or create adequate masking alternatives. Of course, too many people writing security-critical software don’t understand how their code maps to the underlying hardware, so they might not shut off the prediction features even if they had a means to do so.
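By way of illustration (a generic countermeasure sketch, not anything described in the article), one hedge implementers use when they cannot control the hardware is to make security-critical comparisons fixed-length and branch-free, so that neither the branch predictor nor an early exit sees anything that depends on secret data:

```c
/* Generic illustration: a fixed-length, branch-free comparison.
 * Every byte is examined regardless of where the first mismatch is,
 * and no branch direction depends on the secret, so early exits and
 * branch prediction reveal nothing about it. */
#include <stddef.h>
#include <stdint.h>

int equal_constant_time(const uint8_t *a, const uint8_t *b, size_t len) {
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= a[i] ^ b[i];   /* accumulate differences without branching on data */
    return diff == 0;          /* single data-independent check at the end */
}
```

This doesn’t address everything the article hints at (cache and predictor state shared between processes is outside the program’s control), but it is the kind of masking alternative that, as noted above, programmers currently have little support for.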
We’ll undoubtedly hear more details of the attack next year, when the researchers disclose what they have found. However, this story should simply reinforce two basic concepts of security: (1) strong encryption does not guarantee strong security; and (2) security architects need to understand—and have some control over—the implementation, from high-level code to low-level hardware. Security is not collecting a bunch of point solutions together in a box…it is an engineering task that requires a system-oriented approach.
[posted with ecto]
This session was very well attended (roughly 280 people), which is encouraging. In the following, I will mix all the panel responses together without differentiating the sources.
It was said that virtualization can make security more acceptable, in contrast to past security solutions and suggested practices that used to be hard to deploy or adopt. Virtual appliances can help security by introducing more boundaries between various data center functions, so that if one is compromised, the entire data center hasn’t been. One panel member argued that virtual appliances (VAs) can leverage the expertise of other people: presumably, if you get a professionally built VA, it may be installed better and more securely than an average system administrator could manage, and you could pass liability on to the vendor (interestingly, someone else told me outside this session that liability issues were what stopped them from publishing or selling virtual appliances).
I think you may also inherit problems due to the vendor philosophy of delivering functional systems over secure systems. As always, the source of the virtual appliances, the processes used to create them, and the requirements they were designed to meet should be considered in evaluating the trust that can be put in them. Getting virtual appliances doesn’t necessarily solve the hardening problem; except that now, instead of having one OS to harden, you have to repeat the process N times, where N is the number of virtual appliances you deploy.
As a member of the panel argued, virtualization doesn’t make things better or worse; it still all depends on the practices, processes, procedures, and policies used in managing the data center and on the various data security and recovery plans. Another pointed out that people shouldn’t assume that virtual appliances or virtualization provide security out of the box. Currently, about 4-5% of all malicious software checks whether it is running inside a virtual machine; this may become more common (a minimal sketch of one such check follows below).
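As an aside, the VM check such malware performs can be as simple as reading the CPUID “hypervisor present” bit. The sketch below is purely my illustration (nothing shown in the session); it assumes GCC or Clang on x86/x86-64, and of course a hypervisor can hide the bit, so real malware uses additional tricks as well.

```c
/* Illustration only: one simple way x86 code can guess whether it is
 * running inside a VM is the "hypervisor present" bit, bit 31 of ECX
 * returned by CPUID leaf 1.  Bare hardware leaves it clear; most
 * hypervisors set it (though they may be configured to hide it). */
#include <cpuid.h>
#include <stdio.h>

int main(void) {
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 1;                          /* CPUID leaf 1 unavailable */
    int in_vm = (ecx >> 31) & 1;           /* hypervisor present bit */
    printf("hypervisor bit: %s\n", in_vm ? "set (likely a VM)" : "clear");
    return 0;
}
```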
It was said that security is not the reason why people are deploying virtualization now. Virtualization is not as strong as using several different physical, specialized machines, because of the shared resources and shared communication channels. Virtualization would be much more useful on the client side than in the data center for improving security. Nothing else of interest was said.
Unfortunately, there was no time for me to ask what the panel thought of the idea of opening VMware to plugins that could perform various security functions (taint tracking, various attack protection schemes, IDS, auditing, etc.). After the session, one of the panel members mentioned that this was being looked at and that it raised many problems, but would not elaborate. In my opinion, it could trump the issue of Microsoft (supposedly) closing Windows to security vendors, but they thought of everything: Microsoft’s EULA forbids running certain versions of Windows on virtual machines. I wonder about the wisdom of this, as restricting the choice of security solutions can only hurt Microsoft and its users. Is this motivated by the fear of people cracking the DRM mechanism(s)? Surely the EULA alone can’t prevent that—crackers will do what they want. Since Windows could simply check whether it is running inside a VM, DRMed content could be protected by refusing to play under those conditions, without making all of Windows unavailable. The fact that the most expensive version of Windows is allowed to run inside a virtual machine (even though playing DRMed content is still forbidden) hints that this is mostly marketing greed, but on the whole I am puzzled by these policies. It certainly won’t help security research and forensic investigations (are forensic examiners exempt from the licensing/EULA restrictions? I wonder).
This talk by Marcus MacNeill (Surgient) discussed the Surgient Virtual Training Lab, used by CERT-US to train military personnel in security best practices, etc. I was disappointed because the talk didn’t discuss the challenges of teaching security and the lessons CERT has learned in doing so, but instead focused on how the product could be used in a teaching environment. Not surprisingly, the Surgient product resembles both VMware’s Lab Manager and ReAssure. However, from what I saw, the Surgient product doesn’t support sharing images or letting users stop and restart work (e.g., development work); if it does, it wasn’t mentioned. They mentioned that they have patented technologies involved, which is disturbing (raise your hand if you like software patents). ReAssure meets (or will soon, thanks to the VIX API) all of the requirements he discussed for teaching, except for student shadowing (seeing what a student is attempting to do). So, I would be very interested in seeing teaching labs use ReAssure as a support infrastructure. There are, of course, other teaching labs using virtualization that have been developed at other universities and colleges; the challenge is to design courses and exercises that are portable and reusable. We can all gain by sharing these, but for that we need a common infrastructure on which all these exercises would be valid.
The conference is surprisingly huge (6000 people). Virtualization is obviously important to IT now. I am looking forward to the security-related talks (I’ll post about them later). Here are a few notes from the sessions I attended: