Posts tagged best-practices


Thoughts on Virtualization, Security and Singularity

The “VMM Detection Myths and Realities” paper has been widely reported on and discussed.  It considers whether a piece of software could, in theory, detect that it is running inside a Virtual Machine Monitor (VMM);  an undetectable VMM would be “transparent”.  The authors make many arguments against the practicality and commercial viability of a VMM that could provide performance, stealth and reproducible, consistent timings.  The arguments are interesting, and reasonably convincing that absolutely guaranteeing undetectability is currently infeasible.
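
To make “VMM detection” concrete, here is a toy illustration (mine, not from the paper) of one of the cheapest signals a VMM can leak;  the paper’s argument is that plugging every such leak, especially the timing-related ones, is what gets expensive.

    # A toy illustration (mine, not from the paper) of one cheap signal:
    # most hypervisors set the CPUID "hypervisor present" bit, which
    # Linux exposes as a "hypervisor" flag in /proc/cpuinfo. Hiding this
    # bit is easy for a VMM; hiding timing side effects is the hard part.
    def looks_virtualized() -> bool:
        try:
            with open("/proc/cpuinfo") as f:
                return any("hypervisor" in line.split()
                           for line in f
                           if line.startswith("flags"))
        except OSError:
            return False  # no /proc (not Linux): this heuristic says nothing

    print("VMM suspected" if looks_virtualized() else "no obvious VMM")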

However, I note that the authors are arguing from essentially the same position as atheists arguing that there is no God.  They argue that the existence of a fully transparent VMM is unlikely, impractical, or would require an absurd amount of resources, both physical and in software development effort.  This is reasonable because the VMM has to fail only once at preventing detection, there are many ways in which it can fail, and preventing each kind of detection is complex.  However, this is not an airtight, formal proof that a transparent VMM is impossible and cannot exist;  a new breakthrough technology, or an “alien science-fiction”, god-like technology, might make it possible.

The authors then argue that with the spread of virtualization, it will become a moot point for malware to try to detect whether it is running inside a virtual machine.  One might be tempted to remark:  doesn’t this argument also cut the other way, making it a moot point for an operating system or a security tool to try to detect whether it is running inside a malicious VMM?

McAfee’s “secure virtualization”
The security seminar by George Heron answers some of the questions I was asking at last year’s VMworld conference, and elaborates on what I had in mind then.  The idea is to integrate security functions within the virtual machine monitor.  Malware nowadays prevents the installation of security tools and interferes with them as much as possible.  If malware is successfully confined inside a virtual machine, and the security tools operate from outside that scope, it could become impossible for an attacker to disable them.  I really like that idea.
 
The security tools could reasonably expect to run directly on the hardware or on an unvirtualized host OS.  Because of this, VMM detection isn’t a moot point for the defender.  However, the presentation did not discuss whether the McAfee security suite would attempt to detect if the VMM itself had been virtualized by an attacker.  Also, would it be possible to detect a “bad” VMM if the McAfee security tools themselves run inside a virtualized environment on top of the “good” VMM?  Perhaps it would need more hooks into the VMM to do this;  many, in fact, to attempt to catch all the possible ways in which a malicious VMM can fail to hide itself properly.  What is the cost of all these detection attempts, which must be executed regularly?  Isn’t it prohibitive, therefore making strong malicious VMM detection impractical?  In the end, I believe this may be yet another race depending on how much effort each side is willing to put into cloaking and detection.  Practical detection is almost as hard as practical hiding, and the detection cost has to be paid on every machine, all the time.


Which Singularity?
Microsoft’s Singularity project attempts to create an OS and execution environment that are simpler and secure by design.  What strikes me is how much it resembles the “white list” approach I’ve been talking about.  “Singularity” is about constructing secure systems with statements (“manifests”) in a provable manner.  It states what processes do and what may happen, instead of focusing on what must not happen.

Last year I thought that virtualization and security could provide a revolution;  now I think it’s more of the same “keep building defective systems and defend them vigorously”, just somewhat stronger.  Even if I find the name somewhat arrogant, “Singularity” suggests a future for security that is more attractive and fundamentally stable than yet another arms race.  In the meantime, though, “secure virtualization” should help, and expect lots of marketing about it.

8 Security Action Items to Beat “Learned Helplessness”

So, you watch for advisories, deploy countermeasures (e.g., change firewall and IDS rules) or shut down vulnerable services, patch applications, restore services.  You detect compromises, limit damages, assess the damage, repair, recover, and attempt to prevent them again.  Tomorrow you start again, and again, and again.  Is it worth it?  What difference does it make?  Who cares anymore? 

If you’re sick of it, you may just be getting fatigued.

If you don’t bother defending anymore because you think there’s no point to this endless treadmill, you may be suffering from learned helplessness.  Some people even consider that if you only passively wait for patches to be delivered and applied by software update mechanisms, you’re already in the “learned helplessness” category.  On the other hand, tracking every vulnerability in the software you use by reading BugTraq, Full Disclosure, and the like the moment each is announced, and running proof-of-concept code on your systems to test them, isn’t for everyone;  there are diminishing returns, and one has to balance risk vs. energy expenditure, especially when that energy could produce better returns elsewhere.  Of course I believe that using Cassandra is an OK middle ground for many, but I’m biased.

The picture may certainly look bleak, with talk of “perpetual zero-days”.  However, there are things you can do (of course, as in all lists not every item applies to everyone):

  • Don’t be a victim;  don’t surrender to helplessness.  If you have limited energy to spend on security (and who doesn’t have limits?), budget a little bit of time on a systematic and regular basis to stay informed and make progress on tasks you identify as important;  consider the ones listed below.
  • Don’t be a target.  Like it or hate it, running Windows on a desktop connected to the internet is like having big red circles painted on your forehead and back.  Alternatives I feel comfortable with for a laptop or desktop system are Ubuntu Linux and MacOS X (for now;  MacOS X may become a greater target in time).  If you’re stuck with Windows, consider upgrading to Vista if you haven’t already;  the security effort poured into Vista should pay off in the long run.  For servers, there is much more choice, and Windows isn’t such a dominant target.
  • Reduce your exposure (attack surface) by:
    • Browsing the web behind a NAT appliance when at home, in a small business, or whenever there’s no other firewall device to protect you.  Don’t rely only on a software firewall;  it can become disabled or get misconfigured by malware or bad software, or be too permissive by default (if you can’t or don’t know how to configure it).
    • Using the NoScript extension for Firefox (if you’re not using Firefox, consider switching, if only for that reason).  JavaScript is a vector of choice for desktop computer attacks (which is why I find the HoneyClient project so interesting, but I digress).  JavaScript can be used to violate your privacy* or to take control of your browser away from you and hand it to website authors, to advertisers on those sites, or to the people who compromised those sites;  you can bet it’s not always done for your benefit (even though JavaScript enables better things as well).  NoScript gives you a little control over browser plugins and over which sources are allowed to run scripts in your browser, and it attempts to prevent XSS exploits.
    • Turning off unneeded features and services (OK, this is old advice, but it’s still good).
  • Use the CIS benchmarks, and if evaluation tools are available for your platform, run them.  These tools give you a score, and as silly as some people may think such a score is (a ship with 10 holes instead of 100 may still sink!), it gives you positive feedback as you improve the security stance of your computers.  It’s encouraging, and may lift the feeling that you are sinking into helplessness.  If you are a Purdue employee, you have access to CIS Scoring Tools with specialized features (see this news release).  Ask if your organization also has access, and if not, consider requesting it (note that this is not necessary to use the benchmarks).

  • Use the NIST security checklists (hardening guides and templates).  NIST’s Information Technology Laboratory site has many other interesting security papers to read as well.

  • Consider using Thunderbird and the Enigmail plugin for GPG, which make handling signed or encrypted email almost painless.  Do turn on SSL- or TLS-only options to connect to your server (both SMTP and either IMAP or POP) if it supports them.  If not, request these features from your provider.  Remember, learned helplessness is not making any requests or attempts because you believe nothing is ever going to change.  If you can log in to the server, you also have the option of SSH tunneling, but it’s more hassle (a minimal sketch follows this list).

  • Watch CERIAS security seminars on subjects that interest you.

  • If you’re a software developer or someone who needs to test software, consider using the ReAssure system as a test facility with configurable network environments and collections of VMware images (disclosure: ReAssure is my baby, with lots of help from other CERIAS people like Ed Cates).
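
Here is the minimal SSH-tunneling sketch promised in the email item above, assuming you have shell access to the mail server;  the account and host names are placeholders.  Point the mail client at localhost port 1143, and the IMAP session rides inside the encrypted SSH connection.

    # A minimal sketch of the SSH-tunneling fallback mentioned above.
    # Assumes you can ssh into the mail server; "user@mail.example.com"
    # is a placeholder account and host.
    import subprocess

    subprocess.run([
        "ssh",
        "-N",                        # no remote command; just keep the tunnel up
        "-L", "1143:localhost:143",  # forward local port 1143 to IMAP (143) on the server
        "user@mail.example.com",     # placeholder
    ])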

Good luck!  Feel free to add more ideas as comments.

*A small rant about privacy, which tends to be another area of learned helplessness: Why do they need to know?  I tend to consider all information that people gather about me, and that they don’t need to know for tasks I want them to do for me, a (perhaps very minor) violation of my privacy, even if it has no measurable effect on my life that I know about (that’s part of the problem:  how do I know what effect it has on me?).  I like the “on a need-to-know basis” principle, because you don’t know which selected (and possibly out-of-context) or outdated information is going to be used against you later.  One of the lessons of life is that knowledge about you isn’t always used in legal ways, and even when it is, not everything that’s legal is “Good” or ethical, and not all agents of good or legal causes are ethical and impartial or have integrity.  I find the “you’ve got nothing to hide, do you?” argument extremely stupid and irritating, and why it is wrong is not something that can be explained in a sentence or two to someone who says it to you.  I’m not against volunteering information for a good cause, though, and I have done so in the past, but it’s rude to just take it from me without asking and without any explanation, or to subvert my software and computer to do so.

Configuration: the forgotten side of security

I was interviewed for an article, Configuration: the forgotten side of security, about proactive security. I am a big believer in proactive security. However, I do not discount the need for reactive security. In the email interview I stated the following:

I define proactive security as a method of protecting information and resources through proper design and implementation to reduce the need for reactive security measures. In contrast, reactive security is a method of remediation and correction used when your proactive security measures fail. The two are interdependent.

I was specifically asked for best practices on setting up UNIX/Linux systems. My response was to provide some generic goals for configuring systems, which surprisingly made it into the article. I avoided listing specific tasks or steps because those change over time and vary based on the systems used. I have written a security configuration guide or two in my time, so I know how quickly they become out of date. Here are the goals again:

The five basic goals of system configuration (a small sketch relating to goal 4 follows the list):

  1. Build for a specific purpose and only include the bare minimum needed to accomplish the task.
  2. Protect the availability and integrity of data at rest.
  3. Protect the confidentiality and integrity of data in motion.
  4. Disable all unnecessary resources.
  5. Limit and record access to necessary resources.
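
Before you can disable unnecessary resources (goal 4), you have to find them.  As a minimal sketch of one inventory step (my illustration, not from the article), assuming a Linux system, the snippet below lists listening TCP ports by parsing /proc/net/tcp;  anything you cannot tie to the system’s specific purpose is a candidate for being disabled.

    # A minimal sketch for goal 4: enumerate listening TCP ports on
    # Linux by parsing /proc/net/tcp. (IPv6 listeners live in
    # /proc/net/tcp6; connection state 0A means LISTEN.)
    def listening_ports(path="/proc/net/tcp"):
        ports = set()
        with open(path) as f:
            next(f)  # skip the header row
            for line in f:
                fields = line.split()
                local_address, state = fields[1], fields[3]
                if state == "0A":  # TCP LISTEN
                    ports.add(int(local_address.split(":")[1], 16))
        return sorted(ports)

    print("Listening TCP ports:", listening_ports())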

All in all, the most exciting aspect is that I was quoted in an article alongside Prof. Saltzer.  That’s good company to have.

Community Comments & Feedback to Security Absurdity Article

[tags]security failures, infosecurity statistics, cybercrime, best practices[/tags]
Back in May, I commented here on a blog posting about the failings of current information security practices.  Well, after several months, the author, Noam Eppel, has written a comprehensive and thoughtful response based on all the feedback and comments he received to that first article.  That response is a bit long, but worth reading.

Basically, Noam’s essays capture some of what I (and others) have been saying for a while—many people are in denial about how bad things are, in part because they may not really be seeing the “big picture.”  I talk with hundreds of people in government, academia, and industry around the world every few months, and the picture that emerges is as bad as—or worse than—what Noam has outlined.

Underneath it all, people seem to believe that putting up barriers and patches on fundamentally bad designs will lead to secure systems.  It has been shown again and again (and not only in IT) that this is mistaken.  It requires rigorous design and testing, careful constraints on features and operation, and planned segregation and limitation of services to get close to secure operation.  You can’t depend on best practices and people doing the right thing all the time.  You can’t stay ahead of the bad guys by deploying patches to yesterday’s problems.  Unfortunately, managers don’t want to make the hard decisions and pay the costs necessary to really get secure operations, and it is in the interests of almost all the vendors to encourage them down the path of third-party patching.

I may expand on some of those issues in later blog postings, depending on how worked up I get, and how the arthritis/RSI in my hands is doing (which is why I don’t write much for journals & magazines, either).  In the meantime, go take a look at Noam’s response piece.  And if you’re in the US, have a happy Thanksgiving.

[posted with ecto]

Passwords and Myth

When I posted earlier about passwords and best practices, I had no idea it would elicit such a response!  So, now that my class’s final exams and papers are graded, I will return to the topic and attempt to address some of the points raised in comments—or, at least those comments that were related to the original blog entry.
[tags] best practices, passwords, awareness, general security[/tags]

Best Practices
It was certainly not my intent to disparage all best practices.  I was merely observing that sometimes best practices are viewed as a panacea.  It is important for people to understand the origins of the best practices they espouse, and whether they are indeed “best”!  Sometimes, excellent practices are adopted outside their realm of proper application, or are used too long without proper (re)evaluation of the underlying conditions.  “Best practices” are designed for the average case and are not meant to be blindly applied in every case;  reason should be applied to each situation, but too often isn’t.  And all too often, folklore and superstition are accepted as “best practice” because they “seem” correct, or coincidentally produce desired results.

Consider an example of the first of these (understanding the realm of application): showing an ID to get inside a closed facility, proving that you are a current employee of the company or agency.  That is excellent security practice…until you move it to the lobby of every office building!  At that point, too many guards aren’t really checking the cards to see if people are who they claim to be.  Instead of watching for suspicious behavior, many guards now simply look for a laminated card with a picture on it, and something that looks like an official seal.  Security in many places has degraded by adopting a “best practice” without understanding where it is really best.

The second case (blind application without reasoning) is illustrated by many of the things that TSA does in airline passenger screening.  One example, told to me by a Federal law enforcement agent, is when he showed his badge and papers while passing through security.  They didn’t make him take out his weapon when going through the metal detector…but then they insisted that he run his shoes through the X-ray machine!  They had rules that allowed them to let a law enforcement agent with a semiautomatic handgun through the checkpoint, but they couldn’t appropriately reason about why they had a rule about screening shoes and apply it to this case!  (Of course, several aspects of TSA screening are poorly considered, but that may be a topic for a later post.)

The third case—folklore and superstition accepted as best practice—is rampant in information security, and I intend to say more about this in later postings.

My post about password security was based on the fact that the “change passwords once a month” rule is based on very old practice, and doesn’t really help now in many real-world environments.  In fact, it may result in weaker security in many cases, as users try to find ways around the rules.  At the least, the average user will have the impression reinforced that “those security guys are idiots and their goal seems to be to make my life more difficult.”  That doesn’t help build a cooperative working environment where the user population is part of the security infrastructure!

Risk Assessment
Donn Parker was one of the first people to argue persuasively that traditional risk assessment would not work in modern IT, and that sound design and best practice would have to do.  I greatly respect Donn’s long experience and opinions, but I don’t completely agree.  In many cases it is possible, using recent experience and expert knowledge, to appropriately estimate risk and loss to quartiles or deciles.  Although imperfect, it can help in making choices and understanding priorities.  When there is insufficient experience and knowledge, I agree with Donn that relying on sound practice is the next best thing; of course, sound design should be used at all times!

Some readers commented that they didn’t have the money to do a risk evaluation. Resolving a question such as password change frequency does not require a full-blown audit and risk analysis.  But, as with my previous comment, if you don’t have the resources, experience or knowledge, then pick sound practice—but put in some effort to understand what is sound!

Password Vaults
A number of responses (including several private responses) were directed to the growing number of passwords, PINs, serial numbers and employee IDs we are expected to remember.  Good security practice suggests that authenticators used in different realms of privilege be unique and uncorrelated.  Good privacy practice suggests that we develop independent identifiers for different uses to prevent correlation.  The two combined result in too many things to remember for those of us whose brains are full (to indirectly pay homage to an old Larson cartoon), and especially for the average person who is overly-taxed when remembering anything beyond who was voted off of American Idol this week.  Now, add frequent requirements to change some of those values, and the situation becomes well-nigh impossible.

Several readers mentioned password vault programs that they use, either on PDAs or the WWW.  I was asked my opinion of some of these.

I use several password vaults myself.  They share four characteristics that I believe are important:

  1. The programs use published, strong ciphers (e.g., AES) to encrypt the contents, so I don’t need to worry about some random person getting the encrypted database and then decrypting all my keys.  (A minimal sketch of this idea follows the list.)
  2. The programs are cross-platform so that I can use the same program on my PDA, my laptop, and my home system.  This keeps me from creating keys and passwords then forgetting them because I don’t have the vault program at hand.
  3. The different versions of the program sync with each other, and allow the database to be backed up.  If I lose my PDA, I’m not completely locked out of everything—I can do a restore, decrypt, and carry on as before.
  4. I don’t store the database and the encryption routines on someone else’s machine.  That way, I don’t have to worry about the owner of a remote site altering the encryption routines, or making a surreptitious copy of my keys.  It is still possible for someone to intercept my interaction with the program on my local machine, but I have other mechanisms in place to monitor and verify those.
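
To make characteristic 1 concrete, here is a minimal sketch of the idea (not the code of any actual vault product), assuming the third-party Python “cryptography” package is available.  Deriving the key from a passphrase means the stolen database alone is useless, which is also part of why self-escrow (below) matters.

    # A minimal sketch of characteristic 1: encrypt a small password
    # database with a published cipher. Uses the third-party
    # "cryptography" package; Fernet combines AES-128-CBC with an HMAC
    # integrity check, and the key is derived from a passphrase.
    import base64
    import json
    import os

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def key_from_passphrase(passphrase: bytes, salt: bytes) -> bytes:
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                         salt=salt, iterations=480_000)
        return base64.urlsafe_b64encode(kdf.derive(passphrase))

    salt = os.urandom(16)  # stored next to the ciphertext; need not be secret
    vault = Fernet(key_from_passphrase(b"a long, memorable passphrase", salt))

    ciphertext = vault.encrypt(json.dumps({"example.com": "s3cret"}).encode())
    # Someone who steals the encrypted database faces the cipher, not my keys.
    recovered = json.loads(vault.decrypt(ciphertext))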

Needless to say, I don’t use a web-based password vault service, nor would I necessarily recommend it to anyone who has sensitive passwords.

One other thing—I escrow some of my passwords.  No, I’m not talking about the ill-fated government key escrow scheme that gave the idea a bad name.  I am referring to self-escrow.  Some of my important passwords at work, which would need to be recovered by the staff if I were to be abducted (again, *grin*) by a UFO crew, have been encrypted and escrowed in a safe place that can be accessed in an emergency.  As more things get locked up with strong encryption, it is all the more critical that we each consider self-escrow.

So, What’s the Frequency, Kenneth?
How often should passwords be changed?  Many of you asked that, and many of you volunteered your own experience, ranging from monthly to “hardly ever.”  These times were backed up with anecdotes.  Of course, this simply serves to reinforce my comment that the time period should be based on a risk assessment of your particular situation, including access to the system, strength of mechanism, usage, sensitivity of the protected information, security of the underlying system, and sophistication of the users…to name a few factors.

Basically, I would suggest you start with an assumption that passwords should be changed every quarter.  If the passwords are used over a lightly protected communications link, then change them more often.  If someone could break a password and use the account without being noticed, then further shorten the change interval.  If users get guidance on strong password selection, and are motivated to help ensure good security, then maybe you can extend the time period.  In many cases, once you reason it through, you may realize that any reuse of passwords is risky;  instead of dismissing that conclusion and imposing monthly password changes, use it to address the underlying problems.
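
To make that reasoning concrete, here is a toy encoding of it in code;  the starting point and the adjustments are purely illustrative, not a standard or a recommendation.

    # A toy encoding of the heuristic above; numbers are illustrative only.
    def suggested_change_interval_days(weakly_protected_link: bool,
                                       misuse_would_go_unnoticed: bool,
                                       strong_passwords_and_motivated_users: bool) -> int:
        days = 90                          # start at one quarter
        if weakly_protected_link:          # password crosses an exposed link
            days //= 2
        if misuse_would_go_unnoticed:      # a cracked account could be used silently
            days //= 2
        if strong_passwords_and_motivated_users:
            days *= 2                      # good selection habits buy some slack
        return days

    # Example: exposed link, misuse would be noticed, average users -> 45 days
    print(suggested_change_interval_days(True, False, False))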

Several of you mentioned the problem of people sharing passwords and only finding out about it after a mandatory password change.  If that’s the case, you have deeper problems than stale passwords!

I continue to advocate use of a one-time password token for highly sensitive or at-risk resources.  Otherwise, use your judgement and professional evaluation of the risks and benefits of change frequencies.

[posted with ecto]

 

Password Security: What Users Know and What They Actually Do

As a web developer, Usability News from the Software Usability Research Lab at Wichita State is one of my favorite sites.  Design for web apps can seem pretty arbitrary, but UN presents hard numbers to identify best practices, which comes in handy when you’re trying to explain to your boss why the search box shouldn’t be stuck at the bottom of the page (not that this has ever happened at CERIAS, mind you).

The Feb 2006 issue has lots of good bits, but particularly interesting from an infosec perspective are the results of a study on the gulf between what online users know about good password practice and what they actually do.

“It would seem to be a logical assumption that the practices and behaviors users engage in would be related to what they think they should do in order to create secure passwords. This does not seem to be the case as participants in the current study were able to identify many of the recommended practices, despite the fact that they did not use the practices themselves.”

Some interesting points from the study:

  • More than half of users do not vary the complexity of their passwords depending on the nature of the data they protect
  • More than half of users never change passwords if the system does not force them to do so.  Nearly 3/4 of the users stated that they should change their passwords every 3 to 6 months, though
  • Half of users believe they should use “special” characters in their passwords (like “&” and “$”), but only 5% do so

More: Password Security: What Users Know and What They Actually Do

Using Virtual Machines to Defend Against Security and Trust Failures

According to the National Vulnerability Database (http://nvd.nist.gov), the number of vulnerabilities found every year increases:  1253 in 2003, 2343 in 2004, and 4734 in 2005.  We take security risks not only by choosing a specific operating system, but also by installing applications and services.  We take risks by browsing the web, because web sites insist on running code on our systems:  JavaScript, Flash (ActionScript), Java, ActiveX, VBscript, QuickTime, and all the plug-ins and browser extensions imaginable.  Applications we pay for want to probe the network to make sure there isn’t another copy running on another computer, creating a vector by which malicious replies could attack us.

  Games refuse to install in unprivileged accounts so that they can run, with full privileges, their own integrity checkers with spyware qualities (e.g., WoW, but others do the same), which in turn can even deny you the ability to terminate (kill) the game if it hangs (e.g., Lineage II).  This is done supposedly to prevent cheating, but it gives the game companies full access to and control of your machine, which is objectionable.  On top of that, these games are networked applications, meaning that any vulnerability in them could result in a complete (i.e., root or LocalSystem) compromise.

  It is common knowledge that if a worm like MyTob compromises your system, you need to wipe the drive and reinstall everything.  This is in part because these worms are so hard to remove, as they attack security software and prevent firewalls and virus scanners from functioning properly.  However, there is also a trust issue—a rootkit could have been installed, so you can’t trust that computer anymore.  So, if you do any sensitive work or are just afraid of losing your work in progress, you need a dedicated gaming or internet PC.  Or do you?

  VMWare offers on their web site a free download of VMWare Player, as well as a “browser appliance” based on Firefox and Ubuntu Linux.  The advantage is that you don’t need to install and *trust* Firefox on your host system;  in fact, you don’t need to trust Internet Explorer or any other browser anymore.  If a worm compromises Firefox, or malicious JavaScript changes settings and takes control of Firefox, you may simply trash the browser appliance and download a new copy.  I can’t overemphasize how much less work this is compared to reinstalling Windows XP for the nth time, possibly having to call the license validation phone line, and frantically trying to find a recent backup that works and isn’t infected too.  As long as VMWare Player can contain the infection, your installation is preserved.  Also hosted on the VMWare site are various community-created images allowing you to test various software at essentially no risk, and with no configuration work!

  After experiencing this, I am left to wonder:  why aren’t all applications like a VMWare “appliance” image, and the operating system like VMWare Player?  They should be.  Efforts to engineer software security have obviously failed to contain the growth of vulnerabilities and security problems, and applying the same solutions to the same problems will keep producing failures.  I’m not giving up on secure programming and secure software engineering, as I can see promising languages, development methods and technologies appearing, but in the meantime I can’t trust my personal computers, and I need to compartmentalize by buying separate machines.  This is expensive and inconvenient.  Virtual machines provide an alternative.  In the past, storing an entire operating system image for each application was unthinkable;  nowadays, storage is so cheap and abundant that the size of “appliance” images is no longer an issue.  It is time to virtualize the entire machine;  all I now require from the base operating system is to manage a file system and be able to launch VMWare Player, with at least a browser appliance to bootstrap…  Well, not quite.  Isolated appliances are not so useful;  I want to be able to transfer documents from appliance to appliance.  This is easily accomplished with a USB memory stick, or perhaps a virtual drive that I can mount when needed.  Such shared storage could become a new propagation vector for viruses, but it would be very limited in scope.

  Virtual machine appliances, anyone?

Note (March 13, 2006):  Virtual machines can’t defend against cross-site scripting vulnerabilities (XSS), so they are not a solution for all security problems.