Posts tagged awareness


8 Security Action Items to Beat “Learned Helplessness”

So, you watch for advisories, deploy countermeasures (e.g., change firewall and IDS rules) or shut down vulnerable services, patch applications, and restore services.  You detect compromises, limit and assess the damage, repair, recover, and attempt to prevent it from happening again.  Tomorrow you start again, and again, and again.  Is it worth it?  What difference does it make?  Who cares anymore?

If you’re sick of it, you may just be getting fatigued.

If you don’t bother defending anymore because you think there’s no point to this endless treadmill, you may be suffering from learned helplessness.  Some people even consider that if you only passively wait for patches to be delivered and applied by software update mechanisms, you’re already in the learned-helplessness category.  On the other hand, tracking every vulnerability in the software you use by reading BugTraq, Full Disclosure, etc., the moment it is announced, and running proof-of-concept code on your systems to test it, isn’t for everyone;  there are diminishing returns, and one has to balance risk against energy expenditure, especially when that energy could produce better returns elsewhere.  Of course I believe that using Cassandra is an OK middle ground for many, but I’m biased.

The picture may certainly look bleak, with talk of “perpetual zero-days”.  However, there are things you can do (of course, as with all such lists, not every item applies to everyone):

  • Don’t be a victim;  don’t surrender to helplessness.  If you have limited energy to spend on security (and who doesn’t have limits?), budget a little bit of time on a systematic and regular basis to stay informed and make progress on tasks you identify as important;  consider the ones listed below.
  • Don’t be a target.  Like Windows or hate it, running it on a desktop and connecting to the internet is like having big red circles on your forehead and back.  Alternatives I feel comfortable with for a laptop or desktop system are Ubuntu Linux and Mac OS X (for now;  Mac OS X may become a greater target in time).  If you’re stuck with Windows, consider upgrading to Vista if you haven’t already;  the security effort poured into Vista should pay off in the long run.  For servers, there is much more choice, and Windows isn’t such a dominant target.
  • Reduce your exposure (attack surface) by:
    • Browsing the web behind a NAT appliance when at home, in a small business, or whenever there’s no other firewall device to protect you.  Don’t rely only on a software firewall;  it can become disabled or get misconfigured by malware or bad software, or be too permissive by default (if you can’t or don’t know how to configure it).
    • Using the NoScript extension for Firefox (if you’re not using Firefox, consider switching, if only for that reason).  JavaScript is a vector of choice for attacks on desktop computers (which is why I find the HoneyClient project so interesting, but I digress).  JavaScript can be used to violate your privacy* or to take control of your browser away from you and hand it to website authors, to advertisers on those sites, or to the people who compromised those sites, and you can bet it’s not always done for your benefit (even though JavaScript enables better things as well).  NoScript gives you some control over browser plugins and over which sources are allowed to run scripts in your browser, and it attempts to prevent XSS exploits.
    • Turning off unneeded features and services (OK, this is old advice, but it’s still good).
  • Use the CIS benchmarks, and if evaluation tools are available for your platform, run them.  These tools give you a score, and as silly as some people may think such a score is (reducing the number of holes in a ship from 100 to 10 may still sink the ship!), it gives you positive feedback as you improve the security stance of your computers.  It’s encouraging, and may lift the feeling that you are sinking into helplessness.  If you are a Purdue employee, you have access to CIS Scoring Tools with specialized features (see this news release).  Check whether your organization also has access, and if not, consider asking for it (note that this is not necessary to use the benchmarks).

  • Use the NIST security checklists (hardening guides and templates).  NIST’s Information Technology Laboratory site has many other interesting security papers to read as well.

  • Consider using Thunderbird and the Enigmail plugin for GPG, which make handling signed or encrypted email almost painless.  Do turn on SSL- or TLS-only options for connecting to your server (both SMTP and either IMAP or POP) if it supports them;  if not, request these features from your provider.  Remember, learned helplessness is never making a request or an attempt because you believe it won’t change anything anyway.  If you can log in to the server, you also have the option of SSH tunneling, but it’s more hassle.  (A small sketch of TLS-only connections appears after this list.)

  • Watch CERIAS security seminars on subjects that interest you.

  • If you’re a software developer or someone who needs to test software, consider using the ReAssure system as a test facility with configurable network environments and collections of VMware images (disclosure: ReAssure is my baby, with lots of help from other CERIAS people like Ed Cates).
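
To illustrate the email item above: the fragment below is a minimal sketch, not a drop-in configuration, of what “TLS-only” means in practice for a script talking to mail servers.  The host names and credentials are placeholders, and the example assumes Python’s standard imaplib, smtplib, and ssl modules.

```python
# Hedged sketch: talk to mail servers over TLS only.
# Host names and credentials are placeholders, not real accounts.
import imaplib
import smtplib
import ssl

IMAP_HOST = "imap.example.org"   # placeholder
SMTP_HOST = "smtp.example.org"   # placeholder
USER = "alice"                   # placeholder
PASSWORD = "change-me"           # placeholder; read from a vault in practice

context = ssl.create_default_context()  # verifies the server certificate

# IMAP over SSL/TLS on port 993; no plaintext fallback at all.
with imaplib.IMAP4_SSL(IMAP_HOST, 993, ssl_context=context) as imap:
    imap.login(USER, PASSWORD)
    imap.select("INBOX")

# SMTP submission on port 587 with mandatory STARTTLS;
# starttls() raises an exception if the server does not offer TLS.
with smtplib.SMTP(SMTP_HOST, 587) as smtp:
    smtp.starttls(context=context)
    smtp.login(USER, PASSWORD)
```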

Good luck!  Feel free to add more ideas as comments.

*A small rant about privacy, which tends to be another area of learned helplessness: why do they need to know?  I tend to consider any information that people gather about me that they don’t need for tasks I want them to do for me a (perhaps very minor) violation of my privacy, even if it has no measurable effect on my life that I know about (that’s part of the problem: how do I know what effect it has on me?).  I like the “on a need-to-know basis” principle, because you don’t know which selected (and possibly out-of-context) or outdated information is going to be used against you later.  It’s one of the lessons of life that knowledge about you isn’t always used in legal ways, and even when it is, not everything legal is good or ethical, and not all agents of good or legal causes are ethical, impartial, or possessed of integrity.  I find the “you’ve got nothing to hide, do you?” argument extremely stupid and irritating, and why it is wrong isn’t something that can be explained in a sentence or two to someone who says it to you.  I’m not against volunteering information for a good cause, though, and I have done so in the past, but it’s rude to just take it from me without asking and without any explanation, or to subvert my software and computer to do so.

VMworld 2006: How virtualization changes the security equation

This session was very well attended (roughly 280 people), which is encouraging.  In what follows, I will mix all the panel responses together without differentiating the sources.

It was said that virtualization can make security more palatable, in contrast to past security solutions and suggested practices that were hard to deploy or adopt.  Virtual appliances can help security by introducing more boundaries between various data center functions, so that if one is compromised the entire data center hasn’t been.  One panel member argued that virtual appliances (VAs) can leverage the expertise of other people:  presumably, a professionally built VA may be installed better and more securely than the average system administrator could manage, and you could pass liability on to its maker (interestingly, someone told me outside this session that liability issues were what stopped them from publishing or selling virtual appliances).

I think you may also inherit problems due to the vendor philosophy of delivering functional systems over secure systems.  As always, the source of the virtual appliances, the processes used to create them, and the requirements they were designed to meet should be considered when evaluating how much trust to put in them.  Getting virtual appliances doesn’t necessarily solve the hardening problem;  except that now, instead of having one OS to harden, you have to repeat the process N times, where N is the number of virtual appliances you deploy.

As a member of the panel argued, virtualization doesn’t make things better or worse, it still all depends on the practices, processes, procedures, and policies used in managing the data center and the various data security and recovery plans.  Another pointed out that people shouldn’t assume that virtual appliances or virtualization provide security out-of-the-box.  Out of all malicious software, currently 4-5% check if they are running inside a virtual machine;  this may become more common.
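
As a small illustration of that last point, checking for a hypervisor is often trivial.  The sketch below is a rough, Linux-only example;  the DMI paths and vendor strings are common conventions rather than an authoritative or complete test, and real malware uses many more tricks.

```python
# Rough sketch: guess whether this Linux system is running inside a VM.
# The DMI paths and vendor strings are common conventions, not a complete test.
from pathlib import Path

VM_HINTS = ("vmware", "virtualbox", "qemu", "kvm", "xen", "hyper-v", "virtual machine")

def looks_like_a_vm() -> bool:
    # Many hypervisors advertise themselves in the DMI product/vendor strings.
    for dmi_file in ("/sys/class/dmi/id/product_name", "/sys/class/dmi/id/sys_vendor"):
        try:
            text = Path(dmi_file).read_text().strip().lower()
        except OSError:
            continue
        if any(hint in text for hint in VM_HINTS):
            return True
    # Guest kernels on x86 usually expose the CPU "hypervisor" flag.
    try:
        return "hypervisor" in Path("/proc/cpuinfo").read_text()
    except OSError:
        return False

if __name__ == "__main__":
    print("Probably virtualized" if looks_like_a_vm() else "No obvious hypervisor signs")
```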

It was said that security is not the reason why people are deploying virtualization now.  Virtualization is not as strong as using several different physical, specialized machines, because of the shared resources and shared communication channels.  Virtualization would be much more useful for improving security on the client side than in the data center.  Nothing else of interest was said.

Unfortunately, there was no time for me to ask what the panel thought of the idea of opening VMware to plugins that could perform various security functions (taint tracking and various attack-protection schemes, IDS, auditing, etc.).  After the session one of the panel members mentioned that this was being looked at, and that it raised many problems, but would not elaborate.  In my opinion, it could trump the issue of Microsoft (supposedly) closing Windows to security vendors, but they thought of everything!  Microsoft’s EULA forbids running certain versions of Windows in virtual machines.  I wonder about the wisdom of this, as restricting the choice of security solutions can only hurt Microsoft and its users.  Is this motivated by the fear of people cracking the DRM mechanism(s)?  Surely the EULA alone can’t prevent that—crackers will do what they want.  Since Windows could simply check whether it is running inside a VM, DRMed content could be protected by refusing to play under those conditions, without making all of Windows unavailable.  The fact that the most expensive version of Windows is allowed to run inside a virtual machine (even though playing DRMed content is still forbidden) hints that this is mostly marketing greed, but on the whole I am puzzled by these policies.  They certainly won’t help security research and forensic investigations (are forensic examiners exempt from the licensing/EULA restrictions?  I wonder).

Passwords and Myth

When I posted earlier about passwords and best practices, I had no idea it would elicit such a response!  So, now that my class’s final exams and papers are graded, I will return to the topic and attempt to address some of the points raised in comments—or at least those comments related to the original blog entry.

Best Practices
It was certainly not my intent to disparage all best practices.  I was merely observing that sometimes best practices are viewed as a panacea.  It is important for people to understand the origins of the best practices they espouse, and whether they are indeed “best”!  Sometimes excellent practices are adopted outside their proper realm of application, or are used too long without proper (re)evaluation of the underlying conditions.  “Best practices” are designed for the average case but are not meant to be blindly applied in every case;  reason should be applied to each situation, but too often isn’t.  And all too often, folklore and superstition are accepted as “best practice” because they “seem” correct, or coincidentally produce desired results.

Consider an example of the first of these (understanding the realm of application): showing an ID to get inside a closed facility, proving that you are a current employee of the company or agency.  That is excellent security practice…until you move it to the lobby of every office building!  At that point, too many guards aren’t really checking the cards to see if people are who they claim to be.  Instead of watching for suspicious behavior, many guards now simply look for a laminated card with a picture on it and something that looks like an official seal.  Security in many places has degraded by adopting “best practice” without understanding where it is really best.

The second case (blind application without reasoning) is illustrated by many of the things that the TSA does in airline passenger screening.  One example, told to me by a Federal law enforcement agent, is when he showed his badge and papers while passing through security.  They didn’t make him take out his weapon when going through the metal detector…but then they insisted that he run his shoes through the X-ray machine!  They had rules that allowed them to let a law enforcement agent with a semiautomatic handgun through the checkpoint, but they couldn’t appropriately reason about why they had a rule about screening shoes and apply it to this case!  (Of course, several aspects of TSA screening are poorly considered, but that may be a topic for a later post.)

The third case—folklore and superstition accepted as best practice—is rampant in information security, and I intend to say more about this in later postings.

My post about password security was based on the fact that the “change passwords once a month” rule is based on very old practice, and doesn’t really help now in many real-world environments.  In fact, it may result in weaker security in many cases, as users try to find ways around the rules.  At the least, the average user will have the impression reinforced that “those security guys are idiots and their goal seems to be to make my life more difficult.”  That doesn’t help build a cooperative working environment where the user population is part of the security infrastructure!

Risk Assessment
Donn Parker was one of the first people to argue persuasively that traditional risk assessment would not work in modern IT, and that sound design and best practice would have to do.  I greatly respect Donn’s long experience and opinions, but I don’t completely agree.  In many cases it is possible, using recent experience and expert knowledge, to appropriately estimate risk and loss to quartiles or deciles.  Although imperfect, it can help in making choices and understanding priorities.  When there is insufficient experience and knowledge, I agree with Donn that relying on sound practice is the next best thing; of course, sound design should be used at all times!

Some readers commented that they didn’t have the money to do a risk evaluation. Resolving a question such as password change frequency does not require a full-blown audit and risk analysis.  But, as with my previous comment, if you don’t have the resources, experience or knowledge, then pick sound practice—but put in some effort to understand what is sound!

Password Vaults
A number of responses (including several private responses) were directed to the growing number of passwords, PINs, serial numbers and employee IDs we are expected to remember.  Good security practice suggests that authenticators used in different realms of privilege be unique and uncorrelated.  Good privacy practice suggests that we develop independent identifiers for different uses to prevent correlation.  The two combined result in too many things to remember for those of us whose brains are full (to indirectly pay homage to an old Larson cartoon), and especially for the average person who is overly-taxed when remembering anything beyond who was voted off of American Idol this week.  Now, add frequent requirements to change some of those values, and the situation becomes well-nigh impossible.

Several readers mentioned password vault programs that they use, either on PDAs or the WWW.  I was asked my opinion of some of these.

I use several password vaults myself.  They have four characteristics that I believe are important:

  1. The programs use published, strong ciphers (e.g., AES) to encrypt the contents.  I don’t need to worry about some random person getting the encrypted database and then decrypting all my keys.  (A minimal sketch of this idea follows the list.)
  2. The programs are cross-platform so that I can use the same program on my PDA, my laptop, and my home system.  This keeps me from creating keys and passwords then forgetting them because I don’t have the vault program at hand.
  3. The different versions of the program sync with each other, and allow the database to be backed up.  If I lose my PDA, I’m not completely locked out of everything—I can do a restore, decrypt, and carry on as before.
  4. I don’t store the database and the encryption routines on someone else’s machine.  That way, I don’t have to worry about the owner of a remote site altering the encryption routines, or making a surreptitious copy of my keys.  It is still possible for someone to intercept my interaction with the program on my local machine, but I have other mechanisms in place to monitor and verify those.
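
As a minimal sketch of characteristic 1 (a published, strong cipher protecting the whole database), here is roughly what the core of such a vault could look like.  It assumes the third-party cryptography package;  the file name, iteration count, and JSON layout are arbitrary choices for illustration, not a description of any particular product.

```python
# Minimal sketch of an encrypted password vault (illustrative only).
# Assumes the third-party "cryptography" package; Fernet provides
# AES-128-CBC with an HMAC, i.e., a published, authenticated cipher.
import base64
import json
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC


def derive_key(passphrase: bytes, salt: bytes) -> bytes:
    # Stretch the master passphrase into a 32-byte key.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=480_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase))


def save_vault(entries: dict, passphrase: str, path: str = "vault.bin") -> None:
    salt = os.urandom(16)
    token = Fernet(derive_key(passphrase.encode(), salt)).encrypt(json.dumps(entries).encode())
    with open(path, "wb") as f:
        f.write(salt + token)  # the salt is not secret; it just must be unique


def load_vault(passphrase: str, path: str = "vault.bin") -> dict:
    with open(path, "rb") as f:
        blob = f.read()
    salt, token = blob[:16], blob[16:]
    return json.loads(Fernet(derive_key(passphrase.encode(), salt)).decrypt(token))
```

Because everything lives in one small encrypted file, characteristics 2 and 3 (cross-platform use, syncing, and backup) largely come down to copying that file around, which is one reason this design is so common.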

Needless to say, I don’t use a web-based password vault service, nor would I necessarily recommend it to anyone who has sensitive passwords.

One other thing—I escrow some of my passwords.  No, I’m not talking about the ill-fated government key escrow scheme that gave the idea a bad name.  I am referring to self-escrow.  Some of my important passwords at work, which would need to be recovered by the staff if I were to be abducted (again, grin) by a UFO crew, have been encrypted and escrowed in a safe place that can be accessed in an emergency.  As more things get locked up with strong encryption, it is all the more critical that we each consider self-escrow.

So, What’s the Frequency, Kenneth?
How often should passwords be changed?  Many of you asked that, and many of you volunteered your own experience, ranging from monthly to “hardly ever.”  These times were backed up with anecdotes.  Of course, this simply serves to reinforce my point that the time period should be based on a risk assessment of your particular environment, including access to the system, strength of mechanism, usage, sensitivity of protected information, security of the underlying system, and sophistication of the users…to name a few factors.

Basically, I would suggest you start with an assumption that passwords should be changed every quarter.  If the passwords are used over a lightly protected communications link, then change them more often.  If someone could break a password and use the account without being noticed, then shorten the change interval further.  If users get guidance on strong password selection, and are motivated to help ensure good security, then maybe you can extend the time period.  In many cases, careful consideration will lead you to realize that any reuse of passwords is risky.  Instead of dismissing that and imposing monthly password changes, use that knowledge to address the underlying problems.

Several of you mentioned the problem of people sharing passwords and only finding out about it after a mandatory password change.  If that’s the case, you have deeper problems than stale passwords!

I continue to advocate use of a one-time password token for highly sensitive or at-risk resources.  Otherwise, use your judgement and professional evaluation of the risks and benefits of change frequencies.


Useful Awareness Videos

The results are in from the EDUCAUSE Security Task Force’s Computer Security Awareness Video Contest.  Topics covered include spyware, phishing, and patching.  The winning video, Superhighway Safety, uses a simple running metaphor, a steady beat, and stark visual effects to concisely convey the dangers of online computing as well as the steps one can take to protect his or her computer and personal information.

The videos are available for educational, noncommercial use, provided that each is identified as being a winning entry in the contest.  In addition to being great educational/awareness tools, they should serve as inspiration for K-12 schools as well as colleges and universities.