Posts tagged security-trends


8 Security Action Items to Beat “Learned Helplessness”

So, you watch for advisories, deploy countermeasures (e.g., change firewall and IDS rules) or shut down vulnerable services, patch applications, and restore services. You detect compromises, limit and assess the damage, repair, recover, and attempt to prevent them from happening again. Tomorrow you start again, and again, and again. Is it worth it? What difference does it make? Who cares anymore?

If you’re sick of it, you may just be getting fatigued.

If you don’t bother defending anymore because you think there’s no point to this endless treadmill, you may be suffering from learned helplessness. Some people even consider that if you only passively wait for patches to be delivered and applied by software update mechanisms, you’re already in the “learned helplessness” category. On the other hand, tracking every vulnerability in the software you use by reading BugTraq, Full Disclosure, etc., the moment they are announced, and running proof-of-concept code on your systems to test them isn’t for everyone; there are diminishing returns, and one has to balance risk vs. energy expenditure, especially when that energy could produce better returns elsewhere. Of course I believe that using Cassandra is an OK middle ground for many, but I’m biased.

The picture may certainly look bleak, with talk of “perpetual zero-days”. However, there are things you can do (of course, as with all such lists, not every item applies to everyone):

  • Don’t be a victim;  don’t surrender to helplessness.  If you have limited energy to spend on security (and who doesn’t have limits?), budget a little bit of time on a systematic and regular basis to stay informed and make progress on tasks you identify as important;  consider the ones listed below.
  • Don’t be a target. Like it or hate it, running Windows on a desktop connected to the internet is like having big red circles painted on your forehead and back. Alternatives I feel comfortable with for a laptop or desktop system are Ubuntu Linux and MacOS X (for now; MacOS X may become a greater target in time). If you’re stuck with Windows, consider upgrading to Vista if you haven’t already; the security effort poured into Vista should pay off in the long run. For servers, there is much more choice, and Windows isn’t such a dominant target.
  • Reduce your exposure (attack surface) by:
    • Browsing the web behind a NAT appliance when at home, in a small business, or whenever there’s no other firewall device to protect you.  Don’t rely only on a software firewall;  it can become disabled or get misconfigured by malware or bad software, or be too permissive by default (if you can’t or don’t know how to configure it).
    • Using the NoScript extension for Firefox (if you’re not using Firefox, consider switching, if only for that reason). JavaScript is a vector of choice for desktop computer attacks (which is why I find the HoneyClient project so interesting, but I digress). JavaScript can be used to violate your privacy* or to take control of your browser away from you and hand it to website authors, to advertisers on those sites, or to the people who compromised those sites, and you can bet it’s not always done for your benefit (even though JavaScript enables better things as well). NoScript gives you some control over browser plugins and over which sources are allowed to run scripts in your browser, and it attempts to prevent XSS exploits.
    • Turning off unneeded features and services (OK, this is old advice, but it’s still good).
  • Use the CIS benchmarks, and if evaluation tools are available for your platform, run them. These tools give you a score, and as silly as some people may think this score is (reducing the number of holes in a ship from 100 to 10 may still sink the ship!), it gives you positive feedback as you improve the security stance of your computers. It’s encouraging, and may lift the feeling that you are sinking into helplessness. If you are a Purdue employee, you have access to CIS Scoring Tools with specialized features (see this news release). Ask if your organization also has access, and if not, consider asking for it (note that this is not necessary to use the benchmarks).

  • Use the NIST security checklists (hardening guides and templates). NIST’s Information Technology Laboratory site has many other interesting security papers to read as well.

  • Consider using Thunderbird and the Enigmail plugin for GPG, which make handling signed or encrypted email almost painless. Do turn on SSL- or TLS-only options to connect to your server (both SMTP and either IMAP or POP) if it supports them; if not, request these features from your provider. Remember, learned helplessness is never making any requests or attempts because you believe nothing will ever change. If you can log in to the server, you also have the option of SSH tunneling (see the sketch after this list), but it’s more hassle.

  • Watch CERIAS security seminars on subjects that interest you.

  • If you’re a software developer or someone who needs to test software, consider using the ReAssure system as a test facility with configurable network environments and collections of VMware images (disclosure: ReAssure is my baby, with lots of help from other CERIAS people like Ed Cates).
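
As a side note on the SSH tunneling option mentioned above, here is a minimal sketch of what that looks like, using Python’s subprocess module to drive the standard OpenSSH client. The host name, account, and local port are hypothetical placeholders, not a specific provider’s settings:

```python
import subprocess

# Hypothetical values: replace user@mail.example.com and the ports with
# your own mail server, account, and a free local port.
# -N : no remote command, just forward ports
# -L : forward local port 1143 to port 143 (IMAP) on the mail server
tunnel = subprocess.Popen([
    "ssh", "-N",
    "-L", "1143:localhost:143",
    "user@mail.example.com",
])
# Point your mail client's IMAP settings at localhost:1143; traffic then
# travels encrypted inside the SSH connection. tunnel.terminate() closes it.
```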

Good luck!  Feel free to add more ideas as comments.

*A small rant about privacy, which tends to be another area of learned helplessness: Why do they need to know? I tend to consider any information that people gather about me, which they don’t need for tasks I want them to do for me, a (perhaps very minor) violation of my privacy, even if it has no measurable effect on my life that I know about (that’s part of the problem—how do I know what effect it has on me?). I like the “need to know” principle, because you don’t know which selected (and possibly out-of-context) or outdated information is going to be used against you later. It’s one of the lessons of life that knowledge about you isn’t always used in legal ways, and even when it is, not everything that’s legal is “Good” or ethical, and not all agents of good or legal causes are ethical, impartial, or possessed of integrity. I find the “you’ve got nothing to hide, do you?” argument extremely stupid and irritating—and it’s not something that can be rebutted in a sentence or two. I’m not against volunteering information for a good cause, and I have done so in the past, but it’s rude to just take it from me without asking and without any explanation, or to subvert my software and computer to do so.

2007: The year of the 9,999 vulnerabilities?

A look at the National Vulnerability Database statistics will reveal that the number of vulnerabilities found yearly has greatly increased since 2003:

Year    Vulnerabilities    % Increase
2002    1959               N/A
2003    1281               -35%
2004    2367               85%
2005    4876               106%
2006    6605               35%



Average yearly increase (including the 2002-2003 decline): 48%

6605 × 1.48 ≈ 9775
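
As a quick sanity check, here is a sketch of the arithmetic in Python (the counts are from the table above; note that the projection uses the rate rounded to 1.48):

```python
# Yearly totals from the National Vulnerability Database table above.
counts = {2002: 1959, 2003: 1281, 2004: 2367, 2005: 4876, 2006: 6605}

years = sorted(counts)
# Year-over-year growth rates, including the 2002-2003 decline.
rates = [counts[b] / counts[a] - 1 for a, b in zip(years, years[1:])]
avg = sum(rates) / len(rates)                         # ~0.48

print(f"average yearly increase: {avg:.0%}")          # 48%
print(f"2007 projection: {counts[2006] * 1.48:.0f}")  # 9775
```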

So, that’s not quite 9999, but fairly close.  There’s enough variance that hitting 9999 in 2007 seems a plausible event.  If not in 2007, then it seems likely that we’ll hit 9999 in 2008.  So, what does it matter?



MITRE’s CVE effort uses a numbering scheme for vulnerabilities that can accommodate only 9999 vulnerabilities per year: CVE-YEAR-XXXX. Many products and vulnerability databases that are CVE-compatible (e.g., my own Cassandra service, CIRDB, etc.) use a field of fixed size, just big enough for that format. We’re facing a problem similar to the year-2000 overflow, although much smaller in scope. When the CVE board of editors was formed, the total number of known vulnerabilities, not the number found yearly, was in the hundreds. A yearly count of 9999 seemed astronomical; I’m sure anyone who had raised it as a concern back then would have been laughed at. I felt at the time that it would take a security apocalypse to reach that. Yet here we are, and fair warning to everyone using or developing CVE-compatible products.
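
To make the overflow concrete, here is a small illustrative sketch; the regular expression is a plausible stand-in for the fixed-width parsing done by CVE-compatible tools, not any specific product’s code:

```python
import re

# Fixed-format CVE identifier: exactly four digits after the year.
cve_format = re.compile(r"^CVE-\d{4}-\d{4}$")

print(bool(cve_format.match("CVE-2007-9999")))   # True: last ID that fits
print(bool(cve_format.match("CVE-2007-10000")))  # False: the 10,000th
                                                 # vulnerability overflows
```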



Kudos to the National Vulnerability Database and the MITRE CVE teams for keeping up under the onslaught.  I’m impressed.

Vulnerability disclosure grace period needs to be short, too short for patches

One of the most convincing arguments for full disclosure is that while the polite security researcher is waiting for the vendor to issue a patch, that vulnerability MAY have been sold and used to exploit systems, so all individuals in charge of administering a system have a right to know ALL the details so that they can protect themselves, and that right trumps all other rights.

That argument rests upon the premise that if one person found the vulnerability, it is possible for others to find it as well.  The key word here is “possible”, not “likely”, or so I thought when I started writing this post.  After all, vulnerabilities can be hard to find, which is a reason why products are released with vulnerabilities.  How likely is it that two security researchers will find the same vulnerability? 

Mathematically speaking, the chance that two successful security researchers (malicious or not) will find the same flaw is similar to the birthday problem. Let’s assume that there are X successful security researchers per year, each finding one vulnerability out of N vulnerabilities available to be found. In 2006, 6560 vulnerabilities were found, and 4876 in 2005 (according to the National Vulnerability Database). Let’s assume that the number of vulnerabilities available to be found in a year is about 10,000; this is most surely an underestimate. I’ll assume that all of these are equally likely to be found. An additional twist on the birthday problem is that people enter and leave the room; not all X are present at the same time. This is because we only worry about the same vulnerability being found twice within the grace period given to a vendor.

If there are more successful researchers in the room than there are vulnerabilities, then a collision is guaranteed. Let’s say that the grace period given to a vendor is one month, so Y = X/12 researchers are in the room at any one time. With N = 10,000, there would need to be 120,000 successful security researchers in the year (10,000 in the room each month) before a collision is guaranteed. For fewer researchers, the likelihood of two of them finding the same vulnerability is 1 - exp(-Y(Y-1)/(2N)) (cf. the birthday problem approximation on Wikipedia). Let’s assume that there are 5000 successful researchers in a given year, roughly matching the number of vulnerabilities reported in 2005 and 2006. The probability that two researchers find the same vulnerability within a given grace period is then:

Grace Period    Probability
1 month         0.9998
1 week          0.37
1 day           0.01
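
Here is a short sketch that reproduces these numbers; the formula and the assumed values of N and X are exactly those above:

```python
from math import exp

N = 10_000   # assumed vulnerabilities available to be found per year
X = 5_000    # assumed successful researchers per year

# Birthday-problem approximation: P(collision) = 1 - exp(-Y(Y-1)/(2N)),
# with Y researchers "in the room" during the grace period.
for label, periods_per_year in [("1 month", 12), ("1 week", 52), ("1 day", 365)]:
    Y = X / periods_per_year
    p = 1 - exp(-Y * (Y - 1) / (2 * N))
    print(f"{label}: {p:.4f}")   # 0.9998, 0.3671, 0.0087
```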


In other words, nowadays the grace period given to a vendor should be on the order of one or two days, if we only take this risk into account.  Has it always been like this?

Let’s assume that in any given year, there are twice as many vulnerabilities available to be found as there are reported vulnerabilities. If we set N = 2X and fix the grace period at one week, what was the probability of a collision in different years? The formula becomes 1 - exp(-Y(Y-1)/(4X)), where Y = ceil(X/52).

Year         Vulnerabilities Reported    Probability
1988-1996                                0
1997         252                         0.02
1998         246                         0.02
1999         918                         0.08
2000         1018                        0.09
2001         1672                        0.15
2002         1959                        0.16
2003         1281                        0.11
2004         2363                        0.20
2005         4876                        0.36
2006         6560                        0.46

So, according to this table, a grace period of one week would have seemed an acceptable policy before 2000, and perhaps fair in 2000-2003, but is now unacceptably long. These calculations are of course very approximate, but they should be useful enough to serve as guidelines. They show, much to my chagrin, that people arguing for the full and immediate disclosure of vulnerabilities may have a point.
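
For completeness, here is the same sketch adapted to the historical table, under the assumptions just stated (N = 2X and a one-week grace period):

```python
from math import ceil, exp

# Vulnerabilities reported per year, from the table above.
reported = {1997: 252, 1998: 246, 1999: 918, 2000: 1018, 2001: 1672,
            2002: 1959, 2003: 1281, 2004: 2363, 2005: 4876, 2006: 6560}

for year, X in sorted(reported.items()):
    Y = ceil(X / 52)                      # researchers per one-week window
    p = 1 - exp(-Y * (Y - 1) / (4 * X))   # 2N = 4X since N = 2X
    print(f"{year}: {p:.2f}")
```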



In any case, we can’t afford, as a matter of national and international cyber-security, to let vendors idly waste time before producing patches; vendors need to take responsibility, even if the vulnerability is not publicly known. This exercise also illustrates why a patch-it-later attitude could have seemed almost excusable years ago, but not now. These figures are a serious problem for managing security with patches, as opposed to secure coding from the start: I believe that it is no longer feasible for traditional software development processes to issue patches before the threat of malicious disclosure and exploits becomes significant. Finally, the grace period that we can afford to give vendors may be too short for them to issue patches, but that doesn’t mean it should be zero.

Note: the astute reader will remark that the statistics above are for any two vulnerabilities to match, whereas for patching we care about one specific vulnerability being discovered independently. The odds of that specific occurrence are much smaller: roughly Y/N, or about 1% for a one-week window under the assumptions above, rather than 37%. However, systematic management by patching must consider all vulnerabilities at once, which brings us back to the above calculations.

 

Security expert recommends ‘Net diversity - Network World

I recently did an interview with Network World magazine.  The topics discussed might well be of interest to readers of this blog.