Posts in General

Cowed Through DNS

May 2010 will mark the 4th anniversary of our collective cowing by spammers, malware authors and botnet operators. In 2006, spammers squashed Blue Frog. They made the vendor of this service, Blue Security, into lepers, as everyone became afraid of being contaminated by association and becoming a casualty of the spamming war. Blue Frog hit spammers where it counted -- in the revenue stream -- simply by posting complaints to spamvertized web sites. It was effective enough to warrant retaliation. DNS was battered into making Blue Security unreachable. Blue Security's paying commercial clients were then targeted, destroying the business model, and Blue Security folded [1]. I was stunned that the "bad guys" won by brute force and terror, and that the security community either was powerless or let it go. Blue Security was even blamed for some of its actions and its approach. Blaming the victims for daring to organize and attempt to defend people, err, I mean for provoking the aggressor further, isn't new. An open-source project attempting to revive the Blue Frog technology evaporated within the year. The absence of interest and progress since has been a scary (or scared) silence.

According to most sources, 90-95% of our email traffic has been spam for years now. Not content with this, spammers subject us to blog spam, friend-me spam, IM spam, and XSS (cross-site scripting) spam. That spam, or browser abuse through XSS, convinces more people to visit links and install malware, thus enrolling their computers into botnets. Botnets then enforce our submission by defeating Blue Security-type efforts and extorting money from web-based businesses. We can then smugly blame "those idiots" who unknowingly handed over control of their computers, with a slight air of exasperation. It may also be argued that there's more money to be made selling somewhat effective spam-fighting solutions than by emulating a doomed business model. But in reality, we've been cowed.

I had been hoping that the open source project could make it through the lack of a business model; after all, the open source movement seems like a liberating miracle. However, the DNS problem remained. So, even though I didn't use Blue Frog at the time, I have been hoping for almost 4 years now that DNS would be improved to resist the denial of service attacks that took Blue Security offline. I have been hoping that someone else would take up the challenge. However, all we have is modest success at (temporarily?) disabling particular botnets, semi-effective filtering, and mostly ineffective reporting. Since then, spammers have ruled the field practically uncontested.

Did you hear about Comcast's deployment of DNSSEC [2]? It sounds like a worthy improvement; it's DNS with security extensions, or "secure DNS". However, denial-of-service (DoS) prevention is out of scope for DNSSEC! It has no DoS protections, and moreover there are reports of DoS "amplification attacks" exploiting the larger DNSSEC-aware response size [3]. Hmm. Integrity is not the only problem with DNS! A search of IEEE Xplore and the ACM Digital Library for "DNS DoS" reveals several relevant papers [4-7], including a DoS-resistant, backwards-compatible replacement for the current DNS from 2004. Another alternative, DNSCurve, protects confidentiality, integrity and availability (DoS) [8]; it has just been deployed by OpenDNS [9] and is being proposed to the IETF DNSEXT working group [10]. This example of leadership suggests possibilities for meaningful challenges to organized internet crime. I will be eagerly watching for signs of progress in this area. We've kept our heads low long enough.
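The amplification risk in [3] comes down to simple arithmetic: the attacker sends a small query with a spoofed source address, and the victim receives a much larger signed response. A minimal sketch, with illustrative packet sizes that are assumptions rather than measurements:

```python
# Back-of-the-envelope DNS amplification factor, in the spirit of [3].
# The byte counts below are illustrative assumptions, not measurements:
# a plain DNS query is tiny, while a DNSSEC-aware response carrying
# RRSIG/DNSKEY records can approach a large EDNS0 buffer size.

QUERY_BYTES = 64        # small query with a spoofed source IP (assumed size)
PLAIN_RESPONSE = 512    # classic DNS response ceiling without EDNS0
DNSSEC_RESPONSE = 4096  # large signed response (assumed EDNS0 buffer)

def amplification(response_bytes, query_bytes=QUERY_BYTES):
    """Bandwidth multiplier the victim sees per spoofed query."""
    return response_bytes / query_bytes

print(f"plain DNS : x{amplification(PLAIN_RESPONSE):.0f}")   # x8
print(f"DNSSEC    : x{amplification(DNSSEC_RESPONSE):.0f}")  # x64
```

With these assumed sizes, signed responses multiply the attacker's bandwidth by an extra factor of eight over plain DNS, which is why larger responses make reflectors more attractive.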

References
1. Robert Lemos (2006) Blue Security folds under spammer's wrath. SecurityFocus. Accessed at http://www.securityfocus.com/news/11392
2. Comcast DNSSEC Information Center. Accessed at http://www.dnssec.comcast.net/
3. Bernstein DJ (2009) High-speed cryptography, DNSSEC, and DNSCurve. Accessed at: http://cr.yp.to/talks/2009.08.11/slides.pdf
4. Fanglu Guo, Jiawu Chen, Tzi-cker Chiueh (2006) Spoof Detection for Preventing DoS Attacks against DNS Servers. 26th IEEE International Conference on Distributed Computing Systems.
5. Kambourakis G, Moschos T, Geneiatakis D, Gritzalis S (2007) A Fair Solution to DNS Amplification Attacks. Second International Workshop on Digital Forensics and Incident Analysis.
6. Hitesh Ballani, Paul Francis (2008) Mitigating DNS DoS attacks. Proceedings of the 15th ACM conference on Computer and communications security
7. Venugopalan Ramasubramanian, Emin Gün Sirer (2004) The design and implementation of a next generation name service for the internet. Proceedings of the 2004 conference on Applications, technologies, architectures, and protocols for computer communications
8. DNSCurve: Usable security for DNS (2009). Accessed at http://dnscurve.org/
9. Matthew Dempsky (2010) OpenDNS adopts DNSCurve. Accessed at http://blog.opendns.com/2010/02/23/opendns-dnscurve/
10. Matthew Dempsky (2010) [dnsext] DNSCurve Internet-Draft. Accessed at http://www.ops.ietf.org/lists/namedroppers/namedroppers.2010/msg00535.html

Blast from the Past

Yes, I have been quiet (here) over the last few months, and I have a number of things to comment on. This hiatus is partly because of my schedule, partly because I had my laptop stolen, and partly for health reasons. However, I'm going to try to start adding some items here again that might be of interest.

To start, here is one item that I found while cleaning out some old disks: a briefing I gave at the NSA Research division in 1994. I then gave it, with minor updates, to the DOD CIO Council (or whatever their name was/is -- the CNSS group?), the Federal Infosec Research Council, and the Critical Infrastructure Commission in 1998. In it, I spoke to what I saw as the biggest challenges in protecting government systems, and to the major research challenges of the time.

I no longer have software that can read the 1994 version of the talk, but the 1998 version was successfully imported into PowerPoint. I cleaned up the fonts and gave it a different background (the old version was fugly), and that prettier version is available for download. (Interesting that back then it was "state of the art." Grin.)

I won't editorialize on the content slide by slide, other than to note that I could give this same talk today and it would still be current. You will note that many of the research agenda items have been echoed in other reports over the succeeding years. I won't claim credit for that, but there may have been some influences from my work.

Nearly 16 years have passed by, largely wasted, because the attitude within government is still largely one of "with enough funding we can successfully patch the problems." But as I've quoted in other places, insanity is doing the same thing over and over again and expecting different results. So long as we believe that simple incremental changes to the existing infrastructure, and simply adding more funding for individual projects, are going to solve the problems, the problems will not get addressed -- they will get worse. It is insane to think that pouring ever more funding into attempts to "fix" current systems is going to succeed. Some of it may help, and much of it may produce some good research, but overall it will not make our infrastructure as safe as it should be.

Yesterday, Admiral (ret) Mike McConnell, the former Director of National Intelligence in the US, said in a Senate committee hearing that if there were a cyberwar today, the US would lose. That may not be quite the correct way of putting it, but we certainly would not come out of it unharmed and able to claim victory. What's more, any significant attack on the cyberinfrastructure of the US would have global repercussions because of the effects on the world's economy, communications, trade, and technology that are connected by the cyber infrastructure in the US.

As I have noted elsewhere, we need to do things differently. I have prepared and circulated a white paper among a few people in DC about one approach to changing the way we fund some of the research and education in the US in cybersecurity. I have had some of them tell me it is too radical, or too different, or doesn't fit in current funding programs. Exactly! And that is why I think we should try those things -- because doing more of the same in the current funding programs simply is not working.

But 15 years from now, I expect to run across these slides and my white paper, and sadly reflect on over three decades where we did not step up to really deal with the challenges. Of course, by then, there may be no working computers on which to read these!

Drone “Flaw” Known Since 1990s Was a Vulnerability

"The U.S. government has known about the flaw since the U.S. campaign in Bosnia in the 1990s, current and former officials said. But the Pentagon assumed local adversaries wouldn't know how to exploit it, the officials said." Call it what it is: it's a vulnerability that was misclassified (some might argue that it's an exposure, but there is clearly a violation of implicit confidentiality policies). This fiasco is the result of the thinking that there is no vulnerability if there is no threat agent with the capability to exploit a flaw. I argued against Spaf regarding this thinking previously; it is also widespread in the military and industry. I say that people using this operational definition are taking a huge risk if there's a chance that they misunderstood either the flaw, the capabilities of threat agents, present or future, or if their own software is ever updated. I believe that for software that is this important, an academic definition of vulnerability should be used: if it is possible that a flaw could conceptually be exploited, it's not just a flaw, it's a vulnerability, regardless of the (assumed) capabilities of the current threat agents. I maintain that (assuming he exists for the sake of this analogy) Superman is vulnerable to kryptonite, regardless of an (assumed) absence of kryptonite on earth.

The problem is that it is logically impossible to prove a negative, e.g., that there is no kryptonite (or that there is no God, etc.). Likewise, it is logically impossible to prove that there does not exist a threat agent with the capabilities to exploit a given flaw in your software. The counter-argument is then that the delivery of the software becomes impractical, as the costs and time required escalate to remove risks that are extremely unlikely. However, this argument is mostly security by obscurity: if you know that something might be exploitable, and you don't fix it because you think no adversary will have the capability to exploit it, in reality you're hoping that they won't find out or be told how (for the sake of this argument, I'm ignoring brute force computational capabilities). In addition, exploitability is a thorny problem. It is very difficult to be certain that a flaw in a complex system is not exploitable. Moreover, it may not be exploitable now, but may become so when a software update is performed! I wrote about this in "Classes of vulnerabilities and attacks", where I discussed the concept of latent, potential or exploitable vulnerabilities. This is important enough to quote:

"A latent vulnerability consists of vulnerable code that is present in a software unit and would usually result in an exploitable vulnerability if the unit was re-used in another software artifact. However, it is not currently exploitable due to the circumstances of the unit’s use in the software artifact; that is, it is a vulnerability for which there are no known exploit paths. A latent vulnerability can be exposed by adding features or during the maintenance in other units of code, or at any time by the discovery of an exploit path. Coders sometimes attempt to block exploit paths instead of fixing the core vulnerability, and in this manner only downgrade the vulnerability to latent status. This is why the same vulnerability may be found several times in a product or still be present after a patch that supposedly fixed it.

A potential vulnerability is caused by a bad programming practice recognized to lead to the creation of vulnerabilities; however the specifics of its use do not constitute a (full) vulnerability. A potential vulnerability can become exploitable only if changes are made to the unit containing it. It is not affected by changes made in other units of code. For example, a (potential) vulnerability could be contained in the private method of an object. It is not exploitable because all the object’s public methods call it safely. As long as the object’s code is not changed, this vulnerability will remain a potential vulnerability only.

Vendors often claim that vulnerabilities discovered by researchers are not exploitable in normal use. However, they are often proved wrong by proof of concept exploits and automated attack scripts. Exploits can be difficult and expensive to create, even if they are only proof-of-concept exploits. Claiming unexploitability can sometimes be a way for vendors to minimize bad press coverage, delay fixing vulnerabilities and at the same time discredit and discourage vulnerability reports."
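The "potential vulnerability" case quoted above -- an unsafe private method whose only public caller sanitizes its input -- can be illustrated with a minimal sketch. The class and method names here are hypothetical, invented purely for illustration:

```python
# Hypothetical illustration of a "potential vulnerability" as defined above:
# _render() would be injectable if handed untrusted text, but the only
# public entry point escapes its input first, so no exploit path exists
# -- until someone adds a caller that forgets to sanitize.

import html

class Greeter:
    def _render(self, name):
        # Potential vulnerability: interpolates its argument into HTML
        # without escaping. Safe only as long as every caller sanitizes.
        return f"<p>Hello, {name}!</p>"

    def greet(self, name):
        # The lone public method escapes user input before calling
        # _render, blocking the exploit path instead of fixing the
        # core flaw in _render itself.
        return self._render(html.escape(name))

g = Greeter()
print(g.greet("<script>alert(1)</script>"))
```

Adding a second public method that calls `_render()` without escaping would silently promote this from a potential to an exploitable vulnerability, exactly as described in the quote.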

Discounting or underestimating the capabilities, current and future, of threat agents is similar to the claims from vendors that a vulnerability is not really exploitable. We know that this has been proven wrong ad nauseam. Add configuration problems to the use of the "operational definition" of a vulnerability in the military and their contractors, and you get an endemic potential for military catastrophes.

Talking to the Police All the Time

I started writing this entry while thinking about the "if you have nothing to hide, then you have nothing to fear" fallacy. What do you say to someone who says that they have nothing to hide, or that some information about them is worthless anyway, so they don't care about some violation of their privacy? What do you say to a police officer who says that if you have nothing to hide then you have nothing to fear by answering questions? It implies that if you refuse to answer then you're probably "not innocent". That "pleading the 5th" is now used as a joke to admit guilt in light banter is a sign of how pervasive the fallacy has become. It's followed closely by "where there's smoke there's probably fire" when discussing someone's arrest, trial, or refusal to answer questions. However, in the field of information security, it is an everyday occurrence to encounter people who don't realize the risks to which they are exposed. So why is this fallacy so irritating?

Those kinds of statements expose naïveté or, if intended as manipulation, perversity. It takes a long time to explain the risks, to convince others that they are real and that they are really exposed to them, and to describe what the consequences might be. Even if you could somehow manage to explain it convincingly on the spot, chances are that before you're done you'll be dismissed as a "privacy nut". In addition, you rarely have that kind of time to make a point in a normal discussion. So, the fallacy is often a successful gambit simply because it discourages anyone from trying to explain why it's so silly.

You may buy some time by mentioning anecdotes such as the man falsely accused of arson because, by coincidence, he bought certain things in a store at a certain time (betrayed by his grocery loyalty card) [1]. Or there's the Indiana woman who bought just a little too much medication containing pseudoephedrine, an ingredient used in the manufacture of crystal meth, for her sick family [2]. Possibilities for the misinterpretation of data or the inappropriate enforcement of bad laws are multiplied by the ways in which data can be obtained. Police can stick a GPS-tracking device on anyone they want without getting a search warrant [3], or routinely use your own phone's GPS [4]. Visiting a web page -- whether you used an automated spider, clicked on a link manually, were perhaps tricked into doing it, or were framed by a malicious or compromised web site -- can trigger an FBI raid [5] (remember goatse? Except it's worse, with a criminal record for you). There are also the dumb things people post themselves, for example on Facebook, causing them to lose jobs or job opportunities, or even to get arrested [6].

Regardless, people always think that happens only to others, that "they were dumb and I'm not", or that these are isolated incidents. This is why I was delighted to find this video of a law professor explaining why talking to police can be a bad idea [7]. Even though I knew that "everything you say can be used against you", I was surprised to learn that nothing you say can be used in your defense. This asymmetry is a rather convincing argument for exercising 5th amendment rights. Then there are the chances that even though you are innocent, due to the stress or excitement you will exaggerate or say something stupid. For example, you might say you've never touched a gun in your life -- except you did, once, as a teen long ago, and forgot about it, but there's a photo proving that you lied (apparently, that you didn't mean to lie matters little). People say stupid things in less stressful circumstances. Why take the chance? There are also coincidences that look rather damning and bad for you. Police sometimes make mistakes as well. The presentation is well-made and very convincing; I recommend viewing it.

There are so many ways in which private information can be misinterpreted and used against you or to your disadvantage, and not just by police. Note that I agree that we need effective police; however, there's a line between that and a surveillance society that makes you afraid to speak your mind in private, afraid to buy certain things at the grocery store, afraid to go somewhere or visit a web site, or afraid of chatting online with your friends, because you never know who will use anything you say or do against you and put it in the wrong context. In effect, you may be speaking to the police all the time without realizing it. Even though, considering each method separately, it can be argued that technically there isn't a violation of the 5th amendment, the cumulative effect may violate its intent.

Then, after I wrote most of this entry, Google CEO Eric Schmidt declared that "If you have something that you don't want anyone to know, maybe you shouldn't be doing it in the first place" [8]. I'm afraid that's a realistic assessment, even if it's a lawful activity, given the "spying guides" published by the likes of Yahoo!, Verizon, Sprint, Cox, SBC, Cingular, Nextel, GTE, Voicestream for law enforcement, and available at Cryptome [9]. The problem is that you'll then live a sad life devoid of personal liberties. The alternative shrug and ignorance of the risks is bliss, until it happens to you.

[1] Brandon Sprague (2004) Fireman attempted to set fire to house, charges say. Times Snohomish County Bureau, Seattle Times. Accessed at http://seattletimes.nwsource.com/html/localnews/2002055245_arson06m.html

[2] Mark Nestmann (2009) Yes, You ARE a Criminal…You Just Don't Know it Yet. In "Preserving your privacy and more", November 23 2009. Accessed at http://nestmannblog.sovereignsociety.com/2009/11/yes-you-are-a-criminalyou-just-dont-know-it-yet.html

[3] Chris Matyszczyk (2009) Court says police can use GPS to track anyone. Accessed at http://news.cnet.com/8301-17852_3-10237353-71.html

[4] Christopher Soghoian (2009) 8 Million Reasons for Real Surveillance Oversight. Accessed at http://paranoia.dubfire.net/2009/12/8-million-reasons-for-real-surveillance.html

[5] Declan McCullagh (2009) FBI posts fake hyperlinks to snare child porn suspects. Accessed at: http://news.cnet.com/8301-13578_3-9899151-38.html

[6] Mark Nestmann (2009) Stupid Facebook Tricks. In "Preserving your privacy and more", November 27 2009. Accessed at http://nestmannblog.sovereignsociety.com/2009/11/stupid-facebook-tricks.html

[7] James Duane (2008) Don't Talk to Police. http://www.youtube.com/watch?v=i8z7NC5sgik

[8] Ryan Tate (2009) Google CEO: Secrets Are for Filthy People. Accessed at http://gawker.com/5419271/google-ceo-secrets-are-for-filthy-people

[9] Cryptome. Accessed at http://cryptome.org/
Last edited Jan 25, as per emumbert1's suggestion (see comments).

“Verified by VISA”: Still Using SSNs Online, Dropped by PEFCU

I have written before about the "Verified by VISA" program. While shopping for Thanksgiving online this year, I noticed that Verified by Visa scripts were blocked by NoScript, and that I could complete my purchases without authenticating. It was tempting to conclude that the implementation was faulty, but a few phone calls clarified that the Purdue Employees Federal Credit Union had stopped participating in the program. I have ambivalent feelings about this. I'm glad that PEFCU let us escape from the current implementation, with its surprise enrollment based on SSN at the time of purchase and its SSN-based password reset. Yet I wish a password-protection system were in place, because it could significantly improve security (see below). Getting such a system to work is difficult because, in addition to needing to enroll customers, both banks and merchants have to support it. For the sake of curiosity, I counted the number of participating stores in various countries, as listed on the relevant VISA web sites:
Country      Number of Stores
USA          126
Europe       183
Thailand     439
Taiwan       144
Japan        105
China        90
Singapore    65
Malaysia     27
Hong Kong    20
Vietnam      17
Australia    13
India        7
Others       0
Multiply this by the fraction of participating banks (data not available for the US), and for a program that started in 2001, that's spotty coverage. Adoption would be better if people could enroll when applying for credit cards, when making a payment, by mail at any time, or in person at their bank. The more people adopt it, the keener stores and banks will be on reducing their risk, as the cost per participating card holder would decrease. Ambushing people at the time of an online purchase with an SSN request violates the security principle of psychological acceptability. The online password reset based on entering your SSN, which I had criticized, still exposes people to SSN-guessing risks, and is also the only means to change your password. I wish that VISA would overhaul the implementation and use an acceptable process (e.g., a nonce-protected link via email to a page with a security question). The reason I'm interested is that I'd rather have a password-protected credit card, and a single password to manage, than a hundred-plus online shopping accounts that keep my credit card information with varying degrees of (in)security. Using an appropriate choke point would reduce the attack surface, memorization requirements, and identity theft.
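A nonce-protected reset flow of the kind suggested above could work roughly as follows. Everything in this sketch (the names, the 15-minute lifetime, the in-memory store, the example URL) is a hypothetical illustration, not VISA's actual process:

```python
# Minimal sketch of a nonce-protected password reset, as an alternative
# to an SSN-based one. All names and parameters are hypothetical; a real
# deployment would persist tokens server-side and send the link by email.

import hashlib
import secrets
import time

RESET_TTL = 15 * 60   # token lifetime in seconds (assumed policy)
_pending = {}         # sha256(token) -> (account, expiry timestamp)

def issue_reset_link(account):
    token = secrets.token_urlsafe(32)                     # unguessable nonce
    digest = hashlib.sha256(token.encode()).hexdigest()   # store only a hash
    _pending[digest] = (account, time.time() + RESET_TTL)
    return f"https://bank.example/reset?token={token}"

def redeem(token):
    digest = hashlib.sha256(token.encode()).hexdigest()
    entry = _pending.pop(digest, None)                    # single-use token
    if entry is None:
        return None
    account, expiry = entry
    if time.time() > expiry:
        return None
    # Caller would now present the security question, then accept a
    # new password for this account.
    return account
```

The point of the design is that the token is unguessable and single-use, unlike an SSN, which is guessable, reused everywhere, and impossible to revoke.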

Firefox Vulnerabilities: Souvenirs of Windows 95

I've been waiting for an announcement of vulnerabilities in Firefox due to popular extensions. I've compared it to Windows 95 before. Yet students often opine that Firefox is more secure than Internet Explorer. It is worth repeating this explanation from the announcement:

"Mozilla doesn't have a security model for extensions and Firefox fully trusts the code of the extensions. There are no security boundaries between extensions and, to make things even worse, an extension can silently modify another extension."

Asking which of Firefox and Internet Explorer is more secure is like asking which of two random peasants is wealthier. Both might be doing their best, and there may be significant differences, but I wouldn't expect either to be a financier. While I'm running with this analogy, let me compare the widespread and often mandatory use of client scripts in websites (e.g., JavaScript) to CDOs: both are designed by others with little interest in your security, leverage your resources for their benefit, and are opaque, complex, nearly impossible to audit, and therefore untrustworthy. Both have also caused a lot of damage, as having scripting enabled is a prerequisite for many attacks on browsers. How much smaller would botnets be without scripting? Like CDOs, scripting is a financial affair; it is needed to support advertising and to measure the number of visitors and click-throughs. Scripting will stay with us because there's money involved, and if advertisers had their way, there would be no option to disable plugins and JavaScript, nor would there be extensions like NoScript. To be fair, there are beneficial uses for JavaScript, but it's a tangled mess with a disputable net value. Here's my take on media and advertising:

Every medium supported exclusively by advertising tends to have a net value of zero for viewers and users (viewsers?). This is where radio and TV are right now. If the value was significantly higher than zero, advertisers could wring more profits from it, for example by increasing the duration or number of annoying things, polluting your mind or gathering and exploiting more information. If it was significantly less than zero, then they would lose viewership and therefore revenue.

So, with time, and if advertising is allowed to power all websites through the requirement for scripting and JavaScript, surfing the web will become as pleasant, useful and watchable as TV, for example (with the difference that your TV can't be used --yet-- to attack people and other nations). I don't mind being locked out of websites that critically depend on advertising revenue -- just like I don't watch TV anymore because it has become a negative value proposition to me. However I mind being needlessly exposed to risks due to other people's decisions, when I use other websites. I'm looking forward to the "component directory lockdown" in Firefox 3.6 as a step in the right direction, and that's the bright light at the end of the tunnel: some things are improving.

Are We All Aware Yet?

So, here we are, in November already. We've finished up with National Cyber Security Awareness Month — feel safer? I was talking with someone who observed that he remembered "National Computer Security Day" (started back in the late 1990s) that then became "National Computer Security Week" for a few years. Well, the problems didn't go away when everyone started to call it "cyber," so we switched to a whole month but only of "awareness." This is also the "Cyber Leap Ahead Year." At the same level of progress, we'll soon have "The Decade of Living Cyber Securely." The Hundred Years' War comes to mind for some reason, but I don't think our economic system will last that long with losses mounting as they are. The Singularity may not be when computers become more powerful than the human mind, but will be the point at which all intellectual property, national security information, and financial data has been stolen and is no longer under the control of its rightful owners.

Overly gloomy? Perhaps. But consider that today is also the 21st anniversary of the Morris Internet Worm. Back then, it was a big deal because a few thousand computers were affected. Meanwhile, today's news has a story about the Conficker worm passing the 7 million host level, and growing. Back in 1988 there were about 100 known computer viruses. Today, most vendors have given up trying to measure malware as the numbers are in the millions. And now we are seeing instances of fraud based on fake anti-malware programs being marketed that actually infect the hosts on which they are installed! The sophistication and number of these things are increasing non-linearly as people continue to try to defend fundamentally unsecurable systems.

And as far as awareness goes, a few weeks ago I was talking with some grad students (not from Purdue). Someone mentioned the Worm incident; several of the students had never heard of it. I'm not suggesting that this should be required study, but it is indicative of something I think is happening: the overall awareness of security issues and history seems to be declining among the population studying computing. I did a quick poll, and many of the same students only vaguely recalled ever hearing about anything such as the Orange Book or Common Criteria, about covert channels, about reference monitors, or about a half dozen other things I mentioned. Apparently, anything older than about 5 years doesn't seem to register. I also asked them to name 5 operating systems (preferably ones they had used), and once they got to 4, most were stumped (Windows, Linux, MacOS and a couple said "Multics" because I had asked about it earlier; one young man smugly added "whatever it is running on my cellphone," which turned out to be a Windows variant). No wonder everyone insists on using the same OS, the same browser, and the same 3 programming languages — they have never been exposed to anything else!

About the same time, I was having a conversation with a senior cyber security engineer of a major security defense contractor (no, I won't say which one). The engineer was talking about a problem that had been posed in a recent RFP. I happened to mention that it sounded like something that might be best solved with a capability architecture. I got a blank look in return. Somewhat surprised, I said "You know, capabilities and rings — as in Multics and System/38." The reaction to that amazed me: "Those sound kinda familiar. Are those versions of SE Linux?"

Sigh. So much for awareness, even among the professionals who are supposed to be working in security. The problems are getting bigger faster than we have been addressing them, and too many of the next generation of computing professionals don't even know the basic fundamentals or history of information security. Unfortunately, the focus of government and industry seems to continue to be on trying to "fix" the existing platforms rather than solve the actual problems. How do we get "awareness" into that mix?

There are times when I look back over my professional career and compare it to trying to patch holes in a sinking ship while the passengers are cheerfully boring new holes in the bottom to drop in chum for the circling sharks. The biggest difference is that if I was on the ship, at least I might get a little more sun and fresh air.

More later.

Cassandra Firing GnuPG Blanks

A routine software update (a minor revision number change) caused a serious problem. A number of blank messages were sent before we realized that attempts to sign messages with GnuPG from PHP resulted in empty strings. If you received a blank message from Cassandra, you can find out what it was about by logging in to the service. Then click on the affected profile name (from the subject of the email), then "Search" and "this month". This will retrieve the latest alerts over an interval of one month for that profile. Messages will not be signed until we figure out a fix. We're sorry for the inconvenience. Edit (Monday 11/2, noon): This has been fixed and emails are signed again. I also added a pre-flight test to detect this condition in the future.
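For the curious, a pre-flight test of this kind can be as simple as signing a known canary string and refusing to send anything when the signature comes back empty. Cassandra calls GnuPG from PHP; the sketch below is only illustrative, with `sign_fn` standing in for whatever wrapper actually invokes GnuPG:

```python
# Sketch of the pre-flight test described above: sign a known string
# before each mailing run and hold all alerts if the signature comes
# back empty (the failure mode we hit after the GnuPG update).
# sign_fn is a stand-in for the real GnuPG wrapper.

CANARY = "Cassandra pre-flight canary"

def signing_works(sign_fn):
    """Return True only if sign_fn produces a non-empty signed message
    that still contains the original text."""
    try:
        signed = sign_fn(CANARY)
    except Exception:
        return False
    return bool(signed) and CANARY in signed

def send_alerts(messages, sign_fn, transport):
    # Fail closed: better to delay alerts than to mail blanks.
    if not signing_works(sign_fn):
        raise RuntimeError("GnuPG signing broken; holding all alerts")
    for msg in messages:
        transport(sign_fn(msg))
```

Failing closed here trades timeliness for correctness: a held alert can be re-sent, but a blank one tells subscribers nothing.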

What About the Other 11 Months?

October is "officially" National Cyber Security Awareness Month. Whoopee! As I write this, there are only about 27 more days before everyone slips back into their cyber stupor and ignores the issues for the other 11 months.

Yes, that is not the proper way to look at it. The proper way is to look at the lack of funding for long-term research, the lack of meaningful initiatives, the continuing lack of understanding that robust security requires actually committing resources, the lack of meaningful support for education, almost no efforts to support law enforcement, and all the elements of "Security Theater" (to use Bruce Schneier's very appropriate term) put forth as action, only to realize that not much is going to happen this month, either. After all, it is "Awareness Month" rather than "Action Month."

There was a big announcement at the end of last week where Secretary Napolitano of DHS announced that DHS had new authority to hire 1000 cybersecurity experts. Wow! That immediately went on my list of things to blog about, but before I could get to it, Bob Cringely wrote almost everything that I was going to write in his blog post The Cybersecurity Myth - Cringely on technology. (NB. Like Bob's correspondent, I have always disliked the term "cybersecurity," which was introduced about a dozen years ago, but it has been adopted by the general public much as "hacker" and "virus" were.) I've testified before the Senate about the lack of significant education programs and the illusion of "excellence" promoted by DHS and NSA -- you can read those testimonies to get my bigger-picture view of the personnel issues in this realm. But, in summary, I think Mr. Cringely has it spot on.

Am I being too cynical? I don't really think so, although I am definitely seen by many as a professional curmudgeon in the field. This is the 6th annual Awareness Month, and things are worse today than when the event started. As one indicator, consider that funding for meaningful education and research has hardly changed. NITRD (National Information Technology Research & Development) figures show that the fiscal 2009 allocation for Cyber Security and Information Assurance (their term) was about $321 million across all Federal agencies. Two-thirds of this amount is in the budgets of Defense agencies, with the largest single amount going to DARPA; the majority of these funds have gone to the "D" side of the equation (development) rather than fundamental research, and some portion has undoubtedly gone to support offensive technologies rather than building safer systems. This amount has perhaps doubled since 2001, although the level of crime and abuse has risen far more -- by at least two orders of magnitude. The funding being made available is a pittance, not nearly enough to really address the problems.

Here's another indicator. A recent conversation with someone at McAfee revealed that new pieces of deployed malware are being indexed at a rate of about 10 per second -- and those are only the ones detected and reported! Some of the newer attacks are incredibly sophisticated, defeating two-factor authentication and falsifying bank statements in real time. The criminals are even operating a vast network of fake merchant sites designed to corrupt visitors' machines and steal financial information. Some accounts place the annual losses in the US alone at over $100 billion from cyber crime activities -- well over 300 times everything being spent by the US government in R&D to stop it. (Hey, but what's 100 billion dollars, anyhow?) I have heard unpublished reports that some of the criminal gangs involved are spending tens of millions of dollars a year to write new and more effective attacks. Thus, by some estimates, the criminals are vastly outspending the US Government on R&D in this arena, and that doesn't count what other governments are spending to steal classified data and compromise infrastructure. They must be investing wisely, too: how many instances of arrests and takedowns can you recall hearing about recently?

Meanwhile, we are still awaiting the appointment of the National Cyber Cheerleader. For those keeping score, the President announced that the position was critical and that he would appoint someone to it right away. That was on May 29th. Given the delay, one wonders why the national cyber review was mandated to be completed in a rushed 60-day period. As I noted in that earlier posting, an appointment is unlikely to make much of a difference, as the position won't have real authority. Even with an appointment, there is disagreement about where the lead for cyber should be, DHS or the military. Neither option really takes into account that this is at least as much a law enforcement problem as it is one of building better defenses. The lack of agreement means that the tenure of any appointee is likely to be controversial and contentious at worst, and largely ineffectual at best.

I could go on, but it is all rather bleak, especially when viewed through the lens of my 20+ years of experience in the field. The facts and trends have been well documented for most of that time, too, so it isn't as if this is a new development. There are some bright points, but unless the problem gets a lot more attention (and resources) than it is getting now, the future is not going to look any better.

So, here are my take-aways for National Cyber Security Awareness:

  • the government is more focused on us being "aware" than "secure"
  • the criminals are probably outspending the government in R&D
  • no one is really in charge of organizing the response, and there isn't agreement about who should
  • there aren't enough real experts, and there is little real effort to create more
  • too many people think "certification" means "expertise"
  • law enforcement in cyber is not a priority
  • real education is not a real priority

But hey, don't give up on October! It's also Vegetarian Awareness Month, National Liver Awareness Month, National Chiropractic Month, and Auto Battery Safety Month (among others). Undoubtedly there is something to celebrate without having to wait until Halloween. And that's my contribution for National Positive Attitude Month.

The Secunia Personal Software Inspector

So you have all the patches from Microsoft applied automatically, Firefox updates itself as well as its extensions... But do you still have vulnerable, outdated software? Last weekend I decided to try the Secunia Personal Software Inspector, which is free for personal use, on my home gaming computer. The Secunia PSI helps find software that falls through the cracks of the auto-update capabilities. I was pleasantly surprised. It has a polished normal interface as well as an informative advanced interface. It ran quickly and found obsolete versions of Adobe Flash installed concurrently with newer ones, and pointed out that Firefox wasn't quite up-to-date as the latest patch hadn't been applied.

When I made the Cassandra system years ago, I was also dreaming of something like this. The PSI is limited to finding vulnerable software by version, not configuration, and giving links to fixes; so it doesn't help harden a system to the point that some computer security benchmarks can. However, those security benchmarks can decrease the convenience of using a computer, so they require judgment, and it can be time-consuming and moderately complex to figure out what you need to do to improve the benchmark results. By contrast, the PSI is so easy to install and use that it should be considered by anyone capable of installing software updates, or anyone managing a family member's computer. The advanced interface also pointed out that there were still issues with Internet Explorer and with Firefox for which no fixes were available. I may use Opera instead until those issues get fixed. It is unfortunate that it runs only on Windows, though.
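The core of this kind of version-based checking is easy to illustrate. Here is a toy Python sketch (my own illustration, not Secunia's actual method; the product names and version numbers are made up), comparing an inventory of installed software against a feed of latest secure releases:

```python
from typing import Dict, Tuple

def parse_version(v: str) -> Tuple[int, ...]:
    """Turn a dotted version string like "9.0.1" into a
    comparable tuple of integers."""
    return tuple(int(part) for part in v.split("."))

def find_outdated(installed: Dict[str, str],
                  latest: Dict[str, str]) -> Dict[str, Tuple[str, str]]:
    """Return {name: (installed, latest)} for every known program
    whose installed version lags the latest release."""
    outdated = {}
    for name, version in installed.items():
        if name in latest and parse_version(version) < parse_version(latest[name]):
            outdated[name] = (version, latest[name])
    return outdated
```

Real scanners do much more (locating every installed copy on disk, handling vendor-specific version schemes, mapping versions to advisories), but the comparison step is essentially this -- which is also why such tools cannot catch insecure configurations of up-to-date software.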

The Secunia Personal Software Inspector is not endorsed by Purdue University CERIAS; the above are my personal opinions. I do not own any shares or interests in Secunia.
Edit: fixed the link, thanks Brett!