As a researcher and educator, I follow many newsletters, blogs, and newsfeeds on a near-daily basis. Some items I bookmark for my classes and research, but most I simply read, note, and discard. I read many dozens of such items per day -- sometimes as many as 100 when a lot is happening and I have a backlog.
After news of the Sony incident broke on April 20th, I saw items about how some people knew about vulnerabilities in parts of the Sony network, and about servers running old versions of the Apache web server. Those postings had material similar to what was published in Wired on April 28th. To the best of my memory, at least one of those postings mentioned that some of these vulnerabilities were disclosed to Sony in a mailing list or blog prior to the compromise. It may be that the reference was to the PSN webserver vulnerabilities, it may have been about the earlier flaw with the PS3 connecting to the PSN, or it may have been some other vulnerabilities...but I am pretty certain it was about the webservers. There was no discussion about how the breach occurred or whether the old software played a part in it.
After reading these stories, I moved on to other issues. I was not a customer of Sony or the Playstation Network (PSN), and they have never had a relationship with our research group, so I had no reason to pay close attention to the story. Furthermore, we were approaching the end of the semester, I was teaching a graduate class and also preparing for two trips to workshops. Thus, I had several other things to occupy my time and attention, and this story was definitely not one of them.
On May 1st, in my capacity as chair of USACM, I received an invitation to appear at a House subcommittee hearing on the morning of May 4 on the issue of data breaches and privacy. This topic has been one of USACM's main thrust areas and is in my main areas of interest, so even though it was extremely short notice, I said yes. I spent the next 48 hours frantically trying to rearrange my teaching and administrative schedule at the university while also producing formal written testimony to deliver to Congressional offices by a Tuesday noon deadline. I made the deadline, but with very little sleep over that two-day period. Tuesday afternoon I had to drive to Indianapolis to fly to Washington for the Wednesday morning hearing.
Wednesday morning at 9:30 a.m., the House Subcommittee on Commerce, Manufacturing and Trade of the House Energy and Commerce Committee held its hearing on “The Threat Of Data Theft To American Consumers.” I was the 4th witness in the panel (our written statements are available online). Three days of little sleep and too much coffee, plus the TV lights, combined to give me quite a headache, but that may not be evident if you watch the C-SPAN recording of the hearing.
In my written testimony I indicated that "...some news reports indicate that Sony was running software that was badly out of date, and had been warned about that risk." During questioning, I stated that I had read this on security lists that I normally read.
The fun begins
My comment that I had seen accounts of the server software being out of date and of there being no firewalls was reported accurately by a few media outlets. However, a few others widely misquoted me as stating, authoritatively, that Sony was running outdated, unpatched software, and implied that this was somehow the cause of the breach. Other news sources, blogs, and aggregators then picked up this version of the story and repeated it as their own, often with some other embellishment.
In only a few cases did a responsible journalist contact me to fact-check the story and determine what I had actually said, and what I actually knew.
I tried to correct one or two of the incorrect reports, but most occurred in places where there was no contact address for corrections, and they soon were spreading faster than I could possibly respond. I gave up.
Soon after the stories started circulating, I received email from Eugene Alvarado (he has given me permission to name him), who indicated that in early February he reported to Sony that there was widespread hacking of the network going on that was interfering with use of the network. He never got a response. So, at least one other person observed problems and reported them to Sony in advance of the breach in April. If the problem was significant, there may well have been others.
More recently, at least one "commentator" who "thinks" he is "clever" because he can put quotes around words like "security expert" to imply something meaningful about my expertise has posted a critique pointing out that some of Sony's servers were, in fact, up-to-date. However, at least one follow-up by someone else observes that other Sony servers (with interesting names such as "Login" and "Auth") were running software dated 2008. Thus, it may well be the case that some of the systems were current and some not. As we well know, it only takes one system out of rev or with a missing patch to serve as an entry point to a whole network.
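Version evidence of the sort cited in those follow-up postings often comes from nothing more than the banner a web server volunteers in its HTTP responses. Here is a minimal sketch of reading such a banner -- purely illustrative, not how any of the Sony reports were produced, and keep in mind that banners can be suppressed or faked, so a date-stamped version string is suggestive rather than proof:

```python
# Minimal sketch: read the Server banner a web server advertises.
# Banners can be suppressed or faked, so treat results as a hint only.
from urllib.request import urlopen


def server_banner(url):
    """Return the value of the HTTP Server header, if any."""
    with urlopen(url) as resp:
        return resp.headers.get("Server", "(not disclosed)")
```

The same one-liner is useful defensively: administrators can run it against their own hosts to confirm that detailed version information isn't being advertised unnecessarily.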
To this day, I have never heard from nor spoken with anyone at Sony. I have never bothered to probe or investigate their systems because, frankly, I don't care. Those issues are for others to determine and settle. What I saw as the bigger issues at the hearing were having standard breach notifications and the 24 USACM privacy principles that were in my testimony. There are hundreds of other breaches occurring every year in the U.S., resulting in fraud, identity theft, and other crimes. Those are smaller than this incident with the PSN, but the victims are no less damaged. The FTC and law enforcement need more resources to help fight these problems, and we could definitely use some appropriate Federal legislation on minimum privacy protections and breach notifications. Read the four written testimonies from the hearing to get a sense of what is involved.
As to the spurious story, I tried to be clear in my testimony (written and oral) that I was simply repeating what I had read in some online newsgroups. I am really quite appalled at the number of places that have twisted that into a claim that Sony was somehow, definitely, running substandard software or systems. It is possible they were, but it is also possible they were running very well-maintained systems that fell prey to a clever attacker. That has happened to other high profile victims.
I certainly bear the good folks at Sony no ill will, and I hope they resolve the situation with the Playstation Network soon.
In the meantime, perhaps this can serve as an object lesson about dealing with the media and bloggers — some of them want a sensational story, whether the facts support it or not, and you had better not get in the way!
A recent article contains information indicating there was obvious evidence in Sony's logs of scanning activity starting March 3rd that should have been noticed.
Another recent article provides more information about the scanning activity preceding the breach, and suggests that it occurred from more than one source.
Here is a very nice timeline and summary of Sony security incidents that seem to keep on coming.
Recently, Amazon's cloud service failed for several customers, and has not come back fully for well over 24 hours. As of the time I write this, Amazon has not commented as to what caused the problem, why it took so long to fix, or how many customers it affected.
It seems a client of Amazon was not able to contact support, and posted in a support forum under the heading "Life of our patients is at stake - I am desperately asking you to contact." The body of the message stated: "We are a monitoring company and are monitoring hundreds of cardiac patients at home. We were unable to see their ECG signals."
What ensued was a back-and-forth with others incredulous that such a service would not have a defined disaster plan and alternate servers defined, with the original poster trying to defend his/her position. At the end, as the Amazon service slowly came back, the original poster seemed to back off from the original claim, which implies either an attempt to evade further scolding (and investigation), or that the original posting was a huge exaggeration to get attention. Either way, the prospect of a mission critical system depending on the service was certainly disconcerting.
Personnel from Amazon apparently never contacted the original poster, despite that company having a Premium service contract.
25 or so years ago, Brian Reid defined a distributed system as "...one where I can't get my work done because a computer I never heard of is down." (Since then I've seen this attributed to Leslie Lamport, but at the time heard it attributed to Reid.) It appears that "The Cloud" is simply today's buzzword for a distributed system. There have been some changes to hardware and software, but the general idea is the same — with many of the limitations and cautions attendant thereto, plus some new ones unique to it. Those who extol its benefits (viz., cost) without understanding the many risks involved (security, privacy, continuity, legal, etc.) may find themselves someday making similar postings to support fora — as well as "position wanted" sites.
The full thread is available here.
I have not been blogging here for a while because of some health and workload issues. I hope to resume regular posts before too much longer.
Recently, I was interviewed about the current state of security. I think the interview came across fairly well, and captured a good cross-section of my current thinking on this topic. So, I'm posting a link to that interview here with some encouragement for you to go read it as a substitute for me writing a blog post:
Also, let me note that our annual CERIAS Symposium will be held April 5th & 6th here at Purdue. You can register and find more information via our web site.
But that isn't all!
Plus, all of the above are available via RSS feeds. We also have a Twitter feed: @cerias. Not all of our information goes out on the net, because some of it is restricted to our partner organizations, but eventually the majority of it makes it out to one of the above outlets.
So, although I haven't been blogging recently, there has still been a steady stream of activity from the 150+ people who make up the CERIAS "family."
Some of you may notice that Purdue is listed among this year's (2010) group of educational institutions receiving designation as one of the CAEs in that program. Specifically, we have received designation as a CAE-R (Center of Academic Excellence in Research).
"What changed?" you may ask, and "Why did you submit?"
The simple answers are "Not that much," and "Because it was the least-effort solution to a problem." Slightly more elaborate answers follow. (It would help if you read the previous post on this topic to put what follows in context.)
Basically, the first three reasons I listed in the previous post still hold:
What has changed is the level of effort to apply for or renew at least the CAE-R stamp. The designation is now good for 5 academic years, and that is progress. Also, the requirements for the CAE-R designation were easily satisfied by a few people in a matter of several hours mining existing literature and research reports. Both of those were huge pluses for us in submitting the application and reducing the overhead to a more acceptable level given the return on investment.
The real value in this, and the reason we entered into the process, is that a few funding opportunities have indicated that applicants' institutions must be certified as a CAE member, or else the applicant must document a long list of items to show "equivalence." As our faculty and staff compete for some of these grants, the cost-benefit tradeoff suggested having a small group go through the process once, for the CAE-R. Of course, this raises the question of why the funding agencies suggest that XX Community College is automatically qualified to submit a grant, while a major university that is not CAE certified (MIT is an example) has to prove that it is qualified!
So, for us, it came down to a matter of deciding whether to stay out of the program as a matter of principle, or submit an application to make life a little simpler for all of our faculty and staff when submitting proposals. In the end, several of our faculty & staff decided to do it over an afternoon because they wanted to make their own proposals simpler to produce. And, our attempt to galvanize some movement away from the CAE program produced huge waves of ...apathy... by other schools; they appear to have no qualms about standing in line for government cheese. Thus, with somewhat mixed feelings by some of us, we got our own block of curd, with an expiration date of 2015.
Let me make very clear -- we are very supportive of any faculty willing to put in the time to develop a program, and working to educate students to enter this field. We are also very glad that there are people in government who are committed to supporting that academic effort. We are in no way trying to denigrate any institution or individual involved in the CAE program. But the concept of giving a gold star to make everyone feel good about doing what should be the minimum isn't how we should be teaching, or about how we should be promoting good cybersecurity education.
(And I should also add that not every faculty member here holds the opinions expressed above.)
I have been friends with Linda McCarthy for many years. As a security strategist she has occupied a number of roles -- running research groups, managing corporate security, writing professional books, serving as a senior consultant, conducting professional training...and more. That she isn't widely known is more a function of her not seeking publicity by keeping a blog or publishing derivative hacks to software than it is anything else; there are many in the field who are highly competent and who practice out of the spotlight most of the time.
One of Linda's passions over the last few years has been reaching out to kids -- especially teens -- to make them aware of how to be safe online. Her most recent effort is an update to her book for the youngest computer users. The book is now published under a Creative Commons license, whose terms allow free personal use. That's a great deal for a valuable resource!
I'm enclosing the recent press release on the book to provide all the information on how to get the book (or selected chapters).
If you're an experienced computer user, this will all seem fairly basic. But that's the point -- the basics require special care to present to new users, and in terms they understand. (And yes, this is targeted mostly to residents of the U.S.A. and maybe Canada, but the material should be useful for everyone, including parents.)
Industry-Leading Internet Security Book for Kids, Teens, Adults Available Now as Free Download
Own Your Space® teams with Teens, Experts, Corporate Sponsors for Kids' Online Safety
SAN FRANCISCO, June 17 -- As unstructured summertime looms, kids and teens across the nation are likely to be spending more time on the Internet and texting.
Now, a free download is available to help them keep themselves safer both online and while using a cell phone.
Own Your Space®, the industry-leading Internet security book for youth, parents, and adults, was first written by Linda McCarthy, a 20-year network and Internet-security expert.
This all-new free edition -- by McCarthy, security pros, and dedicated teenagers -- teaches youths and even their parents how to keep themselves "and their stuff" safer online.
A collaboration between network-security experts, teenagers, and artists, the flexible licensing of Creative Commons, and industry-leading corporate sponsors has made it possible for everyone on the Internet to access Own Your Space for free via myspace.com/ownyourspace, facebook.com/ownyourspace.net, and www.ownyourspace.net.
"With the rise of high-technology communications within the teen population, this is the obvious solution to an increasingly ubiquitous problem: how to deliver solid, easy-to-understand Internet security information into their hands? By putting it on the Internet and their hard drives, for free," said Linda McCarthy, former Senior Director of Internet Safety at Symantec.
Besides the contributors' own industry experience, Own Your Space also boasts the "street cred" important to the book's target audience; this new edition has been overseen by a cadre of teens who range in age from 13 to 17.
"In this age of unsafe-Internet and risky-texting practices that have led to the deaths and the jailing of minors, I'm thankful for everyone who works toward and sponsors our advocacy to keep more youth safe while online and on cell phones," McCarthy said.
Everyone interested in downloading Own Your Space® for free can visit myspace.com/ownyourspace, facebook.com/ownyourspace.net, and www.ownyourspace.net. Corporations who would like to increase the availability of the book and promote child safety online through their hardware and Web properties can contact Linda McCarthy firstname.lastname@example.org.
McCarthy is releasing the book in June to celebrate Internet Safety Month.
I am posting the following at the request of someone associated with this effort at NITRD:
On May 19 the White House announced a new effort to enlist public involvement in defining new areas to "change the game" for cybersecurity. Three areas for research were proposed:
- Moving Target – Systems that move in multiple dimensions to disadvantage the attacker and increase resiliency.
- Tailored Trustworthy Spaces – Security tailored to the needs of a particular transaction rather than the other way around.
- Cyber Economic Incentives – A landscape of incentives that reward good cybersecurity and ensure crime doesn’t pay.
For the next few weeks (until June 18), the public is being invited to make comments. As readers of this blog tend to be well-informed about security issues and research needs, I'd like to encourage you to review the details of the research areas and add your thoughts to the discussion at http://cybersecurity.nitrd.gov. As this effort will impact the Federal funding of research for FY2012 and beyond, adding your thoughts is not only beneficial to the government, but also beneficial to those of us in the research community to ensure that research topics are both useful and feasible.
As I've noted before I believe that referring to this as "game change" has the potential to create the wrong attitudes towards the problems. However, at least this isn't an attempt to solve everything in 60-90 days!
The 12th anniversary of CERIAS is looming (in May). As part of the display materials for our fast-approaching annual CERIAS Symposium (register now!), I wanted to get a sense of the impact of our educational activities in addition to our research. What I found surprised me -- and may surprise many others!
Back in 1997, a year before the formation of CERIAS, I presented testimony before a U.S. House of Representatives hearing on "Secure Communications." For that presentation, I surveyed peers around the country to determine something about the capacity of U.S. higher education in the field of information security and privacy (this was before the term "cyber" was popularized). I discovered that, at the time, there were only four defined programs in the country. We estimated that there were fewer than 20 academic faculty in the US at that time who viewed information security other than cryptography as their primary area of emphasis. (The reason we excluded cryptography was that there were many people working in abstract mathematics that could be applied to cryptography but who knew extremely little about information security as a field, and certainly were not teaching it.)
The best numbers I could come up with from surveying all those people were that, as of 1997, U.S. higher education was graduating only about three new Ph.D. students a year in information security. Thus, there were also very few faculty producing new well-educated experts at any level, and too small a population to easily grow new programs. I noted in my remarks that the output was too low by at least two orders of magnitude for national needs (and was at least 3-5 orders too low for international needs).
As I have noted before, my testimony helped influence the creation of (among other things) the NSA's CAE program and the Scholarship for Service program. Both provided some indirect support for increasing the number of Ph.D. graduates and courses at all postsecondary levels. The SfS has been a qualified success, although the CAE program not so much.
When CERIAS was formed, one element of our strategic plan was to focus on helping other institutions build up their capacity to offer infosec courses at every level, as a matter of strategic leadership. We decided to do this through five concurrent approaches:
Our goal was not only to produce new expertise, but to retrain personnel with strong backgrounds in computing and computing education. Transformation was the only way we could see that a big impact could be made quickly.
We have had considerable success with all five of these initiatives. Currently, there are several dozen classes in CERIAS focus areas across Purdue. In addition to the more traditional graduate degrees, our Interdisciplinary graduate degree program is small but competitive and has led to new courses. Overall, on the Ph.D. front, we anticipate another 15 Ph.D. grads this May, bringing the total CERIAS output of Ph.D.s over 12 years to 135. To the best of our ability to estimate (using some figures from NSF and elsewhere), that was about 25% of all U.S. Ph.D.s in the first decade that CERIAS was in existence, and we are currently graduating about 20% of U.S. output. Many of those graduates have taught or still teach at colleges and universities, even if part-time. We have also graduated many hundreds of MS and undergrad students with some deep coursework and research experience in information security and privacy issues.
We have hosted several score post-docs and visiting faculty over the years, and always welcome more --- our only limitation right now is available funding. For several years, we had an intensive summer program for faculty from 2- and 4-year schools, many of which serve minority and disadvantaged populations. Graduates of that program went on to create many new courses at their home institutions. We had to discontinue this program after a few years because of, again, lack of funding.
Our academic affiliates program ran for five years, and we believe it was a great success. Several schools with only one or two faculty working in the area were able to leverage the partnership to get grants and educational resources, and are now notable for their own intrinsic capabilities. We discontinued the affiliates program several years ago as we realized all but one of those partners had "graduated."

So, how can we measure the impact of this aspect of our strategic plan? Perhaps by simply coming up with some numbers....
We compiled a list of anyone who had been through CERIAS (and a few years of COAST, prior) who:
We gathered from them (as many as we could reach) the names of any higher education institution where they taught courses related to security, privacy or cyber crime. We also folded in the names of our academic affiliates at which such courses were (or still are) offered. The resultant list has over 100 entries! Even if we make a somewhat moderate estimate of the number of people who took these classes, we are well into the tens of thousands of students impacted, in some way, and possibly above 100,000, worldwide. That doesn't include the indirect effect, because many of those students have gone on (or will) to teach in higher education -- some of our Ph.D. grads have already turned out Ph.D. grads who now have their own Ph.D. students!
Seeing the scope of that impact is gratifying. And knowing that we will do more in the years ahead is great motivation, too.
Of course, it is also a little frustrating, because we could have done more, and more needs to be done. However, the approaches we have used (and are interested in trying next) never fit into any agency BAA. Thus, we have (almost) never been able to get grant support for our educational efforts. And, in many cases, the effort, overhead and delays in the application processes aren't worth the funding that is available. (The same is true of many of our research and outreach activities, but that is a topic for another time.)
We've been able to get this far because of the generosity of the companies and agencies that have been CERIAS general supporters over the years -- thank you! Our current supporters are listed on the CERIAS WWW site (hint: we're open to adding more!). We've also had a great deal of support within Purdue University from faculty, staff, and the administration. It has been a group effort, but one that has really made a positive difference in the world...and provides us motivation to continue to greater heights.
See you at the CERIAS Symposium!
Here is the list of the 108 educational institutions [last updated 3/21, 1600 EDT]:
Yes, I have been quiet (here) over the last few months, and have a number of things to comment on. This hiatus is partly because of schedule, partly because I had my laptop stolen, and partly health reasons. However, I'm going to try to start back into adding some items here that might be of interest.
To start, here is one item that I found while cleaning out some old disks: a briefing I gave at the NSA Research division in 1994. I then gave it, with minor updates, to the DOD CIO Council (or whatever their name was/is -- the CNSS group?), the Federal Infosec Research Council, and the Critical Infrastructure Commission in 1998. In it, I spoke to what I saw as the biggest challenges in protecting government systems, and what were the major research challenges of the time.
I no longer have software that can read the 1994 version of the talk, but the 1998 version was successfully imported into Powerpoint. I cleaned up the fonts and gave it a different background (the old version was fugly), and that prettier version is available for download. (Interesting that back then it was "state of the art.")
I won't editorialize on the content slide by slide, other than to note that I could give this same talk today and it would still be current. You will note that many of the research agenda items have been echoed in other reports over the succeeding years. I won't claim credit for that, but there may have been some influences from my work.
Nearly 16 years have passed by, largely wasted, because the attitude within government is still largely one of "with enough funding we can successfully patch the problems." But as I've quoted in other places, insanity is doing the same thing over and over again and expecting different results. So long as we believe that simple incremental changes to the existing infrastructure, and simply adding more funding for individual projects, is going to solve the problems then the problems will not get addressed -- they will get worse. It is insane to think that pouring ever more funding into attempts to "fix" current systems is going to succeed. Some of it may help, and much of it may produce some good research, but overall it will not make our infrastructure as safe as it should be.
Yesterday, Admiral (ret) Mike McConnell, the former Director of National Intelligence in the US, said in a Senate committee hearing that if there were a cyberwar today, the US would lose. That may not be quite the correct way of putting it, but we certainly would not come out of it unharmed and able to claim victory. What's more, any significant attack on the cyberinfrastructure of the US would have global repercussions because of the effects on the world's economy, communications, trade, and technology that are connected by the cyber infrastructure in the US.
As I have noted elsewhere, we need to do things differently. I have prepared and circulated a white paper among a few people in DC about one approach to changing the way we fund some of the research and education in the US in cybersecurity. I have had some of them tell me it is too radical, or too different, or doesn't fit in current funding programs. Exactly! And that is why I think we should try those things -- because doing more of the same in the current funding programs simply is not working.
But 15 years from now, I expect to run across these slides and my white paper, and sadly reflect on over three decades where we did not step up to really deal with the challenges. Of course, by then, there may be no working computers on which to read these!
I have a set of keywords registered with Google Alerts that result in a notification whenever they show up in a new posting. This helps me keep track of some particular topics of interest.
One of them popped up recently with a link to a review and some comments about a book I co-authored (Practical Unix & Internet Security, 3rd Edition). The latest revision is over 6 years old, but still seems to be popular with many security professionals; some of the specific material is out of date, but much of the general material is still applicable and is likely to be applicable for many years yet to come. At the time we wrote the first edition of the book there were only one or two books on computer security, so we included more material to make this a useful text and reference.
In general, I don't respond to reviews of my work unless there is an error of fact, and not always even then. If people like the book, great. If they don't, well, they're entitled to their opinions -- no matter how ignorant and ill-informed they may be.
This particular posting included reviews from Amazon that must have been posted about the 2nd edition of the book, nearly a decade old, although their dates as listed on this site make it look like they are recent. I don't recall seeing all of the reviews before this.
One of the responses in this case was somewhat critical of me rather than the book: the text by James Rothschadl. I'm not bothered by his criticism of my knowledge of security issues. Generally, hackers who specialize in the latest attacks dismiss anyone not versed in their tools as ignorant, so I have heard this kind of criticism before. It is still the case that the "elite" hackers who specialize in the latest penetration tools think that they are the most informed about all things security. Sadly, some decision-makers believe this too, much to their later regret, usually because they depend on penetration analysis as their primary security mechanism.
What triggered this blog posting was when I read the comments that included the repetition of erroneous information originally in the book Underground by Suelette Dreyfus. In that book, Ms. Dreyfus recounted the exploits of various hackers and miscreants -- according to them. One such claim, made by a couple of hackers, was that they had broken into my account circa 1990. I do not think Ms. Dreyfus sought independent verification of this, because the story is not completely correct. Despite this, some people have gleefully pointed this out as "Spaf got hacked."
There are two problems with this tale. First, the computer account they broke into was on the CS department machines at Purdue. It was not a machine I administered, nor one for which I had administrator rights -- it was a shared faculty machine. Thus, the perps succeeded in getting into a machine run by university staff that happened to have my account name on it, but which I did not maintain. That particular instance came about because of a machine crash: the staff restored the system from an older backup tape. A security patch had been applied between the backup and the crash, and the staff didn't realize that the patch needed to be reapplied after the restore.
But that isn't the main problem with this story: rather, the account they broke into wasn't my real account! My real account was on another machine that they didn't find. Instead, the account they penetrated was a public "decoy" account that was instrumented to detect such behavior, and that contained "bait" files. For instance, the perps downloaded a copy of what they thought was the Internet Worm source code. It was actually a copy of the code with key parts missing, and some key variables and algorithms changed such that it would partially compile but not run correctly. No big deal.
Actually, I got log information on the whole event. It was duly provided to law enforcement authorities, and I seem to recall that it helped lead to the arrest of one of them (but I don't recall the details about whether there was a prosecution -- it was 20 years ago, after all).
At least 3 penetrations of the decoy account in the early 1990s provided information to law enforcement agencies, and also inspired my design of Tripwire. I ran decoys for several years (and may be doing so to this day). I always had a separate, locked-down account for personal use, and even now I keep certain sensitive files encrypted on removable media that is mounted only when the underlying host is offline. I understand the use of defense-in-depth, and the use of different levels of protection for different kinds of information. I have great confidence in the skills of our current system admins. Still, I administer a second set of controls on some systems. But I also realize that those defenses may not be enough against really determined, well-resourced attacks. So, if someone wants to spend the time and effort to get in, fine, but they won't find much of interest -- and they may be providing data for my own research in the process!
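The core idea behind Tripwire, detecting unauthorized changes by comparing files against a baseline of cryptographic checksums, can be sketched roughly as follows. This is a minimal illustration only, not Tripwire's actual implementation; the function names are mine, and a real tool would also record ownership, permissions, and timestamps, and would protect the baseline itself from tampering.

```python
import hashlib
from pathlib import Path

def hash_file(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(root: Path) -> dict:
    """Record a digest for every regular file under root."""
    return {str(p.relative_to(root)): hash_file(p)
            for p in sorted(root.rglob("*")) if p.is_file()}

def check_baseline(root: Path, baseline: dict) -> list:
    """Report files that changed, appeared, or vanished since the baseline."""
    current = build_baseline(root)
    changed = [name for name, digest in current.items()
               if baseline.get(name) != digest]
    missing = [name for name in baseline if name not in current]
    return sorted(changed + missing)
```

Run once to build the baseline (ideally stored offline), then rerun periodically: any file the intruder modifies, adds, or deletes shows up in the report.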
So, here we are, in November already. We've finished up with National Cyber Security Awareness Month — feel safer? I was talking with someone who observed that he remembered "National Computer Security Day" (started back in the late 1980s) that then became "National Computer Security Week" for a few years. Well, the problems didn't go away when everyone started to call it "cyber," so we switched to a whole month but only of "awareness." This is also the "Cyber Leap Ahead Year." At the same level of progress, we'll soon have "The Decade of Living Cyber Securely." The Hundred Years' War comes to mind for some reason, but I don't think our economic system will last that long with losses mounting as they are. The Singularity may not be when computers become more powerful than the human mind, but will be the point at which all intellectual property, national security information, and financial data has been stolen and is no longer under the control of its rightful owners.
Overly gloomy? Perhaps. But consider that today is also the 21st anniversary of the Morris Internet Worm. Back then, it was a big deal because a few thousand computers were affected. Meanwhile, today's news has a story about the Conficker worm passing the 7 million host level, and growing. Back in 1988 there were about 100 known computer viruses. Today, most vendors have given up trying to measure malware as the numbers are in the millions. And now we are seeing instances of fraud based on fake anti-malware programs being marketed that actually infect the hosts on which they are installed! The sophistication and number of these things are increasing non-linearly as people continue to try to defend fundamentally unsecurable systems.
And as far as awareness goes, a few weeks ago I was talking with some grad students (not from Purdue). Someone mentioned the Worm incident; several of the students had never heard of it. I'm not suggesting that this should be required study, but it is indicative of something I think is happening: the overall awareness of security issues and history seems to be declining among the population studying computing. I did a quick poll, and many of the same students only vaguely recalled ever hearing about anything such as the Orange Book or Common Criteria, about covert channels, about reference monitors, or about a half dozen other things I mentioned. Apparently, anything older than about 5 years doesn't seem to register. I also asked them to name 5 operating systems (preferably ones they had used), and once they got to 4, most were stumped (Windows, Linux, MacOS and a couple said "Multics" because I had asked about it earlier; one young man smugly added "whatever it is running on my cellphone," which turned out to be a Windows variant). No wonder everyone insists on using the same OS, the same browser, and the same 3 programming languages — they have never been exposed to anything else!
About the same time, I was having a conversation with a senior cyber security engineer of a major security defense contractor (no, I won't say which one). The engineer was talking about a problem that had been posed in a recent RFP. I happened to mention that it sounded like something that might be best solved with a capability architecture. I got a blank look in return. Somewhat surprised, I said "You know, capabilities and rings — as in Multics and System/38." The reaction to that amazed me: "Those sound kinda familiar. Are those versions of SE Linux?"
Sigh. So much for awareness, even among the professionals who are supposed to be working in security. The problems are getting bigger faster than we have been addressing them, and too many of the next generation of computing professionals don't even know the basic fundamentals or history of information security. Unfortunately, the focus of government and industry seems to continue to be on trying to "fix" the existing platforms rather than solve the actual problems. How do we get "awareness" into that mix?
There are times when I look back over my professional career and compare it to trying to patch holes in a sinking ship while the passengers are cheerfully boring new holes in the bottom to drop in chum for the circling sharks. The biggest difference is that if I were on the ship, at least I might get a little more sun and fresh air.