I am posting the following at the request of someone associated with this effort at NITRD:
On May 19 the White House announced a new effort to enlist public involvement in defining new areas to "change the game" for cybersecurity. Three areas for research were proposed:
- Moving Target – Systems that move in multiple dimensions to disadvantage the attacker and increase resiliency.
- Tailored Trustworthy Spaces – Security tailored to the needs of a particular transaction rather than the other way around.
- Cyber Economic Incentives – A landscape of incentives that reward good cybersecurity and ensure crime doesn’t pay.
For the next few weeks (until June 18), the public is invited to comment. As readers of this blog tend to be well informed about security issues and research needs, I'd like to encourage you to review the details of the research areas and add your thoughts to the discussion at http://cybersecurity.nitrd.gov. As this effort will shape Federal funding of research for FY2012 and beyond, contributing is beneficial not only to the government but also to those of us in the research community, helping ensure that research topics are both useful and feasible.
As I've noted before I believe that referring to this as "game change" has the potential to create the wrong attitudes towards the problems. However, at least this isn't an attempt to solve everything in 60-90 days!
The 12th anniversary of CERIAS is looming (in May). As part of the display materials for our fast-approaching annual CERIAS Symposium (register now!), I wanted to get a sense of the impact of our educational activities in addition to our research. What I found surprised me -- and may surprise many others!
Back in 1997, a year before the formation of CERIAS, I presented testimony before a U.S. House of Representatives hearing on "Secure Communications." For that presentation, I surveyed peers around the country to determine something about the capacity of U.S. higher education in the field of information security and privacy (this was before the term "cyber" was popularized). I discovered that, at the time, there were only four defined programs in the country. We estimated that there were fewer than 20 academic faculty in the US at that time who viewed information security other than cryptography as their primary area of emphasis. (The reason we excluded cryptography was because there were many people who were working in abstract mathematics that could be applied to cryptography but who knew extremely little about information security as a field, and certainly were not teaching it).
The best estimate I could come up with from surveying all those people was that, as of 1997, U.S. higher education was graduating only about three new Ph.D. students a year in information security. Thus, there were also very few faculty producing new well-educated experts at any level, and too small a population to easily grow new programs. I noted in my remarks that this output was too low by at least two orders of magnitude for national needs (and at least 3-5 orders too low for international needs).
As I have noted before, my testimony helped influence the creation of (among other things) the NSA's CAE program and the Scholarship for Service program. Both provided some indirect support for increasing the number of Ph.D. graduates and courses at all postsecondary levels. The SfS program has been a qualified success; the CAE program, not so much.
When CERIAS was formed, one element of our strategic plan was to focus on helping other institutions build up their capacity to offer infosec courses at every level, as a matter of strategic leadership. We decided to do this through five concurrent approaches:
Our goal was not only to produce new expertise, but to retrain personnel with strong backgrounds in computing and computing education. Transformation was the only way we could see that a big impact could be made quickly.
We have had considerable success at all five of these initiatives. Currently, there are several dozen classes in CERIAS focus areas across Purdue. In addition to the more traditional graduate degrees, our interdisciplinary graduate degree program is small but competitive and has led to new courses. Overall, on the Ph.D. front, we anticipate another 15 Ph.D. grads this May, bringing the total CERIAS output of Ph.D.s over 12 years to 135. To the best of our ability to estimate (using some figures from NSF and elsewhere), that was about 25% of all U.S. Ph.D.s in the field during the first decade that CERIAS was in existence, and we are currently graduating about 20% of U.S. output. Many of those graduates have taught or still teach at colleges and universities, even if part-time. We have also graduated many hundreds of MS and undergrad students with some deep coursework and research experience in information security and privacy issues.
We have hosted several score post-docs and visiting faculty over the years, and always welcome more -- our only limitation right now is available funding. For several years, we had an intensive summer program for faculty from 2- and 4-year schools, many of which serve minority and disadvantaged populations. Graduates of that program went on to create many new courses at their home institutions. We had to discontinue this program after a few years because of, again, a lack of funding.
Our academic affiliates program ran for five years, and we believe it was a great success. Several schools with only one or two faculty working in the area were able to leverage the partnership to get grants and educational resources, and are now notable for their own intrinsic capabilities. We discontinued the affiliates program several years ago as we realized all but one of those partners had "graduated."

So, how can we measure the impact of this aspect of our strategic plan? Perhaps by simply coming up with some numbers....
We compiled a list of anyone who had been through CERIAS (and a few years of COAST, prior) who:
We gathered from them (as many as we could reach) the names of any higher education institution where they taught courses related to security, privacy or cyber crime. We also folded in the names of our academic affiliates at which such courses were (or still are) offered. The resultant list has over 100 entries! Even if we make a somewhat moderate estimate of the number of people who took these classes, we are well into the tens of thousands of students impacted in some way, and possibly above 100,000, worldwide. That doesn't include the indirect effect, because many of those students have gone on (or will go on) to teach in higher education -- some of our Ph.D. grads have already turned out Ph.D. grads who now have their own Ph.D. students!
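To see how a list of 100+ institutions turns into "tens of thousands of students," here is a back-of-the-envelope version of the arithmetic. Every figure below is an illustrative assumption, not actual CERIAS data:

```python
# All figures are hypothetical, conservative assumptions for illustration.
institutions = 100        # entries on the compiled list
courses_per_year = 1.5    # security-related courses taught per school per year
students_per_course = 25  # average enrollment per course
years_offered = 8         # average number of years the courses have run

students_reached = int(institutions * courses_per_year *
                       students_per_course * years_offered)
print(students_reached)   # 30000 -- "well into the tens of thousands"
```

Nudging any one of those assumed figures upward easily pushes the total past 100,000 worldwide.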
Seeing the scope of that impact is gratifying. And knowing that we will do more in the years ahead is great motivation, too.
Of course, it is also a little frustrating, because we could have done more, and more needs to be done. However, the approaches we have used (and are interested in trying next) never fit into any agency BAA. Thus, we have (almost) never been able to get grant support for our educational efforts. And, in many cases, the effort, overhead and delays in the application processes aren't worth the funding that is available. (The same is true of many of our research and outreach activities, but that is a topic for another time.)
We've been able to get this far because of the generosity of the companies and agencies that have been CERIAS general supporters over the years -- thank you! Our current supporters are listed on the CERIAS WWW site (hint: we're open to adding more!). We're also had a great deal of support within Purdue University from faculty, staff and the administration. It has been a group effort, but one that has really made a positive difference in the world....and provides us motivation to continue to greater heights.
See you at the CERIAS Symposium!
Here is the list of the 108 educational institutions [last updated 3/21, 1600 EDT]:
Yes, I have been quiet (here) over the last few months, and have a number of things to comment on. This hiatus is partly because of my schedule, partly because I had my laptop stolen, and partly for health reasons. However, I'm going to try to start back into adding some items here that might be of interest.
To start, here is one item that I found while cleaning out some old disks: a briefing I gave at the NSA Research division in 1994. I then gave it, with minor updates, to the DOD CIO Council (or whatever their name was/is -- the CNSS group?), the Federal Infosec Research Council, and the Critical Infrastructure Commission in 1998. In it, I spoke to what I saw as the biggest challenges in protecting government systems, and the major research challenges of the time.
I have no software to read the 1994 version of the talk any more, but the 1998 version was successfully imported into Powerpoint. I cleaned up the fonts and gave it a different background (the old version was fugly) and that prettier version is available for download. (Interesting that back then it was "state of the art.")
I won't editorialize on the content slide by slide, other than to note that I could give this same talk today and it would still be current. You will note that many of the research agenda items have been echoed in other reports over the succeeding years. I won't claim credit for that, but there may have been some influences from my work.
Nearly 16 years have passed by, largely wasted, because the attitude within government is still largely one of "with enough funding we can successfully patch the problems." But as I've quoted in other places, insanity is doing the same thing over and over again and expecting different results. So long as we believe that simple incremental changes to the existing infrastructure, and simply adding more funding for individual projects, is going to solve the problems then the problems will not get addressed -- they will get worse. It is insane to think that pouring ever more funding into attempts to "fix" current systems is going to succeed. Some of it may help, and much of it may produce some good research, but overall it will not make our infrastructure as safe as it should be.
Yesterday, Admiral (ret) Mike McConnell, the former Director of National Intelligence in the US, said in a Senate committee hearing that if there were a cyberwar today, the US would lose. That may not be quite the correct way of putting it, but we certainly would not come out of it unharmed and able to claim victory. What's more, any significant attack on the cyberinfrastructure of the US would have global repercussions because of the effects on the world's economy, communications, trade, and technology that are connected by the cyber infrastructure in the US.
As I have noted elsewhere, we need to do things differently. I have prepared and circulated a white paper among a few people in DC about one approach to changing the way we fund some of the research and education in the US in cybersecurity. I have had some of them tell me it is too radical, or too different, or doesn't fit in current funding programs. Exactly! And that is why I think we should try those things -- because doing more of the same in the current funding programs simply is not working.
But 15 years from now, I expect to run across these slides and my white paper, and sadly reflect on over three decades where we did not step up to really deal with the challenges. Of course, by then, there may be no working computers on which to read these!
I have a set of keywords registered with Google Alerts that result in a notification whenever they show up in a new posting. This helps me keep track of some particular topics of interest.
One of them popped up recently with a link to a review and some comments about a book I co-authored (Practical Unix & Internet Security, 3rd Edition). The latest revision is over 6 years old, but still seems to be popular with many security professionals; some of the specific material is out of date, but much of the general material is still applicable and is likely to be applicable for many years yet to come. At the time we wrote the first edition of the book there were only one or two books on computer security, so we included more material to make this a useful text and reference.
In general, I don't respond to reviews of my work unless there is an error of fact, and not always even then. If people like the book, great. If they don't, well, they're entitled to their opinions -- no matter how ignorant and ill-informed they may be.
This particular posting included reviews from Amazon that must have been posted about the 2nd edition of the book, nearly a decade old, although their dates as listed on this site make it look like they are recent. I don't recall seeing all of the reviews before this.
One of the responses in this case was somewhat critical of me rather than the book: the text by James Rothschadl. I'm not bothered by his criticism of my knowledge of security issues. Generally, hackers who specialize in the latest attacks dismiss anyone not versed in their tools as ignorant, so I have heard this kind of criticism before. It is still the case that the "elite" hackers who specialize in the latest penetration tools think that they are the most informed about all things security. Sadly, some decision-makers believe this too, much to their later regret, usually because they depend on penetration analysis as their primary security mechanism.
What triggered this blog posting was when I read the comments that included the repetition of erroneous information originally in the book Underground by Suelette Dreyfus. In that book, Ms. Dreyfus recounted the exploits of various hackers and miscreants -- according to them. One such claim, made by a couple of hackers, was that they had broken into my account circa 1990. I do not think Ms. Dreyfus sought independent verification of this, because the story is not completely correct. Despite this, some people have gleefully pointed this out as "Spaf got hacked."
There are two problems with this tale. First, the computer account they broke into was on the CS department machines at Purdue. It was not a machine I administered (nor one for which I had administrator rights) -- it was a shared faculty machine. Thus, the perps succeeded in getting into a machine run by university staff that happened to have an account with my name but which I did not maintain. That particular instance came about because of a machine crash: the staff restored the system from an older backup tape. A security patch had been applied between the backup and the crash, and the staff didn't realize that the patch needed to be reapplied after the restore.
But that isn't the main problem with this story: rather, the account they broke into wasn't my real account! My real account was on another machine that they didn't find. Instead, the account they penetrated was a public "decoy" account that was instrumented to detect such behavior, and that contained "bait" files. For instance, the perps downloaded a copy of what they thought was the Internet Worm source code. It was actually a copy of the code with key parts missing, and some key variables and algorithms changed such that it would partially compile but not run correctly. No big deal.
Actually, I got log information on the whole event. It was duly provided to law enforcement authorities, and I seem to recall that it helped lead to the arrest of one of them (but I don't recall the details about whether there was a prosecution -- it was 20 years ago, after all).
At least 3 penetrations of the decoy account in the early 1990s provided information to law enforcement agencies, as well as inspired my design of Tripwire. I ran decoys for several years (and may be doing so to this day). I always had a separate, locked-down account for personal use, and even now keep certain sensitive files encrypted on removable media that is only mounted when the underlying host is offline. I understand the use of defense in depth, and the use of different levels of protection for different kinds of information. I have great confidence in the skills of our current system admins. Still, I administer a second set of controls on some systems. But I also realize that those defenses may not be enough against really determined, well-resourced attacks. So, if someone wants to spend the time and effort to get in, fine, but they won't find much of interest -- and they may be providing data for my own research in the process!
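Tripwire itself is a far more complete tool, but the core idea behind it -- record cryptographic digests of files, then compare later snapshots against that baseline to detect tampering -- can be sketched in a few lines. This is a toy illustration of the concept, not Tripwire's actual implementation:

```python
import hashlib
import os

def hash_file(path):
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(root):
    """Record a digest for every file under `root`."""
    baseline = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            baseline[path] = hash_file(path)
    return baseline

def check(root, baseline):
    """Compare the current state of `root` against a saved baseline.
    Returns (added, removed, changed) lists of file paths."""
    current = build_baseline(root)
    added = sorted(set(current) - set(baseline))
    removed = sorted(set(baseline) - set(current))
    changed = sorted(p for p in baseline
                     if p in current and baseline[p] != current[p])
    return added, removed, changed
```

A real deployment would also have to protect the baseline itself (e.g., keep it on read-only media), since an intruder who can rewrite the baseline defeats the check.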
So, here we are, in November already. We've finished up with National Cyber Security Awareness Month — feel safer? I was talking with someone who observed that he remembered "National Computer Security Day" (started back in the late 1990s) that then became "National Computer Security Week" for a few years. Well, the problems didn't go away when everyone started to call it "cyber," so we switched to a whole month but only of "awareness." This is also the "Cyber Leap Ahead Year." At the same level of progress, we'll soon have "The Decade of Living Cyber Securely." The Hundred Years' War comes to mind for some reason, but I don't think our economic system will last that long with losses mounting as they are. The Singularity may not be when computers become more powerful than the human mind, but will be the point at which all intellectual property, national security information, and financial data has been stolen and is no longer under the control of its rightful owners.
Overly gloomy? Perhaps. But consider that today is also the 21st anniversary of the Morris Internet Worm. Back then, it was a big deal because a few thousand computers were affected. Meanwhile, today's news has a story about the Conficker worm passing the 7 million host level, and growing. Back in 1988 there were about 100 known computer viruses. Today, most vendors have given up trying to measure malware as the numbers are in the millions. And now we are seeing instances of fraud based on fake anti-malware programs being marketed that actually infect the hosts on which they are installed! The sophistication and number of these things are increasing non-linearly as people continue to try to defend fundamentally unsecurable systems.
And as far as awareness goes, a few weeks ago I was talking with some grad students (not from Purdue). Someone mentioned the Worm incident; several of the students had never heard of it. I'm not suggesting that this should be required study, but it is indicative of something I think is happening: the overall awareness of security issues and history seems to be declining among the population studying computing. I did a quick poll, and many of the same students only vaguely recalled ever hearing about anything such as the Orange Book or Common Criteria, about covert channels, about reference monitors, or about a half dozen other things I mentioned. Apparently, anything older than about 5 years doesn't seem to register. I also asked them to name 5 operating systems (preferably ones they had used), and once they got to 4, most were stumped (Windows, Linux, MacOS and a couple said "Multics" because I had asked about it earlier; one young man smugly added "whatever it is running on my cellphone," which turned out to be a Windows variant). No wonder everyone insists on using the same OS, the same browser, and the same 3 programming languages — they have never been exposed to anything else!
About the same time, I was having a conversation with a senior cyber security engineer of a major security defense contractor (no, I won't say which one). The engineer was talking about a problem that had been posed in a recent RFP. I happened to mention that it sounded like something that might be best solved with a capability architecture. I got a blank look in return. Somewhat surprised, I said "You know, capabilities and rings — as in Multics and System/38." The reaction to that amazed me: "Those sound kinda familiar. Are those versions of SE Linux?"
Sigh. So much for awareness, even among the professionals who are supposed to be working in security. The problems are getting bigger faster than we have been addressing them, and too many of the next generation of computing professionals don't even know the basic fundamentals or history of information security. Unfortunately, the focus of government and industry seems to continue to be on trying to "fix" the existing platforms rather than solve the actual problems. How do we get "awareness" into that mix?
There are times when I look back over my professional career and compare it to trying to patch holes in a sinking ship while the passengers are cheerfully boring new holes in the bottom to drop in chum for the circling sharks. The biggest difference is that if I was on the ship, at least I might get a little more sun and fresh air.
October is "officially" National Cyber Security Awareness Month. Whoopee! As I write this, only about 27 more days before everyone slips back into their cyber stupor and ignores the issues for the other 11 months.
Yes, that is not the proper way to look at it. The proper way is to look at the lack of funding for long-term research, the lack of meaningful initiatives, the continuing lack of understanding that robust security requires actually committing resources, the lack of meaningful support for education, almost no efforts to support law enforcement, and all the elements of "Security Theater" (to use Bruce Schneier's very appropriate term) put forth as action, only to realize that not much is going to happen this month, either. After all, it is "Awareness Month" rather than "Action Month."
There was a big announcement at the end of last week where Secretary Napolitano of DHS announced that DHS had new authority to hire 1000 cybersecurity experts. Wow! That immediately went on my list of things to blog about, but before I could get to it, Bob Cringely wrote almost everything that I was going to write in his blog post The Cybersecurity Myth - Cringely on technology. (NB. Similar to Bob's correspondent, I have always disliked the term "cybersecurity" that was introduced about a dozen years ago, but it has been adopted by the hoi polloi akin to "hacker" and "virus.") I've testified before the Senate about the lack of significant education programs and the illusion of "excellence" promoted by DHS and NSA -- you can read those to get my bigger picture view of the issues on personnel in this realm. But, in summary, I think Mr. Cringely has it spot on.
Am I being too cynical? I don't really think so, although I am definitely seen by many as a professional curmudgeon in the field. This is the 6th annual Awareness Month and things are worse today than when this event was started. As one indicator, consider that the funding for meaningful education and research has hardly changed. NITRD (National Information Technology Research & Development) figures show that the fiscal 2009 allocation for Cyber Security and Information Assurance (their term) was about $321 million across all Federal agencies. Two-thirds of this amount is in budgets for Defense agencies, with the largest single amount to DARPA; the majority of these funds have gone to the "D" side of the equation (development) rather than fundamental research, and some portion has undoubtedly gone to support offensive technologies rather than building safer systems. This amount has perhaps doubled since 2001, although the level of crime and abuse has risen far more -- by at least two orders of magnitude. The funding being made available is a pittance and not enough to really address the problems.
Here's another indicator. A recent conversation with someone at McAfee revealed that new pieces of deployed malware are being indexed at a rate of about 10 per second -- and those are only the ones detected and being reported! Some of the newer attacks are incredibly sophisticated, defeating two-factor authentication and falsifying bank statements in real time. The criminals are even operating a vast network of fake merchant sites designed to corrupt visitors' machines and steal financial information. Some accounts place the annual losses in the US alone at over $100 billion per year from cyber crime activities -- well over 300 times everything being spent by the US government in R&D to stop it. (Hey, but what's 100 billion dollars, anyhow?) I have heard unpublished reports that some of the criminal gangs involved are spending tens of millions of dollars a year to write new and more effective attacks. Thus, by some estimates, the criminals are vastly outspending the US Government on R&D in this arena, and that doesn't count what other governments are spending to steal classified data and compromise infrastructure. They must be investing wisely, too: how many instances of arrests and takedowns can you recall hearing about recently?
Meanwhile, we are still awaiting the appointment of the National Cyber Cheerleader. For those keeping score, the President announced on May 29th that the position was critical and he would appoint someone to it right away. Given the delay, one wonders why the National Review was mandated to be completed in a rushed 60-day period. As I noted in that earlier posting, an appointment is unlikely to make much of a difference as the position won't have real authority. Even with an appointment, there is disagreement about where the lead for cyber should be, DHS or the military. Neither really seems to take into account that this is at least as much a law enforcement problem as it is one of building better defenses. The lack of agreement means that the tenure of any appointee is likely to be controversial and contentious at worst, and largely ineffectual at best.
I could go on, but it is all rather bleak, especially when viewed through the lens of my 20+ years experience in the field. The facts and trends have been well documented for most of that time, too, so it isn't as if this is a new development. There are some bright points, but unless the problem gets a lot more attention (and resources) than it is getting now, the future is not going to look any better.
So, here are my take-aways for National Cyber Security Awareness:
But hey, don't give up on October! It's also Vegetarian Awareness Month, National Liver Awareness Month, National Chiropractic Month, and Auto Battery Safety Month (among others). Undoubtedly there is something to celebrate without having to wait until Halloween. And that's my contribution for National Positive Attitude Month.
I've heard from many, many people who read my blog post about this. So far, everyone who attended and was not involved with the planning of the Summit has basically agreed with my comments.
Here is an interesting post by Russ Thomas that explores the NCLY in depth from a different point of view.
There has been considerable press coverage and discussion on the intertubes about the provision in S. 773 (see my earlier post) that would allow the President to shut down critical infrastructure networks in the event of a national emergency. The people worried about the black helicopters are sure this, coupled with attempts to pass health care, are a sure sign of the Apocalypse -- or the approach of the end of the world in 2012, whichever comes first. Far less attention has been paid to other troubling aspects of the bill, such as the requirement for professional certification of cyber security personnel.
According to some of the experts I have talked with, the President already has this general authority from other legislation; this provision simply makes it explicit. Furthermore, if we're in a declared national emergency, wouldn't a centralized, coordinated response make sense? If not centered at the White House, then where else?
The bill is still in revision, although a draft of an amended version has been circulated to some groups for comment. I have been told that it is unlikely to move forward until after health care reform has been resuscitated or pronounced dead, and after the annual Federal budget appropriations process is finished. So, there may be additional issues betwixt now and then.
I wrote something in my personal blog about my 9/11 memories. It isn't really related to cyber security or Purdue, but some of my comments might be interesting to some people.
In addition to my personal blog cited above, I also maintain a Tumblr blog with pointers to recent news items that relate to security, privacy and cyber law. It is available at <http://blog.spaf.us> (my part of the overall CERIAS blog (here) can be accessed at <http://cblog.spaf.us>). I generally post links there every day.
I spent several days this week in DC, visiting officials and agencies related to cyber security. I get the sense that there is little expectation of more funding or attention in the coming fiscal year. The administration has been undergoing a bruising battle over health care, there is yet to be debate on policy for Afghanistan, and there are background engagements in constant play on issues related to the deficit. Cyber is not likely to be viewed as critical because things seem to have been going "okay" so far, and addressing cyber will be costly and require political capital. So, unless there is some splashy disaster, we might not see much progress.
I am a big fan of the Monty Python troupe. Their silly takes on many topics helped point out the absurd and the pompous, and still do, though sometimes they were simply lunatic in their own right.
One of their sketches, about a group of sailors stuck in a lifeboat, came to mind as I was thinking about this post. The sketch starts (several times) with the line "Still no sign of land." It then proceeds to a discussion of how they are so desperate that they may have to resort to cannibalism.
So why did that come to mind?
We still do not have a national Cyber Cheerleader in the Executive Office of the President. On May 29th, the President announced that he would appoint one – that cyber security was a national priority.
Three months later – nada.
Admittedly, there are other things going on: health care reform, a worsening insurgency problem in Afghanistan, hesitancy in the economic recovery, and yet more things going on that require attention from the White House. Still, cyber continues to be a problem area with huge issues. See some of the recent news to see that there is no shortage of problems – identity theft, cyber war questions, critical infrastructure vulnerability, supply chain issues, and more.
Rumor has it that several people have been approached for the Cheerleader position, but all have turned it down. This isn't overly surprising – the position has been set up as basically one where blame can be placed when something goes wrong rather than as a position to support real change. There is no budget authority, seniority, or leverage over Federal agencies where the problems occur, so there is no surprise that it is not wanted. Anyone qualified for a high-level position in this area should recognize what I described 20 years ago in "Spaf's First Law":
If you have responsibility for security but have no authority to set rules or punish violators, your own role in the organization is to take the blame when something big goes wrong.
I wonder how many false starts it will take before it is noticed that there is something wrong with the position if good people don't want it? And will that be enough to result in a change in the way the position is structured?
Meanwhile, we are losing good people from what senior leadership exists. Melissa Hathaway has resigned from the temporary position at the NSC from which she led the 60-day study, and Mischel Kwon has stepped down from leadership of US-CERT. Both were huge assets to the government and the public, and we have all lost as a result of their departure.
The crew of the lifeboat is dwindling. Gee, what next? Well, funny you should mention that.
Last week, I attended the "Cyber Leap Year Summit," which I have variously described to people who have asked as "An interesting chance to network" to "Two clowns short of a circus." (NB. I was there, so it was not three clowns short.)
The implied premise of the Summit, that bringing together a group of disparate academics and practitioners can somehow lead to a breakthrough, is not a bad one in itself. However, when you bring together far too many of them under a facilitation protocol that most of them have never heard of, coupled with a forced schedule, it shouldn't be a surprise if the result is little more than some frustration. At least, that is what I heard from most of the participants I spoke with. It remains to be seen whether the reporters from the various sections are able to glean something useful from the ideas that were so briefly discussed. (Trying to winnow "the best" idea from 40 suggestions, given only 75 minutes and 40 type A personalities, is not a fun time.)
There was also the question of whether the "best" people had been brought together. In my session, there were people present who had no idea about basic security topics or history. Some of us mentioned well-known results or systems, and they went completely over the heads of those present. Sometimes they would point this out, and we lost time explaining. As the session progressed, the parties involved seemed simply to assume that if they hadn't heard of something, it couldn't be important, so they ignored the comments.
Here are three absurdities that seem particularly prominent to me about the whole event:
I raised the first two issues as the first comments in the public Q&A session on Day 1. Aneesh Chopra, the Federal Chief Technology Officer (CTO), and Susan Alexander, the Chief Technology Officer for Information and Identity Assurance at DoD, were on the panel to which I addressed the questions. I was basically told not to ask those kinds of questions, and to sit down, although the response was phrased somewhat less forcefully than that. Afterwards, no fewer than 22 people told me that they wanted to ask the same questions (I started counting after #5). Clearly, I was not alone in questioning the formulation of the meeting.
Do I seem discouraged? A bit. I had hoped that we would see a little more careful thought involved. There were many government observers present, and in private, one-on-one discussions with them, it was clear they were equally discouraged with what they were hearing, although they couldn't state that publicly.
However, this is yet another in a long line of meetings and reports with which I have been involved, where the good results are ignored, and the "captains of industry and government" have focused on the wrong things. But by holding continuing workshops like this one, at least it appears that the government is doing something. If nothing comes of it, they can blame the participants in some way for not coming up with good enough ideas rather than take responsibility for not asking the right questions or being willing to accept answers that are difficult to execute.
Too cynical? Perhaps. But I will continue to participate because this is NOT a "game," and the consequences of continuing to fail are not something we want to face — even with "...white wine sauce with shallots, mushrooms and garlic."
I have a Facebook account. I use it as a means to communicate little status updates with many, many friends and acquaintances while keeping up to date (a little) on their activities. I'm usually too pressed for time to correspond with everyone as I would otherwise prefer to do, and this tenuous connection is probably better than none at all.
Sometime early in the year, either I slipped up while running a script, or Facebook somehow slurped up my whole address book without authorization. This was something I most definitely did not want to happen, so even giving Facebook the benefit of the doubt and blaming it on operator (me) error, it says something about their poor interface that such a thing could happen to an experienced user. (Of course, in the worst case, their software did something invasive without my authorization.)
Whatever happened, Facebook immediately started spamming EVERYONE with an invitation "from me" inviting them to join Facebook. There are many people in my address book with whom I have some professional relationship but who would not be in any category I would remotely consider "friend." It was annoying to me, and annoying/perplexing to them, to have to deal with these emails. A few of them joined, but many others complained to me.
I thought the problem would resolve itself with time. In particular, I didn't want to send a note to everyone in my list saying it was a mistake and not to respond. Sadly, the Facebook system seems to periodically sweep through this list and reissue invitations. Thus, I have gotten a trickle of continuing complaints, and suspect that a number of other people are simply annoyed with me.
So, what to do if this was a responsible business? Why, look for a customer help email address, web form, or telephone number to contact them. Good luck. They have FAQs galore, but it is the web equivalent of voicemail-hell: one link leads to another and back to the FAQs again with no way to contact anything other than an auto-responder that tells me to consult the FAQ system.
On July 26, I responded to a complaint from one of the unintended victims. I cc'd a set of email addresses that I thought might possibly be monitored at Facebook, including "firstname.lastname@example.org." I got an automated response back to read an inappropriate and unhelpful section of the FAQ. I replied to the email that it was not helpful and did not address my complaint.
On July 29 I received a response that may have been from a person (it had a name attached) that again directed me to the FAQs. Again I responded that it was not addressing my complaint.
August 6th brought a new email from the same address that seemed to actually be responsive to my complaint. It indicated that there was a URL I could visit to see the addresses I had "invited" to join, and I could delete any I did not wish to be receiving repeated invitations. Apparently, this is unadvertised but available to all Facebook users (see http://www.facebook.com/invite_history.php).
I visited the site, and sure enough, there were all 2200+ addresses.
First problem: It is not possible to delete the entire list. One can only operate on 100 names at a time (one page). Ok, I can do this, although I find it very annoying when sites are programmed this way. But 22 times through the removal process is something I'm willing to do.
Second problem: Any attempt to delete addresses from the database results in an error message. The message claims they are working on the problem or to check that I'm actually connected to the Internet, but that's it. I've tried the page about every other day since August 6th, with various permutations of choices, and the error is still there. So much for "working on it."
I've also tried emailing the same Facebook address where I got the earlier response, with no answer in 2 weeks.
I thought about unsubscribing from Facebook as a way of clearing this out, but I am not convinced that the list -- and the automated invites -- would stop even if I inactivated my account.
I certainly won't be inviting anyone else to join Facebook, and I am now recommending that no one else does, either.
I was talking to several people at the Cyber Leap Year Summit about how we have decades of research in computing that too many current researchers fail to look at because it was never put on line. We have all noticed the disturbing trend that too many papers submitted for publication do not reference anything before about 2000 -- perhaps because so much of that early work has not been indexed in search engines?
I mentioned that I had seen papers a few years back where the authors had implied that they had invented virtualization, despite the idea going back decades; at least the Wikipedia entry seems to avoid that mistake.
Someone jokingly mentioned that at least a few things were new, such as cloud computing.
Not so fast.
Some Cloud Computing is really nothing more than SaaS on virtualized platforms. That isn't new.
However, one view of Cloud Computing is that it provides seamless processing and storage somewhere on the net: you don't have to know where your data is stored, the system makes use of multiple platforms for performance and storage, and you don't need to worry about individual machine failures because the rest of the system continues forward.
Interestingly, that was precisely the goal of the distributed OS project where I did my PhD dissertation. I wrote the first prototype distributed OS kernel for the system. The name of the project? CLOUDS. The year? 1986 was when I defended, but the name was coined in 1984. (Cf. a summary article written in 1991.)
My kernel had virtual memory, process creation/deletion, object stores, capabilities, and a built-in debugger (that one could invoke after a crash -- no blue screen, simply the console message of "Shut 'er down Scotty, she's suckin mud agin.") I demonstrated it creating objects and invoking methods (actions) on them across the network on other machines, among other things. Three later PhD dissertations relied on it, as did at least 2 MS theses.
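The location-transparent object invocation described above, the same property the "cloud" imagery evokes, can be sketched very loosely in modern terms. This is purely illustrative (the actual kernel was VAX assembler running on bare hardware); the class and method names here are invented for the example:

```python
# Illustrative sketch of location-transparent object invocation:
# a caller names only an object ID, and a registry routes the call
# to whichever node happens to hold the object.

class Node:
    """A machine in the 'cloud' holding some objects."""
    def __init__(self, name):
        self.name = name
        self.objects = {}          # object id -> object

class Cloud:
    """Registry that routes invocations to the node holding each object."""
    def __init__(self):
        self.location = {}         # object id -> Node

    def create(self, node, obj_id, obj):
        node.objects[obj_id] = obj
        self.location[obj_id] = node

    def invoke(self, obj_id, method, *args):
        node = self.location[obj_id]   # lookup hidden from the caller
        return getattr(node.objects[obj_id], method)(*args)

class Counter:
    def __init__(self):
        self.n = 0
    def add(self, k):
        self.n += k
        return self.n

cloud = Cloud()
vax_a, vax_b = Node("vax-a"), Node("vax-b")
cloud.create(vax_a, "ctr1", Counter())
print(cloud.invoke("ctr1", "add", 5))  # caller never names vax-a
```

The point of the sketch is the `invoke` call: the caller supplies an object ID and a method, never a machine name, so objects could migrate or be replicated without changing any client code.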
(Oh, and I wrote most of the code in VAX assembler language and it all ran on the bare hardware. I debugged it by stepping through memory, and found some hardware bugs in the process. I was a real programmer back then: I have programmed machine code on six architectures, and in over 25 other high-level languages. But I digress...)
My dissertation is not very good; I would not accept it from one of my students now, and do not recommend anyone read it. But circumstances were such that I didn't actually have an advisor for a big chunk of my research work, and the committee wanted to get me out. I never got a publication from the dissertation work, either. In retrospect, I'm not sure that was the best course of action, but I seem to have turned out more or less okay otherwise.
Bonus item: The first Ph.D. from the group, based on an earlier attempt at the kernel, was Jim Allchin. But don't blame the Clouds group for Windows!
Bonus item: Only about 4 people ever knew, but "Clouds" was an acronym. We liked the imagery because if you combined two clouds, you simply ended up with a cloud. Up close, you couldn't tell where the boundaries of a Cloud were. And if you took some away from a cloud, you still had a cloud. Great, huh? I'm going to reveal the acronym here: Coalescing Local Objects Under Distributed Supervision. We needed the acronym for the proposal to the funding agencies, but for obvious reasons, we never referred to it as anything other than Clouds. The acronym was coined by Bill Thibault.
Bonus item: The Clouds kernel was the third OS I had written, and the final one. The second one was also in assembly language and some custom microcode for the PR1ME 500 & 750 series computers (of which Georgia Tech had five). I taught a class on machine architecture and writing an OS at Georgia Tech while a grad student. I'd love to hear from anyone who took the class.
Bonus item: Although my research work quickly moved into other areas of computing, I stayed with OS long enough to help start and chair (with George Leach) the first WEBDMS (1989) and SEBDMS (1991, 1992) conferences. These later evolved into the OSDI conferences -- which I have never attended. It is unlikely that many people remember this connection.
A few people still remember me for that OS work. Others know me for the work I did in mutation testing, and yet others for the work in dynamic slicing and backtracking for debugging. That was all before I started work in security and forensics. I'm to blame for more than many people know -- and I'm not telling about the rest.
But next time someone tries to tell you about their latest "new" idea, you might check with some of us older, more seasoned computing folk first, and let us reminisce about the good old days.