The Center for Education and Research in Information Assurance and Security (CERIAS)

CERIAS Blog

Still no sign of land

I am a big fan of the Monty Python troupe. Their silly takes on many topics helped point out the absurd and the pompous, and still do, though sometimes they were simply lunatic in their own right.

One of their sketches, about a group of sailors stuck in a lifeboat, came to mind as I was thinking about this post. It starts (several times) with the line "Still no sign of land," and then proceeds to a discussion of how the sailors are so desperate that they may have to resort to cannibalism.

So why did that come to mind?

We still do not have a national Cyber Cheerleader in the Executive Office of the President. On May 29th, the President announced that he would appoint one – that cyber security was a national priority.

Three months later – nada.

Admittedly, there are other things going on: health care reform, a worsening insurgency in Afghanistan, hesitancy in the economic recovery, and still more demands on the White House's attention. Even so, cyber continues to be a problem area with huge issues. A glance at recent news shows there is no shortage of problems – identity theft, cyber war questions, critical infrastructure vulnerability, supply chain issues, and more.

Rumor has it that several people have been approached for the Cheerleader position, but all have turned it down. This isn't overly surprising – the position has been set up as one where blame can be placed when something goes wrong rather than as a position to support real change. It carries no budget authority, seniority, or leverage over the Federal agencies where the problems occur, so it is no wonder nobody wants it. Anyone qualified for a high-level position in this area should recognize what I described 20 years ago in "Spaf's First Law":

If you have responsibility for security but have no authority to set rules or punish violators, your own role in the organization is to take the blame when something big goes wrong.

I wonder how many false starts it will take before someone notices that something is wrong with a position good people don't want. And will that be enough to result in a change in the way the position is structured?

Meanwhile, we are losing good people from what senior leadership exists. Melissa Hathaway has resigned from the temporary position at the NSC from which she led the 60-day study, and Mischel Kwon has stepped down from leadership of US-CERT. Both were huge assets to the government and the public, and we have all lost as a result of their departures.

The crew of the lifeboat is dwindling. Gee, what next? Well, funny you should mention that.

Last week, I attended the "Cyber Leap Year Summit," which I have variously described to people who have asked as anything from "An interesting chance to network" to "Two clowns short of a circus." (NB. I was there, so it was not three clowns short.)

The implied premise of the Summit, that bringing together a group of disparate academics and practitioners can somehow lead to a breakthrough, is not a bad idea in itself. However, when you bring together far too many of them under a facilitation protocol that most of them have never heard of, coupled with a forced schedule, it shouldn't be a surprise if the result is little more than frustration. At least, that is what I heard from most of the participants I spoke with. It remains to be seen whether the reporters from the various sections are able to glean something useful from the ideas that were so briefly discussed. (Trying to winnow "the best" idea from 40 suggestions given only 75 minutes and 40 type A personalities is not a fun time.)

There was also the question of whether the "best" people were brought together. In my session, there were attendees who had no idea about basic security topics or history. Some of us mentioned well-known results or systems, and the references went completely over their heads. Sometimes they would point this out, and we lost time explaining. As the session progressed, those involved seemed to simply assume that if they hadn't heard about it, it couldn't be important, so they ignored the comments.

Here are three absurdities that seem particularly prominent to me about the whole event:

  1. Using "game change" as the fundamental theme is counterproductive. Referring to cyber security and privacy protection as a "game" trivializes it, and if nothing substantial occurs, it suggests that we simply haven't won the "game" yet. But in truth, these problems are fundamental to the functioning of society, the economy, national defense, and even the rule of law. We cannot afford to "not win" this. We should not trivialize it by calling it a "game."
  2. Putting an arbitrary 60-90 day timeline on the proposed solutions exacerbates the problems. There was no interest in discussing the spectrum of solutions, only in talking about things that could be done right away. Unfortunately, this tends to result in people talking about more patches rather than looking at fundamental issues. It also means that potential solutions that require time (such as phasing in some product liability for bad software) are outside the scope of both discussion and consideration, which perpetuates the idea that quick fixes are somehow the solution.
  3. Suggesting that all that is needed is for the government to sponsor some group-think, feel-good meeting to come up with solutions is inane. Some of us have been looking at the problem set for decades, and we know some of what is needed. It will take sustained effort and some sacrifice to make a difference. Other parts of the problem are going to require sustained investigation and data gathering. There is no political will for either. Some of the approaches were even brought up in our sessions; in the one I was in, which had many economists and people from industry, the ideas were basically voted down (or derided, contrary to the protocol of the meeting) and dropped. This is part of the issue: the parties most responsible for the problem do not want to bear any responsibility for the fixes.

I raised the first two issues as the first comments in the public Q&A session on Day 1. Aneesh Chopra, the Federal Chief Technology Officer (CTO), and Susan Alexander, the Chief Technology Officer for Information and Identity Assurance at DoD, were on the panel to which I addressed the questions. I was basically told not to ask those kinds of questions and to sit down, although the response was phrased somewhat less forcefully than that. Afterwards, no fewer than 22 people told me that they wanted to ask the same questions (I started counting after #5). Clearly, I was not alone in questioning the formulation of the meeting.

Do I seem discouraged? A bit. I had hoped that we would see a little more careful thought involved. There were many government observers present, and in private, one-on-one discussions with them, it was clear they were equally discouraged by what they were hearing, although they couldn't say so publicly.

However, this is yet another in a long line of meetings and reports I have been involved with where the good results are ignored and the "captains of industry and government" have focused on the wrong things. But by holding continuing workshops like this one, at least it appears that the government is doing something. If nothing comes of it, they can blame the participants in some way for not coming up with good enough ideas rather than take responsibility for not asking the right questions or for being unwilling to accept answers that are difficult to execute.

Too cynical? Perhaps. But I will continue to participate because this is NOT a "game," and the consequences of continuing to fail are not something we want to face — even with "...white wine sauce with shallots, mushrooms and garlic."

More customer disservice—This time, Facebook

I have a Facebook account. I use it as a means to communicate little status updates with many, many friends and acquaintances while keeping up to date (a little) on their activities. I'm usually too pressed for time to correspond with everyone as I would otherwise prefer to do, and this tenuous connection is probably better than none at all.

Sometime early in the year, either I slipped up in running a script or Facebook slurped up my whole address book without authorization. This was something I most definitely did not want to happen, so even giving Facebook the benefit of the doubt and blaming it on operator (me) error, it says something about their poor interface that such a thing could happen to an experienced user. (Of course, in the worst case, their software did something invasive without my authorization.)

Whatever happened, Facebook immediately started spamming EVERYONE in that list with an invitation "from me" to join Facebook. There are many people in my address book with whom I have some professional relationship but who would not be in any category I would remotely consider "friend." It was annoying to me, and annoying/perplexing to them, to have to deal with these emails. A few of them joined, but many others complained to me.

I thought the problem would resolve itself with time. In particular, I didn't want to send a note to everyone in my list saying it was a mistake and not to respond. Sadly, the Facebook system seems to periodically sweep through this list and reissue invitations. Thus, I have gotten a trickle of continuing complaints, and suspect that a number of other people are simply annoyed with me.

So, what would one do if this were a responsible business? Why, look for a customer help email address, web form, or telephone number to contact them. Good luck. They have FAQs galore, but it is the web equivalent of voicemail hell: one link leads to another and back to the FAQs again, with no way to contact anything other than an auto-responder that tells me to consult the FAQ system.

On July 26, I responded to a complaint from one of the unintended victims. I cc'd a set of email addresses that I thought might possibly be monitored at Facebook, including "abuse@facebook.com." I got an automated response back directing me to read an inappropriate and unhelpful section of the FAQ. I replied to the email that it was not helpful and did not address my complaint.

On July 29, I received a response that may have been from a person (it had a name attached), which again directed me to the FAQs. Again I responded that it was not addressing my complaint.

August 6th brought a new email from the same address that seemed to actually be responsive to my complaint. It indicated that there was a URL I could visit to see the addresses I had "invited" to join, and that I could delete any I did not want to keep receiving repeated invitations. Apparently, this page is unadvertised but available to all Facebook users (see http://www.facebook.com/invite_history.php).

I visited the site, and sure enough, there were all 2200+ addresses.

First problem: It is not possible to delete the entire list. One can only operate on 100 names at a time (one page). Ok, I can do this, although I find it very annoying when sites are programmed this way. But 22 times through the removal process is something I'm willing to do.

Second problem: Any attempt to delete addresses from the database results in an error message. The message claims they are working on the problem, or suggests that I check that I'm actually connected to the Internet, but that's it. I've tried the page about every other day since August 6th, with various permutations of choices, and the error is still there. So much for "working on it."

I've also tried emailing the same Facebook address where I got the earlier response, with no answer in 2 weeks.

I thought about unsubscribing from Facebook as a way of clearing this out, but I am not convinced that the list -- and the automated invites -- would go away even if I deactivated my account.


Bottom line: providing Facebook any access to email addresses at all is like the Roach Motel -- they go in, but there is no way to get them out. And Facebook's customer service and interfaces leave a whole lot to be desired. Coupled with other complaints people have had about viruses, spamming, questionable uses of personal images and data, changes to the privacy policy, and the lack of any useful customer service, it really makes me wonder if the organization is run by people with any clue at all.

I certainly won't be inviting anyone else to join Facebook, and I am now recommending that no one else do so, either.

A Quick Note about Cloud Computing

I was talking to several people at the Cyber Leap Year Summit about how we have decades of research in computing that too many current researchers fail to look at because it was never put online. We have all noticed the disturbing trend that too many papers submitted for publication do not reference anything before about 2000 -- perhaps because so much of that early work has not been indexed by search engines.

I mentioned that I had seen papers a few years back where the authors had implied that they had invented virtualization, despite the idea going back decades; at least the Wikipedia entry seems to avoid that mistake.

Someone jokingly mentioned that at least a few things were new, such as cloud computing.

Not so fast.

Some Cloud Computing is really nothing more than SaaS on virtualized platforms. That isn't new.

However, one view of Cloud Computing is that it provides seamless processing and storage somewhere on the net: you don't have to know where your data is stored, the system makes use of multiple platforms for performance and capacity, and you don't need to worry about individual machine failures because the rest of the system continues forward.

Interestingly, that was precisely the goal of the distributed OS project where I did my PhD dissertation. I wrote the first prototype distributed OS kernel for the system. The name of the project? CLOUDS. The year? 1986 was when I defended, but the name was coined in 1984. (Cf. a summary article written in 1991.)

My kernel had virtual memory, process creation/deletion, object stores, capabilities, and a built-in debugger (that one could invoke after a crash -- no blue screen, simply the console message of "Shut 'er down Scotty, she's suckin mud agin.") I demonstrated it creating objects and invoking methods (actions) on them across the network on other machines, among other things. Three later PhD dissertations relied on it, as did at least 2 MS theses.
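
To give a flavor of what that location transparency meant in practice, here is a toy sketch in modern Python. It is purely illustrative -- it is not Clouds code (that was VAX assembler), and every name in it is invented -- but it shows the programming model: the caller names an object and a method, and a registry, rather than the caller, knows which machine holds the object.

    # Toy illustration only (hypothetical names, not Clouds code): a tiny
    # "distributed" object system where callers invoke methods by object name,
    # never by machine.

    class Node:
        """One machine in the system, holding some objects."""
        def __init__(self, name):
            self.name = name
            self.objects = {}

    class ObjectSystem:
        """The registry, not the caller, knows where each object lives."""
        def __init__(self, nodes):
            self.nodes = nodes
            self.registry = {}                    # object id -> node holding it

        def create(self, obj_id, obj, node):
            """Place an object on a node and remember where it went."""
            node.objects[obj_id] = obj
            self.registry[obj_id] = node

        def invoke(self, obj_id, method, *args):
            """Invoke a method on an object wherever it happens to live."""
            node = self.registry[obj_id]          # could be any machine on the net
            return getattr(node.objects[obj_id], method)(*args)

    class Counter:
        """A trivial object whose methods we invoke 'remotely'."""
        def __init__(self):
            self.n = 0
        def add(self, k):
            self.n += k
            return self.n

    system = ObjectSystem([Node("a"), Node("b")])
    system.create("counter-1", Counter(), system.nodes[1])
    print(system.invoke("counter-1", "add", 5))   # caller never named a machine

In the real kernel, of course, that invocation crossed the network to another machine; the sketch above only mimics the model within a single process.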

(Oh, and I wrote most of the code in VAX assembler language, and it all ran on the bare hardware. I debugged it by stepping through memory, and found some hardware bugs in the process. I was a real programmer back then: I have programmed machine code on six architectures, and in over 25 other high-level languages. But I digress...)

My dissertation is not very good; I would not accept it from one of my students now, and do not recommend anyone read it. But circumstances were such that I didn't actually have an advisor for a big chunk of my research work, and the committee wanted to get me out. I never got a publication from the dissertation work, either. In retrospect, I'm not sure that was the best course of action, but I seem to have turned out more or less okay otherwise. grin

Bonus item: The first Ph.D. from the group, based on an earlier attempt at the kernel, was Jim Allchin. But don't blame the Clouds group for Windows!

Bonus item: Only about 4 people ever knew, but "Clouds" was an acronym. We liked the imagery because if you combined two clouds, you simply ended up with a cloud. Up close, you couldn't tell where the boundaries of a Cloud were. And if you took some away from a cloud, you still had a cloud. Great, huh? I'm going to reveal the acronym here: Coalescing Local Objects Under Distributed Supervision. We needed the acronym for the proposal to the funding agencies, but for obvious reasons, we never referred to it as anything other than Clouds. The acronym was coined by Bill Thibault.

Bonus item: The Clouds kernel was the third OS I had written, and the final one. The second one was also in assembly language and some custom microcode for the PR1ME 500 & 750 series computers (of which Georgia Tech had five). I taught a class on machine architecture and writing an OS at Georgia Tech while a grad student. I'd love to hear from anyone who took the class.

Bonus item: Although my research work quickly moved into other areas of computing, I stayed with OS long enough to help start and chair (with George Leach) the first WEBDMS (1989) and SEBDMS (1991, 1992) conferences. These later evolved into the OSDI conferences -- which I have never attended. It is unlikely that many people remember this connection.

A few people still remember me for that OS work. Others know me for the work I did in mutation testing, and yet others for the work in dynamic slicing and backtracking for debugging. That was all before I started work in security and forensics. I'm to blame for more than many people know -- and I'm not telling about the rest. grin

But next time someone tries to tell you about their latest "new" idea, you might first check with some of us older, more seasoned computing folk, and let us reminisce about the good old days.