Over the last few days we have seen a considerable flow of news and social media coverage of the unintended exposure of celebrity photographs (e.g., here). Many (most?) of these photos were of attractive females in varying states of undress, and this undoubtedly added to the buzz.
We have seen commentary from some in the field of cybersecurity, as well as from more general-interest pundits, stating that the subjects of these photos “should have known better.” These commentators claim that it is generally known that passwords/cloud storage/phones have weak security, so the victims have only themselves to blame.
We find these kinds of comments ill-informed, disappointing, and in many cases, repugnant.
First, we note that the victims of these leaks were not trained in cyber security or computing. When someone falls ill, we don’t blame her for not having performed studies in advanced medicine. When someone’s car breaks down, we don’t blame him for failing to have a degree in mechanical engineering. Few people are deeply versed in fields well outside their profession.
The people involved in these unauthorized exposures apparently took the prudent measures they were instructed to take on the systems as they were designed. For example, the passwords they used must have passed the checks in place, or they would not have been able to set them. Whether we approve of the passwords that passed those automated checks is beside the point: they were appropriate to the technical controls in place. What these people stored, how they stored it, or any social bias against their state of dress has nothing to do with this particular issue.
Quite simply, the protection mechanisms were not up to the level of the information being protected. That is not the users’ fault. They were using market standards, represented as being secure. For instance, it is admitted that Apple products were among those involved (and that is the example in some of our links). People have been told for almost a decade that iOS and the Apple ecosystem are much more secure than other environments. That may or may not be true, but it certainly doesn’t take into account the persistent group effort that appears to have been involved in this episode, or the other criminal deviants working in groups online. We have met a few actresses and models, and many young people. They don’t think of risk in the same way security professionals do, and having them depend on standard technology alone is clearly insufficient against such a determined threat.
Consider: assume you live in a nice house. You’ve got windows, doors, and locks on those windows and doors. You likely have some kind of curtains or window coverings. Even in a house with no yard, if you close your curtains we accept that as a request for privacy. If I walk up on the sidewalk and attempt to peer into your windows, that is being a “peeping tom.” Even though I might have every right to stand on the sidewalk, face the direction I’m looking, and stop or pause, I do not have the right to violate your privacy.
Consider: Your house has a nice solid door with a nice lock. That lock likely has orders of magnitude less entropy than a password. Every night you walk through your house, lock your doors, and go to sleep believing you are likely safe. Yet that lock and that door will not stop a group of determined, well-equipped attackers, or likely even slow them down. The police will not arrive for some time, and effective self-protection against unexpected provocation by a gang is uncertain, at best. As a polite and law-abiding society, we respect the door and the lock, and expect others to do the same. We understand that the door and lock keep honest people honest. They set a barrier to entry for criminals. Burglaries still happen, and we use the power of law to enact punishment against criminals, although many crimes go unsolved.
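As a rough back-of-the-envelope check on that entropy claim (the lock and password figures here are illustrative assumptions — a generic 5-pin tumbler lock and an 8-character password — not measurements of any particular product):

```python
import math

# Illustrative keyspace comparison: a common 5-pin tumbler lock with
# ~10 usable pin depths, versus an 8-character password drawn from
# the 94 printable ASCII characters.
lock_keyspace = 10 ** 5        # ~100,000 distinct keyings
password_keyspace = 94 ** 8    # ~6.1e15 possible passwords

lock_bits = math.log2(lock_keyspace)          # ~16.6 bits
password_bits = math.log2(password_keyspace)  # ~52.4 bits

print(f"lock: ~{lock_bits:.1f} bits; password: ~{password_bits:.1f} bits")
```

Even under these generous assumptions for the lock, the password has tens of bits more entropy — yet nobody blames a burglary victim for owning an ordinary lock.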
If an unauthorized entry to your house occurs, whether by picking the lock, climbing through the window, or discovering a loose set of boards in the wall, we would never blame you, the victim — it is clear that the person who entered, unbidden, was to blame. Some of our peers would try to blame the companies that made the locks and windows, rather than acknowledge the bad intent of the burglar. Too many people in information security tend to think we can always build better locks, or that having “white hats” continually picking locks somehow will lead to them being unpickable. Many are so enamored of the technology of those locks and the pastime of picking them that they will blame the victim instead of anything to do with the production or design of the locks themselves. (This is not a new phenomenon: Spafford wrote about this topic 22 years ago.)
One fundamental error in that thinking: all locks are, by design, meant to be opened. Furthermore, the common thinking ignores the many failures (and subsequent losses) that would occur before any “super lock” might appear. We also observe that few will be able to afford any super-duper unpickable locks, or carry the 20-pound key necessary to operate them. Technological protections must be combined with social and legal controls to be effective.
This leads to our second major point.
Imagine if that burglary occurred at your house, and you suffered a major loss because the agent of the crime discovered your artwork or coin collection. We would not dream of blaming you for the loss, somehow implying that you were responsible by having your possessions stored in your house. If somebody were to say anything like that, we would reproach them for blaming/shaming the victim. Society in general would not try to castigate you for having possessions that others might want to steal.
Unfortunately, many computer professionals (and too many others outside the profession) have the mindset that crimes on computers are somehow the fault of the victim (and this has been the case for many years). We must stop blaming the victims in cases such as this, especially when what they were doing was not illegal. Criticizing their activities instead of the privacy invasion is blaming/shaming no less atrocious than the way rape victims are treated — and they are usually treated badly, unfortunately.
If we give users lousy technology and tell them it is safe, they use it according to directions, and they do not understand its limitations, they should not be blamed for the consequences. That is true of any technology. The fault lies with the providers and those who provide vague assurances about it. Too bad we let those providers get away with legally disclaiming all responsibility.
We are sympathetic to the argument that these exposures of images should perhaps be considered as a sex crime. They were acts of taking something without permission that violated the privacy and perceptions of safety of the victim for the sexual gratification and sense of empowerment of the perpetrator (and possibly also other reasons). Revenge porn, stalking, assault, and rape are similar...and we should not blame the victims for those acts, either. The sexually themed abuse of female journalists and bloggers is also in this category -- and if you aren't aware of it, then you should be: women who write things online that some disagree with will get threats of violence (including rape), receive deluges of abusive and pornographic messages and images, and be called incredibly offensive names...sometimes for years. It is beyond bullying and into something that should be more actively abhorred.
Some male members of the cyber security community are particularly bad in their treatment of women, too.
Between the two of us (Sam and Spaf), we have served as professors and counselors, and one of us (Liles) as a law enforcement officer; we have well over 50 years of combined professional experience with both victims and the bad behavior of their abusers. We have counseled female students and colleagues who have been stalked and harassed online for years. They keep encountering law enforcement officials and technologists who ask "What are you doing to encourage this?" None of them encourage it, and some live in real terror 24x7 of what their stalkers will do next. Some have had pictures posted that are mortifying to them and their loved ones; they've lost jobs, had to move, withdrawn from online fora, lost relationships, and even changed their names and those of their children. This can last for years.
Sexual offenders blame the victim to absolve themselves of responsibility, and thus, guilt. "She was acting suggestively," "she was dressed that way," etc. If the people around them chime in and blame the victim, they are excusing the criminal -- they are reinforcing the idea that somehow the victim provoked it and the abuser "obviously couldn't help himself." They thus add unwarranted guilt and shame to the victim while excusing the criminal. We generally reject this with offenses against children, realizing that the children are not responsible for being abused. We must stop blaming all adult victims (mostly female, but others also get abused this way), too.
Victim blaming (and the offensively named slut shaming -- these aren't "sluts," they are victimized women) must STOP. Do you side with privacy rights and protection of the public, or with the rapists and abusers? There is no defendable middle ground in these cases.
We are also horrified by the behavior of some of the media surrounding this case. The crimes have been labeled as leaks, which trivializes the purposeful invasion and degradation performed. Many outlets provided links to the pictures, as did at least one well-known blogger. That illustrates everything wrong about the paparazzi culture, expressed via computer. To present these acts as somehow accidental (“leak”) and blame the victims not only compounds the damage, but glosses over the underlying story — this appears to be the result of a long-term criminal conspiracy of peeping toms using technologies to gather information for the purpose of attacking the privacy of women. This has allegedly been going on for years and law enforcement has apparently had almost no previous interest in the cases — why isn’t that the lead story? The purposeful exploitation of computer systems and exposure of people's private information is criminal. Some pundits only began to indicate concern when it was discovered that some of the pictures were of children.
It is clear we have a long way to go as a society. We need to do a better job of building strong technology and then deploying it so that it can be used correctly. We need to come up with better social norms and legal controls to hold miscreants accountable. We need better tools and training for law enforcement to investigate cyber crimes without also creating openings for them to be the ones who are regularly violating privacy. We need to find better ways of informing the public how to make cyber risk-related decisions.
But most of all, we need to find our collective hearts. Instead of idealizing and idolizing the technology with which we labor, deflecting criticisms for faults onto victims and vendors, we need to do a much better job of understanding the humanity — including both strengths and weaknesses — of the people who depend on that technology. The privacy violations, credit card fraud, espionage, harassment, and identity thefts all have real people as victims. Cyber security is, at its core, protecting people, and the sooner we all take that to heart, the better.
I have long argued that the ability to patch something is not a security “feature” — whatever caused the need to patch is a failure. The only proper path to better security is to build the item so it doesn’t need patching — so the failure doesn’t occur, or has some built-in alternative protection.
This is, by the way, one of the reasons that open source is not “more secure” simply because the source is available for patching — the flaws are still there, and often the systems don’t get patched because they aren’t connected to any official patching and support regime. Others may be in locations or circumstances where they simply cannot be patched quickly — or perhaps not patched at all. That is also an argument against disclosure of some vulnerabilities unless they are known to be in play — if the vulnerability is disclosed but cannot be patched on critical systems, it simply endangers those systems. Heartbleed is an example of this, especially as it is being found in embedded systems that may not be easily patched.
But there is another problem with relying on patching — when the responsible parties are unable or unwilling to provide a patch, and that is especially the case when the vulnerability is being actively exploited.
In late January, a network worm was discovered that was exploiting a vulnerability in Linksys routers. The worm was reported to the vendor and some CERT teams. A group at the Internet Storm Center analyzed the worm, and named it TheMoon. They identified vulnerabilities in scripts associated with Linksys E-series and N-series routers that allowed the worm to propagate, and for the devices to be misused.
Linksys published instructions on their website to reduce the threat, but it is not a fix, according to reports from affected users — especially for those who want to use remote administration. At the time, a posting at Linksys claimed a firmware fix would be published “in the coming weeks.”
Fast forward to today, three months later, and a fix has yet to be published, according to Brett Glass, the discoverer of the original worm.
Complicating the fix may be the fact that Belkin acquired Linksys. Belkin does not have a spotless reputation for customer relations; this certainly doesn’t help. I have been copied on several emails from Mr. Glass to personnel at Belkin, and none have received replies. It may well be that they have decided that it is not worth the cost of building, testing, and distributing a fix.
I have heard that some users are replacing their vulnerable systems with those by vendors who have greater responsiveness to their customers’ security concerns. However, this requires capital expenses, and not all customers are in a position to do this. Smaller users may prefer to continue to use their equipment despite the compromise (it doesn’t obviously endanger them — as yet), and naive users simply may not know about the problem (or believe it has been fixed).
At this point we have vulnerable systems, the vendor is not providing a fix, the vulnerability is being exploited and is widely known, and the system involved is in widespread use. Of what use is patching in such a circumstance? How is patching better than having properly designed and tested the product in the first place?
Of course, that isn’t the only question that comes to mind. For instance, who is responsible for fixing the situation — either by getting a patch out and installed, or replacing the vulnerable infrastructure? And who pays? Fixing problems is not free.
Ultimately, we all pay because we do not appropriately value security from the start. That conclusion can be drawn from incidents small (individual machine) to medium (e.g., the Target thefts) to very large (government-sponsored thefts). One wonders what it will take to change that. How do we patch people’s bad attitudes about security — or better yet, how do we build in a better attitude?
I’ve been delayed in posting this as I have been caught up in travel, teaching, and the other exigencies of my “day job,” including our 15th annual CERIAS Symposium. That means this posting is a little stale, but maybe it is also a little more complete.
I try to attend the RSA Conference every year. The talks are not usually that useful, but the RSAC is the best event to see what is new in the market, and to catch up with many of my colleagues (new and old), touch base with some organizations, see CERIAS alumni, sample both some exotic cuisines and questionable hors d'oeuvres, and replenish my T-shirt supply. It is a very concentrated set of activities that, when properly managed, packs in a huge number of conversations. My schedule for the week is usually quite full, and I am exhausted by the time I return home. This year, I was particularly worn out because I was recovering from a mild case of pneumonia. Still, I mostly enjoyed my week in San Francisco.
This year, there was a boycott, of sorts, against the conference by various parties who were upset at the purported collaboration of RSA with US government agencies many years ago. I’m not going to go into that here, but I think it (the boycott) was misguided. Not only is there no hard evidence that there was any actual weakening of any algorithms, but it was over a decade ago and at a time when both the national security climate and public sentiment were different than today. There is also the issue that companies are susceptible to legal pressures that are not easily dismissed. If there is any blame to accrue to RSA, it would be better directed to the company’s products than the conference. As it was, during my week there, I only saw about 30 seconds of protest — and the conference had (I believe) record attendance.
The conference really has three general components: the technical track, the exhibit floor, and the informal connections around everything else. I’ll address each separately. I have some particular comments about the use of “booth babes” on the exhibit floor.
The conference every year has scores (hundreds?) of talks, workshops, and panels, usually given by industry analysts, CEOs, engineers, and various government officials. It is not a scientific conference by any stretch of the imagination. Although marketing talks are strictly prohibited, one of the primary motivations of speakers is to get on stage to promote “their brand.” Often, the talks are filtered through a particular product point of view to reinforce the marketing pitch given elsewhere, to sell a book, or to subtly promote the speaker’s usefulness as a consultant. Over the last decade I have attended many talks but found few of them really informative, and several involved misinformation that was not challenged by anyone during the session. I have stopped sending in proposals for talks because my past proposals didn’t fare well — “too academic” was the judgment. I guess if I don’t pull a hacker out of my hat and make a database disappear, I’m not entertaining enough for this crowd, which, considering overall conference attendance, simply goes to my point about the conference’s focus.
This year there was at least one partially informative session. I was asked at the last minute to fill in on a panel hosted by Gary McGraw. The panel attempted to address a topic that I spend several hour-long lectures covering in class: the classic Saltzer and Schroeder principles of secure design. The panel only covered four of the principles, and superficially, but I think the panel went okay. We had a small crowd.
I briefly attended a number of other sessions, but stayed for only one of them. Perhaps I am getting too cynical and jaded, but I didn’t find anything that was new and interesting; yes, a few things were new, but not surprising or even well analyzed. I’m not sure it mattered for the audience.
The one session that I stayed for, and thoroughly enjoyed, was the closing session with Stephen Colbert. He was brilliant and funny. His off-the-cuff answers during the Q&A session were excellent all by themselves — he not only displayed better-than-superficial knowledge of portions of the field, but he gave some very quick answers that showed some level of insight as well as humor. Not all of his answers went over with all of the crowd, but I think that showed he was giving some genuine answers of his own rather than simply trying to amuse.
Much of the real business at the conference really isn’t at the conference, but in the halls, hotels, restaurants, and bars in the vicinity. Companies hold both formal and informal receptions for past and future customers; everyone from CEOs to sales reps works out deals over dinner and drinks; analysts and commentators get news over lunch and finger foods; and employees are recruited in all sorts of venues. Some of the media conduct interviews with notable people (and some rather sketchy types). Organizations presented awards and recognized members at receptions (e.g., the ISSA honored their newest Fellows and Distinguished Fellows, and (ISC)2 celebrated its 25th anniversary).
These connections are a major draw of the conference for me. I get to reconnect with people I don’t often get to see otherwise, and I also get to meet many others who I might not otherwise encounter. I get to hear about interesting stories that aren’t told to the general sessions, hear about new projects, and tell people about how they are missing out on hiring our great grads from CERIAS. I always return home with a stack of business cards with notes on them of things to send, lookup, and people to call.
This year was no different: I connected with over 30 people I had not seen in months…or years, and met a few score new ones. I missed running into several people I was hoping to see, but generally had a full schedule. Luckily, there are enough people who have yet to get the memo, and I was invited to some of the receptions. In several instances, I got to meet in person people I had previously known only through their online personas. In other cases, I got to meet long-time friends and acquaintances whom I never get to see often enough because of schedule issues. Some people I was hoping to see weren't able to make it because of budget issues curtailing travel, which seem to have been a little more pronounced this year than in the last couple of years, at least among my circle.
I should note that few academics attend this conference: the cost, even with a discounted admission, is significant, and combined with travel, hotel, and other expenses, it can take a sizable chunk out of a limited academic-sized budget. I saw a few colleagues in attendance, but we were all senior. In past years I have tried to cover the expenses for junior colleagues to attend at times in their careers when the possibility of networking with industry might be beneficial, but there is seldom enough unrestricted funding coming in to CERIAS (or me) to cover this on a regular basis.
Overall, I saw little impact from the “boycott.” In fact, I saw several people who spoke or attended the “boycott” event and were also present at the RSA events!
The exhibition at the conference is huge. Nearly all major vendors — and several government entities, from several countries — have booths of some sort. This year the booths covered both the North and South halls at Moscone Center — there were many hundreds of them. Walking the exhibit floor is mind (and foot) numbing, but I try to do it at least twice each year to be sure I get a good coverage of what is new and interesting…and what is not.
Some companies opt for large — even multistory — booths with lots of screens and demos. Others have small booths with simply a counter and some literature. Many new companies spend a fair amount for a booth to try to gain some market awareness of their products and services. I haven’t done a formal tally, but I’d guess that somewhere around 20% of the companies I see in any given year are no longer there 2 years later — either they fail or are acquired.
Overall, I didn’t see much that excited me as new or particularly innovative. Again, that may simply be the longer perspective I bring to this. I remember the old National Computer Security Conferences in the 1990s as a sort of precursor to this, and the baseline trend is not a good one. In the 90s, the exhibitors were all about secure software development and hardened systems. In 2014, the majority of big vendors were flogging services to detect threats that get through all the defenses on Windows and Linux, recovery from break-ins, and other technologies that basically already concede some defeat. Of course, there were also trends — more about encryption, threat intelligence, big data, and securing “cloud” computing, for N different definitions of cloud. I think the best summary of the exhibits was given by Patrick Gray and Marcus Ranum (click the link to hear the audio): somewhat cynical, but dead on.
As I noted in my last blog entry here, the industry is continuing to focus on solving some of the wrong problems.
All those exhibitors on the floor are seeking ways to place some branding with attendees, and to get people to stop by the booths for longer discussions (and to harvest addresses for later sales calls). Usually, this is with some form of giveaway item, such as pens, candy, or T-shirts with clever designs. Sometimes they have a notable security figure there autographing books. I certainly pick up my share of T-shirts and books, plus a few other items that I may use, but the majority of items I decline. The giveaway that amazes me the most is the free USB item that people gladly accept and plug into systems. This is a security conference in 2014 and people are doing that?? Consider that one of the vendors that seemed to be successful in giving out a lot of USB sticks was Huawei… simply wow.
Also annoying are the booths where the people can’t even answer simple questions about their companies. Instead, they want to scan my badge and have me sit through a presentation. No thank you. If you can’t tell me in 30 seconds what your company is about, then I’m not about to sit through 10 minutes of someone breathlessly extolling your “industry leading” approach to … whatever it is you do, and I certainly don’t want to sit through a WebEx presentation next month when I am right here with you now.
In an attempt to stand out, some vendors have gone in odd directions by trying to have some “flash” at the booth to bring people in. In prior years, I saw people in suits of armor and gorilla costumes. There have been booths with motorcycles and sports cars. This year, there were professional magicians, gymnasts, and even a ring with a boxing match! These are not items with branding that someone will walk away with and possibly display in the weeks to come, but simply attempts to attract attention. It is fairly strange, and annoying. Why should those tell me anything about a product or security service, other than that the company leadership thinks flash is more important than substance? How the heck do those displays relate to information security?
The most egregious example of this disconnect is the “booth babe”: women (and, rarely, men) — usually in some scanty outfit involving spandex — put on display to draw people into the booth. They are never themselves engineers, or even in sales; there are agencies that hire out their staff to do this kind of thing. Heck, they can’t even answer basic questions about the company! I make it a point to try to talk to some of these people to see why they are at the booth, and I can’t recall an instance in the past few years where any of the women actually had a technical job within the company whose booth they “adorned.”
Let me make clear that I appreciate attractive women. That has been my particular orientation for 45 years, and I am not unhappy with it (although I have always wished more of them appreciated me in return!) But more to the point, I appreciate all women — and men who exemplify achievement and dedication. I appreciate imagination. I appreciate professionalism. I do not appreciate attempts to lure me to a vendor through setting off fireworks, dangling shiny objects, or having women in short shorts trying to get me into the booth. It is insulting.
Simply put, it is the wrong message in the wrong context, and the people sending the message are seriously short of clue.
This kind of behavior is harmful to the field because it conveys a message that women are valued primarily because of their appearance, and it trivializes their intellectual contributions. I talked about this in a recent interview and recently wrote about how the field is skewed. We should not and cannot condone the negative messages.
Let me make it clear that I have no quibble with the women themselves who were involved in this — they were hired to put on costumes and be cheerful, to try to draw people in. Standing on concrete floors in 3” heels all day, in not enough clothing to stay warm with the A/C, and trying to be cheerful is not easy. Some of them are students, working to pay tuition; others are supporting children. In one case, the company receptionist and her friend were gamely hanging out in short-shorts to support her company in return for the trip to San Fran. In another case, a company had a beauty pageant winner present, dressed conservatively. She is a pleasant person, and uses her minor celebrity for some good causes, but I do not think that is why the company had her at their booth; I don’t fault her for that decision, however.
Across the exhibition I saw many women who were not on display. Some were in t-shirts and jeans. Some were in heels and dresses. (I asked a few, and it was their first RSA — few will wear heels a second time!) More importantly — it was their own choice, and not something imposed by management. They dressed to be comfortable —as themselves — and if asked technical questions, they were able to respond. Some were thoughtful, some tired, some funny — but all real and there to interact as members of the profession, not as window dressing. That is precisely how they should be treated, and how they want to be treated — as professional colleagues.
I know I’m not the only person who thinks the "booth babe" approach is wrong. I discussed this with several people I know, men and women, and the majority were bothered by it as well. I think the blog posts by Marcus Ranum and Chenxi Wang sum up some of the different reactions quite well. Winn Schwartau actually captured this and many of my other frustrations with the exhibits in one wonderful article.
My message to the vendors: start treating all of us as thinking adults. Focus on the value proposition of your products and services and you'll get a much better response.
I think it was overall a good experience. I hope to attend next year’s conference, and I look forward to seeing old and new friends, maybe hearing something innovative, and seeing a change in the way exhibitors are showing off their wares. We shall see.
I have continued to update my earlier post about women in cybersecurity. Recent additions include links to some scholarship opportunities offered by ACSA and the (ISC)2 Foundation. Both scholarship opportunities have deadlines in the coming weeks, so look at them soon if you are interested.
The 15th Annual Security Symposium is less than a month away! Registration is still open but filling quickly. If you register for the Symposium, or for the 9th ICCWS held immediately prior, you can get a discount on the other event. Thus, you should think about attending both and saving on the registration costs! See the link for more details.
I periodically post an item to better define my various social media presences. If you follow me (Spaf) and either wonder why I post in multiple venues, or want to read even more of my musings, then take a look at it.
I ran across one of my old entries in this blog — from October 2007 — that had predictions for the future of the field. In rereading them, I think I did pretty well, although some of the predictions were rather obvious. What do you think?
Sometime in the next week or so (assuming the polar vortex and ice giants don’t get me) I will post some of my reflections on the RSA 2014 conference. However, if you want a sneak peek at what I think about what I saw on the display floor and after listening to some of the talks, you can read another of my old blog entries — things haven’t changed much.
I’ve had several items cross my social media feeds, along with email, in the last few days that prompt me to write this. It’s gotten a bit longer than I intended, but there’s a lot to say on an important topic. As a first post to this blog in 2014, I think it is a good topic to address. It has to do with imbalance and bad behavior in the overall field of cybersecurity: the low percentage of women, and how they are sometimes treated.
Computing, as a field in the USA, has had a low and almost constantly decreasing percentage of women entering the field and staying. (The US is the primary focus of this blog entry; I believe the problem is similar in Canada, the UK, Australia, and elsewhere, but I don’t have the data. Also, there is a corresponding problem with other traditional minorities, but that’s not what prompted this post and I hope to visit it later.) There are many posited reasons for this, several of which likely contribute; apparently no single reason dominates. Many studies and reports have been conducted, experiments tried, and programs put into place, but few have made any measurable, long-term change. The problem is almost undoubtedly rooted in social behaviors and expectations, because there are other cultures where the ratio of women to men in computing is about 1:1, or where women are even in the majority.
Cybersecurity is little different, and may be worse. I regularly speak at conferences, companies, and agencies where the room has 30 men and one woman — or none. At events with speakers or panels, all the speakers and panelists are men. The few women attending are often simply the ones processing registrations. And there are a nontrivial number of reports of women being groped and harassed at professional meetings (see, for instance, this). Just as bad, women are frequently abused online as well as offline, and not only in security and computing. Many are reluctant to publish email addresses or contact info online because of the unwanted, inappropriate content sent to them — no matter whether they’re 8 or 80.
(Right now, if you are thinking to yourself that there isn’t a real problem, that things are fine, and it is all a problem of some women who can’t take a joke, then you are part of the problem, and you need to shape up. Worse, if you think that women shouldn’t be upset about this status quo, instead they should get back to the kitchen, then you are so out of touch that I don’t know where to start. In either case, try telling that same thing to women doctors, pilots, police, firefighters, or better yet, to our many women in the military — especially when your safety is in their care. Then come back when you’ve healed up. If nothing else, at least keep in mind that there are legal reasons to treat people equally and with respect.)
Assuming you are actually living in the 21st century, let me assure you that the overall situation is a HUGE problem for us. As a field, and as a society, this is bad because we have a shortage of talent that is getting worse with time. We also have some rather skewed and limited ideas of how to approach problems that might benefit from a more inclusive pool of designers and practitioners. And as human beings we should be concerned — especially those of us who are sons, brothers, fathers, and husbands — because people who could be (and sometimes are) our mothers, sisters, daughters, and wives are being mistreated and demeaned. That simply isn’t right. Neither is it right that we are limiting the opportunities for individuals to learn, grow, and achieve.
Computing, security, privacy, creativity — those are all traits of the mind. Minds exist in all kinds of bodies: bodies of other colors, with more or fewer curves, of different masses and volumes and ages, and some with fewer physical abilities than others. But that doesn’t change what is possible in those minds! We should applaud ability, dedication, and imagination wherever we find them. Discouraging women (or anyone with ability) from pursuing a career in computing, abusing them online, and groping them at conferences are all counterproductive to our own futures — as if being rude and wrong weren't enough. Cybersecurity and privacy are key areas where we need more insight and creativity — we should enhance that pool rather than diminish it.
No field is populated only with superstars and wild talents. That is especially true in IT. We hear about people with great accomplishments, and we like to think we’re special in our own way, but the truth is that the field is too large for any individual to master. Success comes from teams, and the most successful teams are those that integrate many different viewpoints, backgrounds, and skill sets, and whose members respect their differences yet work toward common goals. That includes bringing in people of different genders, ethnicities, ages, and more. Success is enhanced by diversity.
I’m not going to go through a longer litany of problems here, or try to analyze the situation further. I’ve been working with various women’s groups for over 20 years and I still don’t pretend to be able to understand all of what is happening. It is complex. However, I see the problem continuously when I look at our student body, when I visit professional meetings, and when I read reports. I know it is real.
What I can do, is offer some advice to those who care.
Here are some general tips that should be common sense.
The basic idea here is really embodied in #8. Be thoughtful and don't treat anyone as substantially different. Instead, relate to every person as a professional. But most of all, speak up if you see someone being picked on or treated badly, or not getting the encouragement they should. It’s like security and privacy itself — an attack on any link is an attack on the whole, and if a link falls we are all diminished.
There is debate within many minority communities about whether aligning with self-interest groups is helpful. On the plus side, the mentoring, the support resources, and the sense of community can all be a big help. However, it also carries risks: not sufficiently engaging with the mixed environment where one has to work, developing unrealistic expectations based on anecdotal stories, and failing to help educate the majority in how to help. There seems to be enough positive “buzz” about some groups and their activities to warrant recommending them. Not all are likely to fit your own particular needs and interests, so check them out. If you know of some I have missed, please let me know so I can add them here.
The (ISC)2 is organizing a women’s special interest group. I have spoken with the organizers, but am unsure of its status at this time.
The Women in Cyber Security conference will be held in April in Nashville. I know nothing about it other than what is on their web page, but it looks like it could be a great experience.
Of course, please keep in mind that not all men are the same! Many want to do the right things but aren’t always sure what is appropriate. Help train a few.
From a professional point of view, being a member of ACM and ISSA is a good idea for anyone in the field, based simply on the value of the organizations. Both promote professionalism, community, and personal growth, and there are a variety of other benefits to membership. Both offer steep discounts for student members. I am a long-standing member of both, and can recommend them.
Our society has a lot of problems with cybersecurity and privacy. New flaws show up, and old flaws don’t really get fixed. Parties ranging from individual criminals to nation-state organizations are all seeking ways to penetrate our systems and mess with our information. We need every good person we can get on board and working together if we hope to make progress. We should make every effort to enable that partnership.
Or think of it in these terms: if we can’t be trusted to protect and empower those within our own community, why should anyone trust us to protect anything else?
Updated 1/7: Added a few list items about mentoring and language, listed ISACA, small grammatical corrections.
Updated 1/8: Corrected several typos
Updated 1/10: Added ISSA group link. Added comment from Anita Jones; this is the memo she mentions in that comment.
Updated 1/14: Small grammatical corrections.
Updated 1/22: Added ACM-W page link
Updated 1/24: Added the Systers link
Updated 2/16: Added link to subscribe to the ACM-W list. Minor grammatical cleanup.
Updated 3/2: Added links to ACSA and (ISC)2 scholarship information.
Updated 6/8: Added link to the Ada Initiative
If you have any additions or corrections to the above lists, please send me private email. Also note that, as usual, anonymous, spammy, or abusive feedback to the blog may not be published as is, if at all.
Over the last month or two I have received several invitations to speak about cyber security. Perhaps the uptick in invitations is because of the allegations by Edward Snowden and their implications for cyber security. Or maybe it is because news of my recent awards has caught people's attention. Or perhaps audiences simply want to hear about something other than the (latest) puerile behavior by too many of our representatives in Congress, and I'm an alternative chosen at random. Whatever the cause, I am tempted to accept many of these invitations, on the theory that if I refuse too many, people will stop asking, and then I wouldn't get to meet as many interesting people.
As I've been thinking about what topics I might speak about, I've been looking back through the archive of talks I've given over the last few decades. It's a reminder of how many things we, as a field, knew about long ago but that have been ignored by vendors and authorities. It's also depressing to realize how little impact I, personally, have had on the practice of information security during my career. But it has also led me to reflect on some anniversaries this year (that happens to us old folk). I'll mention three in particular here, and may use others in some future blogs.
In early November of 1988 the world awoke to news of the first major, large-scale Internet incident. Some self-propagating software had spread around the nascent Internet, causing system crashes, slow-downs, and massive uncertainty. It was really big news. Dubbed the "Internet Worm," it served as an inspiration for many malware authors and vandals, and a wake-up call for security professionals. I recall very well giving talks on the topic for the next few years to many diverse audiences about how we must begin to think about structuring systems to be resistant to such attacks.
Flash forward to today. We no longer see the flashy, widespread damage of worm programs such as what Nimda and Code Red caused. Instead, we have stealthier botnets that infiltrate millions of machines and use them for spam, DDoS, and harassment. The problem has gotten larger and worse, although in a manner that hides some of its magnitude from the casual observer. The damage is there, though; don't try to tell the folks at Saudi Aramco or Qatar's RasGas that network malware isn't a concern any more! Worrisomely, experts working with SCADA systems around the world are increasingly warning how vulnerable those systems might be to similar attacks in the future.
Computer viruses and malware of all sorts first notably appeared "in the wild" in 1982. By 1988 there were about a dozen in circulation. Those of us advocating more care in the design, programming, and use of computers were not heeded in the headlong rush to get computing onto every desktop (and more) at the lowest possible cost. Thus, we now have (literally) tens of millions of distinct versions of malware known to security companies, with millions more appearing every year. And unsafe practices are still commonplace -- 25 years after that Internet Worm.
For the second anniversary, consider 10 years ago. The Computing Research Association, with support from the NSF, convened a workshop of experts in security to consider some Grand Challenges in information security. It took a full three days, but we came up with four solid Grand Challenges (it is worth reading the full report and, possibly, watching the video).
I would argue -- without much opposition from anyone knowledgeable, I daresay -- that we have not made any measurable progress against any of these goals, and have probably lost ground in at least two.
Why is that? Largely economics, and a poor understanding of what good security involves. The economics aspect is that no one really cares about security -- enough. If security were important, companies would really invest in it. However, they don't want to part with all the legacy software and systems they have, so instead they keep stumbling forward and hoping someone comes up with magic fairy dust they can buy to make everything better.
The government doesn't really care about good security, either. We've seen that the government is allegedly spending quite a bit on intercepting communications and implanting backdoors into systems, which is certainly not making our systems safer. And the DOD has a history of huge investment into information warfare resources, including buying and building weapons based on unpatched, undisclosed vulnerabilities. That's offense, not defense. Funding for education and advanced research is probably two orders of magnitude below what it really should be if there was a national intent to develop a secure infrastructure.
As far as understanding security goes, too many people still think that the ability to patch systems quickly is somehow the approach to security nirvana, and that constructing layers and layers of add-on security measures is the path to enlightenment. I no longer cringe when I hear someone who is adept at crafting system exploits referred to as a "cyber security expert," but so long as that is accepted as what the field is all about there is little hope of real progress. As J.R.R. Tolkien once wrote, "He that breaks a thing to find out what it is has left the path of wisdom." So long as people think that system penetration is a necessary skill for cyber security, we will stay on that wrong path.
And that is a great segue into the last of my three anniversary recognitions. Consider this quote (one of my favorites) from 1973 -- 40 years ago -- from a USAF report, Preliminary Notes on the Design of Secure Military Computer Systems, by a then-young Roger Schell:
…From a practical standpoint the security problem will remain as long as manufacturers remain committed to current system architectures, produced without a firm requirement for security. As long as there is support for ad hoc fixes and security packages for these inadequate designs and as long as the illusory results of penetration teams are accepted as demonstrations of a computer system security, proper security will not be a reality.
That was something we knew 40 years ago. To read it today is to realize that the field of practice hasn't progressed in any appreciable way in four decades, except that we are now also stressing the wrong skills in developing the next generation of expertise.
Maybe I'll rethink that whole idea of giving talks on security and simply send them each a video loop of me banging my head against a wall.
PS -- happy 10th annual National Cyber Security Awareness Month -- a freebie fourth anniversary! But consider: if cyber security were really important, wouldn't we be aware of that every month? The fact that we need to promote awareness of it is proof it isn't taken seriously. Thanks, DHS!
Now, where can I find a good wall that doesn't already have dents from my forehead....?
[If you want to skip my recollection and jump right to the announcement that is the reason for this post, go here.]
Back in about 1990 I was approached by an eager undergrad who had recently come to Purdue University. A mutual acquaintance (hi, Rob!) had recommended that the student connect with me for a project. We chatted for a bit and at first it wasn't clear exactly what he might be able to do. He had some experience coding, and was working in the campus computing center, but had no background in the more advanced topics in computing (yet).
Well, it just so happened that a few months earlier, my honeypot Sun workstation had recorded a very sophisticated (for the time) attack, which resulted in an altered shared library with a back door in place. The attack was stealthy, and the new library had the same dates, size, and simple hash value as the original. (The attack was part of a larger series of attacks, eventually documented in "@Large: The Strange Case of the World's Biggest Internet Invasion" by David H. Freedman and Charles C. Mann.)
I had recently been studying message digest functions and had a hunch that they might provide better protection for systems than a simple "ls -1 | diff - old" comparison. However, I wanted to get some operational sense of the potential for collisions in the digests. So, I tasked the student with devising some tests to run many files through a version of the digest to see if there were any collisions. He wrote a program to generate some random files, and all seemed okay based on that. I suggested he look for a different collection -- something larger. He took my advice a little too much to heart. It seems he had a part-time job running backup jobs on the main shared instructional computers at the campus computing center. He decided to run the program over the entire file system to look for duplicates. Which he did one night after backups were complete.
The next day (as I recall) he reported to me that there were no unexpected collisions over many hundreds of thousands of files. That was a good result!
The bad result was that running his program over the file system had changed the access time of every file on the system, so the backups the next evening vastly exceeded the existing tape archive and all the spares! This led directly to the student having a (pointed) conversation with the director of the center and, thereafter, unemployment. I couldn't leave him in that position mid-semester, so I found a little money and hired him as an assistant. I then put him to work coding up my idea of how to use message digests to detect changes and intrusions in a computing system. Over the next year, he coded up my design, and we did repeated, modified "cleanroom" tests of his software. Only when they all passed did we release the first version of Tripwire.
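The design Gene implemented was, of course, far more elaborate than this (Tripwire records multiple digests and inode attributes against a protected baseline database), but the core idea -- digest every file, save the results, and compare later -- can be sketched in a few lines. The sketch below is my own illustration, not Tripwire code; the use of SHA-256 and the function names are assumptions for the example.

```python
import hashlib
import os

def digest_file(path, chunk_size=65536):
    """Hex SHA-256 of a file, read in chunks so large files fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(root):
    """Map every regular file under root to its digest (the 'known good' state)."""
    baseline = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            baseline[path] = digest_file(path)
    return baseline

def detect_changes(root, baseline):
    """Rescan root and report (added, removed, modified) files vs. the baseline."""
    current = build_baseline(root)
    added = set(current) - set(baseline)
    removed = set(baseline) - set(current)
    modified = {p for p in set(current) & set(baseline)
                if current[p] != baseline[p]}
    return added, removed, modified
```

Note that computing the digests requires reading every file, which updates each file's access time -- the very side effect that overwhelmed the campus backups; production tools take care to restore or deliberately ignore atimes.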
That is how I met Gene Kim.
Gene went on to grad school elsewhere, then a start-up, and finally got the idea to start the commercial version of Tripwire with Wyatt Starnes; Gene served as CTO, Wyatt as CEO. Their subsequent hard work, and that of hundreds of others who have worked at the company over the years, resulted in great success: the software has become one of the most widely used change detection & IDS systems in history, as well as inspiring many other products.
Gene became more active in the security scene, and was especially intrigued by issues of configuration management, compliance, and overall system visibility, and by their connections to security and correctness. Over the years he has spoken with thousands of customers and experts in the industry, and heard both best-practice and horror stories involving integrity management, version control, and security. This led to projects, workshops, panel sessions, and eventually to his lead authorship of "Visible Ops Security: Achieving Common Security and IT Operations Objectives in 4 Practical Steps" (Gene Kim, Paul Love, George Spafford), and some other related works.
His passion for the topic only grew. He was involved in standards organizations, won several awards for his work, and even helped make the B-sides conferences a going concern. A few years ago, he left his position at Tripwire to begin work on a book to better convey the principles he knew could make a huge difference in how IT is managed in organizations big and small.
I read an early draft of that book a little over a year ago (late 2011). It was a bit rough -- Gene is bright and enthusiastic, but was not quite writing at the level of J.K. Rowling or Stephen King. Still, it was clear that he had the framework of a reasonable narrative to present major points about good, bad, and excellent ways to manage IT operations, and how to transform them for the better. He then obtained input from a number of people (I think he ignored mine), added some co-authors, and performed a major rewrite of the book. The result is a much more readable and enjoyable story -- a cross between a case study and a detective novel, with a dash of H. P. Lovecraft and DevOps thrown in.
The official launch date of the book, "The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win" (Gene Kim, Kevin Behr, George Spafford), is Tuesday, January 15, but you can preorder it before then on (at least) Amazon.
The book is worth reading if you have a stake in operations at a business using IT. If you are a C-level executive, you should most definitely take time to read the book. Consultants, auditors, designers, educators...there are some concepts in there for everyone.
But you don't have to take only my word for it -- see the effusive praise of tech luminaries who have read the book.
So, Spaf sez, get a copy and see how you can transform your enterprise for the better.
(Oh, and I have never met the George Spafford who is a coauthor of the book. We are undoubtedly distant cousins, especially given how uncommon the name is. That Gene would work with two different Spaffords over the years is one of those cosmic quirks Vonnegut might write about. But Gene isn't Vonnegut, either.)
So, as a postscript.... I've obviously known Gene for over 20 years, and am very fond of him, as well as happy for his continuing success. However, I have had a long history of kidding him, which he has taken with incredible good nature. I am sure he's saving it all up to get me some day....
When Gene and his publicist asked if I could provide some quotes to use for his book, I wrote the first of the following. For some reason, it never made it onto the WWW site. So, they asked me again, and I wrote the second of the following -- which they also did not use.
So, not to let a good review (or two) go to waste, I have included them here for you. If nothing else, it should convince others not to ask me for a book review.
But, despite the snark (who, me?) of these gag reviews, I definitely suggest you get a copy of the book and think about the ideas expressed therein. Gene and his coauthors have really produced a valuable, readable work that will inform -- and maybe scare -- anyone involved with organizational IT.
Based on my long experience in academia, I can say with conviction that this is truly a book, composed of an impressive collection of words, some of which exist in human languages. Although arranged in a largely random order, there are a few sentences that appear to have both verbs and nouns. I advise that you immediately buy several copies and send them to people -- especially people you don't like -- and know that your purchase is helping keep some out of the hands of the unwary and potentially innocent. Under no circumstances, however, should you read the book before driving or operating heavy machinery. This work should convince you that Gene Kim is a visionary (assuming that your definition of "vision" includes "drug-induced hallucination").
I picked up this new book -- The Phoenix Project, by Gene Kim, et al. -- and could not put it down. You probably hear people say that about books in which they are engrossed. But I mean this literally: I happened to be reading it on my Kindle while repairing some holiday ornaments with superglue. You might say that the book stuck with me for a while.
There are people who will tell you that Gene Kim is a great author and raconteur. Those people, of course, are either trapped in Mr. Kim's employ or they drink heavily. Actually, one of those conditions invariably leads to the other, along with uncontrollable weeping, and the anguished rending of garments. Notwithstanding that, Mr. Kim's latest assault on les belles-lettres does indeed prompt this reviewer to some praise: I have not had to charge my health spending account for a zolpidem refill since I received the advance copy of the book! (Although it may be why I now need risperidone.)
I must warn you, gentle reader, that despite my steadfast sufferance in reading, I never encountered any mention of an actual Phoenix. I skipped ahead to the end, and there was no mention there, either. Neither did I notice any discussion of a massive conflagration nor of Arizona, either of which might have supported the reference to Phoenix. This is perhaps not so puzzling when one recollects that Mr. Kim's train of thought often careens off the rails with any random, transient manifestation corresponding to the meme "Ooh, a squirrel!" Rather, this work is more emblematic of a bus of thought, although it is the short bus, at that.
Despite my personal trauma, I must declare the book as a fine yarn: not because it is unduly tangled (it is), but because my kitten batted it about for hours with the evident joy usually limited to a skein of fine yarn. I have found over time it is wise not to argue with cats or women. Therefore, appease your inner kitten and purchase a copy of the book. Gene Kim's court-appointed guardians will thank you. Probably.
(Congratulations Gene, Kevin and George!)
Sunday, October 2nd, Earl Eugene Schultz, Jr. passed away. Gene probably had suffered an unrecognized stroke about two weeks earlier, and a week later fell down a long escalator at the Minneapolis municipal airport. He was given immediate emergency aid, then hospitalized, but never regained consciousness. Many of his family members were with him during his final days.
What follows is a more formal obituary, based on material provided by his family and others. That is followed by some personal reflections.
Gene was born September 10, 1946, in Chicago to E. Eugene Sr. and Elizabeth Schultz. They moved to California in 1948, and Gene’s sister, Nancy, was born in 1955. The family lived in Lafayette, California. Gene graduated from UCLA, and earned his MS and PhD (in Cognitive Science, 1977) at Purdue University in Indiana.
While at Purdue University, Gene met and married Cathy Brown. They were married for 36 years, and raised three daughters: Sarah, Rachel and Leah.
Gene was an active member of Cornerstone Fellowship, and belonged to a men’s Bible study. His many interests included family, going to his mountain home in Twain Harte, model trains, music, travelling, the outdoors, history, reading and sports.
Gene is survived by his wife of 36 years, Cathy Brown Schultz; father, Gene Schultz, Sr.; sister, Nancy Baker; daughters and their spouses, Sarah and Tim Vanier, Rachel and Duc Nguyen, Leah and Nathan Martin; and two grandchildren, Nola and Drake Nguyen. A memorial service will be held at Cornerstone Fellowship in Livermore, California on Saturday, October 8, 2011 at 1 pm. Donations may be sent to CaringBridge.org under his name, Gene Schultz.
You should also take a few moments to visit this page and learn about the symptoms and response to stroke.
Gene was one of the more notable and accomplished figures in computing security over the last few decades. During the course of his career, Gene was professor of computer science at several universities, including the University of California at Davis and Purdue University, and retired from the University of California at Berkeley. He consulted for a wide range of clients, including U.S. and foreign governments and the banking, petroleum, and pharmaceutical industries. He also managed several information security practices and served as chief technology officer for two companies.
Gene formed and managed the Computer Incident Advisory Capability (CIAC) — an incident response team for the U.S. Department of Energy — from 1986–1992. This was the first formal incident response team, predating the CERT/CC by several years. He also was instrumental in the founding of FIRST — the Forum of Incident Response & Security Teams.
During his 30 years of work in security, Gene authored or co-authored over 120 papers, and five books. He was manager of the I4 program at SRI from 1994–1998. From 2002–2007, he was the Editor-in-Chief of Computers and Security — the oldest journal in computing security — and continued to serve on its editorial board. Gene was also an associate editor of Network Security. He was a member of the accreditation board of the Institute of Information Security Professionals (IISP).
Gene testified as an expert several times before both Senate and House Congressional committees. He also served as an expert advisor to a number of companies and agencies. Gene was a certified SANS instructor, instructor for ISACA, senior SANS analyst, member of the SANS NewsBites editorial board, and co-author of the 2005 and 2006 Certified Information Security Manager preparation materials.
Dr. Schultz was honored numerous times for his research, service, and teaching. Among his many notable awards, Gene received the NASA Technical Excellence Award, Department of Energy Excellence Award, the Vanguard Conference Top Gun Award (for best presenter) twice, the Vanguard Chairman's Award, the ISACA John Kuyers Best Speaker/Best Conference Contributor Award and the National Information Systems Security Conference Best Paper Award. One of only a few Distinguished Fellows of the Information Systems Security Association (ISSA), he was also named to the ISSA Hall of Fame and received ISSA's Professional Achievement and Honor Roll Awards.
As I recall, I first “met” Gene almost 25 years ago, when he was involved with CIAC and I was involved with network security. We exchanged email about security issues and his time at Purdue. I may even have met him earlier — I can’t recall exactly. It seems we have been friends forever. We also crossed paths once or twice at conferences, but only incidentally.
In 1998, I started CERIAS at Purdue. I had contacted personnel at the (now defunct) company Global Integrity while at the National Computer Security Conference that year about supporting the effort at CERIAS. What followed was a wonderful collaboration: Gene was the Director of Research for Global Integrity, and as part of their support for CERIAS they “loaned” Gene to us for several years. Gene, Cathy and Leah moved to West Lafayette, a few houses away from where I lived, and Gene proceeded to help us in research and teaching courses over the next three years while he worked remotely for GI.
The students at Purdue loved Gene, but that seems to have been the case everywhere he taught. Gene had a gift for conveying complex concepts to students, and incredible patience when dealing with them one-on-one. He came up with great assignments, sprinkled his lectures with interesting stories from his experience, and encouraged the students to try things to see what they might discover. He was inspirational. He was inspirational as a colleague, too, although we both traveled so much that we didn’t get to see each other very often.
In 2001 he parted ways with Global Integrity, and moved his family back to California. This was no doubt influenced by the winters they had experienced in Indiana — too much of a reminder of grad student days for Gene and Cathy! I remember one time that we all got together to watch a New Year’s Purdue football bowl appearance, and the snow was so high as to make the roads impassable for a few days. Luckily, we lived near each other and it was only a short walk to warmth, hors d’oeuvres, and wine.
In the following years, Gene and I kept in close touch. We served on a few committees and editorial boards together, regularly saw each other at conferences, and kept the email flowing back and forth. He returned to Purdue and CERIAS several times to conduct seminars and joint research. He was generous with his time to the students and faculty who met with him.
Earlier this year, several of us put together a proposal to a funding agency. In it, we listed Gene as an outside expert to review and advise us on our work. We had room in the budget to pay him almost any fee he requested. But, when I spoke with him on the phone, he indicated he didn’t care if we paid more than his expenses — “I want to help CERIAS students and advance the field” was his rationale.
Since I learned of the news of his accident, and subsequent passing, I have provided some updates and notes to friends, colleagues, former students, and others via social media and email. So many people who knew Gene have responded with stories. There are three elements that are frequently repeated, and from my experience they help to define the man:
Gene Schultz was a wonderful role model, mentor and friend for a huge number of people, including being a husband to a delightful wife for 36 years and father to three wonderful daughters. Our world is a little less bright with him gone, but so very much better that he was with us for the time he was here.
E. Eugene Schultz, Jr., 9/10/46–10/2/11. Requiescat in pace.
Yet another breach of information has occurred, this time from the Arizona Department of Public Safety. A large amount of data about law enforcement operations was exposed, as was a considerable amount of personnel information. As someone who has been working in information security and the implications of technology for nearly 30 years, two things come to mind.
First, if a largely uncoordinated group could penetrate the systems and expose all this information, then so could a much more focused, well-financed, and malevolent group — and it would not likely result in postings picked up by the media. Attacks by narcotics cartels, organized crime, terrorists and intelligence agencies are obvious threats; we can only assume that some have already succeeded but not been recognized or publicized. And, as others are noting, this poses a real threat to the physical safety of innocent people. Yes, in any large law enforcement organization there are likely to be some who are corrupt (the claimed reason behind the attack), but that is no reason to attack them all. Some of the people they are arrayed against are far worse.
For example, there are thousands (perhaps tens of thousands) of kidnappings in Mexico for ransom, with many of the hostages killed rather than freed after payment. Take away effective law enforcement in Arizona, and those gangs would expand into the U.S. where they could demand bigger ransoms. The hackers, sitting behind a keyboard removed from gang and street violence, safe from forcible rape, and with enough education to be able to avoid most fraud, find it easy to cherry-pick some excesses to complain about. But the majority of people in the world do not have the education or means to enjoy that level of privileged safety. Compared to police in many third-world countries where extortion and bribes are always required for any protection at all, U.S. law enforcement is pretty good. (As is law enforcement in the UK, which has also recently been attacked.)
Ask yourself what the real agenda is of a group that has so far attacked only law enforcement in some of the more moderate countries and companies without political or criminal agendas, while showing a total disregard for collateral damage. Ask why these "heroes" aren't seeking to expose some of the data and names of the worst drug cartels, or working to end human trafficking and systematic rape in war zones, or exposing the corruption in some African, South American & Asian governments, or seeking to damage the governments of truly despotic regimes (e.g., North Korea, Myanmar), or interfering with China's online attacks against the Dalai Lama, or leaking memos about willful environmental damage and fraud by some large companies, or seeking to destroy extremist groups (such as al Qaida) that oppress women and minorities and are seeking weapons of mass destruction.
Have you seen one report yet about anything like the above? None of those actions would necessarily be legal, but any one of them would certainly be a better match for the claimed motives. Instead, it is obvious that these individuals and groups are displaying a significant political and moral bias — or blindness — in ignoring the worst human rights offenders and criminals on the planet. It seems they are after the ego-boosting publicity, and concerned only with themselves. The claim of exposing evil is intended to fool the naive.
In particular, this most recent act of exposing the names and addresses of family members of law enforcement, most of whom are undoubtedly honest people trying to make the world a safer place, is not a matter of "Lulz" — it is potentially enabling extortion, kidnapping, and murder. The worst criminals, to whom money is more important than human life, are always seeking an opportunity to neutralize the police. Attacking family members of law enforcement is common in many countries, including Mexico, and this kind of exposure further enables it now in Arizona. The data breach attacks some of the very people and organizations trying to counter the worst criminal and moral abuses, and worse, it attacks their families.
Claiming, for instance, that the "War on Drugs" created the cartels and is morally equivalent (e.g., response #13 in this) is specious. Laws passed by elected representatives in the U.S. did not cause criminals in Mexico to kidnap poor people, force them to fight to the death for the criminals' amusement, and then force the survivors to act as expendable drug mules. The moral choices by criminals are exactly that — moral choices. The choice to kidnap, rape, or kill someone who objects to your criminal behavior is a choice with clear moral dimensions. So are the choices of various hackers who expose data and deface systems.
When I was growing up, I was the chubby kid with glasses. I didn't do well in sports, and I didn't fit in with the groups that were the "cool kids." I wasn't into drinking myself into a stupor, or taking drugs, or the random vandalism that seemed to be the pastimes of those very same "cool kids." Instead, I was one of the ones who got harassed and threatened, had my homework stolen, and was laughed at. The ones who did it claimed that it was all in fun — this being long before the word "lulz" was invented. But it was clear they were being bullies, and they enjoyed being bullies. It didn't matter if anyone got hurt, it was purely for their selfish enjoyment. Most were cowards, too, because they would not do anything that might endanger them, and when they could, they did things anonymously. The only ones who thought it was funny were the other dysfunctional jerks. Does that sound familiar?
Twenty years ago, I was objecting to the press holding up virus authors as unappreciated geniuses. They were portrayed as heroes, performing experiments and striking blows against the evil computer companies then prominent in the field. Many in the public and press (and even in the computing industry) had a sort of romantic view of them — as modern, swashbuckling, electronic pirates, of the sorts seen in movies. Now we can see the billions of dollars in damage wrought by those "geniuses" and their successors with Zeus and Conficker and the rest. The only difference is one of time and degree — the underlying damage and amoral concern for others is simply visible to more people now. (And, by the way, the pirates off Somalia and in the Caribbean, some of whom simply kill their victims to steal their property, are real pirates, not the fictional, romantic versions in film.)
The next time you see a news article about some group, by whatever name, exposing data from a gaming company or law enforcement agency, think about the real evil left untouched. Think about who might actually be hurt or suffer loss. Think about the perpetrators hiding their identities, attacking the poorly defended, and bragging about how wonderful and noble and clever they are. Then ask if you are someone cheering on the bully or concerned about who is really getting hurt. And ask how others, including the press, are reporting it. All are choices with moral components. What are yours?
I have received several feedback comments to this (along with the hundreds of spam responses). Several were by people using anonymous addresses. We don't publish comments made anonymously or containing links to commercial sites. For this post, I am probably not going to pass through any rants, at least based on what I have seen. Furthermore, I don't have the time (or patience) to do a point-by-point commentary on the same things, again and again. However, I will make a few short comments on what I have received so far.
Several responses appear to be based on the assumption that I don't have the knowledge or background to back up some of my statements. I'm not going to rebut those with a list of specifics. However, people who know what I've been doing over the past few decades (or bothered to do a little research) — including work with companies, law enforcement, community groups, and government agencies — would hardly accuse me of being an observer with only an academic perspective.
A second common rant is that the government has done some bad things, or the police have done something corrupt, or corporations are greedy, and those facts somehow justify the behavior I described. Does the fact that a bully was knocked around by someone else, and thus became a bully himself, make his bullying okay? If so, then the fact that the U.S. and U.K. have had terrorist attacks that have resulted in overly intrusive laws should make it all okay for you. After all, they had bad things happen to them, so their bad behavior is justified, correct? Wrong. That you became an abuser of others because you were harmed does not make it right. Furthermore, attacks such as the ones I discussed do nothing to fix those problems, but do have the potential to harm innocent parties as well as give ammunition to those who would pass greater restrictions on freedom. Based on statistics (for the US), a significant number of the people whining about government excess have not voted or bothered to make their opinions known to their elected representatives. The more people remain uninvolved, the more it looks like the population doesn't care or approves of those excesses, including sweetheart deals for corporations and invasions of privacy. Change is possible, but it is not going to occur by posting account details of people subscribed to Sony services, or giving out addresses and names of families of law enforcement officers, or defacing the NPR website. One deals with bullies by confronting them directly.
The third most common rant so far is to claim that it doesn't make any difference, for one reason or another: all the personal information is already out there on the net or will be soon, the government (or the government of another country) or criminals have already captured all that information, it doesn't cost anything, security is illusory, and so on. Again, this misses the point. Being a bully or vandal because you think it won't make any difference doesn't excuse the behavior. That you believe the effects of your behavior will happen anyhow is no reason to hasten those effects. If you believe otherwise, then consider: you are going to die someday, so it doesn't make a difference if you kill yourself, so you might as well do it now. Still there? Then I guess you agree that each act has implications that matter even if the end state is inevitable.
Fourth, some people claim that these attacks are a "favor" to the victims by showing them their vulnerabilities, or that the victims somehow deserved this because their defenses were weak. I addressed these claims in an article published in 2003. In short, blaming the victim is inappropriate. Yes, some may deserve some criticism for not having better defenses, but that does not justify an attack nor serve as a defense for the attackers. It is no favor either. If you are walking down a street at night and are assaulted by thugs who beat you with 2x4s and steal your money, you aren't likely to lie bleeding in the street saying to yourself "Gee, they did me a huge favor by showing I wasn't protected against a brutal assault. I guess I deserved that." Blaming the victim is done by the predators and their supporters to try to justify their behavior. And an intrusion or breach, committed without invitation or consent, is not a favor — it is a crime.
Fifth, if you support anarchy, then that is part of your moral choices. It does not invalidate what I wrote. I believe that doing things simply because they amuse you is a very selfish form of choice, and is the sort of reasoning many murderers, rapists, pedophiles and arsonists use to justify their actions. In an anarchy, they'd be able to indulge to their hearts' content. Lotsa lulz. But don't complain if I and others don't share that view.
I am going to leave it here. As I said, I'm not interested in spending the next few weeks arguing on-line with people who are trying to justify behaving as bullies and vandals based on faulty assumptions.
I have a set of keywords registered with Google Alerts that result in a notification whenever they show up in a new posting. This helps me keep track of some particular topics of interest.
One of them popped up recently with a link to a review and some comments about a book I co-authored (Practical Unix & Internet Security, 3rd Edition). The latest revision is over 6 years old, but still seems to be popular with many security professionals; some of the specific material is out of date, but much of the general material is still applicable and is likely to be applicable for many years yet to come. At the time we wrote the first edition of the book there were only one or two books on computer security, so we included more material to make this a useful text and reference.
In general, I don't respond to reviews of my work unless there is an error of fact, and not always even then. If people like the book, great. If they don't, well, they're entitled to their opinions -- no matter how ignorant and ill-informed they may be.
This particular posting included reviews from Amazon that must have been posted about the 2nd edition of the book, nearly a decade old, although their dates as listed on this site make it look like they are recent. I don't recall seeing all of the reviews before this.
One of the responses in this case was somewhat critical of me rather than the book: the text by James Rothschadl. I'm not bothered by his criticism of my knowledge of security issues. Hackers who specialize in the latest attacks generally dismiss anyone not versed in their tools as ignorant, so I have heard this kind of criticism before. The "elite" hackers who master the latest penetration tools still believe they are the best informed about all things security. Sadly, some decision-makers believe this too, much to their later regret, usually because they then depend on penetration analysis as their primary security mechanism.
What triggered this blog posting was when I read the comments that included the repetition of erroneous information originally in the book Underground by Suelette Dreyfus. In that book, Ms. Dreyfus recounted the exploits of various hackers and miscreants -- according to them. One such claim, made by a couple of hackers, was that they had broken into my account circa 1990. I do not think Ms. Dreyfus sought independent verification of this, because the story is not completely correct. Despite this, some people have gleefully pointed this out as "Spaf got hacked."
There are two problems with this tale. First, the computer account they broke into was on the CS department machines at Purdue. It was not a machine I administered (and for which I did not have administrator rights) -- it was a shared faculty machine. Thus, the perps succeeded in getting into a machine run by university staff that happened to have my account name but which I did not maintain. That particular instance came about because of a machine crash, and the staff restored the system from an older backup tape. There had been a security patch applied between the backup and the crash, and the staff didn't realize that the patch needed to be reapplied after the restore.
But that isn't the main problem with this story: rather, the account they broke into wasn't my real account! My real account was on another machine that they didn't find. Instead, the account they penetrated was a public "decoy" account that was instrumented to detect such behavior, and that contained "bait" files. For instance, the perps downloaded a copy of what they thought was the Internet Worm source code. It was actually a copy of the code with key parts missing, and some key variables and algorithms changed such that it would partially compile but not run correctly. No big deal.
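An instrumented decoy with bait files can be sketched in a few lines. The sketch below is purely illustrative and not how my account was actually instrumented; the function names are hypothetical, and it uses file access times as a simple "tripline" (real instrumentation would rely on system-level auditing, and many modern filesystems mount with noatime or relatime, which suppresses access-time updates on ordinary reads).

```python
import os
import tempfile

# Illustrative bait-file "tripline": record the access times of decoy
# files, then later report which ones appear to have been read.
# Hypothetical names; atime-based detection is unreliable on filesystems
# mounted with noatime/relatime.

def snapshot_atimes(bait_files):
    """Record the current access time of each bait file."""
    return {path: os.stat(path).st_atime for path in bait_files}

def touched_since(baseline):
    """Return the bait files whose access time advanced past the baseline."""
    return [path for path, atime in baseline.items()
            if os.stat(path).st_atime > atime]

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        bait = os.path.join(d, "worm_source.tar")
        with open(bait, "w") as f:
            f.write("bait contents\n")
        baseline = snapshot_atimes([bait])
        # Simulate a later read by bumping the access time explicitly,
        # since atime updates on real reads depend on mount options.
        st = os.stat(bait)
        os.utime(bait, (st.st_atime + 60, st.st_mtime))
        print(touched_since(baseline) == [bait])  # prints True
```

The same idea generalizes: the value of a decoy lies less in the detection mechanism than in ensuring the bait is convincing but useless, as with the doctored Worm source described above.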
Actually, I got log information on the whole event. It was duly provided to law enforcement authorities, and I seem to recall that it helped lead to the arrest of one of them (but I don't recall the details about whether there was a prosecution -- it was 20 years ago, after all).
At least 3 penetrations of the decoy account in the early 1990s provided information to law enforcement agencies, as well as inspired my design of Tripwire. I ran decoys for several years (and may be doing so to this day). I always had a separate, locked-down account for personal use, and even now keep certain sensitive files encrypted on removable media that is only mounted when the underlying host is offline. I understand the use of defense-in-depth, and the use of different levels of protection for different kinds of information. I have great confidence in the skills of our current system admins. Still, I administer a second set of controls on some systems. But I also realize that those defenses may not be enough against really determined, well-resourced attacks. So, if someone wants to spend the time and effort to get in, fine, but they won't find much of interest -- and they may be providing data for my own research in the process!
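The core idea behind Tripwire can be summarized as: snapshot cryptographic checksums of files while they are trusted, then compare the live filesystem against that baseline. The sketch below is a minimal illustration of that idea, not Tripwire's actual code or design; the function names and the choice of SHA-256 are assumptions made for this example.

```python
import hashlib
import os
import tempfile

# Minimal sketch of baseline-vs-live integrity checking (the idea behind
# Tripwire). Not Tripwire's actual implementation; names and hash choice
# are illustrative.

def file_digest(path):
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(paths):
    """Snapshot digests for the given files while they are trusted."""
    return {p: file_digest(p) for p in paths}

def check_baseline(baseline):
    """Classify each baselined file as ok, modified, or missing."""
    report = {}
    for path, expected in baseline.items():
        if not os.path.exists(path):
            report[path] = "missing"
        elif file_digest(path) != expected:
            report[path] = "modified"
        else:
            report[path] = "ok"
    return report

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        target = os.path.join(d, "inetd.conf")
        with open(target, "w") as f:
            f.write("original contents\n")
        baseline = build_baseline([target])
        with open(target, "w") as f:
            f.write("tampered contents\n")
        print(check_baseline(baseline)[target])  # prints "modified"
```

Note that in practice the baseline itself must be protected, for instance stored offline or on read-only media; otherwise an intruder can simply regenerate it after tampering.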