Déjà Vu All Over Again: The Attack on Encryption

Preface

by Spaf
Chair, ACM US Public Policy Council (USACM)

About 20 years ago, there was a heated debate in the US about giving the government guaranteed access to encrypted content via mandatory key escrow. The FBI and other government officials predicted all sorts of gloom and doom if that didn’t happen, including that it would prevent them from fighting crime, especially terrorists, child pornographers, and drug dealers. Various attempts were made to legislate access, including forced key escrow encryption (the “Clipper Chip”). Those efforts didn’t come to pass because eventually enough sensible — and technically literate — people spoke up. The economic realities also made it clear that people weren’t knowingly going to buy equipment with government backdoors built in.

Fast forward to today. In the intervening two decades, the forces of darkness did not overtake us as a result of leaving encryption unrestricted. Yes, there were some terrorist incidents, but either no encryption was involved that made any difference (e.g., the Boston Marathon bombing), or there was plenty of other evidence that was never used to prevent anything (e.g., the 9/11 tragedy). Drug dealers have not taken over the country (unless you consider Starbucks coffee a narcotic). Authorities are still catching and prosecuting criminals, including pedophiles and spies. Notably, even people who use encryption in furtherance of criminal enterprises, such as Ross “Dread Pirate Roberts” Ulbricht, are being arrested and convicted. In all these years, the FBI has yet to point to anything significant where the use of encryption frustrated its investigations. The doomsayers of the mid-1990s were quite clearly wrong.

However, now in 2015 we again have government officials raising a hue and cry that civilization will be overrun, and law enforcement will be rendered powerless, unless we pass laws mandating that back doors and known weaknesses be put into encryption on everything from cell phones to email. These arguments have a strong flavor of déjà vu for those of us who were part of the discussion in the 90s. They are even more troubling now, given the scope of government eavesdropping, espionage, and massive data thefts: arguably, encryption is more needed now than it was 20 years ago.

USACM, the Public Policy Council of the ACM, is currently discussing this issue — again. As a group, we made statements against the proposals 20 years ago. (See, for instance, the USACM and IEEE joint letter to Senator McCain in 1997). The arguments in favor of weakening encryption are as specious now as they were 20 years ago; here are a few reasons why:

  • Weakening encryption to catch a small number of “bad guys” puts a much larger community of law-abiding citizens and companies at risk. Strong encryption is needed to help protect data at rest and in transit against criminal interception.
  • A “golden key” or weakened cryptography is likely to be discovered by others. There is a strong community of people working in security — both legitimately and for criminal enterprises — and access to the “key” or methods to exploit the weaknesses will be actively sought. Once found, untold millions of systems will be defenseless — some, permanently.
  • There is no guarantee that the access methods won’t be leaked, even if they are closely held. There are numerous cases of blackmail and bribery of officials leading to leaked information. Those aren’t the only motives, either. Consider Robert Hanssen, Edward Snowden, and Chelsea (Bradley) Manning: three individuals with top security clearances who stole/leaked extremely sensitive and classified information. Those are only the ones publicly identified so far. Human nature and history instruct us that they won’t be the last.
  • As recently disclosed incidents — including data exfiltration from the State Department, IRS, and OPM — have shown, the government isn’t very good at protecting sensitive information. Keys will be high-value targets. How long before the government agencies (and agents) holding them are hacked?
  • Revelations of government surveillance in excess of legal authority, past and recent, suggest that any backdoor capability in the hands of the government may possibly (likely?) be misused. Strong encryption is a form of self-protection.
  • Consumers in other countries aren’t going to want to buy hardware/software that has backdoors built in for the US government. US companies will be at a huge disadvantage in selling into the international marketplace. Alternatively, other governments will demand the same keys/access, ostensibly for their own law enforcement purposes. Companies will need to accede to these requests, thus broadening the scope of potential disclosure, as well as making US data more accessible to espionage by those countries.
  • Cryptography is not a dark art. There are many cryptography systems available online. Criminals and terrorists will simply layer encryption, using other, stronger systems in addition to the mandated, weakened cryptography (a sketch of such layering follows this list). Mandating backdoors will mostly endanger only the law-abiding.
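
To make that last point concrete, here is a minimal sketch in Python of such layered (“super”) encryption. It assumes the third-party cryptography package, and the key names and message are purely illustrative. Anyone holding only the escrowed key can strip the outer layer, but recovers nothing except more ciphertext:

# A minimal sketch of layered ("super") encryption, assuming the
# third-party 'cryptography' package. Key names are illustrative only.
from cryptography.fernet import Fernet

escrowed_key = Fernet.generate_key()  # the key a government could demand
private_key = Fernet.generate_key()   # a key only the user holds

message = b"meet at the usual place"

# Encrypt with the private key first, then with the escrowed key.
inner = Fernet(private_key).encrypt(message)
outer = Fernet(escrowed_key).encrypt(inner)

# An eavesdropper with only the escrowed key strips one layer...
recovered = Fernet(escrowed_key).decrypt(outer)
# ...and is left holding ciphertext, not the message.
assert recovered == inner
assert recovered != message

Nothing here requires exotic tools; any of the many freely available strong ciphers would serve as the inner layer.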

There are other reasons, too, including cost, impact on innovation, and more. The essay below provides more rationale. Experts and organizations in the field have recently weighed in on this issue, and (as one of the individuals, and as chair of one of the organizations) I expect we will continue to do so.

With all that as a backdrop, I was reminded of an essay on this topic area by one of USACM’s leaders. It was originally given as a conference address two decades ago, then published in several places, including on the EPIC webpage of information about the 1990s anti-escrow battle. The essay is notable both because it was written by someone with experience in Federal criminal prosecution, and because it is still applicable, almost without change, in today’s debate. Perhaps in 20 more years this will be reprinted yet again, as once more memories dim of the arguments made against government-mandated surveillance capabilities. It is worth reading, and remembering.



The Law Enforcement Argument for Mandatory Key Escrow Encryption: The “Dank” Case Revisited

by Andrew Grosso, Esq.
Chair, USACM Committee on Law

(This article is a revised version of a talk given by the author at the 1996 RSA Data Security Conference, held in San Francisco, California. Mr. Grosso is a former federal prosecutor who now has his own law practice in Washington, D.C. His e-mail address is agrosso@acm.org.)

I would like to start by telling a war story. Some years ago, while I was an Assistant U.S. Attorney, I was asked to try a case which had been indicted by one of my colleagues. For reasons which will become clear, I refer to this case as “the Dank case.”

The defendant was charged with carrying a shotgun. This might not seem so serious, but the defendant had a prior record. In fact, he had six prior convictions, three of which were considered violent felonies. Because of that, this defendant was facing a mandatory fifteen years imprisonment, without parole. Clearly, he needed an explanation for why he was found in a park at night carrying a shotgun. He came up with one.

The defendant claimed that another person, called “Dank,” forced him to carry the gun. “Dank,” it seems, came up to him in the park, put the shotgun in his hands, and then pulled out a handgun and put the handgun to the defendant’s head. “Dank” then forced the defendant to walk from one end of the park to the other, carrying this shotgun. When the police showed up, “Dank” ran away, leaving the defendant holding the bag, or, in this case, the shotgun.

The jurors chose not to believe the defendant’s story, although they spent more time considering it than I would like to admit. After the trial, the defendant’s story became known in my office as “the Dank defense.” As for myself, I referred to it as “the devil made me do it.”

I tell you this story because it reminds me of the federal government’s efforts to justify domestic control of encryption. Instead of “Dank,” it has become “drug dealers made me do it,” or “terrorists made me do it,” or “crypto anarchists made me do it.” There is as much of a rational basis behind these claims as there was behind my defendant’s story of “Dank.” Let us examine some of the arguments the government has advanced.

It is said that wiretapping is indispensable to law enforcement. This is not the case. Many complex and difficult criminal investigations have been successfully concluded, and successfully argued to a jury, where no audio tapes existed of the defendants incriminating themselves. Of those significant cases cited by the government where audio tapes have proved invaluable, such as the John Gotti trial, the tapes were made through means of electronic surveillance other than wiretapping, for example, through the use of consensual monitoring or room bugs. The unfettered use of domestic encryption could have no effect on such surveillance.

It is also said that wiretapping is necessary to prevent crimes. This, also, is not the case. In order to obtain a court order for a wiretap, the government must first possess probable cause that a crime is being planned or is in progress. If the government has such probable cause concerning a crime yet in the planning stages, and has sufficient detail about the plan to tap an individual’s telephone, then the government almost always has enough probable cause to prevent the crime from being committed. The advantage which the government gains by use of a wiretap is the chance to obtain additional evidence which can later be used to convict the conspirators or perpetrators. Although such convictions are desirable, they must not be confused with the ability to prevent the crime.

The value of mandating key escrow encryption is further eroded by the availability of super encryption, that is, using additional encryption for which the key is not available to the government. True, the government’s mandate would make such additional encryption illegal; however, the deterrent effect of such legislation is dubious at best. An individual planning a terrorist act, or engaging in significant drug importation, will be little deterred by prohibitions on the means for encoding his telephone conversations. The result is that significant crimes will not be affected or discouraged.

In a similar vein, the most recent estimate of the national cost for implementing the Digital Telephony law, which requires that commercial telecommunications companies wiretap our nation’s communications network for the government’s benefit, is approximately three billion dollars. Three billion dollars will buy an enormous number of police man-hours, officer training, and crime fighting equipment. It is difficult to see that this amount of money, by being spent on wiretapping the nation, is being spent most advantageously with regard to law enforcement’s needs.

Finally, the extent of the federal government’s ability to legislate in this area is limited. Legislation for the domestic control of encryption must be based upon the commerce clause of the U.S. Constitution. That clause would not prohibit an individual in, say, the state of California from purchasing an encryption package manufactured in California, and using that package to encode data on the hard drive of his computer, also located in California. It is highly questionable whether the commerce clause would prohibit the in-state use of an encryption package which had been obtained from out of state, where all the encryption is done in-state and the encrypted data is maintained in-state. Such being the case, the value of domestic control of encryption to law enforcement is doubtful.

Now let us turn to the disadvantages of domestic control of encryption. Intentionally or not, such control would shift the balance which exists between the individual and the state. The individual would no longer be free to conduct his personal life, or his business, free from the risk that the government may be watching every move. More to the point, the individual would be told that he would no longer be allowed to even try to conduct his life in such a manner. Under our constitution, it has never been the case that the state had the right to evidence in a criminal investigation. Rather, under our constitution, the state has the right to pursue such evidence. The distinction is crucial: it is the difference between the operation of a free society, and the operation of a totalitarian state.

Our constitution is based upon the concept of ordered liberty. That is, there is a balance between law and order, on the one hand, and the liberty of the individual on the other. This is clearly seen in our country’s bill of rights, and the constitutional protections afforded our accused: evidence improperly obtained is suppressed; there is a ban on the use of involuntary custodial interrogation, including torture, and any questioning of the accused without a lawyer; we require unanimous verdicts for convictions; and double jeopardy and bills of attainder are prohibited. In other words, our system of government expressly tolerates a certain level of crime and disorder in order to preserve liberty and individuality. It is difficult to conceive that the same constitution which is prepared to let a guilty man go free, rather than admit an illegally seized murder weapon into evidence at trial, can be interpreted to permit wholesale, nationwide, mandatory surveillance of our nation’s telecommunications system for law enforcement purposes. It is impossible that the philosophy upon which our system of government was founded could ever be construed to accept such a regime.

I began this talk with a war story, and I would like to end it with another war story. While a law student, I had the opportunity to study in London for a year. While there, I took one week and spent it touring the old Soviet Union. The official Soviet tour guide I was assigned was an intelligent woman. As a former Olympic athlete, she had been permitted in the 1960s to travel to England to compete in international tennis matches. At one point in my tour, she asked me why I was studying in London. I told her that I wanted to learn what it was like to live outside of my own country, so I chose to study in a country where I would have little trouble with the language. I noticed a strange expression on her face as I said this. It was not until my tour was over and I looked back on that conversation that I realized why my answer had resulted in her having that strange look. What I had said to her was that *I* had chosen to go overseas to study; further, I had said that *I* had chosen *where* to go. That I could make such decisions was a right which she and her fellow citizens did not have. Yes, she had visited England, but it was because her government chose her to go, and it was her government which decided where she should go. In her country, at that time, her people had order, but they had no liberty.

In our country, the domestic control of encryption represents a shift in the balance of our liberties. It is a shift not envisioned by our constitution. If ever to be taken, it must be based upon a better defense than what “Dank,” or law enforcement, can provide.




What you can do

Do you care about this issue? If so, consider contacting your elected legislators to tell them what you think, pro or con. Use this handy site to find out how to contact your Representative and Senators.

Interested in being involved with USACM? If so, visit this page. Note that you first need to be a member of ACM but that gets you all sorts of other benefits, too. We are concerned with issues of computing security, privacy, accessibility, digital governance, intellectual property, computing law, and e-voting. Check out our brochure for more information.


† — This blog post is not an official statement of USACM. However, USACM did issue the letter in 1997 and signed the joint letter earlier this year, as cited, so those two documents are official.

Teaching Information Security

Let me recommend an article in Communications of the ACM, June 2015, vol. 58(6), pp. 64-69. The piece is entitled PLUS ÇA CHANGE, PLUS C’EST LA MÊME CHOSE, and the author is the redoubtable Corey Schou.

Corey has been working in information security education as long as (and maybe longer than) anyone else in the field. What’s more, he has been involved in numerous efforts to help define the field, and to make it more professional.

His essay distills a lot of his thinking about information security (and its name), its content, certification, alternatives, and the history of educational efforts in the area.

If you work in the field in any way (as a teacher, practitioner, policy-maker, or simply a hobbyist), there is probably something in the piece for you.

(And yes, there are several indirect references to me in the piece. Two are clearer than others — can you spot them? I definitely endorse Corey’s conclusions, so perhaps that is why I’m there. *grin*)

—spaf

Short, Random Thought on Testing

In the late 1980s and early 1990s, around the time the Airbus A340 was being developed and introduced (1991), those of us working in software engineering/safety used to exchange a (probably apocryphal) story about how the fly-by-wire avionics software on major commercial airliners was tested.

According to the story, Airbus engineers employed the latest and greatest formal methods, and provided model checking and formal proofs of all of their avionics code. Meanwhile, according to the story, Boeing performed extensive design review and testing, and made all their software engineers fly on the first test flights. The general upshot of the story was that most of us (it seemed) felt more comfortable flying on Boeing aircraft. (It would be interesting to see if that would still be the majority opinion in the software engineering community.)

Today, in a workshop, I was reminded of this story. I realized what a poor security choice that second approach would be, even if it might be a reasonable software test. All it would take is one engineer (or test pilot) willing to sacrifice himself/herself, or a well-concealed attack, or someone near the test field with a surface-to-air missile, and it would be possible to destroy the entire pool of engineers in one fell swoop…as well as the prototype, and possibly (eventually) the company.

Related to recent events, I would also suggest that pen-testing at the wrong time, with insufficient overall knowledge (or with evil intent), could lead to consequences with some similar characteristics. Testing on live systems needs to be carefully considered whenever catastrophic failures are possible.

No grand conclusions here, other than to think about how testing interacts with security. The threat to the design organization needs to be part of the landscape — not simply testing the deployed product to protect the end-users.

Two Items of interest

Here are a couple of items of possible interest to some of you.

First, a group of companies, organizations, and notable individuals signed on to a letter to President Obama urging that the government not mandate “back doors” in computing products. I was one of the signatories. You can find a news account about the letter here and you can read the letter itself here. I suggest you read the letter to see the list of signers and the position we are taking.

Second, I’ve blogged before about The Florentine Deception, the new book by Carey Nachenberg, a senior malware expert and one of the co-authors of Norton Security. This is an entertaining mystery with some interesting characters and an intricate plot that ultimately involves a very real cyber security threat. It isn’t quite in the realm of an Agatha Christie or a Charles Stross, but everyone I know who has read it (me included!) has found it an engrossing read.

So, why am I mentioning Carey’s book again? Primarily because Carey is donating all proceeds from sales of the book to a set of worthy charities. Also, it presents a really interesting cyber security issue in an entertaining manner. Plus, I wrote the introduction to the book, explaining a curious “premonition” of the plot device in the book. What device? What premonition? You’ll need to buy the book (and thus help contribute to the charities), read the book (and be entertained), and then get the answer!

You can see more about the book and order a copy at the website for The Florentine Deception.

Time Critical—Purdue Day of Giving

Dear Friends of CERIAS,

This Wednesday, April 29, will be the second annual Purdue Day of Giving. During this 24-hour online event, CERIAS will be raising awareness and funds for infosec research, security education, and student initiatives.

Plus, through a generous pledge from Sypris Electronics, every donation received this Wednesday will be matched, dollar-for-dollar! So, whether it’s $10 or $10,000, your donation will be doubled and will have twice the impact in supporting CERIAS research, education, and programs (e.g., Women in Infosec, student travel grants, student conference scholarships, the CERIAS Symposium, …).

Make your donation online here (CERIAS is listed in the left column, about 1/3 down).

Now through Wednesday, help us spread the word by tagging your Twitter and Instagram posts with BOTH #PurdueDayofGiving and #CERIAS, and by sharing our message on Facebook and LinkedIn. You can post your thoughts, share the Day of Giving video, or encourage others to donate.

Thank you for your continued support of CERIAS and for considering a Purdue Day of Giving donation this Wednesday (April 29).


Initial Thoughts on the RSA 2015 Conference

Once again I have submitted myself to a week of talks, exhibits, walking, meetings, drinking, meetings, and more with 40,000 close associates (with one more day of it tomorrow). It’s the annual RSA conference in San Francisco. I’ve been to about 8, including the last 5.

Prior to starting this entry, I reread my blog post from after the 2014 RSA Conference. Not a lot has changed, at least as far as talks and exhibits. Pretty much everything I wrote last year is still accurate, so you can read that first. There were a few differences, and I’ll describe the prominent ones below.

Once again, I got pulled into meetings and conversations, so I didn’t attend as many of the talks as I really wanted. I caught portions of several, and I was impressed with more this year than last — I sensed less marketing. Thus, kudos to the program committee (and speakers). I am sorry I didn’t get to hear more of the talks. I hope they were recorded for us to view later.

Foremost differences from last year occurred outside the Moscone Center and on the exhibit floor — there was no boycott against RSA about alleged NSA collaboration, and the conference organizers adopted a policy against “booth babes” — yay! I don’t think I need to write about things that weren’t there this year, but I will say a big “thank you” to the RSA Conference team for the latter — it was a very welcome change.

  1. Last year’s big buzz phrase was “threat intelligence,” with “big data” coming in second. This year, it was “IoT,” with maybe “cloud” second. I didn’t see much mention of “big data” in the materials or at the booths. There was some use of the term in presentations, however.
  2. Out of some 400 booths, I really saw only 2 or 3 totally new concepts. All the other products and services on display were either holdovers from prior years, or variations on older ideas.
  3. Many of the booth personnel were more cynical than last year about the conference, the field, their products, etc. This marks an interesting change: in prior years I barely detected cynicism.
  4. There seemed to be a little more international representation than last year — companies originating in other countries (Germany, Japan, China, Sweden, Korea, Taiwan, and Israel are ones I can recall).

I still did not speak in a session (even as a fill-in), it still costs quite a bit to attend, and I still didn’t see many academics I knew.

I saw only 3 products that were devoted to building secure systems — everything else was patching, monitoring, remediation, and training. That continues to be depressing.

It is still the case that there was limited emphasis on, and few solutions for, privacy.

Andy Ellis provided me shielding for my badge so I could avoid being scanned onto mailing lists. I told people at most booths that it was shielded, but they tried anyhow. Some would try repeatedly, then tell me they couldn’t scan my badge. Duh! I just told you that! However, in every case, they still gave me a T-shirt or other swag.

Speaking of swag, this year the top 3 raffle items were drones, GoPro cameras, and Apple Watches.

A few booths were very aggressive in trying to scan people. It almost felt like desperation. I had to duck and weave (not easy with a cracked rib) to avoid a few of those people and get past their booths. It felt like being in a video game.

This year, more vendors seemed willing to talk about donating their products to our (CERIAS) teaching and research labs. That is really promising, and helps our students a lot. (And, hint — it provides great visibility for the products, so you vendors can still do it!)

So, if I find the conference a little depressing, why do I still go? As I noted last year, besides hearing about trends and getting a stock of T-shirts, it is a great opportunity to see friends and acquaintances I don’t get to see that often otherwise because I have limited time and funds for travel. (And yes, Indiana is at the center of the known universe, but few flights stop here.) I have had some great conversations with these people — thought leaders and deep thinkers across the spectrum of infosec/cyber/etc.

Actually, it occurred to me over drinks that if I wanted to cause maximum disruption, I could have infected these highly-connected people with some awful disease, and within 72 hours they would have infected almost everyone in the field who has some level of clue. Luckily for the world, they only had to put up with my presence for a few minutes or so each, and that isn’t contagious.

Here’s a partial list of the people I was happy to see (there were more, but this is who I can remember right now — my apologies to anyone I missed; plus, I may see more in the closing session tomorrow): Candy Alexander, Becky Bace, Robert Bigman, Bob Blakely, Josh Corman, Sam Curry, Jack Daniel, Michelle Dennedy, Matt Devost, Whit Diffie, Andy Ellis, Karen Evans, Dickie George, Greg Hoglund, Brian Honan, Alex Hutton, Andrew Jaquith, Toney Jennings, John Johnson, Gene Kim, Brian Krebs, Penny Leavy, Martin Libicki, Rich Marshall, Gary McGraw, Martin McKeay, Carey Nachenberg, Wendy Nather, Davi Ottenheimer, Andy Ozment, Kevin Poulsen, Paul Rosenzweig, Scott Rotondo, Marc Sachs, Howard Schmidt, Bruce Schneier, Corey Schou, Winn Schwartau, Chenxi Wang, Mark Weatherford, Bob West, Ira Winkler, and Amit Yoran.

Yes, I do know a rather eclectic set of people. Their karma must be bad, because they also know me.

Speaking of karma, I’m already planning to go to RSA 2016.


CERIAS 2015 Symposium Now Online!

The 2015 CERIAS symposium — held March 24 & 25, 2015 — was wonderful! We had a great array of speakers and panels, and one of our largest audiences in years. The talks were fascinating, the panels provocative, and the student research exciting (as usual).

Featured speakers included Sam Curry, CTO and CSO, Arbor Networks; Deborah Frincke, Director of Research, NSA/CSS; and Michelle Dennedy, VP & CPO, McAfee/Intel Security.

If you were there and want to hear a repeat of a talk, or if you didn’t make it to the symposium and want to hear what went on, visit our website. We have videos of all the talks and panels plus links to the student research posters and other materials. Similar materials from our 2014 symposium are still online, too!

We haven’t yet set the dates for the 2016 CERIAS Symposium, but stay tuned for that.

What is wrong with all of you? Reflections on nude pictures, victim shaming, and cyber security

[This blog post was co-authored by Professor Samuel Liles and Spaf.]

Over the last few days we have seen a considerable flow of news and social media coverage of the unintended exposure of celebrity photographs (e.g., here). Many (most?) of these photos were of attractive females in varying states of undress, and this undoubtedly added to the buzz.

We have seen commentary from some in the field of cybersecurity, as well as more generally-focused pundits, stating that the subjects of these photos “should have known better.” These commentators claim that it is generally known that passwords/cloud storage/phones have weak security, so the victims only have themselves to blame.

We find these kinds of comments ill-informed, disappointing, and in many cases, repugnant.

First, we note that the victims of these leaks were not trained in cyber security or computing. When someone falls ill, we don’t blame her for not having performed studies in advanced medicine. When someone’s car breaks down, we don’t blame him for failing to have a degree in mechanical engineering. Few people are deeply versed in fields well outside their profession.

The people involved in these unauthorized exposures apparently took the prudent measures they were instructed to take on the systems as they were designed. As an example, the passwords used must have passed the checks in place, or the victims would not have been able to set them. It doesn’t matter if we approve of the passwords that passed those automated checks — they were appropriate to the technical controls in place. What the people stored, how they did it, or the social bias against their state of dress has nothing to do with this particular issue.
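
To illustrate that point about automated checks: below is a minimal sketch of a hypothetical password-policy test (our invention for illustration, not any vendor's actual rules). A notoriously guessable password satisfies it just as readily as a strong one, so "it passed the checks" says little about real strength.

import re

def passes_policy(pw: str) -> bool:
    # A hypothetical rule set: 8+ characters, one uppercase, one digit.
    # (Illustrative only; not any vendor's actual policy.)
    return (len(pw) >= 8
            and re.search(r"[A-Z]", pw) is not None
            and re.search(r"\d", pw) is not None)

print(passes_policy("Password1"))     # True: among the most-guessed passwords
print(passes_policy("x9$Qv!tL2#wz"))  # True: also passes, far harder to guess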

Quite simply, the protection mechanisms were not up to the level of the information being protected. That is not the users’ fault. They were using market standards, represented as being secure. For instance, it is admitted that Apple products were among those involved (and that is the example in some of our links). People have been told for almost a decade that the iOS and Apple ecosystem is much more secure than other environments. That may or may not be true, but it certainly doesn’t take into account the persistent, group effort that appears to have been involved in this episode, or some of the other criminal deviants working in groups online. We have met a few actresses and models, and many young people. They don’t think of risk in the same way security professionals do, and having them depend on standard technology alone is clearly insufficient against such a determined threat.

Consider: assume you live in a nice house. You’ve got windows, doors, and locks on those windows and doors. You likely have some kind of curtains or window coverings. If you live in a house, even a house with no yard, and you close your curtains, we accept that as a request for privacy. If I walk up on the sidewalk and attempt to peer into your windows, that makes me a “peeping tom.” Even though I might have every right to stand on the sidewalk, face the direction I’m looking, and stop or pause, I do not have the right to violate your privacy.

Consider: Your house has a nice solid door with a nice lock. That lock likely has orders of magnitude less entropy than a password. Every night you walk through your house, lock your doors, and you go to sleep believing you are likely safe. Yet, that lock and that door will not stop a group of determined, well-equipped attackers or likely even slow them down. The police will not arrive for some amount of time and effective self-protection against unexpected provocation by a gang is uncertain, at best. As a polite and law-abiding society, we respect the door and the lock, and expect others to do the same. We understand that the door and lock keep honest people honest. They set a barrier to entry for criminals. Burglaries still happen and we use the power of law to enact punishment against criminals, although many crimes go unsolved.
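
That entropy claim is easy to check with back-of-the-envelope arithmetic. The figures below are typical textbook numbers for a pin-tumbler lock, not a locksmith's specification:

import math

# Rough keyspace comparison (illustrative figures only).
# A common pin-tumbler lock: about 5 pins with about 8 depths each.
lock_bits = math.log2(8 ** 5)            # ~15 bits

# An 8-character password over ~94 printable ASCII characters.
password_bits = math.log2(94 ** 8)       # ~52 bits

print(f"lock:     ~{lock_bits:.0f} bits")
print(f"password: ~{password_bits:.0f} bits")
# The password's keyspace is larger by a factor of roughly 2**37,
# which is the "orders of magnitude" gap mentioned above.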

If an unauthorized entry to your house occurs, whether by picking the lock, climbing through the window, or discovering a loose set of boards in the wall, we would never blame you, the victim — it is clear that the person who entered, unbidden, was to blame. Some of our peers would try to blame the companies that made the locks and windows, rather than acknowledge the bad intent of the burglar. Too many people in information security tend to think we can always build better locks, or that having “white hats” continually picking locks somehow will lead to them being unpickable. Many are so enamored of the technology of those locks and the pastime of picking them that they will blame the victim instead of anything to do with the production or design of the locks themselves. (This is not a new phenomenon: Spafford wrote about this topic 22 years ago.)

One fundamental error here is that all locks are by design meant to be opened. Furthermore, the common thinking ignores the many failures (and subsequent losses) before any "super lock" might appear. We also observe that few will be able to afford any super-duper unpickable locks, or carry the 20-pound key necessary to operate them. Technological protections must be combined with social and legal controls to be effective.

This leads to our second major point.

Imagine if that burglary occurred at your house, and you suffered a major loss because the agent of the crime discovered your artwork or coin collection. We would not dream of blaming you for the loss, somehow implying that you were responsible by having your possessions stored in your house. If somebody were to say anything like that, we would reproach them for blaming/shaming the victim. Society in general would not try to castigate you for having possessions that others might want to steal.

Unfortunately, many computer professionals (and too many others, outside the profession) have the mindset that crimes on computers are somehow the fault of the victim (and this has been the case for many years). We must stop blaming the victims in cases such as this, especially when what they were doing was not illegal. Criticizing the victims' activities instead of the privacy invasion is blaming/shaming no less atrocious than how rape victims are treated — and that is also usually badly, unfortunately.

If we give users lousy technology and tell them it is safe, they use it according to directions, and they do not understand its limitations, they should not be blamed for the consequences. That is true of any technology. The fault lies with the providers and those who provide vague assurances about it. Too bad we let those providers get away with legally disclaiming all responsibility.

We are sympathetic to the argument that these exposures of images should perhaps be considered as a sex crime. They were acts of taking something without permission that violated the privacy and perceptions of safety of the victims, for the sexual gratification and sense of empowerment of the perpetrator (and possibly also other reasons). Revenge porn, stalking, assault, and rape are similar...and we should not blame the victims for those acts, either. The sexually-themed abuse of female journalists and bloggers is also in this category — and if you aren't aware of it, then you should be: women who write things online that some disagree with will get threats of violence (including rape), receive deluges of abusive and pornographic messages and images, and be called incredibly offensive names...sometimes for years. It is beyond bullying and into something that should be more actively abhorred.

Some male members of the cyber security community are particularly bad in their treatment of women, too.

Between the two of us, Sam and Spaf, we have served as professors, counselors, and one of us (Liles) as a law enforcement officer; we have well over 50 years of combined professional experience with both victims and the bad behavior of their abusers. We have counseled female students and colleagues who have been stalked and harassed online for years. They keep encountering law enforcement officials and technologists who ask "What are you doing to encourage this?" None of them encourage it, and some live in real terror 24x7 of what their stalkers will do next. Some have had pictures posted that are mortifying to them and their loved ones; they've lost jobs, had to move, withdrawn from online fora, lost relationships, and even changed their names and those of their children. This can last for years.

Sexual offenders blame the victim to absolve themselves of responsibility, and thus, guilt. "She was acting suggestively," "she was dressed that way," etc. If the people around them chime in and blame the victim, they are excusing the criminal — they are reinforcing the idea that somehow the victim provoked it and the abuser "obviously couldn't help himself." They thus add unwarranted guilt and shame to the victim while excusing the criminal. We generally reject this with offenses against children, realizing that the children are not responsible for being abused. We must stop blaming all adult victims (mostly female, but others also get abused this way), too.

Victim blaming (and the offensively named "slut shaming" — these aren't "sluts," they are victimized women) must STOP. Do you side with privacy rights and protection of the public, or with the rapists and abusers? There is no defendable middle ground in these cases.

We are also horrified by the behavior of some of the media surrounding this case. The crimes have been labeled as leaks, which trivializes the purposeful invasion and degradation performed. Many outlets provided links to the pictures, as did at least one well-known blogger. That illustrates everything wrong about the paparazzi culture, expressed via computer. To present these acts as somehow accidental (“leak”) and blame the victims not only compounds the damage, but glosses over the underlying story — this appears to be the result of a long-term criminal conspiracy of peeping toms using technologies to gather information for the purpose of attacking the privacy of women. This has allegedly been going on for years and law enforcement has apparently had almost no previous interest in the cases — why isn’t that the lead story? The purposeful exploitation of computer systems and exposure of people's private information is criminal. Some pundits only began to indicate concern when it was discovered that some of the pictures were of children.

It is clear we have a long way to go as a society. We need to do a better job of building strong technology and then deploying it so that it can be used correctly. We need to come up with better social norms and legal controls to hold miscreants accountable. We need better tools and training for law enforcement to investigate cyber crimes without also creating openings for them to be the ones who are regularly violating privacy. We need to find better ways of informing the public how to make cyber risk-related decisions.

But most of all, we need to find our collective hearts. Instead of idealizing and idolizing the technology with which we labor, deflecting criticisms for faults onto victims and vendors, we need to do a much better job of understanding the humanity — including both strengths and weaknesses — of the people who depend on that technology. The privacy violations, credit card fraud, espionage, harassment, and identity thefts all have real people as victims. Cyber security is, at its core, protecting people, and the sooner we all take that to heart, the better.

Videos from the 15th Annual CERIAS Symposium

We are now releasing videos of our sessions at this year’s CERIAS Symposium from late March.

We had a fascinating session with David Medine, chair of the PCLOB, discussing privacy and government surveillance with Mark Rasch, currently the CPO for SAIC. If you are interested in the issues of security, counterterrorism, privacy, and/or government surveillance, you will probably find this interesting:
https://www.youtube.com/watch?v=kHO7F8XjvrI

We are also making available videos of some of our other speakers — Amy Hess, Executive Assistant Director of the FBI; George Kurtz, President & CEO of CrowdStrike; Josh Corman, CTO of Sonatype; and two of our other panel sessions: http://www.cerias.purdue.edu/site/symposium_video/

(You have to put up with my introductions of the speakers, but into every life a little rain must fall.)

That was the 15th Annual CERIAS Symposium. Planning for the 16th Symposium is underway for March 24 & 25, 2015: http://www.cerias.purdue.edu/site/symposium2015

Update on “Patching is Not Security”

A few weeks ago, I wrote a post entitled “Patching Is Not Security.” Among other elements, I described a bug in some Linksys routers that had not been patched and was being exploited by the “Moon” worm.

Today, I received word that the same unpatched flaw in the routers is being used to support DDoS attacks. These are not likely to be noticed by the owners/operators of the routers, because all the traffic involved is external to their networks — it is outbound from the router and is therefore “invisible” to most tools. About all they might see is some slowdown in their connectivity.
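
For an ISP (or anyone who can watch a router's WAN side), even a crude tally over tcpdump output will surface this kind of flood. Here is a minimal sketch; the script name and the tcpdump invocation are illustrative:

import re
import sys
from collections import Counter

# Count TCP SYNs per (source, destination) pair in tcpdump text output.
# Example use (names illustrative): tcpdump -ln tcp | python3 synwatch.py
SYN_LINE = re.compile(r"IP (\S+) > (\S+): Flags \[S\]")

counts = Counter()
for line in sys.stdin:
    m = SYN_LINE.search(line)
    if m:
        counts[(m.group(1), m.group(2))] += 1

# A huge SYN count toward one target, with no corresponding replies,
# is a strong hint of flood traffic.
for (src, dst), n in counts.most_common(10):
    print(f"{n:8d} SYNs  {src} -> {dst}")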

Here are some of the details, courtesy of Brett Glass, the ISP operator who originally found the worm on some customer routers; I have replaced hostnames with VICTIM and ROUTER in his account:

Today, a user reported a slow connection and we tapped in with a packet sniffer to investigate. The user had a public, static IP on a Linksys E1000, with remote administration enabled on TCP port 8080. The router was directing SYN floods against several targets on the Telus network in Canada. For example:

10:00:44.544036 IP ROUTER.3070 > VICTIM.8080: Flags [S],
seq 3182338706, win 5680, options [mss 1420,sackOK,TS val 44990601 ecr 0,nop,scale 0], length 0
10:00:44.573042 IP ROUTER.3071 > VICTIM.8080: Flags [S],
seq 3180615688, win 5680, options [mss 1420,sackOK,TS val 44990603 ecr 0,nop,scale 0], length 0
10:00:44.575908 IP ROUTER.3077 > VICTIM.8080: Flags [S],
seq 3185404669, win 5680, options [mss 1420,sackOK,TS val 44990604 ecr 0,nop,scale 0], length 0
10:00:44.693528 IP ROUTER.3072 > VICTIM.8080: Flags [S],
seq 3188188011, win 5680, options [mss 1420,sackOK,TS val 44990616 ecr 0,nop,scale 0], length 0
10:00:44.713312 IP ROUTER.3073 > VICTIM.http: Flags [S],
seq 3174550053, win 5680, options [mss 1420,sackOK,TS val 44990618 ecr 0,nop,scale 0], length 0
10:00:45.544854 IP ROUTER.3078 > VICTIM.http: Flags [S],
seq 3192591720, win 5680, options [mss 1420,sackOK,TS val 44990701 ecr 0,nop,scale 0], length 0
10:00:45.564454 IP ROUTER.3079 > VICTIM.http: Flags [S],
seq 3183453748, win 5680, options [mss 1420,sackOK,TS val 44990703 ecr 0,nop,scale 0], length 0
10:00:45.694227 IP ROUTER.3080 > VICTIM.http: Flags [S],
seq 3189966250, win 5680, options [mss 1420,sackOK,TS val 44990716 ecr 0,nop,scale 0], length 0
10:00:45.725956 IP ROUTER.3081 > VICTIM.8080: Flags [S],
seq 3184379372, win 5680, options [mss 1420,sackOK,TS val 44990719 ecr 0,nop,scale 0], length 0
10:00:45.983883 IP ROUTER.3074 > VICTIM.8080: Flags [S],
seq 3186948470, win 5680, options [mss 1420,sackOK,TS val 44990745 ecr 0,nop,scale 0], length 0
10:00:46.985034 IP ROUTER.3082 > VICTIM.http: Flags [S],
seq 3194003065, win 5680, options [mss 1420,sackOK,TS val 44990845 ecr 0,nop,scale 0], length 0

In short, the vulnerability used by the "Moon" worm is no longer being used just to experiment; it's being used to enlist routers in botnets and actively attack targets.

One interesting thing we found about this most recent exploit is that the DNS settings on the routers were permanently changed. The router was set to use domain name servers at the addresses

107.170.168.61

and

107.170.189.30

The "Moon" worm was completely ephemeral and did not change the contents of flash memory (either the configuration or the firmware). The exploit I found today changes at least the DNS settings.

Shame on Belkin for dragging its feet on getting a fix out to the public. But more to the point, this is yet another example of why relying on patching to provide security is fundamentally a Bad Thing.