I created a YouTube channel a while back and began uploading my videos and linking to videos of me that were already online. Yes, it’s a dedicated Spaf channel! However, I’m not on camera eating Tide Pods or doing odd skateboard stunts. This is a set of videos with my research and views over the years on information (cyber) security, research, education, and policies.
There are two playlists under the channel — one for interviews that people have conducted with me over the years, and the other being various conference and seminar talks.
One of the seminar talks was one I did at Bellcore on the Internet Worm — about 6 weeks after it occurred (yes, that’s 1988)! Many of my observations and recommendations in that talk seem remarkably current — which I don’t think is necessarily a good observation about how current practice has (not) evolved.
My most recent talk/video is a redo of my keynote address at the CISSE conference held in June 2017 in Las Vegas. The talk specifically addresses what I see as the needs in current information security education. CISSE was unable to record it at the time, so I redid it for posterity based on my speaker notes. It runs only about 35 minutes (there were no introductions or Q&A to field), so it is a quicker watch than being at the conference!
I think there are some other goodies in all of those videos, including views of my bow ties over the years, plus some of my predictions (most of which seem to have been pretty good). However, I am putting these out without having carefully reviewed them — there may be some embarrassing goofs among the (few) pearls of wisdom. It is almost certain that many things have changed from the operational environment that existed at the time I gave some of these talks, so I’m sure some comments will appear “quaint” in retrospect. However, I decided that I would share what I could because someone, somewhere, might find these of value.
If you know of a recording I don’t have linked in either playlist, please let me know.
Comments appreciated. Give it a look!
On September 24 and 25 of this year, Purdue University hosted the second Dawn or Doom symposium. The event — a follow-up to the similarly named event held last year — was focused on talks, movies, presentations, and more related to advanced technology. In particular, the focus was on technology that poses great potential to advance society, but also potential for misuse or accident that could cause great devastation.
I was asked to speak this year on the implications of surveillance capabilities. These hold the promise of improved use of resources, better marketing, improved health care, and reduced crime. However, those same capabilities also threaten our privacy, decrease some potential for freedom of political action, and create an enduring record of our activities that may be misused.
My talk was videotaped and is now available for viewing. The videographers did not capture my introduction and the first few seconds of my remarks. The remaining 40 or so minutes of me talking about surveillance, privacy, and tradeoffs are there, along with a few audience questions and my answers.
If you are interested, feel free to check it out. Comments welcome, especially if I got something incorrect — I was doing this from memory, and as I get older I find my memory may not be quite as trustworthy as it used to be.
You can find video of most of the other Dawn or Doom 2 events online here. The videos of last year’s Dawn or Doom event are also online. I spoke last year about some of the risks of embedding computers everywhere, and of giving those systems control over safety-critical decisions without adequate safeguards. That talk, Faster Than Our Understanding, includes some of the same privacy themes as the most recent talk, along with discussion of security and safety issues.
Yes, if you saw the news reports, the Dawn or Doom 2 event is also where this incident involving Barton Gellman occurred. Please note that other than some communication with Mr. Gellman, I played absolutely no role in the taping or erasure of his talk. Those issues are outside my scope of authority and responsibility at the university, and based on past experience, almost no one here listens to my advice even if they solicit it. I had no involvement in any of this, other than as a bystander.
Purdue University issued a formal statement on this incident. Related to that statement, for the record, I don’t view Mr. Gellman’s reporting as “an act of civil disobedience.” I do not believe that activities of the media, as protected by the First Amendment of the US Constitution and by legal precedent, can be viewed as “civil disobedience” any more than can be voting, invoking the right to a jury trial, or treating people equally under the law no matter their genders or skin colors. I also share some of Mr. Gellman’s concerns about the introduction of national security restrictions into the entire academic environment, although I also support the need to keep some sensitive government information out of the public view.
That may provide the topic for my talk next year, if I am invited to speak again.
The U.S. limits the export of certain high-tech items that might be used inappropriately (from the government’s point of view). This is intended to prevent (or slow) the spread of technologies that could be used in weapons, used in hostile intelligence operations, or used against a population in violation of their rights. Some are obvious, such as nuclear weapons technology and armor-piercing shells. Others are clear after some thought, such as missile guidance software and hardware, and stealth coatings. Some are not immediately clear at all, and may have some benign civilian uses too, such as supercomputers, some lasers, and certain kinds of metal alloys.
Recently, there have been some proposed changes to the export regulations for some computing-related items. In what follows, I will provide my best understanding of both the regulations and the proposed changes. This was produced with the help of one of the professional staff at Purdue who works in this area, and also a few people in USACM who provided comments (I haven’t gotten permission to use their names, so they’re anonymous for now). I am not an expert in this area so please do not use this to make important decisions about what is covered or what you can send outside the country! If you see something in what follows that is in error, please let me know so I can correct it. If you think you might have an export issue under this, consult with an appropriate subject matter expert.
Some export restrictions are determined, in a general way, as part of treaties (e.g., nuclear non-proliferation). A large number come from the Wassenaar Arrangement — a multinational effort by 41 countries generally considered to be allies of the US, including most of NATO. A few major countries, such as China, are not members, nor are nominal allies such as Pakistan and Saudi Arabia (to name a few). The Wassenaar group meets regularly to review technology and determine restrictions, and it is up to the member states to pass rules or legislation for themselves. The intent is to help promote international stability and safety, although countries not within Wassenaar might not view it that way.
In the U.S., there are two primary regulatory regimes for exports: ITAR and EAR. ITAR is the International Traffic in Arms Regulations, administered by the Directorate of Defense Trade Controls at the Department of State. ITAR provides restrictions on sale and export of items of primary (or sole) use in military and intelligence operations. The EAR is the Export Administration Regulations, administered by the Bureau of Industry and Security at the Department of Commerce. EAR rules generally cover items that have “dual use” — both military and civilian uses.
These are extremely large, dense, and difficult to understand sets of rules. I had one friend label these as “clear as mud.” After going through them for many hours, I am convinced that mud is clearer!
Items designed explicitly for civilian applications without consideration of military use, or with known dual-use characteristics, are not subject to the ITAR because dual-use and commodity items are explicitly exempted from ITAR rules (see sections 121.1(d) and 120.41(b) of the ITAR). However, being exempt from ITAR does not make an item exempt from the EAR!
If any entity in the US — company, university, or individual — wishes to export an item that is covered under one of these two regimes, that entity must obtain an export license from the appropriate office. The license will specify what can be exported, to what countries, and when. Any export of a controlled item without a license is a violation of Federal law, with potentially severe consequences. What constitutes an export is broader than some people may realize, including:
Those last two items are what is known as a deemed export, because the item didn’t leave the U.S., but information about it is given to a non-US person. There are many other special cases of export, and nuances (giving control of a spacecraft to a foreign national is prohibited, for example, as are certain forms of reexport). This is all separate from disclosure of classified materials, although if you really want trouble, you can do both at the same time!
This whole export thing may seem a bit extreme or silly, especially when you look at the items involved, but it isn’t — economic and military espionage to get this kind of material and information is a big problem, even at research labs and universities. Countries that don’t have the latest and greatest factories, labs, and know-how are at a disadvantage both militarily and economically. For instance, a country (e.g., Iran) that doesn’t have advanced metallurgy and machining may not be able to make specialized items (e.g., the centrifuges to separate fissionable uranium), so it will attempt to steal or smuggle the technology it needs. The next best approach is to get whatever knowledge is needed to recreate the expertise locally. You only need to look at the news over a period of a few months to see many stories of economic theft and espionage, as well as state-sponsored incidents.
This brings us to the computing side of things. High speed computers, advanced software packages, cryptography, and other items all have benign commercial uses. However, they all also have military and intelligence uses. High speed computers can be used in weapons guidance and design systems, advanced software packages can be used to model and refine nuclear weapons and stealth vehicles, and cryptography can be used to hide communications and data. As such, there are EAR restrictions on many of these items. However, because the technology is so ubiquitous and the economic benefit to the U.S. is so significant, the restrictions have been fairly reasonable to date for most items.
Software is a particularly unusual item to regulate. The norm in the community (for much of the world) is to share algorithms and software. By its nature, huge amounts of software can be copied onto small artifacts and taken across the border, or published on an Internet archive. In universities we regularly give students from around the world access to advanced software, and we teach software engineering and cryptography in our classes. Restrictions on these kinds of items would be difficult to enforce and, in some cases, simply silly.
Thus, the BIS export rules contain a number of exemptions that remove some items from control entirely. (In the following, designations such as 5D002 refer to classes of items as specified in the EAR, and 734.8 refers to section 734 paragraph 8.)
The exemption for publication is interesting. Anyone doing research on controlled items appears to have an exemption under EAR 740.13(e) where they can publish (including posting on the Internet) the source code from research that falls under ECCN 5D002 (typically, cryptography) without restriction, but must notify BIS and NSA of digital publication (email is fine, see 740.13(e.3)); there is no restriction or notification requirement for non-digital print. What is not included is any publication or export (including deemed export) of cryptographic devices or object code not otherwise exempt (object code whose corresponding source code is exempt is itself exempt), or for knowing export to one of the prohibited countries (E:1 from supplement 1 of section 740 — Cuba, Iran, DPRK, Sudan and Syria, although Cuba may have just been removed.)
As part of an effort to harmonize the EAR and ITAR, a proposed revision to both was published on June 3 (80 FR 31505) that has a nice side-by-side chart of some of these exemptions, along with some small suggested changes.
The Wassenaar group agreed to some changes in December 2013 to include intrusion software and network monitoring items of certain kinds on their export control lists. The E.U. adopted new rules in support of this in October of 2014. On May 20, 2015, the Department of Commerce published — in the Federal Register (80 FR 28853) — a request for comments on its proposed rule to amend the EAR. Specifically, the notice stated:
The Bureau of Industry and Security (BIS) proposes to implement the agreements by the Wassenaar Arrangement (WA) at the Plenary meeting in December 2013 with regard to systems, equipment or components specially designed for the generation, operation or delivery of, or communication with, intrusion software; software specially designed or modified for the development or production of such systems, equipment or components; software specially designed for the generation, operation or delivery of, or communication with, intrusion software; technology required for the development of intrusion software; Internet Protocol (IP) network communications surveillance systems or equipment and test, inspection, production equipment, specially designed components therefor, and development and production software and technology therefor. BIS proposes a license requirement for the export, reexport, or transfer (in-country) of these cybersecurity items to all destinations, except Canada. Although these cybersecurity capabilities were not previously designated for export control, many of these items have been controlled for their "information security" functionality, including encryption and cryptanalysis. This rule thus continues applicable Encryption Items (EI) registration and review requirements, while setting forth proposed license review policies and special submission requirements to address the new cybersecurity controls, including submission of a letter of explanation with regard to the technical capabilities of the cybersecurity items. BIS also proposes to add the definition of "intrusion software" to the definition section of the EAR pursuant to the WA 2013 agreements. The comments are due Monday, July 20, 2015.
The actual modifications are considerably more involved than the above paragraph, and you should read the Federal Register notice to see the details.
This proposed change has caused some concern in the computing community, perhaps because the EAR and ITAR are so difficult to understand, and because of the recent pronouncements by the FBI seeking to mandate “back doors” into communications and computing.
The genesis of the proposed changes is stated to match the Wassenaar additions of (basically) methods of building, controlling, and inserting intrusion software; technologies related to the production of intrusion software; and technology for IP network analysis and surveillance, or for the development and testing of same. These are changes to support national security, regional stability, and counterterrorism.
According to the notice, intrusion software includes items that are intended to avoid detection or defeat countermeasures such as address randomization and sandboxing, and to exfiltrate data or change execution paths to provide for execution of externally provided instructions. Debuggers, hypervisors, reverse-engineering tools, and other such software are exempted. Software and technology designed or specially modified for the development, generation, operation, or delivery of, or communication with, intrusion software is controlled — not the intrusion software itself. It is explicitly stated that rootkits and zero-day exploits will presumptively be denied licenses for export.
The proposed changes for networking equipment/systems would require that it have all 5 of the following characteristics to become a controlled item:
Equipment specially designed for QoS, QoE, or marketing is exempt from this classification.
Two other proposed changes would remove the 740.13(d) exemption for mass-market products, and would require that software controlled by one of these new sections that also contains encryption be dual-listed in two categories. There are other changes for wording, cleaning up typos, and so on.
I don’t believe there are corresponding changes to ITAR because these all naturally fall under the EAR.
Although social media has had a number of people posting vitriol and warnings of the impending Apocalypse, I can’t see it in this change. If anything, this could be a good thing — people who are distributing tools to build viruses, botnets, rootkits and the like may now be prosecuted. The firms selling network monitoring equipment that is being used to oppress political and religious protesters in various countries may now be restrained. The changes won’t harm legitimate research and teaching, because the exemptions I listed above will still apply in those cases. There are no new restrictions on defensive tools. There are no new restrictions on cryptography.
Companies and individuals making software or hardware that will fall under these new rules will now have to go through the process of applying for export licenses. Those entities may also find their markets reduced. I suspect that it is a small population that will be subject to such a restriction, and for some of them, given their histories, I’m not at all bothered by the idea.
I have seen some analyses that claim that software to jailbreak cellphones might now be controlled. However, if published online without fee (as is often the case), it would be exempt under 734.7. It arguably is a debugging tool, which is also exempt.
I have also seen claims that any technology for patching would fall under these new rules. Legitimate patching doesn’t seek to avoid detection or defeat countermeasures, which are specifically defined as “techniques designed to ensure the safe execution of code.” Thus, legitimate patching won’t fall within the scope of control.
Jennifer Granick wrote a nice post about the changes. She rhetorically asked at the end whether data loss prevention tools would fall under this new rule. I don’t see how — those tools don’t operate on national-grade backbones or index the data they extract. She also posed a question about whether this might hinder research into security vulnerabilities. Given that fundamental research is still exempt under 734.8, as are published results under 734.7, I don’t see the same worry.
The EFF also posted about the proposed rule changes, with some strong statements against them. Again, the concern they stated is about researchers and the tools they use. As I read the EAR and the proposed rule, this is not an issue if the source code for any tools that are exported is published, as per 734.7. The only concern would be if the tools were exported and the source code was not publicly available, i.e., private tools exported. I have no idea how often this happens; in my experience, either the tools are published or else they aren’t shared at all, and neither case runs afoul of the rule. The EFF post also tosses up fuzzing, vulnerability reporting, and jailbreaking as possible problems. Fuzzing tools might possibly be a problem under a strict interpretation of the rules, but the research and publication exemptions would seem to come to the rescue. Jailbreaking I addressed above. I don’t see how reporting vulnerabilities would be export of technology or software for building or controlling intrusion software, so maybe I don’t understand the point.
At first I was concerned about how this rule might affect research at the university, or the work at companies I know about. As I have gotten further into it, I am less and less worried. It seems that there are very reasonable exceptions in place, and I have yet to see a good example of something we might legitimately want to do that would now be prohibited under these rules.
However, your own reading of the proposed rule changes may differ from mine. If so, note the difference in a comment on this essay and I’ll either respond privately or post your comment. Of course, commenting here won’t affect the rule! If you want to do that, you should use the formal comment mechanism listed in the Federal Register notice, on or before July 20, 2015.
Update July 17: The BIS has an (evolving) FAQ on the changes posted online. It makes clear the exemptions I described above. The regulations only cover tools specially designed to design, install, or communicate with intrusion software as they define it. Sharing of vulnerabilities and proofs of exploits is not regulated. Disclosing vulnerabilities is not regulated so long as the sharing does not include tools or technologies to install or operate the exploits.
For the concerns raised in the two blog posts I cite above, look at the FAQ for more detail.
by Spaf
Chair, ACM US Public Policy Council (USACM)†
About 20 years ago, there was a heated debate in the US about giving the government guaranteed access to encrypted content via mandatory key escrow. The FBI and other government officials predicted all sorts of gloom and doom if it didn’t happen, including that it would prevent them from fighting crime, especially terrorists, child pornographers, and drug dealers. Various attempts were made to legislate access, including forced key escrow encryption (the “Clipper Chip”). Those efforts didn’t come to pass because eventually enough sensible — and technically literate — people spoke up. Additionally, the economic realities made it clear that people weren’t knowingly going to buy equipment with government backdoors built in.
Fast forward to today. In the intervening two decades, the forces of darkness did not overtake us as a result of no restrictions on encryption. Yes, there were some terrorist incidents, but either there was no encryption involved that made any difference (e.g., the Boston Marathon bombing), or there was plenty of other evidence but it was never used to prevent anything (e.g., the 9/11 tragedy). Drug dealers have not taken over the country (unless you consider Starbucks coffee a narcotic). Authorities are still catching and prosecuting criminals, including pedophiles and spies. Notably, even people who are using encryption in furtherance of criminal enterprises, such as Ross “Dread Pirate Roberts” Ulbricht, are being arrested and convicted. In all these years, the FBI has yet to point to anything significant where the use of encryption frustrated their investigations. The doomsayers of the mid-1990s were quite clearly wrong.
However, now in 2015 we again have government officials raising a hue and cry that civilization will be overrun, and law enforcement rendered powerless, unless we pass laws mandating that back doors and known weaknesses be put into encryption on everything from cell phones to email. These arguments have a strong flavor of déjà vu for those of us who were part of the discussion in the 90s. They are even more troubling now, given the scope of government eavesdropping, espionage, and massive data thefts: arguably, encryption is more needed now than it was 20 years ago.
USACM, the Public Policy Council of the ACM, is currently discussing this issue — again. As a group, we made statements against the proposals 20 years ago. (See, for instance, the USACM and IEEE joint letter to Senator McCain in 1997). The arguments in favor of weakening encryption are as specious now as they were 20 years ago; here are a few reasons why:
There are other reasons, too, including cost, impact on innovation, and more. The essay below provides more rationale. Experts and organizations in the field have recently weighed in on this issue, and (as one of the individuals, and as chair of one of the organizations) I expect we will continue to do so.
With all that as a backdrop, I was reminded of an essay on this topic area by one of USACM’s leaders. It was originally given as a conference address two decades ago, then published in several places, including on the EPIC webpage of information about the 1990s anti-escrow battle. The essay is notable both because it was written by someone with experience in Federal criminal prosecution, and because it is still applicable, almost without change, in today’s debate. Perhaps in 20 more years this will be reprinted yet again, as once more memories dim of the arguments made against government-mandated surveillance capabilities. It is worth reading, and remembering.
by Andrew Grosso, Esq.
Chair, USACM Committee on Law
(This article is a revised version of a talk given by the author at the 1996 RSA Data Security Conference, held in San Francisco, California. Mr. Grosso is a former federal prosecutor who now has his own law practice in Washington, D.C. His e-mail address is agrosso@acm.org.)
I would like to start by telling a war story. Some years ago, while I was an Assistant U.S. Attorney, I was asked to try a case which had been indicted by one of my colleagues. For reasons which will become clear, I refer to this case as “the Dank case.”
The defendant was charged with carrying a shotgun. This might not seem so serious, but the defendant had a prior record. In fact, he had six prior convictions, three of which were considered violent felonies. Because of that, this defendant was facing a mandatory fifteen years imprisonment, without parole. Clearly, he needed an explanation for why he was found in a park at night carrying a shotgun. He came up with one.
The defendant claimed that another person, called “Dank,” forced him to carry the gun. “Dank,” it seems, came up to him in the park, put the shotgun in his hands, and then pulled out a handgun and put the handgun to the defendant’s head. “Dank” then forced the defendant to walk from one end of the park to the other, carrying this shotgun. When the police showed up, “Dank” ran away, leaving the defendant holding the bag, or, in this case, the shotgun.
The jurors chose not to believe the defendant’s story, although they spent more time considering it than I would like to admit. After the trial, the defendant’s story became known in my office as “the Dank defense.” As for myself, I referred to it as “the devil made me do it.”
I tell you this story because it reminds me of the federal government’s efforts to justify domestic control of encryption. Instead of “Dank,” it has become “drug dealers made me do it,” or “terrorists made me do it,” or “crypto anarchists made me do it.” There is as much of a rational basis behind these claims as there was behind my defendant’s story of “Dank.” Let us examine some of the arguments the government has advanced.
It is said that wiretapping is indispensable to law enforcement. This is not the case. Many complex and difficult criminal investigations have been successfully concluded, and successfully argued to a jury, where no audio tapes existed of the defendants incriminating themselves. Of those significant cases, cited by the government, where audio tapes have proved invaluable, such as in the John Gotti trial, the tapes have been made through means of electronic surveillance other than wiretapping, for example, through the use of consensual monitoring or room bugs. The unfettered use of domestic encryption could have no effect on such surveillance.
It is also said that wiretapping is necessary to prevent crimes. This, also, is not the case. In order to obtain a court order for a wiretap, the government must first possess probable cause that a crime is being planned or is in progress. If the government has such probable cause concerning a crime yet in the planning stages, and has sufficient detail about the plan to tap an individual’s telephone, then the government almost always has enough probable cause to prevent the crime from being committed. The advantage which the government gains by use of a wiretap is the chance to obtain additional evidence which can later be used to convict the conspirators or perpetrators. Although such convictions are desirable, they must not be confused with the ability to prevent the crime.
The value of mandating key escrow encryption is further eroded by the availability of super encryption, that is, using an additional layer of encryption whose key is not available to the government. True, the government’s mandate would make such additional encryption illegal; however, the deterrent effect of such legislation is dubious at best. An individual planning a terrorist act, or engaging in significant drug importation, will be little deterred by prohibitions on the means for encoding his telephone conversations. The result is that significant crimes will not be affected or discouraged.
In a similar vein, the most recent estimate of the national cost for implementing the Digital Telephony law, which requires that commercial telecommunications companies wiretap our nation’s communications network for the government’s benefit, is approximately three billion dollars. Three billion dollars will buy an enormous number of police man-hours, officer training, and crime-fighting equipment. It is difficult to see that this amount of money, by being spent on wiretapping the nation, is being spent most advantageously with regard to law enforcement’s needs.
Finally, the extent of the federal government’s ability to legislate in this area is limited. Legislation for the domestic control of encryption must be based upon the commerce clause of the U.S. Constitution. That clause would not prohibit an individual in, say, the state of California from purchasing an encryption package manufactured in California, and using that package to encode data on the hard drive of his computer, also located in California. It is highly questionable whether the commerce clause would prohibit the in-state use of an encryption package which had been obtained from out of state, where all the encryption is done in-state and the encrypted data is maintained in-state. Such being the case, the value of domestic control of encryption to law enforcement is doubtful.
Now let us turn to the disadvantages of domestic control of encryption. Intentionally or not, such control would shift the balance which exists between the individual and the state. The individual would no longer be free to conduct his personal life, or his business, free from the risk that the government may be watching every move. More to the point, the individual would be told that he would no longer be allowed to even try to conduct his life in such a manner. Under our constitution, it has never been the case that the state had the right to evidence in a criminal investigation. Rather, under our constitution, the state has the right to pursue such evidence. The distinction is crucial: it is the difference between the operation of a free society, and the operation of a totalitarian state.
Our constitution is based upon the concept of ordered liberty. That is, there is a balance between law and order, on the one hand, and the liberty of the individual on the other. This is clearly seen in our country’s bill of rights, and the constitutional protections afforded our accused: evidence improperly obtained is suppressed; there is a ban on the use of involuntary custodial interrogation, including torture, and any questioning of the accused without a lawyer; we require unanimous verdicts for convictions; and double jeopardy and bills of attainder are prohibited. In other words, our system of government expressly tolerates a certain level of crime and disorder in order to preserve liberty and individuality. It is difficult to conceive that the same constitution which is prepared to let a guilty man go free, rather than admit an illegally seized murder weapon into evidence at trial, can be interpreted to permit wholesale, nationwide, mandatory surveillance of our nation’s telecommunications system for law enforcement purposes. It is impossible that the philosophy upon which our system of government was founded could ever be construed to accept such a regime.
I began this talk with a war story, and I would like to end it with another war story. While a law student, I had the opportunity to study in London for a year. While there, I took one week and spent it touring the old Soviet Union. The official Soviet tour guide I was assigned was an intelligent woman. As a former Olympic athlete, she had been permitted in the 1960s to travel to England to compete in international tennis matches. At one point in my tour, she asked me why I was studying in London. I told her that I wanted to learn what it was like to live outside of my own country, so I chose to study in a country where I would have little trouble with the language. I noticed a strange expression on her face as I said this. It was not until my tour was over and I looked back on that conversation that I realized why my answer had resulted in her having that strange look. What I had said to her was that *I* had chosen to go overseas to study; further, I had said that *I* had chosen *where* to go. That I could make such decisions was a right which she and her fellow citizens did not have. Yes, she had visited England, but it was because her government chose her to go, and it was her government which decided where she should go. In her country, at that time, her people had order, but they had no liberty.
In our country, the domestic control of encryption represents a shift in the balance of our liberties. It is a shift not envisioned by our constitution. If ever to be taken, it must be based upon a better defense than what “Dank,” or law enforcement, can provide.
Do you care about this issue? If so, consider contacting your elected legislators to tell them what you think, pro or con. Use this handy site to find out how to contact your Representative and Senators.
Interested in being involved with USACM? If so, visit this page. Note that you first need to be a member of ACM but that gets you all sorts of other benefits, too. We are concerned with issues of computing security, privacy, accessibility, digital governance, intellectual property, computing law, and e-voting. Check out our brochure for more information.
† — This blog post is not an official statement of USACM. However, USACM did issue the letter in 1997 and signed the joint letter earlier this year, as cited, so those two documents are official.
By Mark Rasch† and friends
Last post, we wrote about the NSA’s secret program to obtain and then analyze the telephone metadata relating to foreign espionage and terrorism by obtaining the telephone metadata relating to everyone. In this post, we will discuss a darker, but somewhat less troubling, program called PRISM. As described in the public media, based on leaked PowerPoint slides, PRISM and its progeny constitute a program to permit the NSA, with approval of the super-secret Foreign Intelligence Surveillance Court (FISC), to obtain “direct access” to the servers of Internet companies (e.g., AOL, Google, Microsoft, Skype, and Dropbox) to search for information related to foreign terrorism – or more accurately, terrorism and espionage by “non-US persons.”
Whether you believe that PRISM is a wonderful program narrowly designed to protect Americans from terrorist attacks, or a massive government conspiracy to gather intimate information to thwart Americans’ political views, or even a conspiracy to run a false-flag operation to start a space war against alien invaders, what the program actually is, and how it is regulated, depends on how the program operates. When Sir Isaac Newton published his work Opticks in 1704, he described how a prism could be used to – well, shed some light on the nature of electromagnetic radiation. Whether you believe that the Booz Allen leaker was a hero, or whether you believe that he should be given the full Theon Greyjoy for treason, there is little doubt that he has sparked a necessary conversation about the nature of privacy and data mining. President Obama is right when he says that, to achieve the proper balance, we need to have a conversation. To have a conversation, we have to have some knowledge of the programs we are discussing.
Unlike the telephony metadata, the PRISM programs involve a different character of information, obtained in a potentially different manner. As reported, the PRISM programs involve not only metadata (header, source, location, destination, etc.) but also content information (e-mails, chats, messages, stored files, photographs, videos, audio recordings, and even intercepted voice and video Skype calls).
Courts (including the FISA Court) treat content information differently from “header” information. For example, when the government investigated the ricin-laced letters sent to President Obama and NYC Mayor Michael Bloomberg, they reportedly used the U.S. Postal Service’s Mail Isolation Control and Tracking (MICT) system, which photographs the outside of every letter or parcel sent through the mails — metadata. When Congress passed the Communications Assistance for Law Enforcement Act (CALEA), which among other things established procedures for law enforcement agencies to get access to both “traffic” (non-content) and content information, the FBI took the position that it could, without a wiretap order, engage in what it called “post-cut-through dialed digit extraction” — that is, when you call your bank and it prompts you to enter your bank account number and password, the FBI wanted to “extract” that information (Office of Information Retrieval) as “traffic,” not “content.” So the lines between “content” and “non-content” may be blurry. Moreover, with enough context, we can infer content. As Justice Sotomayor observed in the 2012 GPS privacy case:
… it may be necessary to reconsider the premise that an individual has no reasonable expectation of privacy in information voluntarily disclosed to third parties. E.g., Smith, 442 U.S., at 742, 99 S.Ct. 2577; United States v. Miller, 425 U.S. 435, 443, 96 S.Ct. 1619, 48 L.Ed.2d 71 (1976). This approach is ill suited to the digital age, in which people reveal a great deal of information about themselves to third parties in the course of carrying out mundane tasks. People disclose the phone numbers that they dial or text to their cellular providers; the URLs that they visit and the e-mail addresses with which they correspond to their Internet service providers; and the books, groceries, and medications they purchase to online retailers.
But the PRISM program is clearly designed to focus on content. Thus, the parts of the Supreme Court’s holding in Smith v. Maryland that say people have no expectation of privacy in the numbers called, etc., do not apply to the PRISM-type information. Right?
Again, not so fast.
Simple question. Do you have a reasonable expectation of privacy in the contents of your e-mail?
Short answer: Yes.
Longer answer: No.
Better answer: Vis-à-vis whom, and for what purposes? You see, privacy is not black and white. It is multispectral – you know, like light through a triangular piece of glass.
When the government was conducting a criminal investigation of the manufacturer of Enzyte (smiling Bob and his gigantic – um – putter), they subpoenaed his e-mails from, among others, Yahoo! The key word here is subpoena – not search warrant. Now that’s the thing about data and databases – if information exists, it can be subpoenaed. In fact, a Florida man has now demanded production of cell location data from – you guessed it – the NSA.
But content information is different from other information. And cloud information is different. The telephone records are the records of the phone company about how you used their service. The contents of emails and documents stored in the cloud are your records, of which the provider has incidental custody. It would be like the government subpoenaing your landlord for the contents of your apartment (they could, of course, subpoena you for this, but then you would know), or subpoenaing the U-stor-it for the contents of your storage locker (sparking a real storage war). They could, with probable cause and a warrant, search the locker (if you have a warrant, I guess you’re gonna come in), but a subpoena to a third party is dicey.
So the Enzyte guy had his records subpoenaed. This was done pursuant to the Stored Communications Act, which permits it. The government argued that they didn’t need a search warrant to read Enzyte guy’s email, because – you guessed it – he had no expectation of privacy in the contents of his mail. Hell, he stored it unencrypted with a third party. Remember Smith v. Maryland? The phone company case? You trust a third party with your records, you risk exposure. Or as Senator Blutarsky (I. NH?) might opine, “you ()*^#)( up, you trusted us…” (actually Otter said that, with apologies to Animal House fans.)
Besides, cloud provider contracts, and email and Internet provider privacy policies, frequently limit the privacy rights of users. In the Enzyte case, the government argued that terms of service that permitted scanning of the contents of email for viruses or spam (or, in the case of Gmail and others, embedding context-based ads) meant that the user of the email service “consented” to have his or her mail read, and therefore had no privacy rights in the content. (“Yahoo! reserves the right in their sole discretion to pre-screen, refuse, or move any Content that is available via the Service.”) Terms of service which provided that the ISP would respond to lawful subpoenas made them a “joint custodian” of your email and other records (like your roommate) who could consent to the production of your communications or files. Those policies your employer has that say “employees have no expectation of privacy in their emails or files”? While you thought that meant that your boss (and the IT guy) can read your emails, the FBI or NSA may take the position that “no expectation of privacy” means exactly that.
Fortunately, most courts don’t go so far. In general, courts have held that the contents of communications and information stored privately online (not on publicly accessible Facebook or Twitter feeds) are entitled to legal protection even if they are in the hands of potentially untrustworthy third parties. But this is by no means assured.
But clearly the data in the PRISM case is more sensitive, and entitled to a greater level of legal protection, than that in the telephony metadata case. That doesn’t mean that the government, with a court order, can’t search or obtain it. It means that companies like Google and Facebook probably can’t just “give it” to the government. It’s not their data.
So the NSA wants to have access to information in a massive database. They may want to read the contents of an email, a file stored on Dropbox, whatever. They may want to track a credit card through the credit card clearing process, or a banking transaction through the interbank funds transfer network. They may want to track travel records – planes, trains or automobiles. All of this information is contained in massive databases or storage facilities held by third parties – usually commercial entities. Banks. VISA/MasterCard. Airlines. Google.
The information can be tremendously useful. The NSA may have lawful authority (a court order) to obtain it. But there is a practical problem. How does the NSA quickly and efficiently seek and obtain this information from a variety of sources without tipping those sources off about the individual searches it is conducting — information which itself is classified? That appears to be the problem the PRISM programs attempt to solve.
In the telephony program, the NSA “solved” the problem by simply taking custody of the database.
In PRISM, they apparently did not. And that is a good thing. The databases remain in the custody of those who created them.
Here’s where it gets dicey – factually.
The reports about PRISM indicate that the NSA had “direct access” to the servers of all of these Internet companies. Reports have been circulating that the NSA had similar “direct access” to financial and credit card databases as well. The Internet companies have all issued emphatic denials. So what gives?
Speculation time. First, the NSA and the Internet companies could be outright lying; but David Drummond, Google’s Chief Legal Officer, ain’t going to jail for this. Second, they could be reinterpreting the term “direct” access. When General Alexander testified under oath that the NSA did not “collect any type of data on millions of Americans,” he took the term “collect” to mean “read” rather than “obtain.”
Most likely, however, is that the NSA PRISM program is a protocol for the NSA, with FISC approval, to task the computers at these Internet companies to perform a search. This tasking is most likely indirect. How it works is, at this point, rank speculation. What is likely is that an NSA analyst, say in Honolulu, wants to get the communications (postings, YouTube videos, stored communications, whatever) of Abu Nazir, a non-US person, which are stored on a server in the U.S., or stored on a server in the cloud operated by a US company. The analyst gets “approval” for the “search,” by which I mean that a flock of lawyers from the NSA, FBI, and DOJ descend (what is the plural of lawyers? [a “plague”? --spaf]) and review the request to ensure that it asks for info about a non-US person, that it meets the other FISA requirements, that there is minimization, etc. Then the request is transmitted to the FISC for a warrant. Maybe. Or maybe the FISC has approved the searches in bulk (raising the Writ of Assistance issue we described in the previous post). We don’t know. But assuming that the FISC approves the “search,” the request has to be transmitted to, say, Google, for their lawyers to review, and then the data transmitted back to the NSA. To the analyst in Honolulu, it may look like “direct access.” I type in a search, and voilà! Results show up on the screen. It is this process that appears to be within the purview of PRISM. It may be a protocol for effectuating court-approved access to information in a database, not direct access to the database.
Or maybe not. Maybe it is a direct pipe into the servers, which the NSA can task, and from which the NSA can simply suck out the entire database and perform its own data analytics. Doubtful, but who knows? That’s the problem with rank speculation. Aliens, anyone?
But we are basing this analysis on what we believe is reasonable to assume.
So, is it legal? Situation murky. Ask again later.
If the FISC approves the search, with a warrant, within the scope of the NSA’s authority, on a non-US person, with minimization, then it is legal in the U.S., while probably violating the hell out of most EU and other data privacy laws. But that is the nature of the FISA law and the USA PATRIOT Act which amended it. Like the PowerPoint slides said, most Internet traffic travels through the U.S., which means we have the ability (and under USA PATRIOT, the authority) to search it.
While the PRISM programs are targeted at much more sensitive content information, if conducted as described above, they actually present fewer domestic legal issues than the telephony metadata case. If they are a dragnet, or if the NSA is actually conducting data mining on these databases to identify potential targets, then there is a bigger issue.
The government has indicated that they may release an unclassified version of at least one FISC opinion related to this subject. That’s a good thing. Other redacted legal opinions should also be released so we can have the debate President Obama has called for. And let some light pass through this PRISM.
† Mark Rasch is the former head of the United States Department of Justice Computer Crime Unit, where he helped develop the department’s guidelines for computer crime investigations, forensics, and evidence gathering. Mr. Rasch is currently a principal with Rasch Technology and Cyberlaw and specializes in computer security and privacy.
Rasch Cyberlaw (301) 547-6925 www.raschcyber.com