Proposed Changes in Export Control

The U.S. limits the export of certain high-tech items that might be used inappropriately (from the government’s point of view). This is intended to prevent (or slow) the spread of technologies that could be used in weapons, used in hostile intelligence operations, or used against a population in violation of their rights. Some are obvious, such as nuclear weapons technology and armor-piercing shells. Others are clear after some thought, such as missile guidance software and hardware, and stealth coatings. Some are not immediately clear at all, and may also have benign civilian uses, such as supercomputers, some lasers, and certain kinds of metal alloys.

Recently, there have been some proposed changes to the export regulations for some computing-related items. In what follows, I will provide my best understanding of both the regulations and the proposed changes. This was produced with the help of one of the professional staff at Purdue who works in this area, and also a few people in USACM who provided comments (I haven’t gotten permission to use their names, so they’re anonymous for now). I am not an expert in this area so please do not use this to make important decisions about what is covered or what you can send outside the country! If you see something in what follows that is in error, please let me know so I can correct it. If you think you might have an export issue under this, consult with an appropriate subject matter expert.

Export Control

Some export restrictions are determined, in a general way, as part of treaties (e.g., nuclear non-proliferation). A large number arise as part of the Wassenaar Arrangement — a multinational effort by 41 countries generally considered to be allies of the US, including most of NATO. A few major countries, such as China, are not members, nor are nominal allies such as Pakistan and Saudi Arabia (to name a few). The Wassenaar group meets regularly to review technology and determine restrictions, and it is up to the member states to pass rules or legislation for themselves. The intent is to help promote international stability and safety, although countries outside Wassenaar might not view it that way.

In the U.S., there are two primary regulatory regimes for exports: ITAR and EAR. ITAR is the International Traffic in Arms Regulations, administered by the Directorate of Defense Trade Controls at the Department of State. ITAR provides restrictions on sale and export of items of primary (or sole) use in military and intelligence operations. The EAR is the Export Administration Regulations, administered by the Bureau of Industry and Security at the Department of Commerce. EAR rules generally cover items that have “dual use” — both military and civilian uses.

These are extremely large, dense, and difficult to understand sets of rules. I had one friend label these as “clear as mud.” After going through them for many hours, I am convinced that mud is clearer!

Items designed explicitly for civilian applications without consideration of military use, or with known dual-use characteristics, are not subject to the ITAR because dual-use and commodity items are explicitly exempted from ITAR rules (see sections 121.1(d) and 120.41(b) of the ITAR). However, being exempt from ITAR does not make an item exempt from the EAR!

If any entity in the US — company, university, or individual — wishes to export an item that is covered under one of these two regimes, that entity must obtain an export license from the appropriate office. The license will specify what can be exported, to what countries, and when. Any export of a controlled item without a license is a violation of Federal law, with potentially severe consequences. What constitutes an export is broader than some people may realize, including:

  • Shipping something outside the U.S. as a sale or gift is an export, even if only a single instance is sent.
  • Sending something outside the U.S. under license, knowing (or suspecting) it will then be sent to a third location, is a violation.
  • Providing a controlled item to a foreign-controlled company or organization, even if in the U.S., may be an export.
  • Providing keys or passwords that would allow transfer of controlled information or materials to a foreign national is an export.
  • Designing or including a controlled item in something that is not controlled, or which separately has a license, and exporting that may be a violation.
  • Giving a non-U.S. person (someone not a citizen or permanent resident) access to an item to examine or use may be an export.
  • Providing software, drawings, pictures, or data on the Internet, on a DVD, on a USB stick, etc., to a non-U.S. person may be an export.

Those last two items are what is known as a deemed export because the item didn’t leave the U.S., but information about it was given to a non-US person. There are many other special cases of export, and nuances (giving control of a spacecraft to a foreign national is prohibited, for example, as are certain forms of reexport). This is all separate from disclosure of classified materials, although if you really want trouble, you can do both at the same time!

This whole export thing may seem a bit extreme or silly, especially when you look at the items involved, but it isn’t — economic and military espionage to get this kind of material and information is a big problem, even at research labs and universities. Countries that don’t have the latest and greatest factories, labs, and know-how are at a disadvantage both militarily and economically. For instance, a country (e.g., Iran) that doesn’t have advanced metallurgy and machining may not be able to make specialized items (e.g., the centrifuges to separate fissionable uranium), so it will attempt to steal or smuggle the technology it needs. The next best approach is to get whatever knowledge is needed to recreate the expertise locally. You only need to look at the news over a period of a few months to see many stories of economic theft and espionage, as well as state-sponsored incidents.

This brings us to the computing side of things. High speed computers, advanced software packages, cryptography, and other items all have benign commercial uses. However, they all also have military and intelligence uses. High speed computers can be used in weapons guidance and design systems, advanced software packages can be used to model and refine nuclear weapons and stealth vehicles, and cryptography can be used to hide communications and data. As such, there are EAR restrictions on many of these items. However, because the technology is so ubiquitous and the economic benefit to the U.S. is so significant, the restrictions have been fairly reasonable to date for most items.


Software is a particularly unusual item to regulate. The norm in the community (for much of the world) is to share algorithms and software. By its nature, huge amounts of software can be copied onto small artifacts and taken across the border, or published on an Internet archive. In universities we regularly give students from around the world access to advanced software, and we teach software engineering and cryptography in our classes. Restrictions on these kinds of items would be difficult to enforce and, in some cases, simply silly.

Thus, the BIS export rules contain a number of exemptions that remove some items from control entirely. (In the following, designations such as 5D002 refer to classes of items as specified in the EAR, and 734.8 refers to section 734 paragraph 8.)

  • EAR 734.3(b.3) exempts technology, except software classified under 5D002 (primarily cryptography), if it:
    • arises from fundamental research (described in 734.8), or
    • is already published or will be published (described in 734.7), or
    • is educational information (described in 734.9).
    Exempt from 5D002 is any printed source code, including encryption code, and any object code whose corresponding source code is otherwise exempt. See also my note below about 740.13(e.3).
  • EAR 734.7 defines publication as appearing in books, journals, or any media that is available for free or at a price not to exceed the cost of distribution; or freely available in libraries; or released at an open gathering (conference, meeting, or seminar) open to the qualified public; or otherwise made available.
  • EAR 734.8 defines fundamental research as research whose results are ordinarily published in the course of that research.
  • EAR 734.9 defines educational information (which is excluded from the EAR) as information that is released by instruction in catalog courses and associated teaching laboratories of academic institutions. This provision applies to all information, software or technology except certain encryption software, but if the source code of encryption software is publicly available as described in 740.13(e), it can also be considered educational information.
  • We still have some deemed export issues if we are doing research that doesn’t meet the definition of fundamental research (e.g., because it requires signing an NDA or there is a publication restriction in the agreement) and a researcher or recipient involved is not a US person (citizen or permanent resident) employed full time by the university, with a permanent residence in the US, and not a national of a D:5 country (Afghanistan, Belarus, Burma, the CAR, China (PRC), the DRC, Côte d’Ivoire, Cuba, Cyprus, Eritrea, Fiji (!), Haiti, Iran, Iraq, DPRK, Lebanon, Liberia, Libya, Somalia, Sri Lanka, Sudan, Syria, Venezuela, Vietnam, Zimbabwe). However, that is easily flagged by contracts officers and should be the norm at most universities or large institutions with a contracts office.
  • EAR 740.13(d) exempts certain mass-market software that is sold retail and installed by end-users so long as it does not contain cryptography with keys longer than 64 bits.

The exemption for publication is interesting. Anyone doing research on controlled items appears to have an exemption under EAR 740.13(e): they can publish (including posting on the Internet) the source code from research that falls under ECCN 5D002 (typically, cryptography) without restriction, but must notify BIS and NSA of digital publication (email is fine; see 740.13(e.3)); there is no restriction or notification requirement for non-digital print. What is not included is any publication or export (including deemed export) of cryptographic devices or of object code not otherwise exempt (object code whose corresponding source code is exempt is itself exempt), or knowing export to one of the prohibited countries (E:1 from supplement 1 of section 740 — Cuba, Iran, DPRK, Sudan, and Syria, although Cuba may have just been removed).
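
Purely to illustrate how I read the interplay of these exemptions (and emphatically not as a compliance tool), here is a minimal sketch in Python. Every name, flag, and simplification is my own shorthand, not anything defined in the EAR; consult an export control officer before relying on logic like this.

```python
# Illustrative sketch only: my reading of EAR 734.3(b.3), 734.7, 734.8,
# 734.9, and 740.13(e) for software. All names are my own shorthand.

from dataclasses import dataclass

@dataclass
class SoftwareItem:
    is_5d002: bool              # classified under ECCN 5D002 (crypto)
    fundamental_research: bool  # results ordinarily published (734.8)
    published: bool             # already/will be published (734.7)
    educational: bool           # catalog-course instruction (734.9)
    source_code_public: bool    # source publicly available (740.13(e))

def appears_exempt(item: SoftwareItem) -> bool:
    """Rough decision logic, as described in the text above."""
    if not item.is_5d002:
        # Non-5D002 technology/software: exempt if it arises from
        # fundamental research, is (to be) published, or is educational.
        return item.fundamental_research or item.published or item.educational
    # 5D002: publicly available source code can be exempt (remember the
    # BIS/NSA notification for digital publication, 740.13(e.3)); object
    # code whose corresponding source is exempt is itself exempt.
    return item.source_code_public

# Example: a research crypto library whose source is posted online.
lib = SoftwareItem(is_5d002=True, fundamental_research=True,
                   published=True, educational=False,
                   source_code_public=True)
print(appears_exempt(lib))  # True (after notifying BIS and NSA)
```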

As part of an effort to harmonize the EAR and ITAR, a proposed revision to both was published on June 3 (80 FR 31505); it has a nice side-by-side chart of some of these exemptions, along with some small suggested changes.


The Wassenaar group agreed to some changes in December 2013 to include intrusion software and network monitoring items of certain kinds on their export control lists. The E.U. adopted new rules in support of this in October of 2014. On May 20, 2015, the Department of Commerce published — in the Federal Register (80 FR 28853) — a request for comments on its proposed rule to amend the EAR. Specifically, the notice stated:

The Bureau of Industry and Security (BIS) proposes to implement the agreements by the Wassenaar Arrangement (WA) at the Plenary meeting in December 2013 with regard to systems, equipment or components specially designed for the generation, operation or delivery of, or communication with, intrusion software; software specially designed or modified for the development or production of such systems, equipment or components; software specially designed for the generation, operation or delivery of, or communication with, intrusion software; technology required for the development of intrusion software; Internet Protocol (IP) network communications surveillance systems or equipment and test, inspection, production equipment, specially designed components therefor, and development and production software and technology therefor. BIS proposes a license requirement for the export, reexport, or transfer (in-country) of these cybersecurity items to all destinations, except Canada. Although these cybersecurity capabilities were not previously designated for export control, many of these items have been controlled for their "information security" functionality, including encryption and cryptanalysis. This rule thus continues applicable Encryption Items (EI) registration and review requirements, while setting forth proposed license review policies and special submission requirements to address the new cybersecurity controls, including submission of a letter of explanation with regard to the technical capabilities of the cybersecurity items. BIS also proposes to add the definition of "intrusion software" to the definition section of the EAR pursuant to the WA 2013 agreements. The comments are due Monday, July 20, 2015.

The actual modifications are considerably more involved than the above paragraph, and you should read the Federal Register notice to see the details.

This proposed change has caused some concern in the computing community, perhaps because the EAR and ITAR are so difficult to understand, and because of the recent pronouncements by the FBI seeking to mandate “back doors” into communications and computing.

The genesis of the proposed changes is stated to match the Wassenaar additions of (basically) methods of building, controlling, and inserting intrusion software; technologies related to the production of intrusion software; and technology for IP network analysis and surveillance, or for the development and testing of same. These are changes intended to support national security, regional stability, and counterterrorism.

According to the notice, intrusion software includes items that are intended to avoid detection or defeat countermeasures such as address randomization and sandboxing, and exfiltrate data or change execution paths to provide for execution of externally provided instructions. Debuggers, hypervisors, reverse engineering, and other software tools are exempted. Software and technology designed or specially modified for the development, generation, operation, delivery, or communication with intrusion software is controlled — not the intrusion software itself. It is explicitly stated that rootkits and zero-day exploits will presumptively be denied licenses for export.

The proposed changes for networking equipment/systems would require that a system have all five of the following characteristics to become a controlled item (see the sketch after the list):

  1. It operates on a carrier-class IP network (e.g., national grade backbone)
  2. It performs analysis at OSI layer 7
  3. It extracts metadata and content, and indexes what it extracts
  4. It executes searches based on hard selectors (e.g., name, address)
  5. It performs mapping of relational networks among people or groups

Equipment specially designed for QoS, QoE, or marketing is exempt from this classification.
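
To make the all-five conjunction concrete, here is a tiny, hypothetical Python sketch of that test. The parameter names are my own shorthand for the criteria above; the real determination is a legal judgment, not a boolean function.

```python
# Illustrative only: under the proposal, network surveillance equipment
# is controlled only if ALL five characteristics are present.

def is_controlled_surveillance_item(carrier_class_ip: bool,
                                    layer7_analysis: bool,
                                    indexes_extracted_data: bool,
                                    hard_selector_search: bool,
                                    maps_relational_networks: bool,
                                    designed_for_qos_qoe_marketing: bool = False) -> bool:
    if designed_for_qos_qoe_marketing:
        return False  # specially designed for QoS, QoE, or marketing: exempt
    return all([carrier_class_ip, layer7_analysis, indexes_extracted_data,
                hard_selector_search, maps_relational_networks])

# A typical enterprise IDS fails several of the prongs, so it is not controlled:
print(is_controlled_surveillance_item(False, True, False, False, False))  # False
```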

Two other proposed changes would remove the 740.13(d) exemption for mass-market products, and would require that software controlled by one of these new sections that also contains encryption be dual-listed in two categories. There are other changes for wording, cleaning up typos, and so on.

I don’t believe there are corresponding changes to ITAR because these all naturally fall under the EAR.


Although social media has had a number of people posting vitriol and warnings of the impending Apocalypse, I can’t see it in this change. If anything, this could be a good thing — people who are distributing tools to build viruses, botnets, rootkits and the like may now be prosecuted. The firms selling network monitoring equipment that is being used to oppress political and religious protesters in various countries may now be restrained. The changes won’t harm legitimate research and teaching, because the exemptions I listed above will still apply in those cases. There are no new restrictions on defensive tools. There are no new restrictions on cryptography.

Companies and individuals making software or hardware that will fall under these new rules will now have to go through the process of applying for export licenses. Those entities may also find their markets reduced. I suspect that it is a small population that will be subject to such a restriction, and for some of them, given their histories, I’m not at all bothered by the idea.

I have seen some analyses that claim that software to jailbreak cellphones might now be controlled. However, if published online without fee (as is often the case), it would be exempt under 734.7. It arguably is a debugging tool, which is also exempt.

I have also seen claims that any technology for patching would fall under these new rules. Legitimate patching doesn’t seek to avoid detection or defeat countermeasures, which are specifically defined as “techniques designed to ensure the safe execution of code.” Thus, legitimate patching won’t fall within the scope of control.

Jennifer Granick wrote a nice post about the changes. She rhetorically asked at the end whether data loss prevention tools would fall under this new rule. I don’t see how — those tools don’t operate on national grade backbones or index the data they extract. She also posed a question about whether this might hinder research into security vulnerabilities. Given that fundamental research is still exempt under 734.8 as are published results under 734.7, I don’t see the same worry.

The EFF also posted about the proposed rule changes, with some strong statements against them. Again, the concern they stated is about researchers and the tools they use. As I read the EAR and the proposed rule, this is not an issue if the source code for any tools that are exported is published, as per 734.7. The only concern would be if the tools were exported and the source code was not publicly available, i.e., private tools exported. I have no idea how often this happens; in my experience, either the tools are published or else they aren’t shared at all, and neither case runs afoul of the rule. The EFF post also tosses up fuzzing, vulnerability reporting, and jailbreaking as possible problems. Fuzzing tools might possibly be a problem under a strict interpretation of the rules, but the research and publication exemptions would seem to come to the rescue. Jailbreaking I addressed above. I don’t see how reporting vulnerabilities would be export of technology or software for building or controlling intrusion software, so maybe I don’t understand the point.

At first I was concerned about how this rule might affect research at the university, or the work at companies I know about. As I have gotten further into it, I am less and less worried. It seems that there are very reasonable exceptions in place, and I have yet to see a good example of something we might legitimately want to do that would now be prohibited under these rules.

However, your own reading of the proposed rule changes may differ from mine. If so, note the difference in a comment to this essay and I’ll either respond privately or post your comment. Of course, commenting here won’t affect the rule! If you want to do that, you should use the formal comment mechanism listed in the Federal Register notice, on or before July 20, 2015.

Update July 17: The BIS has an (evolving) FAQ on the changes posted online. It makes clear the exemptions I described above. The regulations only cover tools specially designed to design, install, or communicate with intrusion software as they define it. Sharing of vulnerabilities and proofs of exploits is not regulated. Disclosing vulnerabilities is not regulated so long as the sharing does not include tools or technologies to install or operate the exploits.

As per the two blog posts I cite above:

  • research into security vulnerabilities is explicitly exempt so long as it is simply the research
  • export of vulnerability toolkits and intrusion software would be regulated if those tools are not public domain
  • fuzzing is explicitly listed as exempt because it is not specifically for building intrusion software
  • jailbreaking is exempt, as are publicly available tools for jailbreaking. Tools to make jailbreaks would likely be regulated.

Look at the FAQ for more detail.

Déjà Vu All Over Again: The Attack on Encryption


by Spaf
Chair, ACM US Public Policy Council (USACM)

About 20 years ago, there was a heated debate in the US about giving the government mandatory access to encrypted content via mandatory key escrow. The FBI and other government officials predicted all sorts of gloom and doom if it didn’t happen, including that it would prevent them from fighting crime, especially terrorists, child pornographers, and drug dealers. Various attempts were made to legislate access, including forced key escrow encryption (the “Clipper Chip”). Those efforts didn’t come to pass because eventually enough sensible — and technically literate — people spoke up. Additionally, the economic realities also made it clear that people weren’t knowingly going to buy equipment with government backdoors built in.

Fast forward to today. In the intervening two decades, the forces of darkness did not overtake us as a result of no restrictions on encryption. Yes, there were some terrorist incidents, but either there was no encryption involved that made any difference (e.g., the Boston Marathon bombing), or there was plenty of other evidence but it was never used to prevent anything (e.g., the 9/11 tragedy). Drug dealers have not taken over the country (unless you consider Starbucks coffee a narcotic). Authorities are still catching and prosecuting criminals, including pedophiles and spies. Notably, even people who are using encryption in furtherance of criminal enterprises, such as Ross “Dread Pirate Roberts” Ulbricht, are being arrested and convicted. In all these years, the FBI has yet to point to anything significant where the use of encryption frustrated their investigations. The doomsayers of the mid-1990s were quite clearly wrong.

However, now in 2015 we again have government officials raising a hue and cry that civilization will be overrun, and law enforcement will be rendered powerless, unless we pass laws mandating that back doors and known weaknesses be put into encryption on everything from cell phones to email. These arguments have a strong flavor of déjà vu for those of us who were part of the discussion in the 90s. They are even more troubling now, given the scope of government eavesdropping, espionage, and massive data thefts: arguably, encryption is more needed now than it was 20 years ago.

USACM, the Public Policy Council of the ACM, is currently discussing this issue — again. As a group, we made statements against the proposals 20 years ago. (See, for instance, the USACM and IEEE joint letter to Senator McCain in 1997). The arguments in favor of weakening encryption are as specious now as they were 20 years ago; here are a few reasons why:

  • Weakening encryption to catch a small number of “bad guys” puts a much larger community of law-abiding citizens and companies at risk. Strong encryption is needed to help protect data at rest and in transit against criminal interception.
  • A “golden key” or weakened cryptography is likely to be discovered by others. There is a strong community of people working in security — both legitimately and for criminal enterprises — and access to the “key” or methods to exploit the weaknesses will be actively sought. Once found, untold millions of systems will be defenseless — some, permanently.
  • There is no guarantee that the access methods won’t be leaked, even if they are closely held. There are numerous cases of blackmail and bribery of officials leading to leaked information. Those aren’t the only motives, either. Consider Robert Hanssen, Edward Snowden, and Chelsea (Bradley) Manning: three individuals with top security clearances who stole/leaked extremely sensitive and classified information. Those are only the ones publicly identified so far. Human nature and history instruct us that they won’t be the last.
  • As recently disclosed incidents — including data exfiltration from the State Department, IRS, and OPM — have shown, the government isn’t very good at protecting sensitive information. Keys will be high-value targets. How long before the government agencies (and agents) holding them are hacked?
  • Revelations of government surveillance in excess of legal authority, past and recent, suggest that any backdoor capability in the hands of the government may possibly (likely?) be misused. Strong encryption is a form of self-protection.
  • Consumers in other countries aren’t going to want to buy hardware/software that has backdoors built in for the US government. US companies will be at a huge disadvantage in selling into the international marketplace. Alternatively, other governments will demand the same keys/access, ostensibly for their own law enforcement purposes. Companies will need to accede to these requests, thus broadening the scope of potential disclosure, as well as making US data more accessible to espionage by those countries.
  • Cryptography is not a dark art. There are many cryptography systems available online. Criminals and terrorists will simply layer encryption by using other, stronger systems in addition to the mandated, weakened cryptography. Mandating backdoors will mostly endanger only the law-abiding.

There are other reasons, too, including cost, impact on innovation, and more. The essay below provides more rationale. Experts and organizations in the field have recently weighed in on this issue, and (as one of the individuals, and as chair of one of the organizations) I expect we will continue to do so.

With all that as a backdrop, I was reminded of an essay on this topic area by one of USACM’s leaders. It was originally given as a conference address two decades ago, then published in several places, including on the EPIC webpage of information about the 1990s anti-escrow battle. The essay is notable both because it was written by someone with experience in Federal criminal prosecution, and because it is still applicable, almost without change, in today’s debate. Perhaps in 20 more years this will be reprinted yet again, as once more memories dim of the arguments made against government-mandated surveillance capabilities. It is worth reading, and remembering.

The Law Enforcement Argument for Mandatory Key Escrow Encryption: The “Dank” Case Revisited

by Andrew Grosso, Esq.
Chair, USACM Committee on Law

(This article is a revised version of a talk given by the author at the 1996 RSA Data Security Conference, held in San Francisco, California. Mr. Grosso is a former federal prosecutor who now has his own law practice in Washington, D.C.)

I would like to start by telling a war story. Some years ago, while I was an Assistant U.S. Attorney, I was asked to try a case which had been indicted by one of my colleagues. For reasons which will become clear, I refer to this case as “the Dank case.”

The defendant was charged with carrying a shotgun. This might not seem so serious, but the defendant had a prior record. In fact, he had six prior convictions, three of which were considered violent felonies. Because of that, this defendant was facing a mandatory fifteen years imprisonment, without parole. Clearly, he needed an explanation for why he was found in a park at night carrying a shotgun. He came up with one.

The defendant claimed that another person, called “Dank,” forced him to carry the gun. “Dank,” it seems, came up to him in the park, put the shotgun in his hands, and then pulled out a handgun and put the handgun to the defendant’s head. “Dank” then forced the defendant to walk from one end of the park to other, carrying this shotgun. When the police showed up, “Dank” ran away, leaving the defendant holding the bag, or, in this case, the shotgun.

The jurors chose not to believe the defendant’s story, although they spent more time considering it than I would like to admit. After the trial, the defendant’s story became known in my office as “the Dank defense.” As for myself, I referred to it as “the devil made me do it.”

I tell you this story because it reminds me of the federal government’s efforts to justify domestic control of encryption. Instead of “Dank,” it has become “drug dealers made me do it,” or “terrorists made me do it,” or “crypto anarchists made me do it.” There is as much of a rational basis behind these claims as there was behind my defendant’s story of “Dank.” Let us examine some of the arguments the government has advanced.

It is said that wiretapping is indispensable to law enforcement. This is not the case. Many complex and difficult criminal investigations have been successfully concluded, and successfully argued to a jury, where no audio tapes existed of the defendants incriminating themselves. Of those significant cases cited by the government where audio tapes have proved invaluable, such as in the John Gotti trial, the tapes have been made through means of electronic surveillance other than wiretapping, for example, through the use of consensual monitoring or room bugs. The unfettered use of domestic encryption could have no effect on such surveillance.

It is also said that wiretapping is necessary to prevent crimes. This, also, is not the case. In order to obtain a court order for a wire tap, the government must first possess probable cause that a crime is being planned or is in progress. If the government has such probable cause concerning a crime yet in the planning stages, and has sufficient detail about the plan to tap an individual’s telephone, then the government almost always has enough probable cause to prevent the crime from being committed. The advantage which the government gains by use of a wiretap is the chance to obtain additional evidence which can later be used to convict the conspirators or perpetrators. Although such convictions are desirable, they must not be confused with the ability to prevent the crime.

The value of mandating key escrow encryption is further eroded by the availability of super encryption, that is, using an additional encryption where the key is not available to the government. True, the government’s mandate would make such additional encryption illegal; however, the deterrent effect of such legislation is dubious at best. An individual planning a terrorist act, or engaging in significant drug importation, will be little deterred by prohibitions on the means for encoding his telephone conversations. The result is that significant crimes will not be affected or discouraged.
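
To see why super encryption guts the value of escrow, consider this minimal sketch (Python, using the third-party cryptography package; the scenario and key names are hypothetical). The parties apply their own strong layer first, so a holder of only the escrowed outer key recovers nothing but more ciphertext.

```python
# Minimal sketch of "super encryption": wrap your own strong layer inside
# the mandated, escrowed layer. Requires: pip install cryptography

from cryptography.fernet import Fernet

private_key = Fernet.generate_key()  # known only to the communicating parties
escrow_key = Fernet.generate_key()   # the key held in government escrow

inner = Fernet(private_key).encrypt(b"attack at dawn")  # our own layer
outer = Fernet(escrow_key).encrypt(inner)               # the mandated layer

# An eavesdropper holding the escrow key peels off only the outer layer:
recovered = Fernet(escrow_key).decrypt(outer)
print(recovered == inner)                        # True: still ciphertext
print(Fernet(private_key).decrypt(recovered))    # b'attack at dawn'
```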

In a similar vein, the most recent estimate of the national cost for implementing the Digital Telephony law, which requires that commercial telecommunications companies wiretap our nation’s communications network for the government’s benefit, is approximately three billion dollars. Three billion dollars will buy an enormous number of police man-hours, officer training, and crime-fighting equipment. It is difficult to see that this amount of money, by being spent on wiretapping the nation, is being spent most advantageously with regard to law enforcement’s needs.

Finally, the extent of the federal government’s ability to legislate in this area is limited. Legislation for the domestic control of encryption must be based upon the commerce clause of the U.S. Constitution. That clause would not prohibit an individual in, say, the state of California from purchasing an encryption package manufactured in California, and using that package to encode data on the hard drive of his computer, also located in California. It is highly questionable whether the commerce clause would prohibit the in-state use of an encryption package which had been obtained from out of state, where all the encryption is done in-state and the encrypted data is maintained in-state. Such being the case, the value of domestic control of encryption to law enforcement is doubtful.

Now let us turn to the disadvantages of domestic control of encryption. Intentionally or not, such control would shift the balance which exists between the individual and the state. The individual would no longer be free to conduct his personal life, or his business, free from the risk that the government may be watching every move. More to the point, the individual would be told that he would no longer be allowed to even try to conduct his life in such a manner. Under our constitution, it has never been the case that the state had the right to evidence in a criminal investigation. Rather, under our constitution, the state has the right to pursue such evidence. The distinction is crucial: it is the difference between the operation of a free society, and the operation of a totalitarian state.

Our constitution is based upon the concept of ordered liberty. That is, there is a balance between law and order, on the one hand, and the liberty of the individual on the other. This is clearly seen in our country’s bill of rights, and the constitutional protections afforded our accused: evidence improperly obtained is suppressed; there is a ban on the use of involuntary custodial interrogation, including torture, and any questioning of the accused without a lawyer; we require unanimous verdicts for convictions; and double jeopardy and bills of attainder are prohibited. In other words, our system of government expressly tolerates a certain level of crime and disorder in order to preserve liberty and individuality. It is difficult to conceive that the same constitution which is prepared to let a guilty man go free, rather than admit an illegally seized murder weapon into evidence at trial, can be interpreted to permit wholesale, nationwide, mandatory surveillance of our nation’s telecommunications system for law enforcement purposes. It is impossible that the philosophy upon which our system of government was founded could ever be construed to accept such a regime.

I began this talk with a war story, and I would like to end it with another war story. While a law student, I had the opportunity to study in London for a year. While there, I took one week, and spent it touring the old Soviet Union. The official Soviet tour guide I was assigned was an intelligent woman. As a former Olympic athlete, she had been permitted in the 1960’s to travel to England to compete in international tennis matches. At one point in my tour, she asked me why I was studying in London. I told her that I wanted to learn what it was like to live outside of my own country, so I chose to study in a country where I would have little trouble with the language. I noticed a strange expression on her face as I said this. It was not until my tour was over and I looked back on that conversation that I realized why my answer had resulted in her having that strange look. What I had said to her was that *I* had chosen to go overseas to study; further, I had said that *I* had chosen *where* to go. That I could make such decisions was a right which she and her fellow citizens did not have. Yes, she had visited England, but it was because her government chose her to go, and it was her government which decided where she should go. In her country, at that time, her people had order, but they had no liberty.

In our country, the domestic control of encryption represents a shift in the balance of our liberties. It is a shift not envisioned by our constitution. If ever to be taken, it must be based upon a better defense than what “Dank,” or law enforcement, can provide.

What you can do

Do you care about this issue? If so, consider contacting your elected legislators to tell them what you think, pro or con. Use this handy site to find out how to contact your Representative and Senators.

Interested in being involved with USACM? If so, visit this page. Note that you first need to be a member of ACM but that gets you all sorts of other benefits, too. We are concerned with issues of computing security, privacy, accessibility, digital governance, intellectual property, computing law, and e-voting. Check out our brochure for more information.

† — This blog post is not an official statement of USACM. However, USACM did issue the letter in 1997 and signed the joint letter earlier this year, as cited, so those two documents are official.

Opticks and a Treatise on the PRISM Surveillance Program (Guest Blog)

By Mark Rasch and Sophia Hannah

Last post, we wrote about the NSA’s secret program to obtain and then analyze the telephone metadata relating to foreign espionage and terrorism by obtaining the telephone metadata relating to everyone. In this post, we will discuss a darker, but somewhat less troubling, program called PRISM. As described in public media via leaked PowerPoint slides, PRISM and its progeny constitute a program to permit the NSA, with approval of the super-secret Foreign Intelligence Surveillance Court (FISC), to obtain “direct access” to the servers of internet companies (e.g., AOL, Google, Microsoft, Skype, and Dropbox) to search for information related to foreign terrorism – or more accurately, terrorism and espionage by “non-US persons.”

Whether you believe that PRISM is a wonderful program narrowly designed to protect Americans from terrorist attacks, or a massive government conspiracy to gather intimate information to thwart Americans’ political views, or even a conspiracy to run a false-flag operation to start a space war against alien invaders, what the program actually is, and how it is regulated, depends on how the program operates. When Sir Isaac Newton published his work Opticks in 1704, he described how a PRISM could be used to – well, shed some light on the nature of electromagnetic radiation. Whether you believe that the Booz Allen leaker was a hero, or whether you believe that he should be given the full Theon Greyjoy for treason, there is little doubt that he has sparked a necessary conversation about the nature of privacy and data mining. President Obama is right when he says that, to achieve the proper balance, we need to have a conversation. To have a conversation, we have to have some knowledge of the programs we are discussing.

Different Data

Unlike the telephony metadata, the PRISM programs involve a different character of information, obtained in a potentially different manner. As reported, the PRISM programs involve not only metadata (header, source, location, destination, etc.) but also content information (e-mails, chats, messages, stored files, photographs, videos, audio recordings, and even interception of voice and video Skype calls.)

Courts (including the FISA Court) treat content information differently from “header” information. For example, when the government investigated the ricin-laced letters sent to President Obama and NYC Mayor Michael Bloomberg, they reportedly used the U.S. Postal Service’s Mail Isolation Control and Tracking (MICT) system, which photographs the outside of every letter or parcel sent through the mails – metadata. When Congress passed the Communications Assistance for Law Enforcement Act (CALEA), which among other things established procedures for law enforcement agencies to get access to both “traffic” (non-content) and content information, the FBI took the position that it could, without a wiretap order, engage in what it called “post-cut-through dialed digit extraction” – that is, when you call your bank and it prompts you to enter your bank account number and password, the FBI wanted to “extract” that information as “traffic,” not “content.” So the lines between “content” and “non-content” may be blurry. Moreover, with enough context, we can infer content. As Justice Sotomayor observed in the 2012 GPS privacy case:

… it may be necessary to reconsider the premise that an individual has no reasonable expectation of privacy in information voluntarily disclosed to third parties. E.g., Smith, 442 U.S., at 742, 99 S.Ct. 2577; United States v. Miller, 425 U.S. 435, 443, 96 S.Ct. 1619, 48 L.Ed.2d 71 (1976). This approach is ill suited to the digital age, in which people reveal a great deal of information about themselves to third parties in the course of carrying out mundane tasks. People disclose the phone numbers that they dial or text to their cellular providers; the URLs that they visit and the e-mail addresses with which they correspond to their Internet service providers; and the books, groceries, and medications they purchase to online retailers.

But the PRISM program is clearly designed to focus on content. Thus, the Supreme Court’s holding in Smith v. Maryland – that people have no expectation of privacy in the numbers called, etc. – does not apply to the PRISM-type information. Right?

Again, not so fast.

Expecting Privacy

Simple question. Do you have a reasonable expectation of privacy in the contents of your e-mail?

Short answer: Yes.

Longer answer: No.

Better answer: Vis-à-vis whom, and for what purposes? You see, privacy is not black and white. It is multispectral – you know, like light through a triangular piece of glass.

When the government was conducting a criminal investigation of the manufacturer of Enzyte (smiling Bob and his gigantic – um – putter), they subpoenaed his e-mails from, among others, Yahoo! The key word here is subpoena, not search warrant. Now that’s the thing about data and databases – if information exists, it can be subpoenaed. In fact, a Florida man has now demanded production of cell location data from – you guessed it – the NSA.

But content information is different from other information. And cloud information is different. The telephone records are the records of the phone company about how you used their service. The contents of emails and documents stored in the cloud are your records, of which the provider has incidental custody. It would be like the government subpoenaing your landlord for the contents of your apartment (they could, of course, subpoena you for this, but then you would know), or subpoenaing the U-stor-it for the contents of your storage locker (sparking a real storage war). They could, with probable cause and a warrant, search the locker (if you have a warrant, I guess you’re gonna come in), but a subpoena to a third party is dicey.

So the Enzyte guy had his records subpoenaed. This was done pursuant to the Stored Communications Act, which permits it. The government argued that they didn’t need a search warrant to read Enzyte guy’s email, because – you guessed it – he had no expectation of privacy in the contents of his mail. Hell, he stored it unencrypted with a third party. Remember Smith v. Maryland? The phone company case? You trust a third party with your records, you risk exposure. Or as Senator Blutarsky (I. NH?) might opine, “you ()*^#)( up, you trusted us…” (actually Otter said that, with apologies to Animal House fans).

Besides, cloud provider contracts and email and internet provider privacy policies frequently limit the privacy rights of users. In the Enzyte case, the government argued that terms of service that permitted scanning of the contents of email for viruses or spam (or, in the case of Gmail and others, embedding context-based ads) meant that the user of the email service “consented” to have his or her mail read, and therefore had no privacy rights in the content. (“Yahoo! reserves the right in their sole discretion to pre-screen, refuse, or move any Content that is available via the Service.”) Terms of service which provided that the ISP would respond to lawful subpoenas made them a “joint custodian” of your email and other records (like your roommate) who could consent to the production of your communications or files. Those policies that your employer has that say, “employees have no expectation of privacy in their emails or files”? While you thought that meant that your boss (and the IT guy) can read your emails, the FBI or NSA may take the position that “no expectation of privacy” means exactly that.

Fortunately, most courts don’t go so far. In general, courts have held that the contents of communications and information stored privately online (not on publicly accessible Facebook or Twitter feeds) are entitled to legal protection even if they are in the hands of potentially untrustworthy third parties. But this is by no means assured.

But clearly the data in the PRISM case is more sensitive and entitled to a greater level of legal protection than that in the telephony metadata case. That doesn’t mean that the government, with a court order, can’t search or obtain it. It means that companies like Google and Facebook probably can’t just “give it” to the government. It’s not their data.

The PRISM Problem

So the NSA wants to have access to information in a massive database. They may want to read the contents of an email, a file stored on Dropbox, whatever. They may want to track a credit card through the credit card clearing process, or a banking transaction through the interbank funds transfer network. They may want to track travel records – planes, trains or automobiles. All of this information is contained in massive databases or storage facilities held by third parties – usually commercial entities. Banks. VISA/MasterCard. Airlines. Google.

The information can be tremendously useful. The NSA may have lawful authority (a Court order) to obtain it. But there is a practical problem. How does the NSA quickly and efficiently seek and obtain this information from a variety of sources without tipping those sources off about the individual searches it is conducting – information which itself is classified? That appears to be the problem the PRISM programs attempt to solve.

In the telephony program, the NSA “solved” the problem by simply taking custody of the database.

In PRISM, they apparently did not. And that is a good thing. The databases remain in the custody of those who created them.

Here’s where it gets dicey – factually.

The reports about PRISM indicate that the NSA had “direct access” to the servers of all of these Internet companies. Reports have been circulating that the NSA had similar “direct access” to financial and credit card databases as well. The Internet companies have all issued emphatic denials. So what gives?

Speculation time. First, the NSA and Internet companies could be outright lying. David Drummond, Google’s Chief Legal Officer, ain’t going to jail for this. Second, they could be reinterpreting the term “direct” access. When General Alexander testified under oath that the NSA did not “collect any type of data on millions of Americans,” he took the term “collect” to mean “read” rather than “obtain.”

Most likely, however, is that the NSA PRISM program is a protocol for the NSA, with FISC approval, to task the computers at these Internet companies to perform a search. This tasking is most likely indirect. How it works is, at this point, rank speculation. What is likely is that an NSA analyst, say in Honolulu, wants to get the communications (postings, YouTube videos, stored communications, whatever) of Abu Nazir, a non-US person, which are stored on a server in the U.S., or stored on a server in the Cloud operated by a US company. The analyst gets “approval” for the “search,” by which I mean that a flock of lawyers from the NSA, FBI, and DOJ descend (what is the plural of lawyers? [a “plague”? --spaf]) and review the request to ensure that it asks for info about a non-US person, that it meets the other FISA requirements, that there is minimization, etc. Then the request is transmitted to the FISC for a warrant. Maybe. Or maybe the FISC has approved the searches in bulk (raising the Writ of Assistance issue we described in the previous post). We don’t know. But assuming that the FISC approves the “search,” the request has to be transmitted to, say, Google, for their lawyers to review, and then the data transmitted back to the NSA. To the analyst in Honolulu, it may look like “direct access.” I type in a search, and voilà! Results show up on the screen. It is this process that appears to be within the purview of PRISM. It may be a protocol for effectuating court-approved access to information in a database, not direct access to the database.
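
For concreteness only, here is that speculated flow rendered as a Python sketch. Every function, field, and step below is our own hypothetical stand-in mirroring the guesswork above; nothing about the real protocol is public.

```python
# Pure speculation rendered as code: one possible shape of a PRISM
# "tasking" protocol. All names are hypothetical stand-ins.

def legal_review(request: dict) -> bool:
    # The "flock of lawyers": non-US person, FISA requirements, minimization.
    return request["non_us_person"] and request["minimized"]

def fisc_approves(request: dict) -> bool:
    # An individual warrant -- or perhaps bulk pre-approval; we don't know.
    return True  # stand-in

def provider_search(request: dict) -> list:
    # The provider's lawyers review the demand, its systems run the search,
    # and results flow back; to the analyst it *looks* like direct access.
    return ["stored communications of " + request["target"]]

def prism_tasking(request: dict) -> list:
    if legal_review(request) and fisc_approves(request):
        return provider_search(request)
    return []

print(prism_tasking({"target": "Abu Nazir",
                     "non_us_person": True,
                     "minimized": True}))
```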

Or maybe not. Maybe it is a direct pipe into the servers, which the NSA can task, and from which the NSA can simply suck out the entire database and perform their own data analytics. Doubtful, but who knows? That’s the problem with rank speculation. Aliens, anyone?

But we are basing this analysis on what we believe is reasonable to assume.

So, is it legal? Situation murky. Ask again later.

If the FISC approves the search, with a warrant, within the scope of the NSA’s authority, on a non-US person, with minimization, then it is legal in the U.S., while probably violating the hell out of most EU and other data privacy laws. But that is the nature of the FISA law and the USA PATRIOT Act, which amended it. Like the PowerPoint slides said, most internet traffic travels through the U.S., which means we have the ability (and under USA PATRIOT, the authority) to search it.

While the PRISM programs are targeted at much more sensitive content information, if conducted as described above, they actually present fewer domestic legal issues than the telephony metadata case. If they are a dragnet, or if the NSA is actually conducting data mining on these databases to identify potential targets, then there is a bigger issue.

The government has indicated that they may release an unclassified version of at least one FISC opinion related to this subject. That’s a good thing. Other redacted legal opinions should also be released so we can have the debate President Obama has called for. And let some light pass through this PRISM.

Mark Rasch is the former head of the United States Department of Justice Computer Crime Unit, where he helped develop the department’s guidelines for computer crimes related to investigations, forensics, and evidence gathering. Mr. Rasch is currently a principal with Rasch Technology and Cyberlaw and specializes in computer security and privacy.

Sophia Hannah has a BS degree in Physics with a minor in Computer Science and has worked in scientific research, information technology, and as a computer programmer. She currently manages projects with Rasch Technology and Cyberlaw and researches a variety of topics in cyberlaw.

Rasch Cyberlaw (301) 547-6925

Some thoughts on “cybersecurity” professionalization and education

[I was recently asked for some thoughts on the issues of professionalization and education of people working in cyber security. I realize I have been asked this many times, and I keep repeating my answers, to various levels of specificity. So, here is an attempt to capture some of my thoughts so I can redirect future queries here.]

There are several issues relating to the area of personnel in this field that make issues of education and professional definition more complex and difficult to define. The field has changing requirements and increasing needs (largely because industry and government ignored the warnings some of us were sounding many years ago, but that is another story, oft told -- and ignored).

When I talk about educational and personnel needs, I discuss it metaphorically, using two dimensions. Along one axis is the continuum (with an arbitrary directionality) of science, engineering, and technology. Science is the study of fundamental properties and investigation of what is possible -- and the bounds on that possibility. Engineering is the study of design and building new artifacts under constraints. Technology is the study of how to choose from existing artifacts and employ them effectively to solve problems.


The second axis is the range of pure practice to abstraction. This axis is less linear than the other (which is not exactly linear, either), and I don't yet have a good scale for it. However, conceptually I relate it to applying levels of abstraction and anticipation. At its "practice" end are those who actually put in the settings and read the logs of currently-existing artifacts; they do almost no hypothesizing. Moving the other direction we see increasing interaction with abstract thought, people and systems, including operations, law enforcement, management, economics, politics, and eventually, pure theory. At one end, it is "hands-on" with the technology, and at the other is pure interaction with people and abstractions, and perhaps no contact with the technology.

There are also levels of mastery involved for different tasks, such as those articulated in Bloom's Taxonomy of learning. Adding that in would provide more complexity than can fit in this blog entry (which is already too long).

The means of acquisition of necessary expertise varies for any position within this field. Many technicians can be effective with simple training, sometimes with at most on-the-job experience. They usually need little or no background beyond everyday practice. Those at the extremes of abstract thought in theory or policy need considerably more background, of the form we generally associate with higher education (although that is not strictly required), often with advanced degrees. And, of course, throughout, people need some innate abilities and motivation for the role they seek; not everyone has ability, innate or developed, for each task area.

We have need of the full spectrum of these different forms of expertise, with government and industry currently putting an emphasis on the extremes of the quadrant involving technology/practice -- they have problems, now, and want people to populate the "digital ramparts" to defend them. This emphasis applies to those who operate the IDS and firewalls, but also to those who find ways to exploit existing systems (that is an area I believe has been overemphasized by government; cf. my old blog post and a recent post by Gary McGraw). Many, if not most, of these people can acquire the needed skills via training -- such as is acquired on the job, in 1-10 day "minicourses" provided by commercial organizations, and in vocational education (e.g., some secondary ed and 2-year degree programs). These kinds of roles are easily designated with testing and course completion certificates.

Note carefully that there is no value statement being made here -- deeply technical roles are fundamental to civilization as we know it. The plumbers, electricians, EMTs, police, mechanics, clerks, and so on are key to our quality of life. The programs that prepare people for those careers are vital, too.

Of course, there are also careers that are directly located in many other places in the abstract plane illustrated above: scientists, software engineers, managers, policy makers, and even bow tie-wearing professors. grin

One problem comes about when we try to impose sharply-defined categories on all of this, and say that person X has sufficient mastery of the category to perform tasks A, B, and C that are perceived as part of that category. However, those categories are necessarily shifting, not well-defined, and new needs are constantly arising. For instance, we have someone well trained in selecting and operating firewalls and IDS, but suddenly she is confronted with the need to investigate a possible act of nation-state espionage, determine what was done, and how it happened. Or, she is asked to set corporate policy for use of BYOD without knowledge of all the various job functions and people involved. Further deployment of mobile and embedded computing will add further shifts. The skills to do most of these tasks are not easily designated, although a combination of certificates and experience may be useful.

Too many (current) educational programs stress only the technology -- and many others include significant technology training components because of pressure by outside entities -- rather than a full spectrum of education and skills. We have a real shortage of people who have any significant insight into the scope of application of policy, management, law, economics, psychology and the like to cybersecurity, although arguably, those are some of the problems most obvious to those who have the long view. (BTW, that is why CERIAS was founded 15 years ago including faculty in nearly 20 academic departments: "cybersecurity" is not solely a technology issue; this has more recently been recognized by several other universities that are now also treating it holistically.) These other skill areas often require deeper education and repetition of exercises involving abstract thought. It seems that not as many people are naturally capable of mastering these skills. The primary means we use to designate mastery is through postsecondary degrees, although their exact meaning does vary based on the granting institution.

So, consider some of the bottom-line questions of "professionalization" -- what, exactly, is the profession? What purposes does it serve to delineate one or more niche areas, especially in a domain of knowledge and practice that changes so rapidly? Who should define those areas? Do we require some certification to practice in the field? Given the above, I would contend that too many people have too narrow a view of the domain, and they are seeking some way of ensuring competence only for their narrow application needs. There is therefore a risk that imposing "professional certifications" on this field would both serve to further skew the perception of what is involved, and discourage development of some needed expertise. Defining narrow paths or skill sets for "the profession" might well do the same. Furthermore, much of the body of knowledge is heuristics and "best practice" that has little basis in sound science and engineering. Calling someone in the 1600s a "medical professional" because he knew how to let blood, apply leeches, and hack off limbs with a carpenter's saw while assistants held down the unanesthetized patient creates a certain cognitive dissonance; today, calling someone a "cyber security professional" based on knowledge of how to configure Windows, deploy a firewall, and install anti-virus programs should probably be viewed as a similar oddity. We need to evolve to where the deployed base isn't so flawed, and we have some knowledge of what security really is -- evolve from the equivalent of "sawbones" to infectious disease specialists.

We have already seen some of this unfortunate side-effect with the DOD requirements for certifications. Now DOD is about to revisit the requirements, because they have found that many people with certifications don't have the skills they (DOD) think they want. Arguably, people who enter careers and seek (and receive) certification are professionals, at least in a current sense of that word. It is not their fault that the employers don't understand the profession and the nature of the field. Also notable are cases of people with extensive experience and education, who exceed the real needs, but are not eligible for employment because they have not paid for the courses and exams serving as gateways for particular certificates -- and cash cows for their issuing organizations. There are many disconnects in all of this. We also saw skew develop in the academic CAE program.

Here is a short parable that also has implications for this topic.

In the early 1900s, officials with the Bell company (telephones) were very concerned. They warned officials and the public that there was a looming personnel crisis. They predicted that, at the then-current rate of growth, by the end of the century everyone in the country would need to be a telephone operator or telephone installer. Clearly, this was impossible.

Fast forward to recent times. Those early predictions were correct. Everyone was an installer -- each could buy a phone at the corner store, and plug it into a jack in the wall at home. Or, simpler yet, they could buy cellphones that were already on. And everyone was an operator -- instead of using plugboards and directory assistance, they would use an online service to get a phone number and enter it in the keypad (or speed dial from memory). What happened? Focused research, technology evolution, investment in infrastructure, economics, policy, and psychology (among others) interacted to "shift the paradigm" to one that no longer had the looming personnel problems.

If we devoted more resources and attention to the broadly focused issues of information protection (not "cyber" -- can we put that term to rest?), we might well obviate many of the problems that now require legions of technicians. Why do we have firewalls and IDS? In large part, because the underlying software and hardware was not designed for use in an open environment, and much of it is terribly buggy and poorly configured as deployed. The languages, systems, protocols, and personnel involved in the current infrastructure all need rethinking and reengineering. But so long as the powers-that-be emphasize retaining (and expanding) legacy artifacts and compatibility based on up-front expense instead of overall quality, and on training yet more people to be the "cyber operators" defending those poor choices, we are not going to make the advances necessary to move beyond them (and, to repeat, many of us have been warning about that for decades). And we are never going to have enough "professionals" to keep them safe. We are focusing on the short term and will lose the overall struggle; we need to evolve our way out of the problems, not meet them with an ever-growing band of mercenaries.

The bottom line? We should be very cautious in defining what a "professional" is in this field so that we don't institutionalize limitations and bad practices. And we should do more to broaden the scope of education for those who work in those "professions" to ensure that their focus -- and skills -- are not so limited as to miss important features that should be part of what they do. As one glaring example, think "privacy" -- how many of the "professionals" working in the field have a good grounding and concern about preserving privacy (and other civil rights) in what they do? Where is privacy even mentioned in "cybersecurity"? What else are they missing?

[If this isn't enough of my musings on education, you can read two of my ideas in a white paper I wrote in 2010. Unfortunately, although many in policy circles say they like the ideas, no one has shown any signs of acting as a champion for either.]

[3/2/2013] While at the RSA Conference, I was interviewed by the Information Security Media Group on the topic of cyber workforce. The video is available online.

A Cautionary Incident

Recently, Amazon's cloud service failed for several customers, and has not fully recovered after well over 24 hours. As of the time I write this, Amazon has not commented on what caused the problem, why it took so long to fix, or how many customers it affected.

It seems a client of Amazon was not able to contact support, and posted in a support forum under the heading "Life of our patients is at stake - I am desperately asking you to contact." The body of the message read: "We are a monitoring company and are monitoring hundreds of cardiac patients at home. We were unable to see their ECG signals."

What ensued was a back-and-forth with others incredulous that such a service would not have a defined disaster plan and alternate servers, with the original poster trying to defend his/her position. At the end, as the Amazon service slowly came back, the poster seemed to back off from the claim, which implies either an attempt to evade further scolding (and investigation), or that the posting was a huge exaggeration to get attention. Either way, the prospect of a mission-critical system depending on the service was certainly disconcerting.

Personnel from Amazon apparently never contacted the original poster, despite that company having a Premium service contract.
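(As an aside for the technically minded: what the forum critics had in mind is not exotic. Here is a minimal sketch, in Python and with hypothetical endpoint URLs, of the kind of cross-region health check and failover a monitoring service might run, so that an outage at one provider triggers a switchover rather than a forum post.)

    # A minimal sketch, assuming hypothetical endpoints: a client-side health
    # check that fails over from a primary region to a standby elsewhere.
    import urllib.request
    import urllib.error

    ENDPOINTS = [
        "https://monitor-us-east.example.com/health",  # primary (hypothetical)
        "https://monitor-eu-west.example.com/health",  # standby (hypothetical)
    ]

    def first_live_endpoint(timeout=5):
        """Return the first endpoint answering its health check, else None."""
        for url in ENDPOINTS:
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    if resp.status == 200:
                        return url
            except (urllib.error.URLError, OSError):
                continue  # this region is down or unreachable; try the next
        return None  # total outage: time to invoke the disaster plan

    if __name__ == "__main__":
        live = first_live_endpoint()
        print(live or "ALL REGIONS DOWN -- invoke the disaster plan")

The details would differ in any real deployment; the point is that continuity has to be engineered before the outage, not requested in a forum afterward.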

25 or so years ago, Brian Reid defined a distributed system as one "where I can't get my work done because a computer I never heard of is down." (Since then I've seen this attributed to Leslie Lamport, but at the time heard it attributed to Reid.) It appears that "The Cloud" is simply today's buzzword for a distributed system. There have been some changes to hardware and software, but the general idea is the same — with many of the limitations and cautions attendant thereto, plus some new ones unique to it. Those who extol its benefits (viz., cost) without understanding the many risks involved (security, privacy, continuity, legal, etc.) may find themselves someday making similar postings to support fora — as well as "position wanted" sites.

The full thread is available here.

A Recent Interview, and other info

I have not been blogging here for a while because of some health and workload issues. I hope to resume regular posts before too much longer.

Recently, I was interviewed about the current state of security. I think the interview came across fairly well, and captured a good cross-section of my current thinking on this topic. So, I'm posting a link to that interview here, with some encouragement for you to go read it as a substitute for me writing a blog post:

Complexity Is Killing Us: A Security State of the Union With Eugene Spafford of CERIAS

Also, let me note that our annual CERIAS Symposium will be held April 5th & 6th here at Purdue. You can register and find more information via our web site.

But that isn't all!

Plus, all of the above are available via RSS feeds.  We also have a Twitter feed: @cerias. Not all of our information goes out on the net, because some of it is restricted to our partner organizations, but eventually the majority of it makes it out to one of the above outlets.

So, although I haven't been blogging recently, there has still been a steady stream of activity from the 150+ people who make up the CERIAS "family."   

What About the Other 11 Months?

October is "officially" National Cyber Security Awareness Month. Whoopee! As I write this, only about 27 more days before everyone slips back into their cyber stupor and ignores the issues for the other 11 months.

Yes, that is not the proper way to look at it. The proper way is to look at the lack of funding for long-term research, the lack of meaningful initiatives, the continuing lack of understanding that robust security requires actually committing resources, the lack of meaningful support for education, almost no efforts to support law enforcement, and all the elements of "Security Theater" (to use Bruce Schneier's very appropriate term) put forth as action, only to realize that not much is going to happen this month, either. After all, it is "Awareness Month" rather than "Action Month."

There was a big announcement at the end of last week: Secretary Napolitano of DHS said that DHS had new authority to hire 1000 cybersecurity experts. Wow! That immediately went on my list of things to blog about, but before I could get to it, Bob Cringely wrote almost everything that I was going to write in his blog post The Cybersecurity Myth - Cringely on technology. (NB. Similar to Bob's correspondent, I have always disliked the term "cybersecurity," which was introduced about a dozen years ago but has been adopted by the hoi polloi, akin to "hacker" and "virus.") I've testified before the Senate about the lack of significant education programs and the illusion of "excellence" promoted by DHS and NSA -- you can read those to get my bigger-picture view of the issues on personnel in this realm. But, in summary, I think Mr. Cringely has it spot on.

Am I being too cynical? I don't really think so, although I am definitely seen by many as a professional curmudgeon in the field. This is the 6th annual Awareness Month, and things are worse today than when the event was started. As one indicator, consider that the funding for meaningful education and research has hardly changed. NITRD (National Information Technology Research & Development) figures show that the fiscal 2009 allocation for Cyber Security and Information Assurance (their term) was about $321 million across all Federal agencies. Two-thirds of this amount is in budgets for Defense agencies, with the largest single amount going to DARPA; the majority of these funds have gone to the "D" side of the equation (development) rather than fundamental research, and some portion has undoubtedly gone to support offensive technologies rather than building safer systems. This amount has perhaps doubled since 2001, although the level of crime and abuse has risen far more -- by at least two orders of magnitude. The funding being made available is a pittance, and not enough to really address the problems.

Here's another indicator. A recent conversation with someone at McAfee revealed that new pieces of deployed malware are being indexed at a rate of about 10 per second -- and those are only the ones detected and reported! Some of the newer attacks are incredibly sophisticated, defeating two-factor authentication and falsifying bank statements in real time. The criminals are even operating a vast network of fake merchant sites designed to corrupt visitors' machines and steal financial information. Some accounts place the annual losses in the US alone at over $100 billion per year from cyber crime activities -- well over 300 times everything being spent by the US government in R&D to stop it. (Hey, but what's 100 billion dollars, anyhow?) I have heard unpublished reports that some of the criminal gangs involved are spending tens of millions of dollars a year to write new and more effective attacks. Thus, by some estimates, the criminals are vastly outspending the US Government on R&D in this arena, and that doesn't count what other governments are spending to steal classified data and compromise infrastructure. They must be investing wisely, too: how many instances of arrests and takedowns can you recall hearing about recently?
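If you want to check my arithmetic, the rough figures cited above work out as follows (both are back-of-envelope estimates, not precise measurements):

$$\frac{\$100 \times 10^{9}\ \text{(est. annual losses)}}{\$321 \times 10^{6}\ \text{(FY2009 CSIA R\&D)}} \approx 311, \qquad 10\ \tfrac{\text{samples}}{\text{sec}} \times 86{,}400\ \tfrac{\text{sec}}{\text{day}} \approx 864{,}000\ \tfrac{\text{samples}}{\text{day}}.$$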

Meanwhile, we are still awaiting the appointment of the National Cyber Cheerleader. For those keeping score, the President announced that the position was critical and that he would appoint someone to it right away. That was on May 29th. Given the delay, one wonders why the national review was mandated to be completed in a rushed 60-day period. As I noted in that earlier posting, an appointment is unlikely to make much of a difference, as the position won't have real authority. Even with an appointment, there is disagreement about where the lead for cyber should be, DHS or the military. Neither really seems to take into account that this is at least as much a law enforcement problem as it is one of building better defenses. The lack of agreement means that the tenure of any appointee is likely to be controversial and contentious at worst, and largely ineffectual at best.

I could go on, but it is all rather bleak, especially when viewed through the lens of my 20+ years of experience in the field. The facts and trends have been well documented for most of that time, too, so it isn't as if this is a new development. There are some bright points, but unless the problem gets a lot more attention (and resources) than it is getting now, the future is not going to look any better.

So, here are my take-aways for National Cyber Security Awareness:

  • the government is more focused on us being "aware" than "secure"
  • the criminals are probably outspending the government in R&D
  • no one is really in charge of organizing the response, and there isn't agreement about who should
  • there aren't enough real experts, and there is little real effort to create more
  • too many people think "certification" means "expertise"
  • law enforcement in cyber is not a priority
  • real education is not a real priority

But hey, don't give up on October! It's also Vegetarian Awareness Month, National Liver Awareness Month, National Chiropractic Month, and Auto Battery Safety Month (among others). Undoubtedly there is something to celebrate without having to wait until Halloween. And that's my contribution for National Positive Attitude Month.

Still no sign of land

I am a big fan of the Monty Python troupe. Their silly takes on many topics helped point out the absurd and the pompous (and still do), though sometimes they were simply lunatic in their own right.

One of their sketches, about a group of sailors stuck in a lifeboat, came to mind as I was thinking about this post. The sketch starts (several times) with the line "Still no sign of land," and then proceeds to a discussion of how the sailors are so desperate that they may have to resort to cannibalism.

So why did that come to mind?

We still do not have a national Cyber Cheerleader in the Executive Office of the President. On May 29th, the President announced that he would appoint one – that cyber security was a national priority.

Three months later – nada.

Admittedly, there are other things going on: health care reform, a worsening insurgency problem in Afghanistan, hesitancy in the economic recovery, and more besides that requires attention from the White House. Still, cyber continues to be a problem area with huge issues. A glance at recent news shows there is no shortage of problems – identity theft, cyber war questions, critical infrastructure vulnerability, supply chain issues, and more.

Rumor has it that several people have been approached for the Cheerleader position, but all have turned it down. This isn't overly surprising – the position has been set up as basically one where blame can be placed when something goes wrong, rather than as a position to support real change. There is no budget authority, no seniority, and no leverage over the Federal agencies where the problems occur, so it is no wonder the job is unwanted. Anyone qualified for a high-level position in this area should recognize what I described 20 years ago in "Spaf's First Law":

If you have responsibility for security but have no authority to set rules or punish violators, your own role in the organization is to take the blame when something big goes wrong.

I wonder how many false starts it will take before someone notices that if good people don't want the position, there is something wrong with the position. And will that be enough to result in a change in the way it is structured?

Meanwhile, we are losing good people from what senior leadership exists. Melissa Hathaway has resigned from the temporary position at the NSC from which she led the 60-day study, and Mischel Kwon has stepped down from leadership of US-CERT. Both were huge assets to the government and the public, and we have all lost as a result of their departure.

The crew of the lifeboat is dwindling. Gee, what next? Well, funny you should mention that.

Last week, I attended the "Cyber Leap Year Summit," which I have variously described to people who have asked in terms ranging from "an interesting chance to network" to "two clowns short of a circus." (NB. I was there, so it was not three clowns short.)

The implied premise of the Summit, that bringing together a group of disparate academics and practitioners can somehow lead to a breakthrough, is not a bad idea in itself. However, when you bring together far too many of them under a facilitation protocol that most of them have not heard of, coupled with a forced schedule, it shouldn't be a surprise if the result is little more than frustration. At least, that is what I heard from most of the participants I spoke with. It remains to be seen whether the reporters from the various sections are able to glean something useful from the ideas that were so briefly discussed. (Trying to winnow "the best" idea from 40 suggestions given only 75 minutes and 40 type A personalities is not a fun time.)

There was also the question of whether the "best" people were brought together. In my session, there were people present who had no idea about basic security topics or history. Some of us made mention of well-known results or systems, and the references went completely over the heads of those present. Sometimes they would point this out, and we lost time explaining. As the session progressed, the parties involved seemed to simply assume that if they hadn't heard about something, it couldn't be important, so they ignored the comments.

Here are three absurdities that seem particularly prominent to me about the whole event:

  1. Using "game change" as the fundamental theme is counter-productive to the issue. Referring to cyber security and privacy protection as a "game" trivializes it, and if nothing substantial occurs, it suggests that we simply haven't won the "game" yet. But in truth, these problems are something fundamental to the functioning of society, the economy, national defense, and even the rule of law. We cannot afford to "not win" this. We should not trivialize it by calling it a "game."
  2. Putting an arbitrary 60-90 day timeline on the proposed solutions exacerbates the problems. There was no interest in discussing the spectrum of solutions, but only talking about things that could be done right away. Unfortunately, this tends to result in people talking about more patches rather than looking at fundamental issues. It also means that potential solutions that require time (such as phasing in some product liability for bad software) are outside the scope of both discussion and consideration, and this continues to perpetuate the idea that quick fixes are somehow the solution.
  3. Suggesting that all that is needed is for the government to sponsor some group-think, feel-good meeting to come up with solutions is inane. Some of us have been looking at the problem set for decades, and we know some of what is needed. It will take sustained effort and some sacrifice to make a difference. Other parts of the problem are going to require sustained investigation and data gathering. There is no political will for either. Some of the approaches were even brought up in our sessions; in the one I was in, which had many economists and people from industry, the ideas were basically voted down (or derided, contrary to the protocol of the meeting) and dropped. This is part of the issue: the parties most responsible for the problem do not want to bear any responsibility for the fixes.

I raised the first two issues as the first comments in the public Q&A session on Day 1. Aneesh Chopra, the Federal Chief Technology Officer (CTO), and Susan Alexander, the Chief Technology Officer for Information and Identity Assurance at DoD, were on the panel to which I addressed the questions. I was basically told not to ask those kinds of questions, and to sit down, although the response was phrased somewhat less forcefully than that. Afterwards, no fewer than 22 people told me that they wanted to ask the same questions (I started counting after #5). Clearly, I was not alone in questioning the formulation of the meeting.

Do I seem discouraged? A bit. I had hoped that we would see a little more careful thought involved. There were many government observers present, and in private, one-on-one discussions with them, it was clear they were equally discouraged with what they were hearing, although they couldn't state that publicly.

However, this is yet another in a long line of meetings and reports with which I have been involved, where the good results are ignored, and the "captains of industry and government" have focused on the wrong things. But by holding continuing workshops like this one, at least it appears that the government is doing something. If nothing comes of it, they can blame the participants in some way for not coming up with good enough ideas, rather than take responsibility for not asking the right questions or for being unwilling to accept answers that are difficult to execute.

Too cynical? Perhaps. But I will continue to participate because this is NOT a "game," and the consequences of continuing to fail are not something we want to face — even with "...white wine sauce with shallots, mushrooms and garlic."

Other cybersecurity legislation in the U.S.

In response to my last post, several people have pointed out to me some other initiatives before Congress. Here are some brief comments on a few of them, based on what is available via the Thomas service. I am not going to provide a section-by-section analysis of any of these.

S. 921, the U.S. Information and Communications Enhancement Act of 2009

Introduced by Senator Carper and cosponsored by Senator Burris, this act would modify Title 44 (chapter 35) of the US Code to establish the National Office for Cyberspace within the Executive Office of the President (EOP). The intent is that this office would address "...assured, reliable, secure, and survivable global information and communications infrastructure and related capabilities."

There are several other provisions in the act that make agency heads responsible for the security of their systems, require annual security reviews, require cooperation with US-CERT, require establishment of automated reporting, and charge the Department of Commerce with setting guidelines and standards while allowing agencies to employ more stringent ones.

The director of the office created by this bill does not have a defined reporting chain. However, the office is given explicit responsibility for coordinating policy, consulting with agencies, and working with OMB. Note that the interaction with OMB is coordination of OMB's actions and is not a role with any direct control.

The authority of this new office would not extend to Defense or any of the DNI's agencies.

There is a very short timeline to produce some initial reports (180 days) on the effects of cost savings by using better security. It might take that long simply to begin to define what to measure!

Every Federal agency would have to appoint a CISO (Chief Information Security Officer) responsible for all the things that a CISO normally does in a large organization, including establishing monitoring and response documentation, training, purchasing, and so on. This would be a massive undertaking for some agencies, even if appropriate budget was allocated (something this bill does not do).

The bill requires every agency to have an independent (external) evaluation every year! The cost and effort of such a requirement would be huge, and it is not clear that it would provide a return equal to the cost.

Overall, there are some worthwhile ideas in here, but if passed as is, this would cripple many smaller agencies without sufficient budget, and tie up the rest in lots of red tape.   

S. 1438 Fostering a Global Response to Cyber Attacks Act

Introduced by Senator Gillibrand, this bill would state a "sense of the Senate" and require the Secretary of State to report on efforts to work with other countries on cyber security and response. Section 21 of S. 773 provides better coverage of the topic.

S. 946 Critical Electric Infrastructure Protection Act

Introduced by Senator Lieberman with no cosponsors, this bill directs the Secretary of DHS (working with other agencies) to conduct a study and report on whether federally-owned elements of the power grid have been compromised in any way. It further tasks the Federal Energy Regulatory Commission (FERC) with establishing interim measures to protect those resources.

It makes the Secretary of Homeland Security responsible for on-going assessments and reporting of critical infrastructure, including the electric infrastructure. Hmmm, no mention of the Secretary of Energy here. This will probably provoke a turf battle if it gets considered at length.

H.R. 2195 by Representative Bennie Thompson and 16 cosponsors is the same bill on the House side.

H.R. 2165 by Rep. John Barrow is somewhat related, in that it designates FERC as responsible for securing the power system. It goes further, however, by giving FERC some emergency regulatory powers under Presidential directive. It also creates yet another class of restricted but unclassified information. Those last two points make this a troubling proposal.

H.R. 266 Cybersecurity Education and Enhancement Act of 2009

Introduced by Representative Sheila Jackson-Lee, this act has two major components:

  1. It would task NSF with setting up programs, funded by & coordinated with DHS, for professional education and associated degrees in cyber security. Funding would also be given for equipment for such programs.
  2. It would establish a DHS-run Fellows program to bring state, local, tribal and private sectors officials into the DHS National Cybersecurity Division to become more familiar with the capabilities and missions there.

This would address some real needs in a reasonable way.


Clearly, there is growing interest in cyber within the government, and recognition of some of the weaknesses in procurement, training, response, standards, and information dissemination. However, not all of the bills being proposed really address the underlying problems, and some may cause new problems.   

The legislative process does not lend itself to solutions here. The House and Senate deal with issues via an established committee structure, and those committee boundaries don't match cyber, which is a cross-cutting problem. Thus, it is difficult to get a bill started that mandates changes across several Federal agencies and cabinet positions, because the bill would then need to go through a bunch of committees -- and in too many cases there are members of those committees who will feel the need to rewrite the bill. This especially comes into play thinking about the future: if there will be new programs and authorities, it is generally the rule that each committee would like to "own" those activities. Likewise, the members and staff don't like to see any authority taken away from their committees.

This makes it problematic for cyber. It will require thoughtful support across a number of areas. It will require the leaders of both houses of Congress to exert some leadership to ensure that good legislation gets through, without too much unnecessary tweaking along the way.

Let's keep our fingers crossed.

(Oh, and my post about the "cyber cheerleader" caused a reader to remind me of Spaf's First Law, articulated over two decades ago:

If you have responsibility for security but have no authority to set rules or punish violators, your own role in the organization is to take the blame when something big goes wrong.

Thus, people who are being approached for the position may not be eager to take it if they understand this. That has been demonstrated with this sort of position before.)

Cybersecurity Legislation

Cyber seems to be one of the buzzwords in Washington these days, with the recent botnet attacks generating a lot of extra noise. This has included at least one rather bellicose response from a US Representative who either is reading much more interesting information than the rest of us, or is not reading anything at all.

Meanwhile, in the background, various bits of legislation are being worked on by several committees in both the House and Senate to address various aspects of the perceived problems. Two notable instances are legislation proposed by Senator Rockefeller and others that followed closely after my testimony before their committee. I have heard that at least one of these pieces of proposed legislation is being revised, and will be reintroduced. Back in April, I sent comments on both proposed bills to committee staff, but never heard a response. I hope my input had some impact.

It occurred to me that I did not blog about the legislation or my comments. So, to correct that oversight, what follows are my original comments, with some newer perspective gained over the last few months. You can find the text of these bills via Thomas.

(I will post a follow-up when I see what the revised bills are like.)

The National Cybersecurity Advisor Act of 2009, S. 778

This proposed legislation, cosponsored by Senators Snowe, Bayh and Nelson, was a bit of a puzzle to me when it was introduced. The timing was such that the President's 60-day review report had not yet been delivered, and so it seemed premature to me. However, in retrospect, the 60-day review didn't end up suggesting a powerful office within the EOP for cyber, and so this bill was right on target.

The bill would establish an office of National Cybersecurity Advisor, with the head of that office reporting directly to the President. That person would have authority to hire consultants, consult with any Federal agency, approve clearances of personnel related to cyber, and have access to all classified programs relating to cyber. More importantly, the advisor "...shall review and approve all cybersecurity-related budget requests submitted to the Office of Management and Budget" and would "...serve as the principal advisor to the President for all cybersecurity-related matters." Both of these would be an improvement over the suggestions in the final 60-day review.

The bill has had two readings and has been referred to the Committee on Homeland Security and Governmental Affairs.

(I note that the 60 day review would have been delivered to the President on April 9. It is now more than 3 months later, and still no appointment of the cybersecurity cheerleader proposed by that document.)

The National Cybersecurity Act of 2009, S. 773

This was also introduced before the 60-day review was released. It contains 23 sections. It has been read twice and referred to the Committee on Commerce, Science, and Transportation. It also is cosponsored by Snowe, Nelson and Bayh.

Sec 1. Title And Table Of Contents

Pro forma material.

Sec 2. Findings

This is a section devoted to bits of information that justify the bill. Several people are cited for things they have said on the topics; I was not one of them, although Purdue was mentioned in point 13, and the PITAC report I helped prepare was listed in point 14.

Sec 3. Cybersecurity Advisory Panel

This section defines the creation of a high-level, Presidential advisory panel. The panel will be composed of individuals from a broad cross-section of society, and will provide the President with advice on strategy, trends, priorities, and civil liberties related to cyber security. The panel will be required to provide a report at least once every 2 years.

This looks to be well-designed and potentially very useful. Panels such as this depend on the alacrity with which a President appoints appropriate members, whether those members actually get something useful done, and whether the President heeds their advice. But at least this framework is off to a good start.

Sec 4. Real-time Cybersecurity Dashboard

The Secretary of Commerce is mandated to develop a "real-time dashboard" within a year. This dashboard is supposed to show the cybersecurity status and vulnerability information of all networks managed by the Department of Commerce.

This is quite puzzling. It isn't clear to me why this is restricted to Commerce, although notes I have from staff indicate that the intent is to serve as a pilot for other parts of government. But that isn't the end of the puzzle. Who is supposed to view this dashboard? What do they do after they see something on it? And what the heck does it really measure? (Hopefully not a dynamic FISMA score!)

Of course, I can't help noting that having one location to collect and display vulnerability information is a very bad idea.

Sec 5. State And Regional Cybersecurity Enhancement Program

This section describes the creation of a set of centers around the country to assist small businesses with cybersecurity. It is modeled on the Hollings Manufacturing Extension Partnership (MEP) and would be run by the Department of Commerce. The centers would receive up to 1/2 of their initial funding from the Federal government, with the rest to come from states, regional groups, and fees paid by members. The centers would provide expertise and resources to small companies.

Although I have some misgivings about this, it is the best suggestion I have seen yet on how to get cybersecurity technology out to small businesses in an affordable manner. I was not familiar with this program and had suggested something similar to our agricultural extension model, so this is in keeping with that. The questions I have are whether these will attract the necessary funding and talent to be viable. But it is probably worth the experiment.

Sec 6. NIST Standards Development And Compliance

This section sets out that, within a year, the Secretary of Commerce will establish a research plan for security metrics, establish a whole set of metrics and compliance measures for vulnerabilities and testing, set all these as standards, and apply them to all vendors and government systems. This will also constrain acceptable configurations, and provide accreditation of suppliers.

Whew! This is way off base. We don't know how to do many of these things, and I fear that setting a deadline will mean that a number of poor standards and requirements will be established. Not only that, having a set of uniform configurations (and required compliance with them) is a sure way to weaken our security rather than strengthen it -- diversity and uncertainty have protective effects when used appropriately. Requiring everyone to code the same way, and to configure only approved systems the same way, is not going to be helpful -- except to the bad guys.

This is also a good way to kill innovation in an area (software development and security deployment) where innovation is badly needed.   

This is a bad idea.

Sec 7. Licensing And Certification Of Cybersecurity Professionals

This provision requires Commerce to develop a national licensing and certification program for cybersecurity professionals. Within 3 years, it would be unlawful to provide security services to any government or national security system without the certification.

This is worse than section 6! We don't know yet what the appropriate skills are for professionals. In fact, there are a wide range of skills, not all of which are needed by each person.

The result of this, if it gets enacted, is either that we will have a least-common denominator for skills that will get taught by a lot of training organizations that will enrich them but do nothing for the nation, or the bar will be set so high that we will have a shortage of qualified personnel. Either way, it may also stifle enhanced and unconventional training that could produce new talent.

I have been working as an educator in this field for two decades. This section presents an awful idea.

Sec 8. Review Of NTIA Domain Name Contracts

Basically requires the Advisory Panel (Sec 3) to review any contract renewal with ICANN, and gives it veto authority.

Reasonable. It doesn't address some of the problems with ICANN, but it isn't clear that Congress can do that.

Sec 9. Secure Domain Name Addressing System

Within 3 years, the Commerce Department must come up with a strategy and schedule to implement DNSSEC, and the President must require all agencies and departments to follow that plan.

Probably reasonable, and with a more realistic timetable than some of the other sections.
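(For the curious: here is a minimal sketch of the kind of check DNSSEC enables, using the third-party dnspython package. The zone name and resolver address are just examples, and this verifies only the apex DNSKEY self-signature rather than the full chain of trust from the root, so treat it as an illustration, not an implementation.)

    import dns.dnssec
    import dns.message
    import dns.name
    import dns.query
    import dns.rdataclass
    import dns.rdatatype

    def apex_keys_self_signed(zone, resolver="8.8.8.8"):
        """Return True if the zone's DNSKEY RRset validates against its own keys."""
        name = dns.name.from_text(zone)
        query = dns.message.make_query(name, dns.rdatatype.DNSKEY,
                                       want_dnssec=True)  # sets the EDNS DO bit
        response = dns.query.tcp(query, resolver, timeout=5)

        keys = response.get_rrset(response.answer, name, dns.rdataclass.IN,
                                  dns.rdatatype.DNSKEY)
        sigs = response.get_rrset(response.answer, name, dns.rdataclass.IN,
                                  dns.rdatatype.RRSIG, dns.rdatatype.DNSKEY)
        if keys is None or sigs is None:
            return False  # zone is unsigned, or the records were stripped
        try:
            dns.dnssec.validate(keys, sigs, {name: keys})  # raises on failure
            return True
        except dns.dnssec.ValidationFailure:
            return False

    if __name__ == "__main__":
        print(apex_keys_self_signed("isc.org"))  # example of a long-signed zone

The point of this section of the bill is to get government zones signed so that resolvers have something cryptographic to verify; unsigned zones give them nothing to check.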

Sec 10. Promoting Cybersecurity Awareness

Basically, the Secretary of Commerce is charged with finding ways to increase public awareness of cybersecurity. Not a bad idea, but the real issue occurs when budgets are allocated. Commerce gets stuck with lots of unfunded mandates, and I don't see this as ranking up there with, say, maintaining the nation's atomic clocks or evaluating the next digital signature standard. So, if the budgets are cramped, this won't happen.

Sec 11. Federal Cybersecurity Research And Development

This directs NSF to provide more funding towards some specific hard research issues (assurance, attribution, insider threat, privacy protection, etc.), and to help ensure that students get some training in secure code production techniques (although that is a somewhat nebulous concept). It also authorizes significant new funding levels for research, establishment of centers, and funding traineeships.

Overall, I think the intent is good. The issue is once again one of appropriations each year to fund these initiatives. If "new" funding is available, that is great. However, if this ends up eating into other research thrusts, it is generally not good for the community as a whole.

It is also the case that when substantial blocks of money are made available, suddenly "experts" come out of the woodwork to compete for it. New ideas and new blood are needed in the area, but it is almost certain that a significant part of this will not accomplish what is intended, although what is accomplished may still have value. I would hope that the NSF doesn't try to address this by tying funds to the Centers of Excellence (sic).

Sec 12. Federal Cyber Scholarship-For-Service Program

The NSF Scholarship for Service (SFS) program would be expanded in size and scope, and codified in law. The program grew out of an idea I presented to Congress back in 1997. It has functioned well, although it has not attracted large numbers of students, for a variety of reasons. The expansion of the program in this draft bill doesn't really change the nature of the program, so I would be very surprised if the 1000 students per year would actually matriculate. I suppose the numbers might get pumped up if more schools participated, but we don't have the faculty or educational materials nationally to do that. Thus, I have reservations about this, too.

Sec 13. Cybersecurity Competition And Challenge

This would direct NIST to set up national competitions at different levels for cybersecurity. There is also authorization to solicit for and award prize money to winners.

I can see where this might increase interest in the field, and bring more people out to solve problems. However, the majority of challenges held in the field right now are "hacking into the opposing server" challenges, and I have contended over the years that such an approach should not be encouraged. If we are looking for employees of cyber military groups, this might be okay. But hack challenges don't really recognize the well-rounded and adept defenders and researchers. Attack challenges also don't tend to engage women, who are already badly underrepresented in the field.

So, this is another qualified "maybe" section: good intent, but a lot depends on implementation.

Sec 14. Public-Private Clearinghouse

This establishes Commerce as the home of vulnerability and threat information for government systems and critical private infrastructure. Commerce also has to come up with methods and standards for protecting and sharing this information.

Hmmm, I thought DHS was supposed to be doing all this now?

Sec 15. Cybersecurity Risk Management Report

The President is supposed to come up with a report on the feasibility of a risk and insurance market for cyber risk. The report is also supposed to include the feasibility of including that risk in bond ratings.

I've often said that if we could get the insurance industry engaged, we might well see some progress in private sector security. However, without some liability for companies (above and beyond loss risk) it still might not be enough. This bill doesn't touch the liability issue, which is likely to be a third rail issue for any legislation.

Sec 16. Legal Framework Review And Report

This section of the bill would mandate review of existing law that touches on cyber, and require recommendations for any necessary changes. This includes the ECPA, the Privacy Act, FISMA, and others. This would be a very good idea. The review would be delivered to Congress. At that point, there is no way to predict what might happen, but a review is definitely needed.

Sec 17. Authentication And Civil Liberties Report

Briefly mandates study of a national identification and authentication program, including the civil liberties issues associated therewith.

This is another touchy topic. There are many groups advocating for strongly authenticated ID, but there are also reasons to proceed with caution. Performing an in-depth study is probably worthwhile, but I'd prefer to see the National Academies tasked with it than an agency of government.

Sec 18. Cybersecurity Responsibilities And Authority

This would give the President authority to disconnect government or critical infrastructure systems in the event of an emergency. It would also grant authority for mapping systems, setting standards, monitoring performance, and other activities to protect and defend national-interest systems. It also allows the President to designate an agency or organization to be in charge during any cyber incident – presumably including Department of Defense agencies.

This has been controversial because of the "disconnect" provision. It isn't clear to me that there are situations that would be helped by a disconnect, although I can certainly imagine some that might be made worse by disconnection. I'm not sure that the current infrastructure would even allow disconnection! So, on balance, if it were left out I don't think it would matter, but it might make some people less nervous.

Most of the other parts of the section seem reasonable.

Sec 19. Quadrennial Cyber Review

Every four years there would need to be a review of cybersecurity posture, strategy, partnerships, threats, and so on. The Advisory Panel (Sec 3) would be involved. "The review shall include a comprehensive examination of the cyber strategy, force structure, modernization plans, infrastructure, budget plan, the Nation's ability to recover from a cyberemergency, and other elements of the cyber program and policies with a view toward determining and expressing the cyber strategy of the United States and establishing a revised cyber program for the next 4 years." Wow!

This is modeled after the Defense Department's review of the same name, I assume. It would be a tremendous amount of work, and might be a huge distraction. However, it also might help to highlight some of the shortfalls and dangers in a way that would be useful for policymakers.

One consideration from the DoD side: structuring reporting in this way tends to move planning from annual or biennial cycles to quadrennial or octennial cycles. In a fast-moving field such as cyber, this might well be counterproductive.

Sec 20. Joint Intelligence Threat Assessment

It states: "The Director of National Intelligence and the Secretary of Commerce shall submit to the Congress an annual assessment of, and report on, cybersecurity threats to and vulnerabilities of critical national information, communication, and data network infrastructure."

Well, that's reasonable. Hmm, where is DHS?

Sec 21. International Norms And Cybersecurity Deterrence Measures

The President is directed to work with foreign governments to increase engagement and cooperation in cybersecurity.   

We can hardly argue with that!

Sec 22. Federal Secure Products And Services Acquisitions Board

This would establish a board to set and review requirements for Federal acquisitions to ensure that cybersecurity standards are met.

My comments on section 6 hold here as well.

Sec 23. Definitions

Assorted definitions to interpret other parts of the bill.


S. 778 seems like a reasonable idea, although it isn't clear that enough responsibility is given to the position. Merging it with S. 773 might be reasonable, with many of the tasks that S. 773 currently delegates to the President instead delegated to the new position.

S. 773 is best where it encourages new development, reporting, education, and response. Unfortunately, some of the restrictions and mandates, especially Sections 6 and 7, make the bill more toxic than helpful.

The new funding required to carry everything out would be in the many hundreds of millions of dollars per year. Most of that is explicitly authorized in this legislation, but corresponding appropriation is not a certainty...and given the current economic climate, it is unlikely. Thus, there are some things contained in here that would end up as unfunded mandates on a few agencies (such as NIST) that are already laboring under a huge taskload with insufficient resources.

No mention is made of bolstering law enforcement at any level to help deal with cybersecurity issues. That is unfortunate, because it is one place where some immediate impact could definitely be made. However, given the way this will wend through committees, that is not unexpected. Commerce gets the bill first, so they get the direction.

DHS isn't mentioned anywhere. Again, that may be because of the path the bill will take through committees. However, I can't help but think it also has to do with the way that DHS has screwed up in this whole arena.

Overall, this bill evidences a great deal of careful thought and deep concern. There are many great ideas in here, as well as a few flawed ones. I have my fingers crossed that the rumored revision addresses the flaws and results in something that can get passed into law. Even a pared-down law consisting of sections 3, 5, 9, 10, 11, 12, 16 and 21 would have a lot of positive impact.