Posts in Policies & Law

Unsecured Economies, and Overly-secured Reports

The Report

Over the last few months, CERIAS faculty members Jackie Rees and Karthik Kannan have been busy analyzing data collected from IT executives around the world, and have been interviewing a variety of experts in cybercrime and corporate strategy. The results of their labors were published yesterday by the McAfee Corporation (a CERIAS Tier II partner) as the report Unsecured Economies: Protecting Vital Information.

The conclusions of the report are somewhat pessimistic about prospects for cyber security in the coming few years. The combination of economic pressures, weak efforts at law enforcement, international differences in perceptions of privacy and security, and the continuing challenges of providing secured computing are combining to place vast amounts of valuable intellectual property (IP) at risk. The report presents estimates that IP worth billions of dollars (US) was stolen or damaged last year, and we can only expect the losses to increase.

Additionally, the report details five general conclusions derived from the data:

  • The recession will put intellectual property at risk
  • There is considerable international variation in the commitment (management and resources) to protecting cyber assets
  • Intellectual property is now an "international currency" that is as much a target as actual currency
  • Employees steal intellectual property for financial gain and competitive advantage
  • Geopolitical aspects present differing risk profiles for information stored "offshore" from "home" countries.

None of these should be a big surprise to anyone who has been watching the field or listening to those of us who are working in it. What is interesting about the report is the presented magnitude and distribution of the issues. This is the first truly global study of these issues, and thus provides an important step forward in understanding their scope.

I will repeat here some of what I wrote for the conclusion of the report; I have been saying these same things for many years, and the report simply underscores the importance of this advice:

“Information security has transformed from simply ‘preventing bad things from happening’ into a fundamental business component. C-level executives must recognize this change. This includes viewing cybersecurity as a critical business enabler rather than as a simple cost center that can be trimmed without obvious impact on the corporate bottom line; not all of the impact will be immediately and directly noticeable. In some cases, the only impact of degraded cybersecurity will be going from ‘Doing okay’ to ‘Completely ruined’ with no warning before the change.

Cybersecurity fills multiple roles in a company, and all are important for organizational health.

  • First, cybersecurity provides positive control over resources that provide the company a competitive advantage: intellectual property, customer information, trends and projections, financial and personnel records, and so on. Poor security puts these resources at risk.
  • Second, good security provides executives with confidence that the data they are seeing is accurate and true, thus leading to sound decisions and appropriate compliance with regulation and policy.
  • Third, strong cybersecurity supports businesses taking new risks and entering new markets with confidence in their ability to respond appropriately to change.
  • And fourth, good cybersecurity is necessary to build and maintain a reputation for reliability and sound behavior, which in turn are necessary to attract and retain customers and partners.

This study clearly shows that some customers are unwilling to do business with entities they consider poorly secured. Given massive market failures, significant fraud and increasing threats of government oversight and regulation, companies with strong controls, transparent recordkeeping, agile infrastructures and sterling reputations are clearly at an advantage -- and strong cybersecurity is a fundamental component of all four. Executives who understand this will be able to employ cybersecurity as an organic element of company (and government) survival -- and growth.”

We are grateful to McAfee, Inc. for their support and assistance in putting this report together.

Getting the Report

Update: You can now download the report sans-registration from CERIAS.

The report is available at no charge, and the PDF can be downloaded from McAfee. Note that downloading the report from that site requires registration.

Some of you may be opposed to providing your contact information to obtain the report, especially as that information may be used in marketing. Personally, I believe that the registration should be optional. However, the McAfee corporation paid for the report, and they control the distribution.

As such, those of us at CERIAS will honor their decision.

However, I will observe that many other people object to these kinds of registration requirements (the NY Times is another notable example of a registration-required site). As a result, they have developed WWW applications, such as BugMeNot, which are freely available for others to use to bypass these requirements. Others respond to these requests by identifying company personnel from information on corporate sites and then using that information to register -- both to avoid giving out their own information and to add some noise to the data being collected.

None of us here at CERIAS are suggesting that you use one of the above-described methods. We do, however, encourage you to get the report, and to do so in an appropriate manner. We hope you will find it informative.

E-voting rears its head. Again.

Over the last few years, I have been involved in issues related to the use of computerization in voting. This has come about because of my concerns about computer security, privacy and reliability, and from my role as chair of the ACM U.S. Public Policy Committee (USACM). USACM has taken a strong position regarding the use of computers as voting stations and voting over the Internet.

Two recent items address the issue of voting over the Internet.

The first is a study released by NIST about the threats posed by internet voting. This is a well-written document describing problems that would be encountered with any online voting system. Their conclusion is that, for public elections, distribution of blank ballots (on paper) is the only reasonable improvement that we can make with current technology.

The second is a note from my colleague, Yvo Desmedt, one of the senior leaders in information security. He has asked that I circulate this to a wider audience:

  IACR (the International Association for Cryptologic Research) has changed its bylaws to allow e-voting over the internet to elect its board members and for other purposes. IACR will likely move towards internet e-voting. The IACR Board of Directors subcommittee on internet e-voting has published a list of requirements for such a system at: http://www.iacr.org/elections/eVoting/requirements.html. This is evidently a first step, and the question remains whether the system the IACR will choose will be easy to hack or not. So, security experts should follow this development.

The problems that need to be addressed by any voting technology are mostly obvious: impersonation of the voter, impersonation of the voting system, disclosure of the ballot, multiple voting, loss of votes, denial of access, and a number of other issues. The problems are complicated by the requirements of a fair voting system, one of which is vote deniability—that the voter is able to deny (or claim) that her/his vote was cast a particular way. This is important to prevent vote buying, or more importantly, retribution against voters who do not cast ballots in a particular way. It isn’t difficult to find stories where voters have been beaten or killed because of how they voted (or were presumed to have intended to vote). Thus, the tried-and-true concept of providing a receipt (as with ATMs) is not a workable solution.

My intent in making this post isn’t to discuss all the issues behind e-voting—that is well beyond the scope of a single posting, and is covered well many other places. My main goal is to give some wider circulation to Yvo’s statement. However, in light of the recent problem with certificate issuance, it is also worth noting that schemes requiring encryption to secure voting may have hidden vulnerabilities that could lead to compromise and/or failures in the future.   

In the end, it comes down to a tradeoff of risk/reward (as do all security choices): can we accurately quantify the risks with a particular approach, and are we willing to assume them? Do we have appropriate mechanisms to eliminate, mitigate or shift the risks? Are we willing to accept the risks associated with adopting a particular form of e-voting in return for the potential benefit of better access for remote voters? Or are accurate (fair) results all the time more important than complete results?

Note that one objection often raised to USACM as we argue these points is “There is no evidence there has ever been a failure or tampering with a vote.” In addition to being incorrect (there are numerous cases of computer-based voting failures), this misses two key issues:

     
  • How do you tell if there is tampering if there are no safeguards that definitively disclose such tampering? That you have not detected something does not mean it has not occurred.
  • The past does not predict the future in such a case. That no failure (accidental or otherwise) has occurred does not mean it will not occur in the future. Worse, a string of occurrences without a failure may help cloud a future discovered discrepancy!

In the case of IACR, it is obvious why this group of cryptography professionals would wish to adopt techniques that show confidence in cryptography. However, the example they set could be very damaging for other groups—and populations—if their confidence is misplaced. Given the long history of spectacular failures in cryptography—often going unannounced while being exploited—it is somewhat surprising that the IACR is not more explicit in their statement about the risks of technological failures.

 

Follow-up on the CA Hack

Yesterday, I posted a long entry on the recent news about how some researchers obtained a “rogue” certificate from one of the Internet Certificate Authorities. There are some points I missed in the original post that should be noted.

     
  • The authors of the exploit have a very readable, interesting description of what they did and why it worked. I should have included a link to it in the original posting, but forgot to edit it in. The interested reader should definitely see that article online, including the animations.
  • There are other ways this attack can be defeated, certainly, but they are stop-gap measures. I didn’t explain them because I don’t view them as other than quick patches. However, if you are forced to continue to use MD5 and you issue certificates, then it is important to randomize the certificate serial number that is issued, and to insert a random delay interval in the validity time field. Both will introduce enough random bits so as to make this particular attack against the CA infeasible given current technology.
  • I suggested that vendors use another hash algorithm, and gave SHA-1 as an example. SHA-2 would be better, as SHA-1 has been shown to have a theoretical weakness similar to MD5, although it has proven more resistant to attack to date. Use of SHA-1 could possibly result in a similar problem within a few years (or, as suggested in the final part of my post, within a few weeks if a breakthrough occurs). However, use of SHA-1 would be preferable to MD5!
  • MD5 is not “broken” in a complete way. There are several properties of a message digest that are valuable, including collision resistance: that it is infeasible to end up with two inputs giving the same hash value. To the best of my knowledge, MD5 has only been shown to be susceptible to “weak collisions”—instances where the attacker can pick one or both inputs so as to produce identical hash values. The stronger form of preimage resistance, where there is an arbitrary hash output H and an attacker cannot form an input that also produces H, still holds for MD5. Thus, applications that depend on this property (including many signing applications and integrity tools) are apparently still okay.
  • One of our recent PhD grads, William Speirs, worked on defining hash functions for his PhD dissertation. His dissertation, Dynamic Cryptographic Hash Functions, is available online for those interested in seeing it.
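The serial-number randomization mentioned above can be sketched in a few lines. This is a minimal illustration, not CA software: the point is only that a chosen-prefix collision must be computed before the CA signs, so an attacker who cannot predict the serial number cannot pre-compute a colliding pair of certificates.

```python
import secrets

def random_serial(bits: int = 64) -> int:
    """Return an unpredictable certificate serial number in [0, 2**bits).

    Unpredictable bits in the to-be-signed data prevent an attacker
    from pre-computing a colliding pair of certificates.
    """
    return secrets.randbits(bits)

serial = random_serial()
assert 0 <= serial < 2**64
```

The `secrets` module is used rather than `random` because the value must be unpredictable to an adversary, not merely statistically uniform.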

I want to reiterate that there are more fundamental issues of trust involved than what hash function is used. The whole nature of certificates is based around how much we trust the certificate authorities that issue the certificates, and the correctness of the software that verifies those certificates and then shows us the results. If an authority is careless or rogue, then the certificates may be technically valid but not match our expectations for validity. If our local software (such as a WWW browser) incorrectly validates a certificate, or presents the results incorrectly, we may trust a certificate we shouldn’t. Even such mundane issues as having one’s system at the correct time/date can be important: the authors of this particular hack demonstrated that by backdating their rogue certificate.

My continuing message to the community is to not lose sight of those things we assume. Sometimes, changes in the world around us render those assumptions invalid, and everything built on them becomes open to question. If we forget those assumptions—and our chains of trust built on them—we will continue to be surprised by the outcomes.

That is perhaps fitting to state (again) on the last day of the year. Let me observe that as human beings we sometimes take things for granted in our lives. Spend a few moments today (and frequently, thereafter) to pause and think about the things in your life that you may be taking for granted: family, friends, love, health, and the wonder of the world around you. Then as is your wont, celebrate what you have.

Best wishes for a happy, prosperous, safe—and secure—2009.



[12/31/08 Addition]: Steve Bellovin has noted that transition to the SHA-2 hash algorithm in certificates (and other uses) would not be simple or quick. He has written a paper describing the difficulties and that paper is online.

 

A Serious Threat to Online Trust

There are several news stories now appearing (e.g., Security News) about a serious flaw in how certificates used in online authentication are validated. Ed Felten gives a nice summary of how this affects online WWW site authentication in his Freedom to Tinker blog posting. Brian Krebs also has his usual readable coverage of the problem in his Washington Post article. Steve Bellovin has some interesting commentary, too, about the legal climate.

Is there cause to be concerned? Yes, but not necessarily about what is being covered in the media. There are other lessons to be learned from this.

Short tutorial

First, for the non-geek reader, I’ll briefly explain certificates.

Think about how, online, I can assure myself that the party at the other end of a link is really who they claim to be. What proof can they offer, considering that I don’t have a direct link? Remember that an attacker can send any bits down the wire to me and may have access to faster computers than I do.

I can’t base my decision on how the WWW pages appear, or embedded images. Phishing, for instance, succeeds because the phishers set up sites with names and graphics that look like the real banks and merchants, and users trust the visual appearance. This is a standard difficulty for people—understanding the difference between identity (claiming who I am) and authentication (proving who I am).

In the physical world, we do this by using identity tokens that are issued by trusted third parties. Driver’s licenses and passports are two of the most common examples. To get one, we need to produce sufficient proof of identity to a third party to meet its standards of proof. Then, the third party issues a document that is very difficult to forge (almost nothing constructed is impossible to forge or duplicate—but some things require so much time and expenditure it isn’t worthwhile). Because the criteria for proof of identity and strength of construction of the document are known, various other parties will accept the document as “proof” of identity. Of course, other problems occur that I’m not going to address—this USACM whitepaper (of which I was principal author) touches on many of them.

Now, in the online world we cannot issue or see physical documents. Instead, we use certificates. We do this by putting together an electronic document that gives the information we want some entity to certify as true about us. The format of this certificate is generally fixed by standards, the most common one being the X.509 suite. This document is sent to an organization known as a Certificate Authority (CA), usually along with a fee. The certificate authority is presumably well-known, and performs a check (to their own standards) that the information in the document is correct, and that it has the right form. The CA then calculates a digital hash value of the data, and creates a digital signature of that hash value. This is then added to the certificate and sent back to the user. This is the equivalent of putting a signature on a license and then sealing it in plastic. Any alteration of the data will change the digital hash, and a third party will find that the new hash and the hash value signed with the key of the CA don’t match. The reason this works is that the hash function and encryption algorithm used are presumed to be so computationally difficult to reverse that forgery is basically not possible.
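The hash-then-sign property described above can be sketched with Python's standard hashlib. This is a toy illustration (a real CA signs the digest with its private key using an X.509 library; the certificate strings here are made up), but it shows why any alteration of the certified data is detectable:

```python
import hashlib

def digest(cert_data: bytes) -> str:
    """Hash the certificate contents; the CA signs this digest value."""
    return hashlib.sha256(cert_data).hexdigest()

# Hypothetical certificate contents, before and after tampering.
original = b"CN=www.cerias.purdue.edu, O=Purdue University"
tampered = b"CN=www.evil.example.com, O=Purdue University"

# Any alteration of the data changes the digest, so the CA's signature
# over the original digest no longer matches the tampered document.
assert digest(original) != digest(tampered)
```

This detection only works as long as it is infeasible to craft two different documents with the same digest, which is exactly the property MD5 lost.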

As an example of a certificate, if you visit “https://www.cerias.purdue.edu” you can click on the little padlock icon that appears somewhere in the browser window frame (this is browser dependent) to view details of the CERIAS SSL certificate.

You can get more details on all this by reading the referenced Wikipedia pages, and by reading chapters 5 & 7 in Web Security, Privacy and Commerce.

Back to the hack

In summary, some CAs have been negligent about updating their certificate signing mechanisms in the wake of news that MD5 is weak, published back in 2004. The result is that malicious parties can generate and obtain a certificate “authenticating” them as someone else. What makes it worse is that the root certificates of most of these CAs are “built in” to browser and application trust lists to simplify look-up of new certificates. Thus, most people using standard WWW browsers can be fooled into thinking they have connected to real, valid sites—even though they are connecting to rogue sites.

The approach is simple enough: a party constructs two certificates. One is for the false identity she wishes to claim, and the other is real. She crafts the contents of the certificate so that the MD5 hash of the two, in canonical format, is the same. She submits the real identity certificate to the authority, which verifies her bona fides, and returns the certificate with the MD5 hash signed with the CA private key. Our protagonist then copies that signature to the false certificate, which has the same MD5 hash value and thus the same digital signature, and proceeds with her impersonation!
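Why the copied signature works can be made explicit with a simplified verifier. (This is a sketch: a real verifier recovers the signed digest from the signature using the CA's public key; here the signed digest is supplied directly.) Verification compares only hash values, never the raw certificate bytes:

```python
import hashlib

def md5_digest(cert: bytes) -> str:
    return hashlib.md5(cert).hexdigest()

def verify(cert: bytes, signed_digest: str) -> bool:
    """Compare the hash of the presented certificate against the digest
    the CA signed. If two different certificates share an MD5 value,
    the CA's signature on one is accepted for the other."""
    return md5_digest(cert) == signed_digest

real = b"benign certificate contents"     # what the CA actually examined
signed = md5_digest(real)                 # the value the CA endorses

assert verify(real, signed)
# A crafted rogue certificate with the same MD5 value would also pass
# verify(), even though the CA never saw it.
```

This is why randomizing the serial number and validity fields helps: it denies the attacker control over the exact bytes the CA will sign.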

What makes this worse is that the false key she crafts is for a secondary certificate authority. She can publish this in appropriate places, and is now able to mint as many false keys as she wishes—and they will all have signatures that verify in the chain of trust back to the issuer! She can even issue these new certificates using a stronger hash algorithm than MD5!
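The amplification through a rogue intermediate can be sketched as a toy chain walk (hypothetical names; a real X.509 path validator also checks signatures, validity dates, and constraints at every step):

```python
# Toy model of certificate-chain validation: each certificate names its
# issuer, and validation succeeds if the chain reaches a trusted root.
TRUSTED_ROOTS = {"RootCA"}

# Issuer relationships, child -> parent (hypothetical names).
ISSUED_BY = {
    "www.example.com": "RogueIntermediate",  # minted freely by the attacker
    "RogueIntermediate": "RootCA",           # carries the forged signature
}

def chain_is_trusted(name: str) -> bool:
    """Walk issuer links until a trusted root is reached (or a dead end)."""
    while name not in TRUSTED_ROOTS:
        if name not in ISSUED_BY:
            return False
        name = ISSUED_BY[name]
    return True

# Every certificate below the rogue intermediate "verifies" to the root.
assert chain_is_trusted("www.example.com")
```

One forged intermediate signature is thus leveraged into an unlimited supply of end-entity certificates, all of which chain back to a root the browser already trusts.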

What makes this even worse is that it has been known for years that MD5 is weak, yet some CAs have continued to use it! Particularly unfortunate is the realization that Lenstra, Wang and de Weger described how this could be done back in 2005. Methinks that may be grounds for some negligence lawsuits if anyone gets really burned by this….

And adding to the complexity of all this is the issue of certificates in use for other purposes. For example, certificates are used with encrypted S/MIME email to digitally sign messages. Certificates are used to sign ActiveX controls for Microsoft software. Certificates are used to verify the information on many identity cards, including (I believe) government-issued Common Access Cards (CAC). Certificates also provide identification for secured instant messaging sessions (e.g., iChat). There may be many other sensitive uses because certificates are a “known” mechanism. Cloud computing services, software updates, and more may be based on these same assumptions. Some of these services may accept and/or use certificates issued by these deficient CAs.

Fixes

Fixing this is not trivial. Certainly, all CAs need to start issuing certificates based on other message digests, such as SHA-1. However, this will take time and effort, and may not take effect before this problem can be exploited by attackers. Responsible vendors will cease to issue certificates until they get this fixed, but that has an economic impact some may not wish to incur.

We can try to educate end-users about this, but the problem is so complicated by technical details that the average person won’t know how to actually make a determination about valid certificates. It might even cause more harm by leading people to distrust valid certificates by mistake!

It is not possible to simply say that all existing applications will no longer accept certificates rooted at those CAs, or will not accept certificates based on MD5: there are too many extant, valid certificates in place to do that. Eventually, those certificates will expire, and be replaced. That will eventually take care of the problem—perhaps within the space of the next 18 months or so (most certificates are issued for only a year at a time, in part for reasons such as this).

Vendors of applications, and especially WWW browsers, need to give careful thought about updates to their software to flag MD5-based certificates as deserving of special attention. This may or may not be a worthwhile approach, for the reason given above: even with a warning, too few people will be able to know what to do.

Bigger issue

We base a huge amount of trust on certificates and encryption. History has shown how easy it is to get implementations and details wrong. History has also shown how quickly things can be destabilized with advances in technology.

In particular, too many people and organizations take for granted the assumptions on which this vast certificate system is based. For instance, we assume that the hash/digest functions in use are computationally difficult to reverse or cause collisions. We also assume that certain mathematical functions underlying public/private key encryption are too difficult to reverse or “brute force.” However, all it takes is some new insight or analysis, or maybe new, affordable technology (e.g., practical quantum computing, or massively parallel computing) to violate those assumptions.

If you look at the way our systems are constructed, too little thought is given to what happens to existing infrastructure when something breaks. Designs can include compensating and recovery code, but doing so requires some cost in space or time. However, all too often people are willing to avoid the investment by putting off the danger to “if and when that happens.” Thus, we get instances such as the Y2K problems and the issues here with potentially rogue CAs.

(I’ll note as an aside, that when I designed the original version of Tripwire and what became the Signacert product, I specifically included simultaneous use of several different message digest functions in different families for this very reason. I knew it was a matter of time before one or two were broken. I still believe that it is beyond reason to find files that will match multiple, different algorithms simultaneously.)
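That design choice from Tripwire can be sketched simply: compute digests from several different families, so a forgery must collide under every algorithm simultaneously to go unnoticed. (A minimal sketch using hashlib; the algorithm list is illustrative.)

```python
import hashlib

def multi_digest(data: bytes,
                 algorithms=("md5", "sha1", "sha256")) -> dict:
    """Compute digests from several hash families. A tampered file must
    collide under *every* algorithm at once to match its baseline."""
    return {alg: hashlib.new(alg, data).hexdigest() for alg in algorithms}

baseline = multi_digest(b"file contents at install time")
later = multi_digest(b"file contents at install time")
assert baseline == later          # an unchanged file matches everywhere

modified = multi_digest(b"file contents after tampering")
assert baseline != modified       # any change shows up in the digests
```

Breaking one algorithm (as happened to MD5) then degrades, rather than destroys, the integrity check.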

Another issue is the whole question of who we trust, and for what. As noted in the USACM whitepaper, authentication is always relative to a third party. How much do we trust those third parties? How much trust have we invested in the companies and agencies issuing certificates? Are they properly verifying identities? How good is their internal security? How do we know, and how much is at risk from our trust in those entities?

Let me leave you with a final thought. How do we know that this problem has not already been quietly exploited? The basic concept has been in the open literature for years. The general nature of this attack on certificates has been known for well over a decade, if not two. Given the technical and infrastructure resources available to national agencies and organized criminals, and given the motivation to use this hack selectively and quietly, how can we know that it is not already being used?


[Added 12/31/2008]: A follow-up post to this one is available in the blog.

 

Rethinking computing insanity, practice and research

[A portion of this essay appeared in the October 2008 issue of Information Security magazine. My thanks to Dave Farber for a conversation that spurred me to post this expanded version.]

[Small typos corrected in April 2010.]

I’d like to repeat (portions of) a theme I have been speaking about for over a decade. I’ll start by taking a long view of computing.

Fifty years ago, IBM introduced the first all-transistor computer (the 7000 series). Transistors were approximately $60 apiece (in current dollars). Secondary storage was about 10 cents per byte (also in current dollars) and had a density of approximately 2000 bits per cubic inch. According to Wikipedia, a working IBM 7090 system with a full 32K of memory (the capacity of the machine) cost about $3,000,000 to purchase—over $21,000,000 in current dollars. Software, peripherals, and maintenance all cost more. Rental of a system (maintenance included) could be well over $500,000 per month (in 1958 dollars). Other vendors soon brought their own transistorized systems to market, at similar costs.

These early computing systems came without an operating system. However, the costs of having such a system sit idle between jobs (and during I/O) led the field to develop operating systems that supported sharing of hardware to maximize utilization. It also led to the development of user accounts for cost accounting. And all of these soon led to development of security features to ensure that the sharing didn’t go too far, and that accounting was not disabled or corrupted. As the hardware evolved and became more capable, the software also evolved and took on new features.

Costs and capabilities of computing hardware have changed by a factor of tens of millions in five decades. Currently, transistors cost less than 1/7800 of a cent apiece in modern CPU chips (Intel Itanium). Assuming I didn’t drop a decimal place, that is a drop in price by 7 orders of magnitude.  Ed Lazowska made a presentation a few years ago where he indicated that the number of grains of rice harvested worldwide in 2004 was ten quintillion—10 raised to the 18th power. But in 2004, there were also ten quintillion transistors manufactured, and that number has increased faster than the rice harvest ever since. We have more transistors being produced and fielded each year than all the grains of rice harvested in all the countries of the world. Isn’t that amazing?

Storage also changed drastically. We have gone from core memory to semiconductor memory. And in secondary storage we have gone from drum memory to disks to SSDs. If we look at consumer disk storage, it is now common to get storage density of better than 500 GB per cubic inch at a cost of less than $0.20 per GB (including enclosure and controller)—a price drop of nearly 8 orders of magnitude. Of course, weight, size, speed, noise, heat, power, and other factors have all also undergone major changes. To think of it another way, that same presentation by Professor Lazowska noted that the computerized greeting cards you can buy at the store to record and play back a message to music have more computing power and memory in them than some of those multi-million-dollar computers of the 1950s, all for under $10.
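The orders-of-magnitude claims in the last two paragraphs can be checked with quick arithmetic, using the figures as quoted in the text:

```python
import math

# Transistors: roughly $60 apiece in 1958 dollars vs. about 1/7800 of a
# cent apiece in a modern CPU.
then_cost = 60.0                   # dollars per transistor, then
now_cost = 0.01 / 7800             # dollars per transistor, now

# log10 of the price ratio gives the orders of magnitude: about 7.7.
transistor_drop = math.log10(then_cost / now_cost)
assert 7 < transistor_drop < 8

# Storage: roughly 10 cents per byte then vs. about $0.20 per GB now.
then_storage = 0.10                # dollars per byte, then
now_storage = 0.20 / 1e9           # dollars per byte, now

# About 8.7 orders of magnitude, matching "nearly 8" in the text.
storage_drop = math.log10(then_storage / now_storage)
assert 8 < storage_drop < 9
```

Both results are consistent with the "7 orders of magnitude" and "nearly 8 orders of magnitude" figures cited above.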

Yet, despite these incredible transformations, the operating systems, databases, languages, and more that we use are still basically the designs we came up with in the 1960s to make the best use of limited, expensive, shared equipment. More to the theme of this blog, overall information security is almost certainly worse now than it was in the 1960s. We’re still suffering from problems known for decades, and systems are still being built with intrinsic weaknesses, yet now we have more to lose with more valuable information coming online every week.

Why have we failed to make appreciable progress with the software? In part, it is because we’ve been busy trying to advance on every front. Partially, it is because it is simpler to replace the underlying hardware with something faster, thus getting a visible performance gain. This helps mask the ongoing lack of quality and progression to really new ideas. As well, the speed with which the field of computing (development and application) moves is incredible, and few have the time or inclination to step back and re-examine first principles. This includes old habits such as the sense of importance in making code “small” even to the point of leaving out internal consistency checks and error handling. (Y2K was not a one-time fluke—it’s an instance of an institutional bad habit.)

Another such habit is that of trying to build every system to have the capability to perform every task. There is a general lack of awareness that security needs are different for different applications and environments; instead, people seek uniformity of OS, hardware architecture, programming languages and beyond, all with maximal flexibility and capacity. Ostensibly, this uniformity is to reduce purchase, training, and maintenance costs, but it fails to take into account risks and operational needs. Such attitudes are clearly nonsensical when applied to almost any other area of technology, so it is perplexing that they are still rampant in IT.

For instance, imagine buying a single model of commercial speedboat and assuming it will be adequate for bass fishing, auto ferries, arctic icebreakers, Coast Guard rescues, oil tankers, and deep water naval interdiction—so long as we add on a few aftermarket items and enable a few options. Fundamentally, we understand that this is untenable and that we need to architect a vessel from the keel upwards to tailor it for specific needs, and to harden it against specific dangers. Why cannot we see the same is true for computing? Why do we not understand that the commercial platform used at home to store Aunt Bee’s pie recipes is NOT equally suitable for weapons control, health care records management, real-time utility management, storage of financial transactions, and more? Trying to support everything in one system results in huge, unwieldy software on incredibly complex hardware chips, all requiring dozens of external packages to attempt to shore up the inherent problems introduced by the complexity. Meanwhile, we require more complex hardware to support all the software, and this drives complexity, cost and power issues.

The situation is unlikely to improve until we, as a society, start valuing good security and quality over the lifetime of our IT products. We need to design systems to enforce behavior within each specific configuration, not continually tinker with general systems to stop each new threat. Firewalls, IDS, antivirus, DLP and even virtual machine “must-have” products are used because the underlying systems aren’t trustworthy—as we keep discovering with increasing pain. A better approach would be to determine exactly what we want supported in each environment, build systems to those more minimal specifications only, and then ensure they are not used for anything beyond those limitations. By having a defined, crafted set of applications we want to run, it will be easier to deny execution to anything we don’t want; To use some current terminology, that’s “whitelisting” as opposed to “blacklisting.” This approach to design is also craftsmanship—using the right tools for each task at hand, as opposed to treating all problems the same because all we have is a single tool, no matter how good that tool may be. After all, you may have the finest quality multitool money can buy, with dozens of blades and screwdrivers and pliers. But you would never dream of building a house (or a government agency) using that multitool. Sure, it does a lot of things passably, but it is far from ideal for expertly doing most complex tasks.

Managers will make the argument that using a single, standard component means it can be produced, acquired and operated more cheaply than if there are many different versions. That is often correct insofar as direct costs are concerned. However, it fails to account for secondary costs, such as the cost of total failure and exposure, and the cost of “bridge” and “add-on” components needed to make items suitable. Smaller and more directed systems need to be patched and upgraded far less often than large, all-inclusive systems because they have less to go wrong and don’t change as often. There is also a defensive benefit to the resulting diversity: attackers need to work harder to penetrate a given system because they don’t know what is running. Taken to an extreme, having a single solution also reduces or eliminates real innovation as there is no incentive for radical new approaches; with a single platform, the only viable approach is to make small, incremental changes built to the common format. This introduces a hidden burden on progress that is well understood in historical terms—radical new improvements seldom result from staying with the masses in the mainstream.

Therein lies the challenge, for researchers and policy-makers. The current cyber security landscape is a major battlefield. We are under constant attack from criminals, vandals, and professional agents of governments. There is such an urgent, large-scale need to simply bring current systems up to some bare minimum that it could soak up far more resources than we have to throw at the problems. The result is that there is a huge sense of urgency to find ways to “fix” the current infrastructure. Not only is this where the bulk of the resources is going, but this flow of resources and attention also fixes the focus of our research establishment on these issues. When this happens, there is great pressure to direct research towards the current environment, and towards projects with tangible results. Program managers are encouraged to go this way because they want to show they are good stewards of the public trust by helping solve major problems. CIOs and CTOs are less willing to try outlandish ideas, and cringe at even the notion of replacing their current infrastructure, broken as it may be. So, researchers go where the money is—tried and true, incremental, “safe” research.

We have crippled our research community as a result. There are too few resources devoted to far-ranging ideas that may not have immediate results. Even if the program managers encourage vision, review panels are quick to quash it. The recent history of DARPA is one that has shifted towards immediate results from industry and away from vision, at least in computing. NSF, DOE, NIST and other agencies have also shortened their horizons, despite claims to the contrary. Recommendations for action (including the recent CSIS Commission report to the President) continue this by posing the problem as how to secure the current infrastructure rather than asking how we can build and maintain a trustable infrastructure to replace what is currently there.

Some of us see how knowledge of the past combined with future research can help us have more secure systems. The challenge continues to be convincing enough people that “cheap” is not the same as “best,” and that we can afford to do better. Let’s see some real innovation in building and deploying new systems, languages, and even networks. After all, we no longer need to fit in 32K of memory on a $21 million computer. Let’s stop optimizing the wrong things, and start focusing on discovering and building the right solutions to problems rather than continuing to try to answer the same tired (and wrong) questions. We need a major sustained effort in research into new operating systems and architectures, new software engineering methods, new programming languages and systems, and more, some with a (nearly) clean-slate starting point. Small failures should be encouraged, because they indicate people are trying risky ideas. Then we need a sustained effort to transition good ideas into practice.

I’ll conclude with a quote that many people attribute to Albert Einstein, but I have seen multiple citations to its use by John Dryden in the 1600s in his play “The Spanish Friar”:

  “Insanity: doing the same thing over and over again expecting different results.”

What we have been doing in cyber security has been insane. It is past time to do something different.

[Added 12/17: I was reminded that I made a post last year that touches on some of the same themes; it is here.]

Failures in the Supply Chain

[This is derived from a posting of mine to Dave Farber’s Interesting People list.]

There is an article in the October Businessweek that describes the problem of counterfeit electronic components being purchased and used in critical Defense-related products.

This is not a new threat. But first let’s reflect on the past.

Historically, the military set a number of standards (MIL-SPEC) to ensure that materials they obtained were of an appropriate level of quality, as well as interoperable with other items. The standards helped ensure a consistency for everything from food to boots to tanks to software, as well as ensuring performance standards (quality).

The standards process was not without problems, however. Among issues often mentioned were:

  • Standards were sometimes not revised often enough to reflect changes in technology. The result was that the military often had to acquire and use items that were generations behind the commercial marketplace (esp. in software/computers);
  • Knowing and complying with so many standards often caused companies considerable extra time and effort in supplying items, thus raising their cost well above comparable commercial equivalents;
  • Incompatible standards across military agencies and services, especially when compared with commercial items used by civilian agencies, led to waste, increased cost, and lack of flexibility in implementation;
  • Imposition of rigid standards cut down on innovation and rapid development/acquisition/deployment cycles;
  • The rigidity and complexity of the standards effectively shut out new vendors, especially small vendors, because they could not match the standards-compliance efforts of large, entrenched defense vendors.

Thus, in June of 1994, William Perry, then the Secretary of Defense, issued a set of orders that effectively provided a pathway to move away from those standards and adopt commercial standards and performance goals in their place. (cf. the Wikipedia article on MIL-SPEC). One of the rationales expressed then, especially as regarded computing software and hardware, was that the competition of the marketplace would lead to better quality products. (Ironically, the lack of vendor-neutral standards then led to a situation where we have large monocultures of software/hardware platforms throughout government, and the resultant lack of meaningful competition has almost certainly not served the goals of better quality and security.)

In some cases, the elimination of standards has indeed helped keep down costs and improve innovation. I have been told, anecdotally, that stealth technology might not have been fielded had those aircraft been forced within the old MIL-SPEC regime.

In the name of cost and speed, many MIL-SPEC standards seem to have been abandoned in favor of COTS products whenever possible, without proper risk analysis. Only recently have policy-makers begun to realize some of the far-reaching problems that have resulted from the rush to abandon those standards.

As the Businessweek article details, counterfeit items and items with falsified (or poorly conducted) quality control have been finding their way into critical systems, including avionics and weapons control. The current nature of development means that many of those systems are assembled from components and subsystems supplied by other contractors, so a fully-reputable supplier may end up supplying a faulty system because of a component supplied by a vendor with which they have no direct relationship. One notable example of this was the “Cisco Raider” effort from a couple of years ago where counterfeit Cisco router boards were being sold in the US.

As noted in several press articles (such as the ones linked above) there is considerable price motive to supply less capable, “gray market” goods in large bids. The middlemen either do not know or care where the parts come from or where they are being used—they simply know they are making money. The problem is certainly not limited to Defense-related parts, of course. Fake “Rolex” watches that don’t keep time, fake designer shoes that fall apart in the rain, and fake drugs that either do nothing or actually cause harm are also part of the “gray market.” Adulteration of items or use of prohibited materials is yet another aspect of the problem: think “lead paint” and “melamine” for examples. Of course, this isn’t a US-only problem; people around the world are victimized by gray-market, adulterated and counterfeit goods.

These incidents actually illustrate some of the unanticipated future effects of abandoning strong standards. One of the principal values of the MIL-SPEC standards was that they established a strict chain of accountability for products. I suspect that little thought has been given by policy-makers to the fact that there is considerable flow of items across borders from countries where manufacturing expertise and enforcement of both IP laws and consumer-protection statutes may not be very stringent. Buying goods from countries where IP violations are rampant (if there is little fear over copying DVDs, then there is little fear over stamping locally-produced items as “Cisco”), and where bribes are commonplace, is not a good strategy for uniform quality.

Of course, there are even more problems than simply quality. Not every country and group has the same political and social goals as we do in the US (or any other country—this is a general argument). As such, if they are in a position to produce and provide items that may be integrated into our defense systems or critical infrastructure, it may be in their interests to produce faulty goods—or carefully doctored goods. Software with hidden “features” or control components with hidden states could result in catastrophe. That isn’t fear-mongering—we know of cases where this was done, such as to the Soviets in the 1980s. Even if the host country isn’t subtly altering the components, it may not have the resources to protect the items being produced from alteration by third parties. After all, if the labor is cheaper in country X, then it will also be cheaper to bribe the technicians and workers to make changes to what they are producing.

The solution is to go back to setting high standards, require authentication of the supply chain, and perform better evaluation of random samples. Unfortunately, this is expensive, and we’re not in a state nationally where extra expense (except to line the pockets of Big Oil and Banking) is well tolerated by government. Furthermore, this alters the model where many small vendors acting as middlemen are able to get a “piece of the action.” Their complaints to elected representatives who may not understand the technical complexities add even further pressure against change.

In some cases, the risk posed in acquisition of items may warrant subsidizing the re-establishment of some manufacturing domestically (e.g., chip fabs). This doesn’t need to be across the board, but it does require judicious risk analysis to determine where critical points are—or will be in the future. We must realize that the rapid changes in technology may introduce new patterns of production and acquisition that we should plan for now. For instance, once elements of nanotechnology become security-critical, we need to ensure that we have sufficient sources of controlled, quality production and testing.

I’m not going to hold my breath over change, however. Some of us have been complaining about issues such as this for decades. The usual response is that we are making a big deal out of “rare events” or are displaying xenophobia. The sheer expense frightens off many from even giving it more than a cursory thought. I know I have been dismissed as an “over-imaginative academic” more times than I can count when I point out the weaknesses.

One of the factors that allegedly led to the decline of the Roman empire was the use of lead in pipes, and lead salts to make cheap wine more palatable for the masses. The Romans knew there was a health problem associated with lead, but the vendors saw more profit from using it.

Once we have sufficiently poisoned our own infrastructure to save money and make the masses happier, how long do we last?

[If you are interested in being notified of new entries by spaf on cyber and national security policy issues, you can either subscribe to the RSS feed for this site, or subscribe to the notification list.]

 

Presidential Politics

If you are in the United States, it has been nigh-on impossible to watch TV, read a newspaper, follow a blog, or (in some states) get your paper mail without something about the upcoming election being present. Some of this has been educational, but a huge amount of it has been negative, vague, and often misleading. That’s U.S. politics, unfortunately—the majority of voters don’t really bother to educate themselves about the issues and the media does an uneven job of reporting the truth. For many voters, it comes down to only one or two issues they care passionately about, and they vote for a candidate (or against one) on those simple issues. For instance, there are many voters who will base their votes solely on a candidate’s perceived position on gun control, access to legal abortions, tax policy, or other single issues without considering the candidates’ full range of positions. (And, as I note below, most of these single issues aren’t really under the control of the President no matter who is elected.)

Of course, the US political system tends to reinforce this binary choice procedure, as we have long had only two really major parties. Parliamentary systems seem to encourage more parties, although even then there appears to be only two major ones, often oriented around the same approximate social/political poles: a conservative party, and a liberal (labor) party.

So, in the U.S. we have candidates from both major parties (and many minor ones) campaigning—explaining their positions, offering their plans for when they are in office, and trying to instill voter confidence and trust. (And too often, offering innuendo, misquotes and outright untruths about their opponents.)

What none of them say, and the media doesn’t either, is that very few of those promises can be counted on to be kept. And in large part, that is the nature of government.

The President has a limited set of powers under the Constitution. He (or she) is responsible for the execution of the laws of the United States. The President is the Commander-in-Chief of all the armed forces and is responsible for commanding them in defense of the country and upholding the law (including treaties). The President is the chief executive agent of all the various Cabinet agencies, and of a number of offices and commissions. The President appoints a large number of officials (including judges and ambassadors), but doesn’t have the power to remove many of them.

Most importantly, the President does not make new laws. Laws are passed by Congress, usually with the assent of the President, although a 2/3 majority of both houses of Congress may pass laws to which the President objects. The President is then responsible for ensuring that those laws are carried out, with recourse to the Courts if there are questions. If the President fails to enforce the laws, Congress may take some punitive actions, or even impeach the President…if they have the political will.

So, back to the candidates. If you listen to their speeches, they offer to change tax law, spend more on energy issues, change health care, and address a number of other important domestic issues. What they don’t point out, however, is that they will have no authority as President to do most of those things! Instead, Congress will need to pass authorizing legislation that is signed by the President. The President can certainly propose that Congress enact those changes, but Congress needs to craft and pass legislation that enables the President to act, that allocates necessary funds, and that also creates/removes administrative structures that may be involved. This legislation can include whatever other items Congress adds to the bill, including items that may be completely unrelated to the main topic. The President then must decide whether to sign the bill and act to implement its provisions.

So, the most a new President can do is to propose legislation to embody his/her campaign promises, and to work for its passage. What usually happens is that the size of the win in the election serves as a political measure of how much the population is aligned with the new President’s positions, and this can help get a particular agenda passed…or not. Of critical importance is also the issue of whether one or both houses of Congress are controlled by the same party as the new President, and by what margin.

Thus, there should probably be more attention paid to the candidates running for Congress and their particular positions on important issues. In many venues, however, the majority of the attention is focused on the Presidential contest. Many states are also dealing with contentious state initiatives, tight governor races, and other local issues that help further obscure the Congressional races.

Now, how does this apply to cybersecurity, the ostensible topic of this blog? Or education? Or privacy? Or other topics we focus on here?

Well, as I will address in my next posting, the two main Presidential candidates have made some comments on cyber security, but I have not been able to find any coverage of any current candidate for Congress who has mentioned it. It is basically invisible. So is privacy. Education has gotten a little mention, but not much. And given the more overt, pressing issues of the economy, the deficit, health care, energy dependence, and war in the Middle East, it seems unlikely that any Congressional candidate has bothered to think much about these cyber issues, or that they have received much further thought from the Presidential candidates. (Too bad cyber security can’t be part of the mud slinging—it might raise its profile!)

Of course, with the economy in such sad shape, and some of the other severe problems being faced by the U.S., one might ask whether cyber should be a priority for the new President. I would answer yes, because the problems are already here and severe (although not as obvious as some of the other problems), and it will take years of major effort simply to keep even with the current sad status. The problems in cyber cannot be fixed in a crash effort devoted at any future time, and until they are addressed they will be a drain on the economy (in 2006, the FBI estimated the loss to computer crime in the US to be $67 billion—almost 10% of the recent economic bailout), and a threat to national security. Thus, deferring action on these issues will only make the situation worse; we need to initiate a sustained, significant program to make some important changes.

There are some things that the new President can do, especially as they relate to the military, law enforcement, and some other agencies in the Executive Branch. This is potentially cause for some glimmer of hope. I intend to blog some on that too, with a list of things that should be considered in the new administration.

 

Barack Obama, National Security and Me, Take II

Over the last month or so, many people who read my first post on Senator Obama’s “security summit” at Purdue have asked me about followup. I’ve been asked “Did you ever hear back from the Senator?”, “Has the McCain campaign contacted you?”, and “What do you think about the candidates?” I’ve also been asked by a couple of my colleagues (really!) “Why would they bother to contact you?”

So, let me respond to these, with the last one first.

Why would someone talk with you about policy?

So, I haven’t been elected or served in a cabinet-level position in DC. I haven’t won a Nobel prize (there isn’t one in IT), I’m not in the National Academies (and unlikely to be—few non-crypto security people are), and I don’t have a faculty appointment in a policy program (Purdue doesn’t have one). I also don’t write a lot of policy papers—or any other papers, anymore: I have a persistent RSI problem that has limited my written output for years. However, those aren’t the only indicators that someone has something of value to say.

As I’ve noted in an earlier post, I’ve had some involvement in cyber security policy issues at the Federal level. There’s more than my involvement with the origins of the SfS and Cyber Trust, certainly. I’ve been in an advising role (technology and policy) for nearly 20 years with a wide range of agencies, including the FBI, Air Force, GAO, NSA, NSF, DOE, OSTP, ODNI and more. I’ve served on the PITAC. I’ve testified before Congressional committees a half-dozen times, and met with staff (officially and unofficially) of the Senate and House many times more than that. Most people seem to think I have some good insight into Federal policy in cyber, but additionally, in more general issues of science and technology, and in defense and intelligence.

From another angle, I’ve also been deeply involved in policy. I served on the CRA Board of Directors for 9 years, and have been involved with its government affairs committee for a decade. I’ve been chair or co-chair of the ACM’s US Public Policy committee for a dozen years. From these vantage points I have gained additional insights into technology policy and challenges in a broad array of issues related to cyber, education, and technology.

And I continue to read a lot about these topics and more, including material in a number of the other sciences. And I’ve been involved in the practice and study of cyber security for over 30 years.

I can, without stretching things, say that I probably know more about policy in these areas than about 99.995% of the US population, with some people claiming that I’m in the top 10 or so with respect to broad issues of cyber security policy. That may be why I keep being asked to serve in advisory positions. A lot of people tend to ask me things, and seem to value the advice.

One would hope that at least some of the candidates would be interested in such advice, even if not all of my colleagues (or my family, grin) are interested in what I have to say.

Have any of the other candidates contacted you?

Simply put—no. I have gotten a lot of mailings from the Republican (and Democratic) campaigns asking me to donate money, but that’s it.

I’m registered as an independent, so that may or may not have played a role. For instance, I can’t volunteer to serve as a poll worker in Indiana because I’m not registered in one of the two main parties! I don’t show up in most of the databases (and that may be a blessing of sorts).

To digress a moment…. I don’t believe either party has a lock on the best ideas—or the worst. I’m not one of those people who votes a straight-ticket no matter what happens. I have friends who would vote for anyone so long as the candidate got the endorsement of “their” party. It reminds me of the drunken football fans with their shirts off in -20F weather cheering insanely for “their” team and willing to fight with a stranger who is wearing the wrong color. Sad. Having read the Constitution and taken the oath to defend it, I don’t recall any mention of political parties or red vs. blue….

That said, I would be happy to talk with any serious candidate (or elected official) about the issues around cyber, security, education, and the IT industry. They are important, and impact the future of our country…and of much of the world.

So, has anyone with the Obama campaign contacted you since his appearance at Purdue?

Well, the answer to this is “yes and no.”

I was told, twice, by a campaign worker that “Someone will call you—we definitely want more advice.” I never got that phone call. No message or explanation why. Nothing.

A few weeks after the second call I did get a strange email message. It was from someone associated with the campaign, welcoming me to some mailing list (that I had not asked to join) and including several Microsoft Word format documents. As my correspondents know, I view sending email with Word documents to be a bad thing. I also view being added to mailing lists without my permission to be a hostile act. I responded to the maintainer of the list and his reply was (paraphrased) “I don’t know why you were added. Someone must have had a reason. I’ll check and get back to you.” Well, I have received no more email from the list, and I never got any followup from that person.

So, in summary, I never got any follow-up from the campaign. I don’t think it is an issue with the Senator (who wouldn’t have been the one to contact me anyhow) but a decision by his staff.

So, depending on your level of cynicism, the mentions of my name, of CERIAS, and of follow-up were either (a) a blown opportunity caused by an oversight, or (b) a cynical political ploy to curry local favor.

(My daughter suggested that they are waiting until after the election to appoint me to a lofty position in government. Uh, yeah. That probably explains why I haven’t gotten that MacArthur “genius grant” yet and why Adriana Lima hasn’t called asking me to run away with her—the timing just isn’t right yet. grin)

What are your opinions on the Presidential candidates?

I’m not allowed to be partisan in official Purdue outlets. So, in some further posts here over the next week or two I will provide some analysis of both major candidates. (NB. Yes, I know there are over 300 candidates for President on the ballots across the country. However, I don’t think there is much chance of Baldwin, Barr, McKinney, Nader, Paul or the rest getting into office. So, I’ll limit my comments to the two main candidates.)

If you really want to know who I’m probably voting for, you can see my Facebook page or send me email.


[Update 10/16: After this was published I sent a link to this entry to several people associated with the Obama campaign. Only one responded, and it was clear from his email that there had been a mixup in getting back to me—but no interest in rectifying it.]

 

Centers of Academic ... Adequacy

History

Back in 1997, the year before CERIAS was formally established, I testified before Congress on the state of cyber security in academia. In my testimony, I pointed out that there were only four established research groups, and their combined PhD production was around 3 per year, not counting cryptography.

Also in that testimony, I outlined that support was needed for new centers of expertise, and better support of existing centers.

As a result of that testimony, I was asked to participate in some discussions with staff from OSTP, from some Congressional committees (notably, the House Science Committee), and Richard Clarke's staff in the Executive Office of the President. I was also invited to some conversations with leadership at the NSA, including the deputy director for information security systems (IAD) (Mike Jacobs). Those discussions were about how to increase the profile of the area, and get more people educated in information security.

Among the ideas I discussed were ones expanded from my testimony. They eventually morphed into the Scholarship for Service program, the NSF CyberTrust program, and the NSA Centers of Academic Excellence (CAE). [NB. I am not going to claim sole or primary credit for these programs. I know I came up with the ideas, briefed people about them, discussed pros & cons, and then those groups took them and turned them into what we got. None of them are quite what I proposed, but that is how things happen in DC.]

The CAE program was established by the NSA in late 1998. The CAE certification was built around courses meeting CNSS requirements. Purdue was one of the first seven universities certified as CAEs, in May of 1999. We remained in the CAE program until earlier this year (2008). In 2003, DHS became a co-sponsor of the program.

Why Purdue is No Longer a CAE

In 2007, we were informed that unless we renewed our CNSS certifications by the end of August, we would not be eligible for CAE renewal in 2008. This prompted discussion and reflection by faculty and staff at CERIAS. As noted above, the mapping of CNSS requirements to our classes is non-trivial. The CAE application was also non-trivial. None of our personnel were willing to devote the hours of effort required to do the processing. Admittedly, we could have "faked" some of the mapping (as we know some schools have done in the past), but that was never an option for us. We had other objections, too (see what follows). As a result, we did not renew the certifications, and we dropped out of the CAE program when our certification expired earlier this year.

Our decision was not made lightly -- we nearly dropped out in 2004 when we last renewed (and were not grandfathered into the new 5 year renewal cycle, much to our annoyance), and there was continuing thought given to this over intervening years. We identified a number of issues with the program, and the overhead of the mapping and application process was not the only (or principal) issue.

First, and foremost, we do not believe it is possible to have 94 (most recent count) Centers of Excellence in this field. After the coming year, we would not be surprised if the number grew to over 100, and that is beyond silly. There may be at most a dozen centers of real excellence, and pretending that the ability to offer some courses and stock a small library collection means "excellence" isn't candid.

The program at this size is actually a Centers of Adequacy program. That isn't intended to be pejorative -- it is simply a statement about the size of the program and the nature of the requirements.

Some observers and colleagues outside the field have looked at the list of schools and made the observation that there is a huge disparity among the capabilities, student quality, resources and faculties of some of those schools. Thus, they have concluded, if those schools are all equivalent as "excellent" in cyber security, then that means that the good ones can't be very good ("excellent" means defining the best, after all). So, we have actually had pundits conclude that cyber security & privacy studies can't be much of a discipline. That is a disservice to the field as a whole.

Instead of actually designating excellence, the CAE program has become an ersatz certification program. The qualifications to be met are for minimums, not for excellence. In a field with so few real experts and so little money for advanced efforts, this is understandable given one of the primary goals of the CAE program -- to encourage schools to offer IA/IS programs. Thus, the program sets a relatively low bar and many schools have put in efforts and resources to meet those requirements. This is a good thing, because it has helped raise the awareness of the field. However, it currently doesn't set a high enough bar to improve the field, nor does it offer the resources to do so.

Setting a low bar also means that academic program requirements are being heavily influenced by a government agency rather than the academic community itself. This is not good for the field because it means the requirements are being set based on particular application need (of the government) rather than the academic community's understanding of foundations, history, guiding principles, and interaction with other fields. (E.g., Would your mathematics department base its courses on what is required to produce IRS auditors? We think not!) In practice, the CAE program has probably helped suppress what otherwise would be a trend by our community to discuss a formal, common curriculum standard. In this sense, participation in the CAE program may now be holding us back.

Second, and related, the CNSS standards are really training standards, and not educational standards. Some of them might be met by a multi-day class taught by a commercial service such as SANS or CSI -- what does that say about university-level classes we map to them? The original CNSS intent was to provide guidance for the production of trained system operators -- rather than the designers, researchers, thinkers, managers, investigators and more that some of our programs (and Purdue's, in particular) are producing.

We have found the CNSS standards to be time-consuming to map to courses, and in many cases a poor fit for what we teach, and therefore inappropriate for our students. Tellingly, in 9 years we have never had a single one of our grads ask us for proof that they met the CNSS standards because an employer cared! We don't currently intend to offer courses structured around any of the CNSS standards, and it is past the point where we are interested in supporting the fiction that they are central to a real curriculum.

Third, we have been told repeatedly over the years that there might be resources made available for CAE schools if only we participated. It has never happened. The Scholarship for Service program is open to non-CAE schools (read the NSF program solicitation carefully), so don't think of that as an example. With 100 schools, what resources could reasonably be expected? If the NSA or DHS got an extra $5 million, and they spread it evenly, each would get $50,000. Take out institutional overhead charges, and that might be enough for 1 student scholarship...if that. If there were 10 schools, then $500,000 each is an amount that might begin to make a difference. But over a span of nearly 10 years the amount provided has been zero, and any way you divide that, it doesn't really help any of us. Thus, we have been investing time and energy in a program that has not brought us resources to improve. Some investment of our energy & time to bolster community was warranted, but that time is past.

Fourth, the renewal process is a burden because of the nature of university staffing and the time required. With no return on getting the designation, we could not find anyone willing to invest the time for the renewal effort.

Closing Comments

In conclusion, we see the CAE effort as valuable for smaller schools, or those starting programs. By having the accreditation (which is what this is, although it doesn't meet ISO standards for such), those programs can show some minimal capabilities, and perhaps obtain local resources to enhance them. However, for major programs with broader thrusts and a higher profile, the CAE has no real value, and may even have negative connotations. (And no, the new CAE-R program does not solve this as it is currently structured.)

The CAE program is based on training standards (CNSS) that do not have strong pedagogical foundations, and that is not appropriate for a leading educational program. As the field continues to evolve over the next few years, faculty at CERIAS at Purdue expect to continue to play a leading role in shaping a real academic curriculum. That cannot be done by embracing the CAE.

We are confident that people who understand the field are not going to ignore the good schools simply because they don't have the designation, any more than people have ignored major CS programs because they do not have CSAB accreditation. We've been recognized for our excellence in research, we continue to attract and graduate excellent students, and we continue to serve the community. We are certain that people will recognize that and respond accordingly.

More importantly, this goes to the heart of what it means to be "trustworthy." Security and privacy issues are based on a concept of trust, and that also implies honesty. It simply is not honest to continue to participate in (and thereby support) a designation that is misleading. There are not 94 centers of excellence in information and cyber security in the US. You might ask the personnel at some of the schools so designated why they feel the need to participate in, and shore up, that unfortunate canard.


The above was originally written in 2008. A few years later, the CAE requirements were changed to add a CAE-R designation (R for research), and several of our students did the mapping so we were redesignated. Most of the criticisms remain accurate even in 2012.

Barack Obama, National Security, and Me

[Update 7/17: Video of the Senator’s opening remarks and the panel session (2 parts) are now online at this site. I have also added a few links.]


This story (somewhat long) is about Senator Barack Obama’s summit session at Purdue University today (Wednesday, July 16) on security challenges for the 21st century. I managed to attend, took notes, and even got my name mentioned. Here’s the full story.

Prelude

Monday night, I received email from a colleague here at Purdue asking if I could get her a ticket to see Senator Obama on campus. I was more than a little puzzled — I knew of no visit from the Senator, and I especially didn’t know why she thought I might have a ticket (although there are people around here who frequently ask me for unusual things).

Another exchange of email resulted in the discovery that the Senator was coming to Purdue today (the 16th of July) with a panel to hold a summit meeting on security issues for the 21st century. Cyber security was going to be one of the topics. The press was told that Purdue was chosen because of the leading role our researchers have in various areas of public safety and national security — including the leading program in cyber security — although some ascribed political motives as the primary reason for the location.

I found it rather ironic that security would be given as the reason for being at Purdue, and yet those of us most involved with those security centers had not been told about the summit or given invitations. It appears that the organizers gave a small number of tickets to the university, and those were distributed to administrators rather than faculty and students working in the topic areas.

I found all of this fascinating, and expressed as much in email to a number of friends and colleagues — including several who I knew had some (indirect) link to the Senator’s campaign. I had faint hope of getting a ticket, but was more interested in simply getting the word back that there was a misfire in the organization of the event.

Late last night (I was in the office until 6:30) I got a call from someone associated with the Obama campaign. He apologized for the lack of an invitation, and informed me that a ticket was awaiting me at the desk the next day.

The Event

I went over to the Purdue Union at 11:30; the official event was to start at 12. I encountered a number of Purdue administrators in the crowd. Security was apparent for the event, including metal detectors at the door run by uniformed officers, some of whom I believe were with the Secret Service uniformed division. The officers everywhere were polite and cheerful, but watchful. I found a seat in the back of the North Ballroom with about 500 other guests…and nearly as many members of the press, entourage, ushers, protection detail, and so on.

I won’t try to summarize everything said by the Senator and panel — you can find the full video here (in two parts). I will provide some impressions of specific things that were said.

The event started almost on time (noon) with Senator Evan Bayh introducing Senator Barack Obama. Sen. Obama then read from a prepared set of remarks. His comments really resonated with the crowd (I encourage you to follow the link to read them). His comment about how we have been “fighting the last war” is particularly appropriate.

He made some very nice comments about Senator Richard Lugar, the other Senator from Indiana. Senator Lugar is a national asset in foreign policy, and both Senators Obama and Bayh (and former Senator Nunn) had nothing but good things to say about him — and all have worked with him on disarmament and peace legislation. One of the lighter moments was when Senator Obama said that Senator Lugar was a great man in every way except that he was a Republican!

Early in his statement, he deviated from his script as reproduced in the paper, and dropped my name as he was talking about cyber security. I was very surprised. He referred to me as one of the nation’s leading experts in cyber security when he mentioned Purdue being in the lead in this area. Wow! I guess someone I sent my email to pushed the right button (although my colleagues and our students deserve the recognition, as much or more than I do).

His further comments on officially designating the cyber infrastructure as a strategic asset are important for policy & legal reasons, and his comments on education and research also seemed right on. It was a strong opening, and there was obviously a lot in his comments for a number of different audiences, including the press.

Panel Part I

The first 1/3 of the panel discussion was on nuclear weapons issues. The experts present to talk on the issue were (former) Senator Sam Nunn (who joked that in Indiana everyone thought his last name was actually Nunn-Lugar), Senator Bayh, and Dr. Graham Allison, the director of the Belfer Center at Harvard. There was considerable discussion about the proliferation of nuclear materials, the need for cooperation with other countries rather than ignoring them (viz. North Korea and Iran), and the control of fissionable material.

There were some statements that I found to be a bit of hyperbole: for instance, the statement that terrorists could build a single bomb that would destroy a whole city. Not to minimize the potential damage, but without sophisticated nation-state assistance and machining, a crude fission weapon is about all that a terrorist group could manage, and it wouldn’t be that large or that easy to build. A few tens of kilotons of fission explosion could definitely ruin your day, but a detonation at ground level wouldn’t destroy a whole city of any size. (Lafayette, IN would be mostly destroyed by one, but that isn’t a major city.) Plutonium is too dangerous to handle, so over 100 pounds of U-235 (or U-233) would have to be obtained, and machined appropriately, for such a weapon. Without accelerators and specially shaped charges & containers, getting fission fast enough and long enough is difficult and….well, there is a very serious threat, and the nuances may be lost on the average crowd, but the focus on terrorists building a significant bomb seemed wrong to me.

There were some excellent remarks made about opportunity cost. For instance, the one figure that stood out was that we could fully fund the Nunn-Lugar initiative and some other plans to secure loose nuclear materials by spending the equivalent of 1 month of what we now spend in Iraq over the next 4 years around the world; the war in Iraq is breeding terrorists and making US enemies, while securing loose nukes would help protect generations to come around the world. As both a taxpayer and a parent (as well as someone immersed in defense issues), I know where I would prefer to see the money spent!

One other number given was that currently less than 1/4 of 1% of the defense budget is spent on containing nuclear materials, despite that being a declared priority of President Bush. Professor Allison said that despite grade inflation at Harvard, the President still gets an “F” in this area.

Another interesting factoid stated was that about 10% of the lights in the US are powered by electricity generated from reprocessed fissile material taken from Russian nukes rendered safe under the Nunn-Lugar initiative. That sounds high to me given the amount of nuclear power generated in the US, but even if it is off by a factor of 10, it’s darned impressive.

Panel Part II

The second part of the panel was on bio weapons. The panelists were Dr. Tara O’Toole of the Center for Biosecurity at Pitt, and Dr. David Relman of Stanford. Their discussion was largely what I expected, about how bio-weapons can be produced by rogue actors as well as rogue states. They made the usual references to plague (with a funny interchange about prairie dogs being carriers, and keeping the Senator’s campaign away from them), anthrax and Ebola.

Again, there was a bit of exaggeration mixed into the dialog. It was pointed out that there has still been no apprehension of the perpetrator of the 2001 anthrax attacks. It was then stated that the anthrax in the envelope sent to Senator Daschle was enough to kill a billion people. No mention was made of how impossible it would be to meter and deliver such dosages in the most appropriate manner to achieve that. In fact, there was no discussion of the difficulty of weaponizing most biological agents, which limits their use as a targeted weapon over a large area. And furthermore, no mention at all was made of chemical weapons.

The conclusion here was that investment in better research and international cooperation was key. The statement was made that better integration of electronic health records would be important, too, although some studies I recall indicate that their utility is probably not so great as some would hope. It was also concluded that benefits in faster medical response and better vaccine production would help in non-crisis times as well. I don’t think we can argue too much with that, although the whole issue of how we pay for medicine and health issues looms large.

Panel Part III

The last panel featured Alan Wade, former CIO of the CIA, and Paul Kurtz of Good Harbor Consulting, speaking on the cyber threat. I’ve known Paul for years, and he is a great person to talk on these issues.

The fact that cyber technology is universal and ubiquitous was highlighted. So was the asymmetry inherent in the area. Some mention was made about how nothing has been done by the current administration until very recently. Sadly, that is clearly the case. The National Strategy in 2002, the PITAC report in 2005, and the CSTB report in 2007 (to name 3 examples) all generated no response. As a member of the PITAC that helped write the 2005 report, I was shocked at the lack of Federal investment and the inaction we documented (I knew it was bad, but didn’t realize until then how bad it was); the reaction from the White House was to dissolve the committee rather than address the real problems highlighted in the report. As one of today’s panelists put it — the current administration’s response has been “…late, fragmented, and inadequate.” Amen.

I was disappointed that so much was said about terrorism and denial of service. Paul did join in near the end and point out that alteration of critical data was a big concern, but there was no mention of alteration of critical services, theft of intellectual property, threats to privacy, or other more prominent threats. Terrorism online is not the biggest threat we face, and we have a major crisis in progress that doesn’t involve denial of service. We need to ensure that our policymakers understand the scope of the threat.

On the plus side, Senator Obama reiterated how he sees cyber as a national resource and critical infrastructure. He wants to appoint a national coordinator to help move protection forward. (If he is elected I hope he doesn’t put the position in DHS!)

Paul pointed out the need for more funds for education and research. He also made a very kind remark, mentioning me by name, and saying how we were a world-class resource built with almost no funding. That’s not quite true, but sadly not far off. I have chafed for years at how much more we could do with even modest on-going support that wasn’t tied to specific research projects….

Conclusions

I was really quite impressed with the scope of the discussion, given the time and format, and the expertise of the panelists. Senator Obama was engaged, attentive, and several of his comments and questions displayed more than a superficial knowledge of the material in each area. Given our current President referring to “the Internets” and Senator McCain cheerfully admitting he doesn’t know how to use a computer, it was refreshing and hopeful that Senator Obama knows what terms such as “fission” and “phishing” mean. And he can correctly pronounce “nuclear”! His comments didn’t appear to be rehearsed — I think he really does “get it.”

(Before someone picks on me too much…. I believe Senator McCain is an honorable man, a dedicated public servant, and a genuine American hero. I am grateful to have people like him intent on serving the public. However, based on his comments to the press and online, I think he is a generation out of date on current technology and important related issues. That isn’t a comment related to his age, per se, but to his attitude. I’d welcome evidence that I am mistaken.)

Senator Obama is a great orator. I also noticed how his delivery sped up for the press (his opening remarks) but became more conversational during the panel.

Senator Obama kept bringing the panel back to suggestions about what could be done to protect the nation. I appreciated that focus on the goal. He also kept returning to the idea that problems are better solved early, and that investments without imminent threat are a form of insurance — paying for clean-up is far greater than some prudent investment early on. He also repeatedly mentioned the need to be competitive in science and technology, and how important support for education is — and will be.

After the session was over, I didn’t get a chance to meet any of the campaign staff, or say hello to Paul. I did get about 90 seconds with Senator Bayh and invited him to visit. After my name had been mentioned about 3 times by panelists and Senator Obama, he sort of recognized it when I introduced myself. We’ll see if he follows up. I’ve visited his office and Senator Lugar’s, repeatedly, and neither has ever bothered to follow up to see what we’re doing or whether they could help.

Several people in the audience commented on my name being mentioned. I’m more than a little embarrassed that the panelists referred to me rather than to CERIAS and my colleagues; in fact, I was the only Purdue person mentioned by name during the entire 2 hours, and it happened multiple times. I’m not sure if that’s good or not — we’ll see. However, as P.T. Barnum said, there’s no such thing as bad publicity … so long as they spell my name correctly. None of the local or national press seem to have picked it up, however, so even spelling isn’t an issue.

The press, in fact, hasn’t seemed to focus on the substance of the summit at all. I’ve read about 15 accounts so far, and all have focused on his choice of VP or the status of the campaign. It is so discouraging! These are topics of great importance that are not well understood by the public, and the press simply ignores them. Good thing Angelina Jolie gave birth earlier in the week or the summit wouldn’t have even made the press.

I wish more of the population would take the time to listen to prolonged discussion like this. 15-second sound bites serve too often as the sole input for most voters. And even then, too many are insufficiently educated (or motivated) to understand even the most basic concepts. I wonder if more than 5 people will even bother to read a post this long — most people want blog posts that fit on a single page.

As for my own political opinions and voting choices, well, I’m not going to use an official Purdue system to proselytize about items other than cyber security, education, research and Purdue. You can certainly ask me if you see me. Now, if only I had confidence in the electronic voting equipment that so many of us are going to be forced to use in November (hint: I’m chair of the USACM).

Last Tongue-in-Cheek Word

And no, I’m not particularly interested in the VP position.