The Center for Education and Research in Information Assurance and Security (CERIAS)

CERIAS Blog

Having an Impact on Cybersecurity Education


The 12th anniversary of CERIAS is looming (in May). As part of the display materials for our fast-approaching annual CERIAS Symposium (register now!), I wanted to get a sense of the impact of our educational activities in addition to our research. What I found surprised me -- and may surprise many others!

Strategic Planning

Back in 1997, a year before the formation of CERIAS, I presented testimony before a U.S. House of Representatives hearing on "Secure Communications." For that presentation, I surveyed peers around the country to determine something about the capacity of U.S. higher education in the field of information security and privacy (this was before the term "cyber" was popularized). I discovered that, at the time, there were only four defined programs in the country. We estimated that there were fewer than 20 academic faculty in the U.S. at that time who viewed information security, other than cryptography, as their primary area of emphasis. (We excluded cryptography because many people working in abstract mathematics applicable to cryptography knew extremely little about information security as a field, and certainly were not teaching it.)

The best numbers I could come up with from surveying all those people were that, as of 1997, U.S. higher education was graduating only about three new Ph.D. students a year in information security. Thus, there were also very few faculty producing new, well-educated experts at any level, and too small a population to easily grow new programs. I noted in my remarks that the output was too low by at least two orders of magnitude for national needs (and was at least 3-5 orders of magnitude too low for international needs).

As I have noted before, my testimony helped influence the creation of (among other things) the NSA's CAE program and the Scholarship for Service (SfS) program. Both provided some indirect support for increasing the number of Ph.D. graduates and courses at all postsecondary levels. The SfS has been a qualified success; the CAE program, not so much.

When CERIAS was formed, one element of our strategic plan was to focus on helping other institutions build up their capacity to offer infosec courses at every level, as a matter of strategic leadership. We decided to do this through five concurrent approaches:

  1. Create new classes at every level at Purdue, across several departments;
  2. Find ways to get more Ph.D. students through our program, and help place them at other academic institutions;
  3. Host visitors and postdocs, providing them with additional background in the field for eventual use at other academic institutions;
  4. Create an affiliates program with other universities and colleges to exchange educational materials, speakers, best practices, and more;
  5. Create enrichment opportunities for faculty at other schools, such as a summer certificate program for educators at 2- and 4-year colleges.

Our goal was not only to produce new expertise, but to retrain personnel with strong backgrounds in computing and computing education. Transformation was the only way we could see to make a big impact quickly.

Outcome

We have had considerable success with all five of these initiatives. Currently, there are several dozen classes in CERIAS focus areas across Purdue. In addition to the more traditional graduate degrees, our interdisciplinary graduate degree program is small but competitive and has led to new courses. Overall, on the Ph.D. front, we anticipate another 15 Ph.D. grads this May, bringing the total CERIAS output of Ph.D.s over 12 years to 135. To the best of our ability to estimate (using some figures from NSF and elsewhere), that was about 25% of all U.S. Ph.D.s in the field during the first decade that CERIAS was in existence, and we are currently graduating about 20% of U.S. output. Many of those graduates have taught or still teach at colleges and universities, even if part-time. We have also graduated many hundreds of M.S. and undergraduate students with deep coursework and research experience in information security and privacy issues.

We have hosted several score postdocs and visiting faculty over the years, and always welcome more -- our only limitation right now is available funding. For several years, we ran an intensive summer program for faculty from 2- and 4-year schools, many of which serve minority and disadvantaged populations. Graduates of that program went on to create many new courses at their home institutions. We had to discontinue this program after a few years because of, again, a lack of funding.

Our academic affiliates program ran for five years, and we believe it was a great success. Several schools with only one or two faculty working in the area were able to leverage the partnership to get grants and educational resources, and are now notable for their own intrinsic capabilities. We discontinued the affiliates program several years ago as we realized all but one of those partners had "graduated."

So, how can we measure the impact of this aspect of our strategic plan? Perhaps by simply coming up with some numbers....

We compiled a list of anyone who had been through CERIAS (and a few years of COAST, prior) who:

  • Earned a Ph.D. at Purdue while part of CERIAS
  • Did a postdoc with CERIAS to learn (more) about cybersecurity/privacy
  • Came as a visiting faculty member to learn (more) about cybersecurity/privacy
  • Participated in one of our summer institutes for faculty

We gathered from them (as many as we could reach) the names of any higher education institution where they taught courses related to security, privacy, or cyber crime. We also folded in the names of our academic affiliates at which such courses were (or still are) offered. The resultant list has over 100 entries! Even if we make a moderate estimate of the number of people who took these classes, we are well into the tens of thousands of students impacted, in some way, and possibly above 100,000, worldwide. That doesn't include the indirect effect, because many of those students have gone on (or will go on) to teach in higher education -- some of our Ph.D. grads have already turned out Ph.D. grads who now have their own Ph.D. students!
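
As a rough illustration, a back-of-the-envelope calculation shows how quickly the totals reach that range; the per-institution figures below are hypothetical assumptions for illustration, not CERIAS data.

    # Back-of-the-envelope estimate of students reached. All per-institution
    # figures here are illustrative assumptions, not measured CERIAS data.

    institutions = 100        # the compiled list has over 100 entries
    courses_per_year = 1      # assume one security course per institution per year
    students_per_course = 30  # assume a modest class size
    years = 10                # assume a decade of offerings

    students_reached = institutions * courses_per_year * students_per_course * years
    print(f"Estimated students reached: {students_reached:,}")  # 30,000

Even with these deliberately conservative assumptions, the estimate lands in the tens of thousands.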

Seeing the scope of that impact is gratifying. And knowing that we will do more in the years ahead is great motivation, too.

Of course, it is also a little frustrating, because we could have done more, and more needs to be done. However, the approaches we have used (and are interested in trying next) never fit into any agency BAA (Broad Agency Announcement). Thus, we have (almost) never been able to get grant support for our educational efforts. And, in many cases, the effort, overhead, and delays in the application processes aren't worth the funding that is available. (The same is true of many of our research and outreach activities, but that is a topic for another time.)

We've been able to get this far because of the generosity of the companies and agencies that have been CERIAS general supporters over the years -- thank you! Our current supporters are listed on the CERIAS WWW site (hint: we're open to adding more!). We've also had a great deal of support within Purdue University from faculty, staff, and the administration. It has been a group effort, but one that has really made a positive difference in the world... and it provides us motivation to continue to greater heights.

See you at the CERIAS Symposium!

Institutions

Here is the list of the 108 educational institutions [last updated 3/21, 16:00 EDT]:

  • Air Force Institute of Technology
  • Amrita Vishwa Vidyapeetham, Coimbatore, India
  • Brigham Young University
  • Cairo University (Egypt)
  • California State University Sacramento
  • California State University Long Beach
  • Carnegie Mellon University
  • Case Western Reserve University
  • Charleston Southern University
  • Chungnam National University, Korea
  • College of Aeronautical Engineering, PAF Academy, Risalpur Pakistan
  • College of Saint Elizabeth
  • Colorado State University
  • East Tennessee State University
  • Eastern Michigan University
  • Felician College
  • George Mason University
  • Georgia Institute of Technology
  • Georgia Southern University
  • Georgetown University
  • Hannam University, Korea
  • Helsinki University of Technology (Finland)
  • Hong Kong University of Science & Technology
  • Illinois Wesleyan University
  • Indian Institute of Science, Bangalore
  • Indiana University-Purdue University, Fort Wayne
  • Indiana University-Purdue University, Indianapolis
  • International University, Bruchsal, Germany
  • Iowa State University
  • James Madison University
  • John Marshall School of Law
  • KAIST (Korea Advanced Institute of Science and Technology)
  • Kansas State University
  • Kennesaw State University
  • Kent State University
  • Korea University
  • Kyungpook National University, Korea
  • Linköpings Universitet, Linköping, Sweden
  • Marquette University
  • Miami University of Ohio
  • Missouri Univ S&T
  • Murray State University
  • Myongji University, Korea
  • N. Georgia College & State Univ.
  • National Chiao Tung University, Taiwan
  • National Taiwan University
  • National University of Singapore
  • New Jersey Institute of Technology
  • North Carolina State University
  • Norwalk Community College
  • Oberlin College
  • Penn State University
  • Purdue University Calumet
  • Purdue University West Lafayette
  • Qatar University, Qatar
  • Queensland Institute of Technology, Australia
  • Radford University
  • Rutgers University
  • Sabanci University, Turkey
  • San José State University
  • Shoreline Community College
  • Simon Fraser University
  • Southwest Normal University (China)
  • Southwest Texas Junior College
  • SUNY Oswego
  • SUNY Stony Brook
  • Syracuse University
  • Technische Universität München (TU-Munich)
  • Texas A & M Univ. Corpus Christi
  • Texas A & M Univ. Commerce
  • Tuskegee University
  • United States Military Academy
  • Universidad Católica Boliviana San Pablo, Bolivia
  • Universität Heidelberg, Heidelberg, Germany
  • University of Albany
  • University of Calgary
  • University of California, Berkeley
  • University of Cincinnati
  • University of Connecticut
  • University of Dayton
  • University of Denver
  • University of Florida
  • University of Kansas
  • University of Louisville
  • University of Maine at Fort Kent
  • University of Maryland University College
  • University of Mauritius, Mauritius
  • University of Memphis
  • University of Milan, Italy
  • University of Minnesota
  • University of Mississippi
  • University of New Haven (CT)
  • University of New Mexico
  • University of North Carolina, Charlotte
  • University of Notre Dame
  • University of Ohio
  • University of Pittsburgh
  • University of Texas, Dallas
  • University of Texas, San Antonio
  • University of Trento (Italy)
  • University of Virginia
  • University of Washington
  • University of Waterloo
  • University of Zurich
  • Virginia Tech
  • Washburn University
  • Western Michigan University
  • Zayed University, UAE

Making the CWE Top 25, 2010 Edition

As I did last year, I was glad to be able to participate in the making of the CWE Top 25. The 2010 edition was produced more systematically and methodically than last year's. We adjusted the level of abstraction of the entries to be more consistent, precise, and actionable. For that purpose, new CWE entries were created, so that we didn't have to include a high-level entry simply because there was no other way to discuss a particular variation of a weakness. There was a formal vote with metrics, with a debate about which metrics to use, how to vote, and how to calculate a final score. We moved the high-level CWE entries that could be described as "Didn't perform good practice X" or "Didn't follow principle Y" into a mitigations section that specifically addresses what X and Y are and why you should care about them. Those mitigations were then mapped against the Top-25 CWE entries they affect.

For the metrics, CWE entries were ranked by prevalence and importance. We used P × I (prevalence times importance) to calculate scores. That makes sense to me because risk is defined as potential loss × probability of occurrence, so by this formula the CWE rankings are related to the risk those weaknesses pose to your software and business. Last year, the CWEs were not ranked; instead, they had "champions" who argued for their inclusion in the Top-25.

I worked on creating an educational profile, with its own metrics (not alone, of course; it wouldn't have happened without Steve Christey, his team at MITRE, and other CWE participants). The Top-25 now has profiles, so depending on your application and concerns, you may select a profile that ranks entries differently and appropriately. The educational profile used prevalence and importance, but also emphasis. Emphasis relates to how difficult a concept is to explain and understand. Easy concepts can be learned in homework or labs, or are perhaps so trivial that they can be learned in the students' own reading time. Harder concepts deserve more class time, provided that they are important enough. Another factor for emphasis was how much a particular CWE helps in learning others, and its general applicability. So, the educational profile tended to include higher-level weaknesses. It also considered all historical time periods for prevalence, whereas the Top-25 is more focused on data from the last two years. This is similar to the concept of regression testing -- we don't want problems that have been solved to reappear.
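
To make the scoring concrete, here is a minimal sketch of how profile-based ranking might work. The weakness names and numeric ratings below are hypothetical stand-ins, and the actual formulas and scales used by the Top-25 team may differ.

    # Minimal sketch of profile-based CWE ranking. The entries and their
    # 1-10 ratings are hypothetical examples, not actual CWE Top-25 data.

    entries = [
        # (name, prevalence, importance, emphasis)
        ("SQL injection",           9, 9, 6),
        ("Buffer overflow",         7, 9, 8),
        ("Improper input handling", 8, 7, 9),
    ]

    def general_score(prevalence, importance):
        # General profile: a risk-like score, P x I.
        return prevalence * importance

    def educational_score(prevalence, importance, emphasis):
        # Educational profile: also weight how much class time a concept
        # deserves and how much it helps in learning other concepts.
        return prevalence * importance * emphasis

    ranked = sorted(entries, key=lambda e: general_score(e[1], e[2]), reverse=True)
    for name, p, i, e in ranked:
        print(f"{name}: general={general_score(p, i)}, "
              f"educational={educational_score(p, i, e)}")

Ranking the same entries under each profile shows how a single list of weaknesses can yield differently ordered Top-25 views.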

Overall, I have a good feeling about this year's work, and I hope that it will prove useful and practical. I will be looking for examples of its use and experiences with it, and of course I'd love to hear what you think of it. Tell us both the good and the bad -- I'm aware that it's not perfect, and it has some subjective elements, but perhaps comments will be useful for next year's iteration.

Cowed Through DNS

May 2010 will mark the 4th anniversary of our collective cowing by spammers, malware authors, and botnet operators. In 2006, spammers squashed Blue Frog. They turned the vendor of this service, Blue Security, into a leper, as everyone became afraid of being contaminated by association and becoming a casualty of the spamming war. Blue Frog hit spammers where it counted -- in the revenue stream -- simply by posting complaints to spamvertized web sites. It was effective enough to warrant retaliation. DNS was battered into making Blue Security unreachable. The then-paying commercial clients of Blue Security were targeted, destroying the business model, so Blue Security folded [1]. I was stunned that the "bad guys" won by brute force and terror, and the security community either was powerless or let it go. Blue Security was even blamed for some of its actions and its approach. Blaming the victims for daring to organize and attempt to defend people -- err, I mean, for provoking the aggressor further -- isn't new. An open-source project attempting to revive the Blue Frog technology evaporated within the year. The absence of interest and progress since has been scary (or scared) silence.

According to most sources, 90-95% of our email traffic has been spam for years now. Not content with this, spammers subject us to blog spam, friend-me spam, IM spam, and XSS (cross-site scripting) spam. That spam, or browser abuse through XSS, convinces more people to visit links and install malware, thus enrolling computers into botnets. Botnets then enforce our submission by defeating Blue Security-style efforts and extorting money from web-based businesses. We can then smugly blame "those idiots" who unknowingly handed over control of their computers, with a slight air of exasperation. It may also be argued that there's more money to be made selling somewhat effective spam-fighting solutions than by emulating a doomed business model. But in reality, we've been cowed.

I had been hoping that the open-source project could survive the lack of a business model; after all, the open-source movement seems like a liberating miracle. However, the DNS problem remained. So, even though I didn't use Blue Frog at the time, I have been hoping for almost four years now that DNS would be improved to resist the denial-of-service attacks that took Blue Security offline. I have been hoping that someone else would take up the challenge. However, all we have is modest success at (temporarily?) disabling particular botnets, semi-effective filtering, and mostly ineffective reporting. Since then, spammers have ruled the field practically uncontested.

Did you hear about Comcast's deployment of DNSSEC [2]? It sounds like a worthy improvement; it's DNS with security extensions, or "secure DNS." However, denial-of-service (DoS) prevention is out of scope for DNSSEC! It has no DoS protections, and moreover there are reports of DoS "amplification attacks" exploiting the larger DNSSEC-aware response size [3]. Hmm. Integrity is not the only problem with DNS! A search of IEEE Xplore and the ACM Digital Library for "DNS DoS" reveals several relevant papers [4-7], including a DoS-resistant, backwards-compatible replacement for the current DNS from 2004. Another alternative, DNSCurve, protects confidentiality, integrity, and availability (DoS) [8]; it has just been deployed by OpenDNS [9] and is being proposed to the IETF DNSEXT working group [10]. This example of leadership suggests possibilities for meaningful challenges to organized internet crime. I will be eagerly watching for signs of progress in this area. We've kept our heads low long enough.
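
To see why the larger signed responses matter for DoS, here is a minimal sketch of the amplification arithmetic; the byte counts are hypothetical round numbers for illustration, not measurements of any particular resolver.

    # Minimal sketch of DNS amplification arithmetic. The sizes are
    # hypothetical round numbers, chosen only to illustrate the ratio.

    query_size = 60         # bytes: small UDP query with a spoofed source address
    plain_response = 500    # bytes: a typical plain DNS response
    dnssec_response = 3000  # bytes: a DNSSEC response carrying signatures and keys

    def amplification(response_bytes, query_bytes):
        # Bandwidth multiplier gained by reflecting spoofed queries
        # off a resolver toward a victim.
        return response_bytes / query_bytes

    print(f"Plain DNS: x{amplification(plain_response, query_size):.0f}")   # ~x8
    print(f"DNSSEC:    x{amplification(dnssec_response, query_size):.0f}")  # ~x50
    # Bigger signed responses mean each spoofed query delivers more traffic
    # to the victim -- the "amplification" concern raised in [3].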

References
1. Robert Lemos (2006) Blue Security folds under spammer's wrath. SecurityFocus. Accessed at http://www.securityfocus.com/news/11392
2. Comcast DNSSEC Information Center. Accessed at http://www.dnssec.comcast.net/
3. Bernstein DJ (2009) High-speed cryptography, DNSSEC, and DNSCurve. Accessed at: http://cr.yp.to/talks/2009.08.11/slides.pdf
4. Fanglu Guo, Jiawu Chen, Tzi-cker Chiueh (2006) Spoof Detection for Preventing DoS Attacks against DNS Servers. 26th IEEE International Conference on Distributed Computing Systems.
5. Kambourakis G, Moschos T, Geneiatakis D, Gritzalis S (2007) A Fair Solution to DNS Amplification Attacks. Second International Workshop on Digital Forensics and Incident Analysis.
6. Hitesh Ballani, Paul Francis (2008) Mitigating DNS DoS attacks. Proceedings of the 15th ACM conference on Computer and communications security
7. Venugopalan Ramasubramanian, Emin Gün Sirer (2004) The design and implementation of a next generation name service for the internet. Proceedings of the 2004 conference on Applications, technologies, architectures, and protocols for computer communications
8. DNSCurve: Usable security for DNS (2009). Accessed at http://dnscurve.org/
9. Matthew Dempsky (2010) OpenDNS adopts DNSCurve. Accessed at http://blog.opendns.com/2010/02/23/opendns-dnscurve/
10. Matthew Dempsky (2010) [dnsext] DNSCurve Internet-Draft. Accessed at http://www.ops.ietf.org/lists/namedroppers/namedroppers.2010/msg00535.html

Blast from the Past


Yes, I have been quiet (here) over the last few months, and I have a number of things to comment on. The hiatus was partly because of my schedule, partly because I had my laptop stolen, and partly for health reasons. However, I'm going to try to start adding items here again that might be of interest.

To start, here is one item that I found while cleaning out some old disks: a briefing I gave at the NSA Research division in 1994. I then gave it, with minor updates, to the DOD CIO Council (or whatever their name was/is -- the CNSS group?), the Federal Infosec Research Council, and the Critical Infrastructure Commission in 1998. In it, I spoke to what I saw as the biggest challenges in protecting government systems, and to the major research challenges of the time.

I no longer have software that can read the 1994 version of the talk, but the 1998 version was successfully imported into PowerPoint. I cleaned up the fonts and gave it a different background (the old version was fugly), and that prettier version is available for download. (Interesting that back then it was considered "state of the art.")

I won't editorialize on the content slide by slide, other than to note that I could give this same talk today and it would still be current. You will note that many of the research agenda items have been echoed in other reports over the succeeding years. I won't claim credit for that, but there may have been some influences from my work.

Nearly 16 years have passed by, largely wasted, because the attitude within government is still largely one of "with enough funding we can successfully patch the problems." But as I've quoted in other places, insanity is doing the same thing over and over again and expecting different results. So long as we believe that simple incremental changes to the existing infrastructure, plus more funding for individual projects, are going to solve the problems, the problems will not get addressed -- they will get worse. It is insane to think that pouring ever more funding into attempts to "fix" current systems is going to succeed. Some of it may help, and much of it may produce good research, but overall it will not make our infrastructure as safe as it should be.

Yesterday, Admiral (ret) Mike McConnell, the former Director of National Intelligence in the US, said in a Senate committee hearing that if there were a cyberwar today, the US would lose. That may not be quite the correct way of putting it, but we certainly would not come out of it unharmed and able to claim victory. What's more, any significant attack on the cyberinfrastructure of the US would have global repercussions because of the effects on the world's economy, communications, trade, and technology that are connected by the cyber infrastructure in the US.

As I have noted elsewhere, we need to do things differently. I have prepared and circulated a white paper among a few people in DC about one approach to changing the way we fund some of the research and education in the US in cybersecurity. I have had some of them tell me it is too radical, or too different, or doesn't fit in current funding programs. Exactly! And that is why I think we should try those things -- because doing more of the same in the current funding programs simply is not working.

But 15 years from now, I expect to run across these slides and my white paper, and sadly reflect on more than three decades in which we did not step up to really deal with the challenges. Of course, by then, there may be no working computers on which to read these!

Drone “Flaw” Known Since 1990s Was a Vulnerability

"The U.S. government has known about the flaw since the U.S. campaign in Bosnia in the 1990s, current and former officials said. But the Pentagon assumed local adversaries wouldn't know how to exploit it, the officials said." Call it what it is: it's a vulnerability that was misclassified (some might argue that it's an exposure, but there is clearly a violation of implicit confidentiality policies). This fiasco is the result of the thinking that there is no vulnerability if there is no threat agent with the capability to exploit a flaw. I argued against Spaf regarding this thinking previously; it is also widespread in the military and industry. I say that people using this operational definition are taking a huge risk if there's a chance that they misunderstood either the flaw, the capabilities of threat agents, present or future, or if their own software is ever updated. I believe that for software that is this important, an academic definition of vulnerability should be used: if it is possible that a flaw could conceptually be exploited, it's not just a flaw, it's a vulnerability, regardless of the (assumed) capabilities of the current threat agents. I maintain that (assuming he exists for the sake of this analogy) Superman is vulnerable to kryptonite, regardless of an (assumed) absence of kryptonite on earth.

The problem is that it is logically impossible to prove a negative, e.g., that there is no kryptonite (or that there is no God, etc.). Likewise, it is logically impossible to prove that there does not exist a threat agent with the capabilities to exploit a given flaw in your software. The counter-argument is then that delivery of the software becomes impractical, as the costs and time required escalate to remove risks that are extremely unlikely. However, this argument is mostly security by obscurity: if you know that something might be exploitable, and you don't fix it because you think no adversary will have the capability to exploit it, then in reality you're hoping that they won't find out or be told how (for the sake of this argument, I'm ignoring brute-force computational capabilities). In addition, exploitability is a thorny problem. It is very difficult to be certain that a flaw in a complex system is not exploitable. Moreover, it may not be exploitable now, but may become so when a software update is performed! I wrote about this in "Classes of vulnerabilities and attacks", in which I discussed the concepts of latent, potential, and exploitable vulnerabilities. This is important enough to quote:

"A latent vulnerability consists of vulnerable code that is present in a software unit and would usually result in an exploitable vulnerability if the unit was re-used in another software artifact. However, it is not currently exploitable due to the circumstances of the unit’s use in the software artifact; that is, it is a vulnerability for which there are no known exploit paths. A latent vulnerability can be exposed by adding features or during the maintenance in other units of code, or at any time by the discovery of an exploit path. Coders sometimes attempt to block exploit paths instead of fixing the core vulnerability, and in this manner only downgrade the vulnerability to latent status. This is why the same vulnerability may be found several times in a product or still be present after a patch that supposedly fixed it.

A potential vulnerability is caused by a bad programming practice recognized to lead to the creation of vulnerabilities; however the specifics of its use do not constitute a (full) vulnerability. A potential vulnerability can become exploitable only if changes are made to the unit containing it. It is not affected by changes made in other units of code. For example, a (potential) vulnerability could be contained in the private method of an object. It is not exploitable because all the object’s public methods call it safely. As long as the object’s code is not changed, this vulnerability will remain a potential vulnerability only.

Vendors often claim that vulnerabilities discovered by researchers are not exploitable in normal use. However, they are often proved wrong by proof-of-concept exploits and automated attack scripts. Exploits can be difficult and expensive to create, even if they are only proof-of-concept exploits. Claiming unexploitability can sometimes be a way for vendors to minimize bad press coverage, delay fixing vulnerabilities, and at the same time discredit and discourage vulnerability reports."
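
To illustrate the "potential vulnerability" case in the quoted passage, here is a minimal, hypothetical sketch (the class and method names are mine, invented for illustration): the private helper would be unsafe on attacker-controlled input, but every public caller validates first, so the flaw remains only potential until the class itself is changed.

    # Hypothetical sketch of a "potential vulnerability" (names invented
    # for illustration). The private helper is unsafe on its own, but all
    # public entry points call it safely, so the flaw stays potential
    # until this class itself is modified.

    class RecordStore:
        def __init__(self):
            self._records = ["alpha", "beta", "gamma"]

        def _fetch(self, index):
            # No bounds check here: a negative index would silently wrap
            # around and return the wrong record -- the analog of an
            # out-of-bounds read -- if called with unvalidated input.
            return self._records[index]

        def get_record(self, index):
            # Public method: validates before delegating. This guard is
            # the only thing keeping the private flaw from being exploitable.
            if not isinstance(index, int) or not 0 <= index < len(self._records):
                raise ValueError("index out of range")
            return self._fetch(index)

    store = RecordStore()
    print(store.get_record(1))  # "beta"
    # A future change that calls _fetch() directly with user input would
    # silently promote this potential vulnerability to an exploitable one.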

Discounting or underestimating the capabilities, current and future, of threat agents is similar to vendors' claims that a vulnerability is not really exploitable. We know that this has been proven wrong ad nauseam. Add configuration problems to the use of the "operational definition" of a vulnerability in the military and among their contractors, and you get an endemic potential for military catastrophes.