Panel #4: Securing Web 2.0 (Panel Summary)

Wednesday, April 6, 2011

Panel Members:

  • Gerhard Eschelbeck, Webroot
  • Lorraine Kisselburgh, Purdue
  • Ryan Olson, Verisign
  • Tim Roddy, McAfee
  • Mihaela Vorvoreanu, Purdue

Panel Summary by Preeti Rao

The panel was moderated by Keith Watson, Research Engineer, CERIAS, Purdue University.

Keith kick-started the panel with an interesting introduction to the term Web 2.0. He described how he framed its definition by gathering facts from Wikipedia and Google searches, comments and likes from Facebook, and tweets from Twitter, all while playing Farmville and poker on his Android phone!

All the panelists gave short presentations on Web 2.0 security challenges and solutions. These presentations introduced the panel topic from different perspectives - marketing, customer demands, industry/market analysis, technological solutions, academic research and user education.

Mihaela Vorvoreanu from Purdue University, who gave the first presentation, chose to use Andrew McAfee’s definition of Enterprise 2.0: a set of emerging social software collaborative platforms. She noted that the emphasis is on the word “platform” as opposed to “communication channels”: platforms are public, and the one-to-one communication they support is visible to all others, effectively making it many-to-many communication.

She talked about a global study on Web 2.0 use in organizations, commissioned by McAfee Inc. and reported by faculty at Purdue University. The study defined Web 2.0 to include consumer social media tools like Facebook, Twitter, and YouTube as well as Enterprise 2.0 platforms. It was based on a survey of over 1,000 CIOs and CEOs in 17 countries, with the sample balanced by country, organization size, and industry sector. The survey results were complemented with in-depth interviews with industry experts, analysts, and academics to get a comprehensive view of Web 2.0 adoption in organizations globally, its benefits, and its security concerns. While organizations overall reported great benefits and importance in using Web 2.0 across several business operations, the major concern was security, reported by almost 50% of the respondents. In terms of security vulnerabilities, social networking tools were reported to be the top threat, followed by webmail, content sharing sites, streaming media sites, and collaborative platforms. Specific threats that organizations perceive from employee use of Web 2.0 included malware, viruses, information over-exposure, spyware, and data leaks. 70% of the respondents had security incidents in the past year, and about 2 million USD was lost due to security incidents. The security measures reported by organizations included firewall protection, web filtering, gateway filtering, authentication, and social media policies.

She presented a broad, global view of organizational uses, benefits and security concerns of Web 2.0.

Lorraine Kisselburgh from Purdue University continued to present the results from McAfee’s report. She discussed an interesting paradox that the study found.

Overall, there is a positive trend, with a significant adoption rate (75%) of Web 2.0 tools worldwide. There are also significant concerns among those who haven’t adopted the technology: 50% of non-adopters report security concerns, followed by productivity, brand, and reputation concerns. Not all tools have the same perceived value, or even the same concerns, risks, and threats. Social networking tools and streaming media sites are considered most risky. Nearly half of the organizations banned Facebook, 42% banned IM, and 38% banned YouTube. Collaborative platforms and content sharing tools are considered less risky, and their perceived value and usefulness is high compared to social tools. But organizations that have adopted social tools report their real value to be quite high: helpful in increasing communication, improving brand marketing, and so on. In fact, social tools realized greater value than webmail and similar tools.

So, the paradox is: social tools (social networking and streaming media sites) are considered highly risky from a security standpoint and are perceived as least valuable to organizations, yet they realize great value among adopters.

This reflects the continuing tension between how the value of social media tools is perceived versus how it is realized by organizations. It is also in line with historical trends in adopting new, unknown, emerging technologies; email is one example. The tension also stems from where the technology is located and where risk should be addressed (internal tools versus external tools on the cloud), and from distinguishing organizational tools from personal tools.

Tim Roddy from McAfee addressed Web 2.0 security from the standpoint of a buying organization, bringing a product marketing perspective on selling web security solutions. He commented that initially people were concerned about malware coming into organizations through email. Now the model and dynamics have changed, and this influences how McAfee looks at its products and how it sees customers using security solutions from a business standpoint. His comments focused on two areas: 1) stopping malicious software from coming in, and 2) providing customizable controls for people using social media tools.

He pointed out that about three years ago, his customers were using McAfee products to block access to sites like Twitter and Facebook because they saw no business value in them. But periodic McAfee surveys show a dramatic change in this trend: organizations are now allowing access to these tools, a shift also driven by the younger generation of employees demanding access. Whereas three years ago a URL filtering solution was used simply to block, for example, the social networking category, that has changed now that access to those websites is allowed.

So, how do we allow safe productive access?

There is a dramatic increase and acceleration in malware; it is automated, targeted, and smarter now. Therefore, web security efforts need to be proactive. Proactive security means not only stopping malware with signature analysis but also including effective behavioral analysis to break the chains and patterns of attacks. McAfee’s Gateway Anti-Malware strategies focus on these.

Secondly, organizations now allow access to social media tools, but no one filters the apps within those tools to make sure they are legitimate. For example, are the game apps on Facebook legitimate and secure? Such apps are one of the most common attack vectors. The solution is customizable controls. Industries, especially finance and healthcare, are worried about data leakage. Say an employee sends his SSN through a LinkedIn message: can it be blocked or filtered? Security solution efforts are now bi-directional, proactively monitoring and filtering both what comes in as malware and what goes out as data leakage.
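
As a loose illustration of the bi-directional filtering just described, the sketch below scans an outgoing message for patterns such as U.S. Social Security numbers before it leaves a gateway. It is a hypothetical example in Python, not any vendor's product logic, and the patterns are deliberately naive.

```python
import re

# Hypothetical outbound data-leak filter: a web gateway could apply rules like
# these to messages posted to social media or webmail before they leave.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")     # e.g. 123-45-6789
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")   # naive card-number-like digit run

def check_outgoing(message: str) -> list:
    """Return a list of policy violations found in an outgoing message."""
    violations = []
    if SSN_PATTERN.search(message):
        violations.append("possible SSN")
    if CARD_PATTERN.search(message):
        violations.append("possible payment card number")
    return violations

if __name__ == "__main__":
    msg = "Hi, my SSN is 123-45-6789, can you set up my account?"
    found = check_outgoing(msg)
    print("BLOCK:" if found else "ALLOW:", ", ".join(found) or "no violations")
```

The same idea runs in the other direction for inbound content (signatures plus behavioral checks), which is what makes the monitoring bi-directional.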

Lastly, the security concerns around the use of mobile and handheld devices are growing. There is a great need to secure these devices, especially if they are corporately owned. They need to be subject to the same level of regulation and be compliant with corporate network standards.

Gerhard Eschelbeck from Webroot talked about why securing Web 2.0 is a big deal and how we got there.

The first generation of web apps was designed for static content to be displayed by the browser. All execution and processing happened on the server side, and the content was mostly trusted. There were no issues around client-side (browser) execution, so the number of attacks was significantly lower. The only worry then was to protect the servers. Now, the security concerns stem mainly from the interactive content of Web 2.0. Fundamentally, the model changed from one-way data flow from server to client to a two-way interactive model. The browser has become part of the execution environment, and the billions of users’ browsers that are part of this big ecosystem are exposed to attacks.

There is a major shift from code execution purely on the server side to a distributed model of code execution using Ajax and interactive, dynamic client-side web page execution. While useful in many ways, it introduces new vulnerabilities, and this is the root cause of Web 2.0 security concerns.

He highlighted four areas of concerns:

  1. User-created, user-defined content, which is not trusted content
  2. Interactive features like mouse rollovers and popups, added to bring a desktop look and feel to Web 2.0 applications, cause a significant amount of interaction between server and client, which introduces more vulnerabilities
  3. Syndication of content and mashups of various sites
  4. Offline capabilities of some applications, which now lead to storage of information on those billions of desktops

All of these have increased the number of security exposure points, in turn leading to vulnerabilities.

Ryan Olson from Verisign talked about malware issues with Web 2.0. People are sharing a lot of personal information online that they weren’t sharing earlier. Access to people’s personal information has become easy; it is available to friends on social networks, or even to anyone who has access to a friend’s account. Many organizations have started using a security question and answer as a form of authentication beyond login and password. Answers to questions such as a user’s mother’s maiden name or high school can easily be found on social networking sites; most such questions can be answered from the user’s personal data that is available online, often without much authentication. In this way, Web 2.0 offers more vectors for malware. It offers many ways of communicating with people, opening up a lot of new entry points that now need to be monitored. Earlier it was mostly email and IM, but now each of these social networks allows an attacker to send messages, befriend users, and build trust. These tools provide additional avenues to social-engineer users into revealing information about themselves by exploiting the trust between users and their friends. A lot of malware succeeds purely through social engineering, by befriending or enticing users and then extracting information. The primary solution to this problem is to educate people about the consequences of revealing personal information and the value of trust.

Questions from audience and discussions with the panel:

Keith Watson: How much responsibility should rest with the Web 2.0 providers (organizations like Facebook and Twitter) for providing secure applications? How much should rest with the users and with educating them about safe usage? Is there a balance between user education and application provider responsibility?

Discussions:

TR: Just like any application provider, the companies do have a lot of responsibility, but educating users is equally important. Users are putting so much information out on the Web (e.g., “Oh, I am at the airport”). People should be made to realize how much to share and what to share.

RO: It should be a shared responsibility. It is the market that drives Web 2.0 to become more secure. For example, the competition between social network providers to offer a malware-free, secure application drives everything. If one social network is not as secure, users will simply migrate to the next one. In this way the market will continue to put pressure on users, and in turn on providers, to make secure applications.

LK: While it has to be a shared responsibility, it also has to do with recognizing the value of social media tools and encouraging their use in businesses. Regarding user education, what we have found in some privacy research is that understanding the audience of these tools (who has access, what they are accessing, to whom you are disclosing) and being able to visualize who is listening helps users decide what and how much information to disclose. Framing this through technology and system design would be helpful from an educational standpoint.

MV noted that there may always be an unintended, secondary audience listening. She took a cultural approach to understanding social media tools: each tool may be viewed as a different country. Facebook is one country; Twitter is another. Just as people from one country are unfamiliar with another country’s culture and may use travel guidebooks and travel information for help, users of social media tools need to be educated about the different tools and their inherent cultures.

GE: While the tourism and travel comparison is good, it doesn’t always work in the cyberworld, because the cyberworld is different. There is no differentiation anymore between dark and bright corners; even a site which “looks” safe might be the target of an awful attack. The educational element is important, but a technological safety belt is much needed. Securing Web 2.0 is also hard because the server-side component usually belongs to the provider, while the client side (the browser) is with the people. How we provide browser protection to users and reduce Web 2.0 attacks is important.

Brent Roth: What are your thoughts on organizations adopting mechanisms or models like the NoScript add-on for Firefox?

Discussions:

RO: This model would work really well for people with some security knowledge or background, but it doesn’t work for the average user. We need to look at smarter models for the general public that make the good/bad decisions on the user’s behalf, acting as a safety belt.

TR: Websites pull in feeds and ads. While some may be malicious, they also drive revenue. McAfee’s solutions block the parts of sites and pages that could be malicious, and behavioral analysis techniques help. It has to be a granular solution.

RO: If all scripts are blocked, then what about the advertisers? If we block all advertisers, the Internet falls apart, because they drive the revenue. Yes, a lot of malware comes from ads and scripts, but you cannot simply block everything.

Malicious script analytics and risk profiling need to be done. The last line of defense is always at the browser end. User education is as important as having a technological safety belt to secure Web 2.0.

Panel #3: Fighting Through: Mission Continuity Under Attack (Panel Summary)

Tuesday, April 5, 2011

Panel Members:

  • Paul Ratazzi, Air Force Research Laboratories
  • Saurabh Bagchi, Purdue
  • Hal Aldridge, Sypris Electronics
  • Sanjai Narain, Telcordia
  • Cristina Nita-Rotaru, Purdue
  • Vipin Swarup, MITRE

Panel Summary by Christine Task

In Panel #3: “Fighting Through: Mission Continuity Under Attack”, each of the six panelists began by describing their own perspective on the problem of organizing real-time responses and maintaining mission continuity during an attack. They then addressed three questions from the audience.

Paul Ratazzi offered his unique insight as the technical advisor for the Cyber Defense and Cyber Science Branches at the Air Force Research Laboratory in Rome, NY. He noted that military organizations are necessarily already experienced at “guaranteeing mission essential functions in contested environments” and suggested that the cyber-security world could learn from their general approach. He divided this approach into four stages: Avoid threats (including hardening systems, working on information assurance, and minimizing vulnerabilities in critical systems), survive attacks (develop new, adaptive, real-time responses to active attacks), understand attacks (forensics), and recover from attacks (build immunity against similar future attacks). Necessary developments to meet these guidelines are improved understanding of requirements for critical functions (systems engineering) and real-time responses that go beyond our current monitor/detect/respond pattern. As a motivation for the latter, he gave the example of a fifth generation fighter, nicknamed a ‘flying network’. When its technological systems are under attack, looking through the log file afterwards is “too little, too late”.

Dr. Saurabh Bagchi of CERIAS and the Purdue School of Electrical and Computer Engineering described an innovative NSF-funded research project offering real-time responses to attacks on large-scale, heterogeneous distributed systems. These systems involve a diverse array of third-party software and often present a wide variety of vulnerabilities to an attacker. Additionally, attacks on these systems can spread incredibly quickly by exploiting trust relationships and privilege escalation, eventually compromising important internal resources, so any practical reaction must occur in machine time. Dr. Bagchi’s research chose the following strategies: use Bayesian inference to estimate which components are currently compromised, and from that information estimate which are most likely to be attacked next; focus monitoring efforts on the components perceived as at risk; and use knowledge of the distributed system to estimate the severity of the attack in progress and respond appropriately with real-time containment steps such as randomizing configurations or restricting access to resources. Finally, he emphasized the importance of learning from each attack: long-term responses should abstract the main characteristics of the attack and prepare defenses suited to similar attacks in the future.
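
A rough sketch of the core idea Dr. Bagchi outlined (maintain a belief about which components are compromised, update it as alerts arrive, and rank likely next targets along trust edges) follows. The components, detection rates, and graph below are invented for illustration and are not taken from the project itself.

```python
# Illustrative Bayesian belief tracking over distributed-system components.

belief = {"web": 0.05, "app": 0.02, "db": 0.01}    # prior P(compromised), made up
edges = {"web": ["app"], "app": ["db"], "db": []}  # trust/communication edges (assumed)

P_ALERT_IF_COMPROMISED = 0.8   # assumed detector hit rate
P_ALERT_IF_CLEAN = 0.1         # assumed false-positive rate

def update_on_alert(component):
    """Bayes update of the compromise belief for a component that raised an alert."""
    prior = belief[component]
    p_alert = P_ALERT_IF_COMPROMISED * prior + P_ALERT_IF_CLEAN * (1 - prior)
    belief[component] = P_ALERT_IF_COMPROMISED * prior / p_alert

def likely_next_targets():
    """Score components by the compromise belief of their upstream neighbours."""
    scores = {c: 0.0 for c in belief}
    for src, dsts in edges.items():
        for dst in dsts:
            scores[dst] = max(scores[dst], belief[src])
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

update_on_alert("web")           # an IDS alert fires on the web tier
print(belief)                    # posterior beliefs per component
print(likely_next_targets())     # where to focus monitoring next
```

Containment actions such as randomizing configurations or restricting access would then be triggered when the estimated severity crosses a threshold.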

Dr. Sanjai Narain, a Senior Research Scientist in Information Assurance and Security at Telcordia Research, described his own work on distributed systems defense—a novel, concrete solution for the type of immediate containment suggested by Dr. Bagchi. Although the high-level abstraction of a network as a graph is relatively straightforward, the actual configuration space can be incredibly complex with very many variables to set at each node. ConfigAssure is an application which eliminates configuration errors by using SAT constraint solvers to find configurations which satisfy network specifications. For any given specification, there are likely many correct configurations. In order to successfully attack a network, an attacker must gain some knowledge of its layout (such as the location of gateway routers). By randomizing the network configuration between different correct solutions to the specification, an attacker can be prevented from learning anything useful about the network while the users themselves remain unaware of any changes.
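
ConfigAssure itself works with SAT/SMT constraint solvers over real network specifications; the toy sketch below only illustrates the underlying idea of enumerating configurations that satisfy a specification and hopping randomly among them, so that whatever an attacker has learned about the layout goes stale. The roles, address pool, and constraints are invented for the example.

```python
import itertools
import random

POOL = ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"]   # toy address pool

def satisfies(cfg):
    """Made-up specification: distinct addresses, gateway restricted to two hosts."""
    distinct = len(set(cfg.values())) == len(cfg)
    gateway_ok = cfg["gateway"] in ("10.0.0.1", "10.0.0.2")
    return distinct and gateway_ok

def all_valid_configs():
    """Brute-force stand-in for a constraint solver on this tiny example."""
    valid = []
    for gw, dns, web in itertools.permutations(POOL, 3):
        cfg = {"gateway": gw, "dns": dns, "web": web}
        if satisfies(cfg):
            valid.append(cfg)
    return valid

valid = all_valid_configs()
current = random.choice(valid)                 # deploy one correct configuration
print("current:", current)
current = random.choice([c for c in valid if c != current])   # later, hop to another
print("after re-randomization:", current)
```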

Dr. Cristina Nita-Rotaru, an Assistant Director of CERIAS and an Associate Professor in the Department of Computer Science at Purdue, introduced an additional concern within mission continuity: maintaining continuity of communication. She offered the recent personal example of having her credit cards compromised while traveling. She was very quickly informed of the problem by her credit card companies and was thus able to make a risk assessment of the situation and form a reasonable response (disabling one card while continuing to use the less vulnerable one until she could return home). When an attack compromises the channels of communication themselves, for example by taking out the network that would be used to communicate (as in jamming wireless networks), the information necessary to make a risk assessment and form containment strategies is not available. Thus, when considering real-time reactions to attacks, it is important to make sure the communication network is redundant and resilient.

Dr. Hal Aldridge, the Director of Engineering at Sypris Electronics and previously a developer of unmanned systems for space and security applications at Northrop Grumman and NASA, discussed the utility of improving key-management systems to respond to real-time attacks. Key management systems that are agile and dynamic can help large organizations react immediately to threats. In a classic system with one or a few statically set secrets, the loss of a key can be catastrophic. A much more robust solution is a centralized cryptographic key management system that uses a large, accurate model of the system to enable quickly changing potentially compromised keys, or to use key changes to isolate potentially compromised resources. He briefly described his work on such a system.
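
As a loose illustration (not Dr. Aldridge's actual system), the sketch below keeps a per-resource key version so that a suspect key can be rotated in one step, and a resource can be isolated simply by refusing to issue it a current key. The resource names are hypothetical.

```python
import secrets

class KeyManager:
    """Toy centralized key manager: per-resource keys that can be rotated or withheld."""

    def __init__(self, resources):
        self._keys = {r: (1, secrets.token_bytes(32)) for r in resources}
        self._isolated = set()

    def current_key(self, resource):
        if resource in self._isolated:
            raise PermissionError(resource + " is isolated; no key issued")
        return self._keys[resource]

    def rotate(self, resource):
        """Immediately replace a potentially compromised key."""
        version, _ = self._keys[resource]
        self._keys[resource] = (version + 1, secrets.token_bytes(32))

    def isolate(self, resource):
        """Use key management to cut a suspect resource out of the system."""
        self._isolated.add(resource)

km = KeyManager(["field-radio", "ground-station"])   # hypothetical resources
km.rotate("field-radio")          # key suspected compromised: rotate at once
km.isolate("ground-station")      # or isolate the resource entirely
print(km.current_key("field-radio")[0])   # prints 2, the new key version
```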

Dr. Vipin Swarup, Chief Scientist for Mission Assurance Research in MITRE’s Information Security Division, emphasized one final very important point about real-time system defense: high-end threats are likely to exist inside the perimeter of the system. Our ability to prevent predictable low-end threats from entering the perimeter of our systems is reasonably good. However, we must also be able to defend against strategic, targeted, adaptive attacks which are able to launch from inside our security system. In this case, as the panel has discussed, the key problem is resiliency; we must be able to launch our real-time response from within a compromised network. Dr. Swarup summarized three main guidelines for approaching this problem: reduce threats (by deterring and disrupting attackers), reduce vulnerabilities (as Ratazzi described, understand system needs and protect critical resources), and reduce consequences (have a reliable response). Any real-time response strategy must take into account that the attacker will also be monitoring and responding to the defender, must be able to build working functionality on top of untrusted components, and must have a more agile response-set than simply removing compromised components.

After these introductions, there was time to address three questions to the panel [responses paraphrased].

“What time-scale should we consider when reconfiguring and reacting to an attack?”

Swarup: Currently we’re looking at attacks that flood a network in a day and require a month to clean up [improvement is needed]. However, some attacks are multi-stage and take considerable time to execute [e.g., Stuxnet]; these can be responded to on a human time scale.

Aldridge: It can take a lot of time to access all of the components in the network which need reconfiguring after an attack [some will be located in the ‘boonies’ of the network].

Bagchi: It can take seconds for a sensor to reset, while milliseconds are what’s needed.

“What are some specific attacks which require real-time responses?”

Aldridge: If you lose control of a key in the field, the system needs to eliminate the key easily and immediately.

Nita-Rotaru: When you are sending data on an overlay network, you need to be able to reroute automatically if a node becomes non-functional.

Narain: If you detect a sniffing attack, you can reroute or change the network-architecture to defend against it.

Ratazzi: Genetic algorithms can be used to identify problems at runtime and identify a working solution.

“What design principles might you add to the classic 8 to account for real-time responses/resiliency?”

Swarup & Nita-Rotaru: Assume all off-the-shelf mobile devices are compromised; focus on using them while protecting the rest of the system through partitioning and trust relationships, and by attempting to get trusted performance of small tasks over short periods of time in a potentially compromised environment. Complete isolation [from/of compromised components] is probably impossible.

Ratazzi & Bagchi: Minimize non-essential functionality of critical systems, focus on composing small systems to form larger ones, and use segmentation (separate tools and accesses for separate functions) where possible to reduce the impact of an attack.

Panel #2: Scientific Foundations of Cyber Security (Panel Summary)

Tuesday, April 5, 2011

Panel Members:

  • Victor Raskin, Purdue
  • Greg Shannon, CERT
  • Edward B. Talbot, Sandia National Labs
  • Marcus K. Rogers, Purdue

Panel Summary by Pratik Savla

Edward Talbot initiated the discussion by presenting his viewpoint on cyber security. He described himself as a seasoned practitioner in the field and highlighted his concerns: systems have become too complicated to provide any assurance of having no vulnerabilities. It is an asymmetric problem: an intruder may need just one door to penetrate a system, but the person managing the system has to manage a large number of different doors. Any digital system can be hacked, and any digital system that can be hacked will be hacked if there is sufficient value in doing so. Talbot framed the problem in three time frames: near-term, mid-term, and long-term. He used a fire-fighting analogy going back two centuries, when on average a U.S. city would be completely gutted and destroyed every five years. If the firefighters were asked about their immediate need, they would say more buckets were required; but if they were asked how to prevent this from happening again, they had no answer. The near-term frame asks what to do today to prevent the situation; the mid-term frame emphasizes the importance of being ahead of the game; and the long-term frame involves the role of science, in this context the development of a fire science program in academia. To summarize, he pointed out that the kind of thinking that gets one into a problem is insufficient to get one out of it.

Talbot quoted a finding from the JASON report on the science of cyber security, which stated that the highest priority should be assigned to the establishment of research protocols to enable reproducible experiments. Here, he stated that there is a science of cyber security. He concluded by comparing the scenario to being in the first step of a 12-step program (borrowing from Alcoholics Anonymous): stop managing an unmanageable situation and instead develop a basis to rethink what one does.

Rogers focused on the question: do we have scientifically based foundations that can help answer some of these questions in the form of research, and are we going in the right direction? This led to a fundamental question: how do we define a scientific foundation, and what defines science? He highlighted common axioms and principles found across different disciplines, such as a body of knowledge, testable hypotheses, rigorous design and testing protocols and procedures, metrics and measurements, unbiased results and interpretation, informed conclusions, repeatability, and feedback into theory. The problems one comes across are the non-existence of natural laws, man-made technologies in constant flux, different research paradigms (observational, experimental, and philosophical), the lack of a common language, questions about the reliability and reproducibility of metrics, differences in approach (applied versus basic), and the tendency to study symptoms as opposed to causes. Cyber security is informed by many disciplines, such as physics, epidemiology, computer science, engineering, immunology, anthropology, economics, and the behavioral sciences.

The JASON report on the science of cyber security identified strategy areas such as modeling and simulation, involving biological, decisional, inferential, medical, and behavioral models that could be considered when viewing the field on a scientific foundation. He emphasized that cyber security problems lend themselves to a science-based approach. He stressed that there will be a scientific foundation for cyber security only if it is done correctly and only when one is conscious of what constitutes a scientific foundation. Even solutions such as just-in-time, near-term, and long-term ones can be based on a scientific foundation.

He pointed out that currently the biggest focus is on the behavioral side; in other words, how do we predict what will happen 20 years from now if employee ‘X’ is hired?

Shannon addressed the question: how do we apply the scientific method? Here, he presented the software engineering process and discussed its various components by describing the issues each one addresses. First, what data do we have? What do we know? What can we rely on? What can we stand on that is reasonably solid? Second, why do we have data that is prone to exploitation? He highlighted reasons such as a lack of technology and of mature technology, a lack of education, and a lack of capacity, and concluded that these hypotheses do not seem to stand the test of data, since the data indicate we have always had problems. He then stated some alternative hypotheses, such as market forces, people, and networks, that can be considered. He stressed that solutions are needed based on what people and systems do, not what we wish they would do. The stumbling block here is the orthodoxy of cyber security: the illusion that just telling people to do the right thing and to use the right technology would solve the problem. It is analogous to an alchemist claiming that just by telling lead to turn into gold, it would become gold. He stressed that we need to understand what is going on and what is really possible. The key message was that a science built on data would involve much more than just theory.

Raskin took a more general view of cyber science by offering some of his thoughts on the subject. He said that he did not agree with the “American” definition of science, which defines it as a small sub-list of disciplines in which experiments can be run and immediate verification is possible, as he considered it too narrow. He subscribed instead to the notion that any well-defined academic discipline is a science. He presented a schematic of the theory-building process, involving components such as the phenomena that correspond to the purview of the theory, the theory itself, the methodology, and the description (a general philosophical term for results). The theory is connected to the methodology, and a good theory indicates why it can help guide the methodology. He asked why we were not questioning what we were doing. His first thought related to the issue of data provenance, i.e., why are you doing what you are doing? His second thought focused on how we deal with the different sciences that are all part of cyber science; a mechanism that can help address that is rigorous application. He disagreed with the notion that combining two things without any import or export of sub-components leads to a worthy result: from the source field, components such as data, theory, and methods should be imported into the target field, and only the problems of the source field should be excluded from the import. He also emphasized forming a linkage between the two fields, source and target, through a common application. He concluded that without a theory, one does not know what one is doing or why one is doing it. This does not imply that no theory exists; on the contrary, anything that is performed has an underlying theory, even if one has no clue what that theory is.

A question about complexity theory brought up an example of a bad scientific approach, wherein the researcher adds more layers of complexity or keeps changing the research question but never questions the underlying theory, which may be flawed.

Panel #1: Traitor Tracing and Data Provenance (Panel Summary)

Tuesday, April 5, 2011

Panel Members:

  • David W. Baker, MITRE
  • Chris Clifton, Purdue
  • Stephen Dill, Lockheed Martin
  • Julia Taylor, Purdue

Panel Summary by Nikhita Dulluri

In the first session of the CERIAS symposium, the theme of ‘Traitor Tracing and Data Provenance’ was discussed. The panelists spoke extensively about the various aspects relating to tracing the source of a given piece of data and the management of provenance data. The following offers a summary of the discussion in this panel.

With increasing amounts of data being shared among organizations such as health care centers, academic institutions, financial organizations, and government agencies, there is a need to ensure the integrity of data so that decisions based on it are effective. Providing security for the data at hand does not suffice; it is also necessary to evaluate the source of the data for its trustworthiness. Issues such as which protection method was used, how the data was protected, and whether it was vulnerable to any type of attack during transit might influence how the user uses the data. It is also necessary to keep track of different types of data, which may be spread across various domains. The context of data usage, i.e., why a user might want to access a particular piece of data, or the intent of the access, is also an important piece of information to keep track of.

Finding the provenance of data is important for evaluating its trustworthiness, but this may in turn pose a risk to privacy. In some systems, it may be important to hide the source of information in order to protect privacy. Also, data or information transfer does not necessarily happen on a file-to-file exchange basis; the data might have been paraphrased. Data that has a particular meaning in one domain may mean something totally different in another, and data might be given away by people unintentionally. The question then is how to trace back to the original source of information. A possible solution suggested was to pay attention to the actual communication, to move beyond the regions where we are comfortable, and to put a human perspective on them, for that is how we communicate.

Scale is one of the major issues in designing systems for data provenance. The problem can be solved effectively for a single system, but the more one tries to scale it up, the less effective the system becomes. Also, deciding how much provenance is required is not an easy question to answer, as one cannot assume one knows how much data the user will require. If the same amount of information as in the previous transaction were provided, one might end up providing more (or less) data than is required.

To answer the question of how to set and regulate policies regarding access to data, it is important to monitor rather than control the access. Policies imposed at a higher level are good if there is a reasonable expectation that people will act in accordance with them. It is important not to be completely open about what information will be tracked or monitored, because a determined attacker could use that information to find a way around it.

The issue of data provenance and of building systems to manage it is important in several different fields. In domains where conclusions are drawn from a set of data and any alteration of the data would change the decisions made, data provenance is of critical importance; the DoD, health care institutions, finance, control systems, and the military are some examples.

To conclude, the problem of data provenance and building systems to manage data provenance is not specific to a domain or a type of data. If this problem can be solved effectively in one domain, then it can be extended and modified to provide the solution to other domains as well.

Opening Keynote: Neal Ziring (Symposium Summary)

Tuesday, April 5, 2011

Keynote Summary by Mark Lohrum

Neal Ziring, the current technical director for the Information Assurance Directorate at the NSA, was given the honor of delivering the opening keynote for the 2011 CERIAS Symposium on April 5th at Purdue University. He discussed trends in cyber threats from the 1980s to today and shifts in defenses in response to those threats. He noted that, as a society, we have built a great information network, but unless we can trust it and defend it against threats, we will not see its full potential. Ziring’s focus, as an NSA representative, was primarily on preserving national interests regarding information security.

Ziring discussed trends in threats to information security. In the 1980s, the scope of cyber threats was rather simple: opposing nations wished to obtain information from servers belonging to the U.S., and the NSA wished to stop them. This was fairly straightforward. Since then, threats have become far more complex. The opponents may not simply be opposing countries; they may be organized criminals, rogue hackers, hacktivists, or others. Also, in years past, much expertise was required to carry out attacks. Now far less expertise is required, which results in more threat actors. In the past, attacks were not very focused: someone would write a virus and see how many computers in a network it could infect, almost as if it were a competition. Now attacks are far more focused on achieving a specific goal aimed at a specific target. Ziring cited a statistic that around 75% of viruses are targeted at fewer than 50 individual computers. Experts in information security must understand the specific goals of a threat actor so attacks can be predicted.

Ziring also discussed shifts in information security. The philosophy used to be to simply protect assets, but now the philosophy includes defending against known malicious code and hunting for not yet known threats. Another shift is that the NSA has become increasingly dependent upon commercial products. In the past, defenses were entirely built internally, but that just does not work against the ever-changing threats of today. Commercial software advances at a rate far faster than internal products can be developed. The NSA utilizes a multi-tiered security approach because all commercial products contain certain shortcomings. Where one commercial product fails to protect against a threat, another product should be able to counter that threat; this concept is used to layer security software to fully protect assets.

A current concern in information security is the demand for mobility. Cell phones have become part of everyday life; as a society, we carry them everywhere. Because they are mobile networked computers, the potential shortcomings of security on these devices are a concern. If they are integrated with critical assets, a security hole is exposed. Similarly, cloud computing creates a concern: the integrity of information on servers that the NSA does not own must be ensured.

Ziring brought up a couple of general points to consider. First, information security requires situational awareness: knowing the current status of critical information is necessary to defend it properly, and the status of the security system must be known consistently. Currently, many security systems are audited every several years, but it may be better to check their status continuously. Secondly, operations must be able to continue on a compromised network. The old philosophy was to recover from a network compromise and then resume activity. The new philosophy, because networks are so massive, is to be able to run operations while the network is in a compromised state.

Ziring concluded by discussing the need to create academic partnerships. Academic partnerships can help the NSA have access to the best researchers, newer standards, and newer technologies. Many of the current top secure systems would not have been possible without academic partnerships. It is impossible for the NSA to employ more people than the adversaries, but it is possible to outthink and out-innovate them.

Symposium Transcript: Complexity vs. Security—Choosing the Right Curve, Morning Keynote Address

Teaching as an adjunct faculty member, even at George Mason, is a labor of love: the pay is not great, but the job definitely has rewards. One of these rewards is seeing first-hand how bad a job we are doing as an industry in teaching people about security awareness. I teach a course on how to develop secure software. In my opinion, this is a “101” level course; at George Mason, it is listed as 600. The quality of the code at the beginning of each semester is interesting. Some code is very well structured and most of the code is not that bad, but the security awareness is not there. When you teach security awareness to the students, the quality of the code improves, especially when you give them the techniques on what to watch for and the methods and mechanisms that can be used to avoid those challenges. This is not what this talk is about.

This talk is about the end result, and what happens after you do that [give them the tools and techniques], even when we spend this time, a full semester, 15 weeks of concentrated effort. By the way, they write a ton of code, which is one of the challenges of the class. Let me just describe this one problem, since this talk is likely to run somewhat short anyway:

Right now, I have students writing programs. I give them a specification and they write a program back to me against that specification. And the only way I can think of to grade it is to actually do a code analysis: to actually look at it and do the testing myself. Then I come back and identify the vulnerabilities at the code level, because if I don’t do it at the code level, how could they possibly learn what they messed up? And that’s “killing me” [grin]. I spend at least 40 hours every time a program is turned in by the class, and I guarantee I am not hitting every vulnerability. If anybody has a way of structuring these assignments so that I can get the same points across, I am all ears. We will leave this as an aside.

At the end of the class, even with the folks that I consider to be good coders, there are still vulnerabilities creeping in, especially in the more interesting and full-featured assignments that we give out.

The students spend the second half of the semester writing something big. This year they had two choices for the final assignment. The first choice was a network-enabled game, which had to be peer-to-peer, between two machines, running across the network. They could choose the game. I am always interested when people choose really bad games.

Quick thing: in any game, the idea is that there is an adversary actively trying to compromise the game, and the software has to defend against that. Don’t choose randomness! Randomness is hard in that environment. You want to choose chess: chess has a defined state, and there is no randomness in it. If people choose games like poker, it is a bad idea: who controls the deck? There are ways to work around that, though.

The second choice of assignment was a remote backup service which, surprisingly enough, about half the class chose to do. I think conceptually they can get their head around it. I thought it would be a harder assignment.

These projects turn out to be non-trivial: there is a lot of code written. I think the most lines of code we saw was on the order of 10,000, which isn’t huge by the standards of industry but is certainly pretty big for a classroom project. Some of the submissions were much smaller. Some people try to abstract away the difficulty by using whatever Microsoft libraries they can find to implement the project functionality, which caused its own difficulties. Most of those projects, by the way, were the worst from a security perspective. Not because the Microsoft technology wasn’t good; actually, Microsoft has done some pretty cool stuff when it comes to security APIs, but the students did not understand what the APIs had to offer, so they called them in ways that were truly insecure. This is another challenge we face: understanding the underlying base technology that we rely on is a major challenge.

So, this talk —back to this talk…

This talk is about what we do after we’ve already done some level of training and started focusing on security, to keep our products more secure over time. Because, frankly, our economy (this is a bad time to say this, of course) isn’t a big smoking hole; some people would say it is. Software vulnerabilities definitely have an impact on the economy. There is a great book called Geekonomics by David Rice. The major thesis of the book is that the cost of vulnerabilities in code is actually a tax that we all pay. So when you buy, say, Microsoft Office or Adobe Illustrator, whatever tool suite you’re on, all the vulnerabilities and bugs in it are paid for by every user and every person that gains service through the use of these tools. We are paying a huge cost. Because all of us are being taxed, we don’t see it, but it is a very big number. Every once in a while it becomes specific. I have worked for the banking industry. You have to love these guys, because you do not have to make an ROI argument with them when it comes to cybercrime. They’re either experiencing fraud or they’re not, and if they’re experiencing fraud, they know how much. In the case of fraud, the numbers are high enough that, even as a CEO used to seeing big numbers, you would pay attention to them. By the way, those banks are still profitable, some of them.

The thing is, if all that fraud is happening, who is paying for it? Eventually, the cost of fraud has to be passed on to the consumer for the bank to remain profitable. So, again, the point I want to make is that we are all paying this price.


Some of the pre-conceptions I had before preparing this talk turned out to be false.

The main question of this talk is “How are we doing as an industry?” Is software trending towards a better place or a worse one? My position when I started preparing this talk was that we are heading towards a worse place. The thinking behind that is that we are trending towards more complex systems. Why is that happening? I’ll argue with numbers later; these numbers show that software is getting at least larger, if not more complex.

Question: “How do you sell version 2?” Actually, “How do you sell version 9?” (since nobody is on version 2 anymore). So how do you sell the next version of a piece of software? You are not going to sell version (n+1) with fewer features than version (n) in this market. You have to add something. But most of the time, adding something means more code, plus all the engineering work that goes into maintaining that code over time. There is no economic incentive to buy version (n+1) if it does not provide something new besides a pretty new GUI. I guess this did not work too well with Vista. In theory it works. Actually, have you seen the Windows 7 interface? That is actually pretty cool. I am currently being blown away by how the interface on my Mac has increased my productivity after just a few weeks. I think 7 might give Microsoft a leg up in that area. But I digress.

So, software is getting bigger because the vendors who produce the software for us, by and large, need to add features to the products to sell it to us. That’s one reason. A couple of other things though:

  • If we are getting bigger, are we getting more complex? And if we are getting more complex, does that actually affect the security question? That was the underlying thought behind this talk.

To the left, this is the only code I can write and guarantee it comes out bug-free. [points to “Hello, World”, with character encoding issues]. There is a joke there…

But that’s not like average code. Average code looks more like the code on the right [points to cryptic code: deeply nested, and dense]. By the way, in fairness, I “borrowed” from the recent winners of the C obfuscation project. But I have been handed code that is at least that ugly. When you start trying to understand that code, you are going to find out that it is nearly impossible to do so. It really matters how much you focus on the readability and legibility of your code. Architecture matters here. But as you get bigger, chances are that even with good controls, you are still going to wind up with increasing complexity.

How many people think they can pick up bugs in the right hand side given an hour to do it? How confident would you be?

I read a lot of source code; by the way, one of the reasons I teach the class is to keep touching source code. Even with the course I teach and all the time I spend doing it, I am not going to find every bug. Now, there are tricks you can use from a vulnerability perspective: following the inputs is always a good idea, and I do not necessarily have to read every line to find the tastiest vulnerabilities in a piece of code. Nonetheless, that [obfuscated code on the right] is kind of ugly, and there is a lot of ugly in the products we use every day. So we have to account for that.


Here is what a few bright guys in the industry have to say about this. [list of a few quotes on the slide including one from Dr. Gene Spafford]

This is a general acknowledgment that complexity is the driving force behind some of the problems that we have. What do you all think? Is it real, is it fake? There are pretty outrageous claims being made here. I like the second one: “Every time the number of lines is doubled, the company experiences four times as many security problems.”

Do you buy that ?

If that is the case, pardon the language, but we are screwed. We can’t survive that, because the code bases we are sitting on are growing at something like a Moore’s-law pace. The period at which code bases are doubling is much longer than a year and a half [Moore’s law], but it is occurring. So if we project out five years, it is fairly reasonable to expect that we will be sitting on something twice as big as what we’re sitting on today, unless it all comes crashing down and we decide we have to get rid of legacy code.
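
To make the arithmetic of that claim concrete: if the code base doubles roughly every five years and security problems grow with the square of size, each doubling quadruples the problems. The baseline numbers below are arbitrary placeholders, used only to show the shape of the curve.

```python
# Projection under the quoted assumption: problems grow as (lines of code) squared.
baseline_mloc = 60          # placeholder starting size, millions of lines
baseline_problems = 100     # placeholder count of security problems today
doubling_period = 5         # rough doubling period in years suggested in the talk

for years in (0, 5, 10, 15):
    doublings = years / doubling_period
    size = baseline_mloc * 2 ** doublings
    problems = baseline_problems * 4 ** doublings   # 2x size -> 4x problems
    print(f"year +{years:2d}: {size:6.0f} MLOC, ~{problems:7.0f} problems")
```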

If we are increasing by four times the rate of security problems, we are in serious trouble. So the questions are:

  1. “Is that true?”
  2. “If it is true, or even partially true, what do we do about it?”

There is research that tries to correlate complexity and security faults. Surprisingly, this is an underexplored area of research. There was not as much literature in this area as I was expecting, but there is some. Some of it comes from industry and some from academia.

The first one comes from academia and it does find some correlation between complexity and security. One of the interesting things about that paper is that it actually went in and looked at a number of different complexity metrics. A lot of people use source lines of code but these guys actually looked at several different ones. Some perform much better than others at predicting vulnerabilities in the experiment that was run, which is a relatively small experiment.

The next one comes from industry. These guys sell a tool that looks, in an automated way, for software vulnerabilities. By the way, I really wish these tools were much cheaper, because they do work! They actually are cool products. Do they find every bug? Absolutely not: they have very little concept of context, so I find vulnerabilities all the time in the code of students who have run Fortify, Coverity, and Klocwork against it. The thing is, these tools do find vulnerabilities, so they reduce the total count of vulnerabilities. And every time we close one of these vulnerabilities, we close a window on the attack community. So I wish these tools were cheaper and therefore more commonly used.

So, these guys [next reference] ran their experiment over a fairly large code base: they pulled a number of open source products and ran their tools against them. They found a very large correlation in their experiments between lines of code and not just the number but the rate of software vulnerabilities being discovered.

So there are some folks on the pro side.


We have some folks on the con side. There is a very interesting paper out of MIT where they went through multiple versions of OpenBSD and looked at what happened to the vulnerability rates over time, and they showed pretty much the opposite: a downward trend in vulnerabilities over time. I have a theory for why this is. This was one of the surprises as I got into the paper; I was trying to figure out what was going on there, because it seems counter to my intuition.

There are some other folks at Microsoft who would say “Look at XP and look at Vista, it’s night and day. We got religion, we got our security development lifecycle going on.” By the way, I believe they deserve a huge amount of credit for doing that. You could say it was a response to a market demand, and that’s probably true. But they did it and they have been a pretty strong force in the industry for pushing this idea of secure development forward. I’m not in any way, shape, or form an anti-Microsoft guy, although I really love my Mac. They’re showing a system (Vista) which is larger but has fewer vulnerabilities.

This is interesting: we have people looking at this problem hard and they are coming up with entirely different results. That got me thinking, as I was preparing this talk, “What is actually going on here?” I think I have a few ideas as to why we are getting these divergences.


One issue is that source lines of code is probably a very poor measure of complexity. If I have a program that never branches and is a thousand lines long, chances are it is going to be pretty easy to analyze. But if I have a program with a bunch of recursive loops that are chosen based on random values and that interact with threads that have to synchronize over a shared database, it is going to be much more difficult to get your head around. So there are lots of different ways to measure complexity (a rough sketch of one such measure follows the list below). Examples include:

  1. Size
  2. Cyclomatic
  3. Halstead metrics
  4. McCabe metrics
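
As a rough sketch of what a measure beyond line counts can look like, the snippet below approximates a McCabe-style cyclomatic complexity for Python code by counting branch points in its syntax tree. It is a simplification (real tools weigh boolean operators, comprehensions, and exception handling more carefully) and is offered only as an illustration.

```python
import ast

def cyclomatic_complexity(source):
    """Very rough McCabe-style count: 1 + number of branch points in the code."""
    tree = ast.parse(source)
    branch_nodes = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp, ast.IfExp)
    return 1 + sum(isinstance(node, branch_nodes) for node in ast.walk(tree))

sample = """
def classify(x):
    if x < 0:
        return "negative"
    for i in range(x):
        if i % 2 == 0 and i > 10:
            return "big even seen"
    return "done"
"""
print(cyclomatic_complexity(sample))   # 5: two ifs, one loop, one boolean op, plus 1
```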

I think this is an area that ought to receive more focused attention in research. We ought to spend more time trying to understand which metrics should be applied, and in what context, to understand when we’re getting past our ability to successfully control quality in a software project. I think this is a very interesting research question; I do not know how you feel about that, or whether it is something people should chew on. I spent a lot of time crawling through IEEE and ACM in preparation for this talk and, frankly, I was a little bit underwhelmed by the number of papers out there on the subject. So, for anybody working on their doctorate: anytime you find white space, that is a good thing. And this is a very focused research area. In one sense it is very easy to run an experiment; the hard part is in understanding the difference between discovering a vulnerability and understanding how many vulnerabilities are likely in a particular population of code.


So, let’s talk a little bit about size. Again, source lines of code are probably not the best measure. And by the way, some vendors publish the source lines of code for their products and some don’t. So some of these numbers are better than others; some of them are estimates. Microsoft has started publishing the numbers for Windows. So we have:

  • 40 million LOC (MLOC) in XP, depending on which version we are talking about
  • 60 MLOC in Vista, although some people would say higher

These are pretty large numbers. If you told me my car had 60 MLOC in it, I probably wouldn’t drive it. As an interesting aside, has anyone driven a BMW 7 series? Nice car, right? They had huge quality problems when they were first shipping, mainly the German version, before it came over. Anyone know the story behind this? The cars would crash. I don’t mean they would hit something: the computers in the car would crash, and at that point you could not re-enable them. They were dead; they had to be towed off the road and the computers replaced.

Prof. Gene Spafford: “I had the first series and the windows system onboard, while the car was sitting in the garage, thought it had been in an accident and disabled the car and actually blew the explosive bolt on the battery on the starter cable. So I had the distinction of having the first BMW 7 series in the Midwest towed to the dealership”

Dr. Ron Ritchey: “Don’t you find it ironic that somebody in your position, with your standing in industry, is the guy that has the BMW that blows itself up because of a software fault?”

Prof. Gene Spafford: “Somehow, it seems very appropriate… to most of my students”

Dr. Ron Ritchey: “I think that’s brilliant.”

Someone in the audience: “Was it actually a security fault? Wasn’t it actually one of the students who didn’t want you to come to class?”

Dr. Ron Ritchey: “Well, a failure mode is a failure mode.”

[more joking]


So, how fast are these things growing? Between versions, quite a bit. Windows, if we go way, way back to NT4: some numbers show around 8 MLOC; the number I got for this talk was 11, so I went with it. XP, 40 MLOC. That’s twice, and then twice again: roughly a fourfold jump. [He also discusses Mac OS X, Linux, and Internet Explorer.]

The point is that a lot of different products are getting a lot more complex. I shouldn’t say that. They are getting larger. But there is strong evidence to suggest that structurally, if you do not focus on it, larger does actually equal more complex. I do need to be a little more careful here. One of the things I’m attempting to argue is that the reason SLOC as a metric shows things getting better sometimes and worse other times is probably that SLOC is not the measure we should be focusing on. Unfortunately, it is very hard for an outside researcher, unless you work for Microsoft, or for Sun, or for one of these companies that produce these large code bases, to get access to the source code in order to run these different metrics and track their behavior over time. I think it would help a lot if these metrics were published and used more consistently, so we could correlate them with the fault data.


This is a simple experiment that we ran for the talk. We basically took a number of different versions of Windows and we went into the National Vulnerability Database (NVD), which is a wonderful program run by NIST. They track all the vulnerabilities they can get their hands on. They do analysis of each vulnerability, and they put them in this database that is accessible to anybody (to you, to anyone) to do research. This is really useful data. So we pulled the data from NVD.
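
[A hedged sketch, not from the talk: the same kind of data pull is easier to script today. The endpoint and parameters below follow NIST's current CVE REST API, which postdates this talk (the data was distributed as XML feeds back then); the search keyword is just an example.]

```python
import collections
import requests  # third-party: pip install requests

# Count NVD entries per publication year for a keyword search.
# Endpoint/parameters are NIST's CVE API 2.0; unauthenticated use is rate limited.
NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def vulns_per_year(keyword: str, pages: int = 3, page_size: int = 2000) -> dict:
    counts = collections.Counter()
    for page in range(pages):
        resp = requests.get(NVD_URL, params={
            "keywordSearch": keyword,
            "resultsPerPage": page_size,
            "startIndex": page * page_size,
        }, timeout=30)
        resp.raise_for_status()
        for item in resp.json().get("vulnerabilities", []):
            published = item["cve"]["published"]   # e.g. "2007-01-09T22:28:00.000"
            counts[int(published[:4])] += 1
    return dict(sorted(counts.items()))

# Hypothetical usage:
# print(vulns_per_year("windows vista"))
```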


This slide shows vulnerability discovery over time, and each one of the different lines is a different product. [Number of vulnerabilities per year on the vertical axis, years on the horizontal; one curve per product, with the product’s release year as the origin of the curve.] You need to take a second and look at this to realize what is going on. The bottom one, where the line jumps up and then kind of bounces along the bottom a little bit, that’s Windows 95. With 95, you saw a little bit of a bump when the product was released and then a flat line. What’s going on there? Is Windows 95 a really solid, secure product? No. But keep that question in your mind for a little bit. When we start getting to more modern operating systems and look at the curve for XP, for instance, it goes up. Notice that it spikes up initially and then it continues to spike over time. In my opinion that is an unusual curve, because that was when Microsoft was starting to develop an appreciation for security being important. If you look at Vista, which unfortunately hasn’t been out that long, we have a huge spike at the beginning. If we look at NT, we see a more classic curve: a lot of spikes early in the lifecycle of the product, and then it dwindles down. That dwindling could happen for two reasons. One, nobody cares about it because they are not running it, so vulnerability researchers go away, which I think is a factor to keep in mind. But I also think that the code base these products are sitting on is being mined out.

For the vulnerabilities that are being discovered, there is this concept of foundational code versus new code that I’ll get into. It’s really interesting that in the first two years we are seeing this spike in vulnerabilities. The count at two years for Vista was around 75 vulnerabilities. In fairness, I did not distinguish between root-level compromises and more minor issues. This is just raw numbers. You might get a much different analysis if the only thing you cared about was root level compromise. This is not an argument that there aren’t techniques to reduce your risk. Fortunately, there are. This is an argument about our ability to control the number of faults that we write into our software, that result in vulnerabilities over time.
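
[An illustrative sketch, not from the talk: the slide described above can be reproduced by re-basing each product's yearly counts to its release year. The counts below are made up; real numbers would come from an NVD query like the one earlier.]

```python
import matplotlib.pyplot as plt

# product: (release_year, {calendar_year: vulnerabilities published that year})
# The numbers here are invented purely to show the shape of the plot.
yearly_counts = {
    "Product A (older)": (1996, {1997: 20, 1998: 35, 1999: 40, 2000: 30, 2001: 15, 2002: 8}),
    "Product B (newer)": (2001, {2002: 30, 2003: 45, 2004: 60, 2005: 70, 2006: 80}),
}

for product, (release, counts) in yearly_counts.items():
    years_since_release = [year - release for year in sorted(counts)]
    values = [counts[year] for year in sorted(counts)]
    plt.plot(years_since_release, values, marker="o", label=product)

plt.xlabel("Years since release")
plt.ylabel("Vulnerabilities published per year")
plt.legend()
plt.show()
```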


[graph showing the cumulative number of vulnerabilities (vertical axis) for Windows products over time (horizontal axis)]

This is an interesting graph: it represents the cumulative number of vulnerabilities over time. The fact that you have numbers going up left to right in this graph isn’t really that scary. What you want to see is the rate of change over time [the rate of discovery of vulnerabilities] decreasing. You want to see these curves flattening as they reach the right. That’s a good thing. When you see stuff that is accelerating, when the curve is bending upward, that’s not a good thing. What that says is that you are not managing complexity well, you are not managing security well. Vista hasn’t been out long enough to figure out which way it is going to curve. XP had a pretty bad history, as did 2K, though you can see it starting to flatten out. And NT is following the curve that we would like to see: it is starting to flatten out as we go to the right.
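
[A tiny sketch of the "is it flattening?" check, with made-up numbers: look at the year-over-year increments of the cumulative count, and then at how those increments themselves are changing.]

```python
import numpy as np

# Made-up cumulative vulnerability counts for one product, year by year.
cumulative = np.array([10, 45, 95, 150, 190, 215, 228])

yearly_rate = np.diff(cumulative)     # discoveries per year
acceleration = np.diff(yearly_rate)   # change in the discovery rate

# A flattening curve shows mostly negative values here (the rate is slowing);
# sustained positive values are the worrying, upward-bending case.
print(yearly_rate)    # [35 50 55 40 25 13]
print(acceleration)   # [ 15   5 -15 -15 -12]
```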


The last thing that we want to show is cumulative vulnerabilities versus complexity. So this is the slide that attempts to answer the question: are we getting better or worse? And, again, because I do not have better data, I have to use source lines of code. As we get to more source lines of code, what is happening to the rate of vulnerabilities we’re experiencing? It is pretty clearly going up, and the rate at which it is going up is not linear in many cases. There is a curve occurring here. Whether it would flatten out if we got to 120 MLOC, we don’t know. What this data tends to suggest is that as complexity goes up, the total count of security vulnerabilities goes up at a rate greater than a linear extrapolation from the number of source lines of code.

I am not going to try to make a proof out of this; it’s a thought exercise. The thought is “Hey, this might be true.” The sum of the evidence seems to suggest that’s true.
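
[One way to make the "faster than linear" thought exercise concrete, again not from the talk: fit a power law, vulnerabilities ≈ a · (MLOC)^b, and check whether the exponent b exceeds 1. The (MLOC, vulnerability) pairs below are invented for illustration.]

```python
import numpy as np

# Invented (size in MLOC, cumulative vulnerabilities) pairs for several releases.
mloc  = np.array([11, 29, 40, 60], dtype=float)
vulns = np.array([150, 480, 900, 1800], dtype=float)

# Fit vulns ~ a * mloc**b via linear regression in log-log space:
# log(vulns) = log(a) + b * log(mloc).
b, log_a = np.polyfit(np.log(mloc), np.log(vulns), 1)
print(f"exponent b = {b:.2f}")  # b > 1 suggests super-linear growth,
                                # b = 1 linear, b < 1 sub-linear
```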

So the result suggests that complexity does matter. And it actually contradicts some of the data that has been coming out of industry saying “We’ve got religion, the SDL [Security Development Lifecycle] is working,” which, by the way, I think is probably a true statement, but it’s not true enough to completely eliminate this problem. Interestingly enough, why are some of the numbers different? Let me get into that.


There is this notion of foundational code. Foundational code is the code that is shipped in the initial release or at least exists in a particular version of a product. As that product moves forward and more things are added to it, that new code, those patches, the things that we change, those aren’t in the foundational code. Those are in the new areas. We need to be careful figuring out what is going on when we have foundational code versus new code. We can have two or three different scenarios.

In the first scenario, you have foundational code, say 10 MLOC. Version (n+1) comes out and you have 20 MLOC, and the new code is just a superset of the old code [diagram with two concentric circular areas]. In other words, all of the foundational code still exists in the product.

The other extreme you could have is when new code merely replaces foundational code, which is no longer being used in the product [diagram with a small and a large circle that barely overlap, the small one being the foundational code]. The first case is much more likely to happen, because once you have a library written… I don’t know if you maintain your own personal libraries of code; I probably still have code that I wrote when I was a senior in high school, because it just works. So foundational code tends to be sticky. There are probably lines of code running in Windows 7 that Gates wrote back in his graduate days.

The interesting thing with foundational code is that when we find problems in it, we fix them. Sometimes when we fix them we break other things, but over time the quality of foundational code improves. It has to: we’re not that bad. In fact, once we find a problem, if it’s a class of problems, we often go looking for other examples of that same class of problems in our code. So one fault in one section of the code can actually make larger chunks of the code better. So this notion of foundational code is something that we need to keep in our heads. As we go out and figure, “Hey, I’m gonna ship version (n+1),” I might want to ask myself: “Did I only change my code at the margins, to make the user experience a little bit better, so that most of the code I’m shipping has really gotten through the rigors of a lot of peer review and a lot of time to stress it?” Or, “Did I throw out everything and start from scratch, because maybe my foundational code was so bad that I needed to do that?” Maybe there are other marketing reasons to do that.

But I have to have a notion of how much of the product I am shipping is foundational code versus new, because that is going to affect the vulnerability rate that it is going to experience over time.
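
[A rough way to ground this, not from the talk: estimate how much of release n survives unchanged into release n+1 by diffing the two source trees. A minimal sketch, assuming both trees are on disk (the paths are hypothetical); renames, moved code, and generated files are ignored.]

```python
import difflib
from pathlib import Path

def carried_over_fraction(old_root: str, new_root: str, pattern: str = "*.c") -> float:
    """Rough fraction of lines in the new tree that already existed, unchanged,
    in the old tree, matching files by relative path. Purely illustrative."""
    old_root, new_root = Path(old_root), Path(new_root)
    unchanged = total_new = 0
    for new_file in new_root.rglob(pattern):
        rel = new_file.relative_to(new_root)
        new_lines = new_file.read_text(errors="ignore").splitlines()
        total_new += len(new_lines)
        old_file = old_root / rel
        if old_file.exists():
            old_lines = old_file.read_text(errors="ignore").splitlines()
            matcher = difflib.SequenceMatcher(None, old_lines, new_lines)
            unchanged += sum(block.size for block in matcher.get_matching_blocks())
    return unchanged / total_new if total_new else 0.0

# Hypothetical usage:
# print(carried_over_fraction("/src/product-v1", "/src/product-v2"))
```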


Another issue that we need to worry about: say that we are about to ship version (n+1), how many vulnerabilities are in it? Two? Twelve? Two thousand? What you have to worry about is how many people care, how many people are looking. If you have an unpopular product, a niche product, the number of people that will focus on it is probably directly correlated to the number of people that really don’t like you. What I mean by that is that people need to have a reason to target the smaller set of groups that are using that tool.

An unpopular product is not going to experience the same level of scrutiny.


Now, if you are Internet Explorer, there will be a lot of folks focused on you. And this skews our data horribly. If you are trying to compare the fault rate between IE and Opera, that’s a difficult thing to do because, while people care about the security of Opera, people in the attack community are business people. [The day before, the “fireside” conversation touched on the “professionalization of the attack community.”] They are business people who simply prefer not to follow the rules. (Of course, some of these people live in countries where the rules are written differently.) They invest in what they can get a return on; they are very good at this. And if you are going to choose where to invest your money as an attacker, what do you choose? Do you choose IE or do you choose Opera? Well, up until the point that Opera has a market share similar to IE’s, you are going to choose IE, because you get a bigger bang for your buck. So this is something that we need to keep in mind.


Another thing that we need to worry about is that vulnerability reporting itself turns out to be really difficult to get right. One problem we have now is that vulnerabilities are not all being published. There are commercial organizations and a black market that are perfectly happy to pay you to release your vulnerability only to them, and it does not get into resources like the NVD [National Vulnerability Database]. This is one of the challenges that we have. A lot of the really good vulnerabilities that are being discovered are being sat on. So we are not learning about them at the same rate that we used to. It used to be that tools like Metasploit would be up to date very quickly with new vulnerabilities, but that is slowing. It’s slowing because you can hand that vulnerability over to the black market and get 10,000 dollars, sometimes more, depending on what product is being targeted and how “cool” the attack is (“cool” being an operational term).

There are other problems: it’s very hard sometimes to [identify] two different bug reports, two different fault reports, that are actually the same vulnerability but were expressed in different ways. So the data that we collect can itself be somewhat challenged. Also, we don’t always know when these things are truly closed up. Sometimes the fix dates are fuzzy: when you get to “Patch Tuesday” and your box reboots, they might have told you about all of the vulnerabilities they just patched; they might not. The same thing happens with all of the vendors. That’s an operational security decision. So, again, not all of the data is getting to us, so you have to take some of this with a grain of salt. But, again, I am making an argument about the large numbers; I am not trying to make sure that every single fact I have is 100% correct. It’s more of a trend that I think demonstrates the concept.


There are some industries that are much more careful about their development process than others. And when you look at their fault rates, they are actually much lower than what we experience in the industry as a whole. In this case I am referring to safety-critical [systems], and aerospace is a good example of this. Now, there was an incident with one of the fly-by-wire airplanes that recently crashed in London. [Prof. Gene Spafford helped clarify the aircraft type; it was suggested to be an Airbus 340, but it was actually a Boeing 777, flight BA038.] So, there have been accidents caused by software faults in the aviation industry. But, by and large, their fault rate is pretty darn low.

In fact, there is a project that my academic advisor was involved in, which was TCAS II. TCAS II is a system where, if two airplanes are on a collision path, transponders in both aircraft negotiate with each other and give the airplanes directions to deconflict themselves. So the TCAS alert will go off and say “This airplane, dive, go left,” and the other will say “Climb, go right.” The thing is, you have to make sure that both instructions are right. There has been one accident where the TCAS system went off and announced directions, and air-traffic control said “No, the TCAS box is wrong, do what I said.” The pilot chose to follow air-traffic control and, indeed, collided. It was a pretty bad accident. The TCAS system was actually correct; it came up with the right decision. There has never been a case where TCAS has given the wrong answer in an operational setting. So the point is that if we focus on this, and train for it, and actually run complexity metrics against our code and measure it today, we can get better. Unfortunately, it is costly: there is a price to be paid. And can we, as a society, afford to pay that cost for every product that we rely on? The answer is probably no. But I get back to the point I made earlier: are we truly calculating the cost of faults in the products that we rely upon today?


There are lots of different reasons why managing complexity is hard. I’ve just listed a few of the examples. [slide would help here].

You can’t manage complexity in an ad-hoc development environment. Just throwing code together without any idea of how you are going to collaborate, how you are going to compare, how you are going to define interfaces, how you are going to architect your code, is … madness. It does not work. That’s another thing I learned from my class. Most of the assignments are written by individuals, and the quality of that code is directly tied to the quality of that individual developer. As soon as they get into project teams, things get a lot more interesting, because the faults start showing up at the integration layer, as opposed to the unit level. So when we bring development shops together, especially large development shops, if you do not have a lot of good process behind your development, you are not going to manage complexity well.


Here is the point that I wanted to get to: if you really focus on security, and on software fault management in general, you can actually write code of higher complexity without necessarily injecting a lot of security faults into it. But the problem is, you have to know how good you are before you do that. If you have really good development processes, if you have smart developers that have been trained, who are security-aware and quality-aware, writing the code. [The slide presents curves with different slopes and asymptotic behaviors.] You might be able to have (the horizontal axis here is complexity, driving up in complexity; the vertical axis is the number of faults) …

The bottom line is the mystical beast: you write larger code and actually get higher quality. [The curve decreases from left to right.] I would say that if you could be on those first two lines [a flat curve and a slow linear increase], you are doing very, very well. The idea here is that you want to measure the complexity of your code, and you want to measure the fault rate that you experience once you ship the product. This is going to take years to get good numbers for any particular organization. But once you know where you are from a maturity standpoint, and once you have decided what you are willing to pay for in terms of your development processes, you can place yourself somewhere on this graph. Based on that, you are going to have to make a decision about how far along that complexity curve you are willing to go, and how much your customers are willing to tolerate from a vulnerability perspective [faults increase with complexity]. If you are on that top line [at least quadratic growth] or worse, you do not want to be doubling the complexity of your code in the next five years, because your vulnerability count is not going to double, it is going to quadruple. You are going to be that “four times” number that we saw a couple of slides ago. If you’re instead on one of these slow-sloped lines, maybe you can double the complexity of your code and still successfully manage the vulnerabilities that you’re exposing. But unless you measure that, unless you’ve looked at it, it’s all ad hoc. It is faith versus fact.
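
[The arithmetic behind "doubling complexity quadruples the vulnerabilities" is just the exponent of whichever curve you sit on. A tiny sketch, with the exponents as illustrative stand-ins for the curves on the slide:]

```python
# If post-release faults grow roughly as faults ~ k * complexity**b,
# then doubling complexity multiplies faults by 2**b.
for b in (0.0, 1.0, 2.0):  # flat, linear, quadratic curves
    print(f"b = {b}: doubling complexity multiplies faults by {2 ** b:.0f}x")
# b = 0 -> 1x, b = 1 -> 2x, b = 2 -> 4x (the "four times" number mentioned above);
# the decreasing "mystical beast" curve would correspond to b < 0.
```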

So I am making, I hope, a plea here: as you go out and start working on real-world software projects that are going to translate into code that large numbers of people run, know where you are on this graph.

So, conclusions:

One thing is that the initial code release, the foundational code, is where most of the bugs are introduced. The lesson here is, when you’re releasing large chunks of new code, that’s when you need to focus really tightly on this question. When you’re just doing the maintenance side of things, that’s a little easier. But for that initial chunk, all the numbers show (and the research is in fact pretty consistent in this area) that large chunks of foundational code are usually where most of the action is from a vulnerability perspective.

Complexity does impact vulnerability. I strongly believe that. I don’t believe that source lines of code (SLOC) are the way to go. I believe that we need better complexity measures that correlate better with security faults. I do believe complexity drives security.

This is an important one: foundational code is increasing in size rather rapidly. So, again, not quite Moore’s law, but it appears to be doubling about every five years, at least for the large products that we are using every day. And our ability to prevent vulnerabilities from increasing at the same rate is directly related to our ability to put ourselves on the right curve [referring to the graph on the previous slide] and then potentially make some hard choices. Because I have two big options.

If I discover that I am on one of these high-slope curves, one approach is that I can invest more heavily in my software development processes. I could invest more heavily in training my developers. I could invest more heavily in measuring the complexity of my code as I write it, and keep measuring it over time. I could move closer to the kind of standards that are used in the safety-critical world. So I could put myself on a slower slope, and I could say, “OK, my customers need more complexity because that’s what is going to sell. But to do that I am going to need to invest more heavily in my software development process.”

The other option is to choose to release products that have the same complexity as version (n). And really what that means is that I am going to choose to give up some functionality that I would normally like to release. That’s a very hard decision to make. But, as consumers, we have some ability to control that as well. By the way, one of the things that the software industry has been doing over the last year and a half to two years is moving away from charging you for software. Software as a service: big move. You are not being charged for the software; you are being charged for the use of the software. I actually think this is going to be a good thing in the long term for security. Why? Because there is a profit motive for maintaining that software and keeping it up and reliable over time.

By the same token: has anybody bought Oracle or any of these large products recently? You want some new Oracle feature? Fine, they are almost not going to charge you for it, but they are going to charge you for the maintenance: 25 percent a year. That might be annoying to you, but it is changing the economics behind software development pretty dramatically, and I think in the right way: a way that starts shifting the focus to what customers actually need, as opposed to adding features just so that you will buy the next version. (Somewhat of an aside.)

The last option, which I’m afraid is the default one, is that you don’t care. You go ahead and release the product that has the increased complexity. You probably don’t know what part of the curve you’re on. And because of that, we are moving towards a world where the complexity [vulnerability?] rates are increasing as opposed to decreasing. Frankly, I think that part of the reason the vulnerability-rate curve flattens out is that there are only so many vulnerabilities you need, as an attacker, to get where you want to be. So there are a number of reasons why that curve might flatten, beyond bugs simply disappearing. So with that, I would open it up to thoughts and questions.

Links

Symposium Summary: Complexity vs. Security—Choosing the Right Curve, Morning Keynote Address

A keynote summary by Gaspar Modelo-Howard.

Dr. Ronald W. Ritchey, Booz Allen Hamilton

Ronald Ritchey is a principal at Booz Allen Hamilton, a strategy and technology consulting firm, and chief scientist for IATAC, a DoD initiative to provide authoritative cyber assurance information to the defense community. He spoke about software complexity and its relation to the number of vulnerabilities found in software.

Ritchey opened the talk by sharing his experience as a lecturer for a secure software development course he teaches at George Mason University. The objective of the course is to help students understand why the emphasis on secure programming is so important. Using the course dynamics, he provided several examples of why secure programming is not easy to achieve: much of the code analysis needed to grade his course projects involves manual evaluation, which makes the whole process long; even students with good development skills usually have vulnerabilities in their code; and some students introduce vulnerabilities by calling secure-sounding libraries in insecure ways. All these examples allowed Ritchey to formulate the following question: how hard can it be to write good, secure software?

Ritchey then moved on to discuss software complexity. He presented the following statement: software products tend toward increasing complexity over time. The reason is that to sell the next version of a program, the market expects to receive more features compared to the previous version. To add more features, more code is needed. Software is getting bigger, and therefore more complex. So in light of this scenario: Does complexity correlate to software faults? Can we manage complexity for large development projects? And should development teams explicitly limit complexity to what they have demonstrated they can manage?

Several security experts suggest that complexity increases security problems in software. Quoting Dan Geer, “Complexity is the enemy.” But Ritchey mentioned that researchers are divided on the subject. Some agree that complexity is a source of vulnerabilities in code. The latest Scan Open Source Report[1] found a strong linear correlation between source lines of code (SLOC) and the number of faults, after analyzing 55 million SLOC from 250 open source projects. Shin & Williams[2] suggest that vulnerable code is more complex than merely faulty code, based on their analysis of the Mozilla JavaScript engine.
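
As a hedged illustration of the kind of analysis behind such a finding (the numbers below are invented, not Coverity's data), the correlation computation itself is simple:

```python
from scipy.stats import pearsonr  # SciPy; numpy.corrcoef would also work

# Invented (SLOC, defect count) pairs standing in for a set of projects.
sloc   = [12_000, 85_000, 150_000, 300_000, 620_000, 1_100_000]
faults = [9, 60, 115, 240, 430, 820]

r, p_value = pearsonr(sloc, faults)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")  # r near 1.0 indicates a strong
                                                   # linear SLOC-to-fault relationship
```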

Other researchers suggest there is no clear correlation. Ozment and Schechter[3] found no correlation in their analysis of the OpenBSD operating system, which is known for its developers’ focus on security. Also, Michael Howard of Microsoft Corp. pointed out that even though Windows Vista’s SLOC count is higher than XP’s, Vista is experiencing a 50% reduction in its vulnerability count, which is attributed to Microsoft’s secure development practices.

Regardless of the relationship between complexity and security, Ritchey mentioned it is likely that SLOC is a weak metric for complexity and suggested potential replacements in terms of code structure (cyclomatic complexity, depth of inheritance), computational requirements (space, time), and code architecture (number of methods per class, lack of cohesion of methods).

Looking at different popular programs, it is clear that all are becoming larger as new versions are released. Mac OS X v10.4 included 86M SLOC and Ubuntu Linux has 121M. Browser applications also follow this trend, with Internet Explorer v6 at 7M SLOC and Firefox v3 at 5M. A considerable share of these products doubled in size between versions: Windows NT4 had more than 11M SLOC while its successor XP has 40M, and Debian v3 had 104M while v4 jumped to 283M.

In light of the different opinions and studies presented, Ritchey analyzed the Microsoft Windows operating system by counting the vulnerabilities listed in the National Vulnerability Database[4] for different versions of this popular system. No distinction was made between root-level compromises and lesser issues. From the results presented, a large number of vulnerabilities were found right after the initial release of each Windows version. This trend reflects the initial interest of researchers in finding vulnerabilities, before they move on to newer versions or different products. Ritchey also commented on the impact of the foundational (initial release) code, which seems to have a higher vulnerability rate than code added later through updates. In the cumulative vulnerability count vs. complexity (SLOC) graph shown, the curves rise, so it might be true that complexity impacts security. He cautioned, though, that these numbers must be judged carefully, since factors such as the quantity and quality of resources available to the development team, the popularity of the software, and operational and economic incentives can all affect them.

Throughout his talk, Ritchey emphasized that managing complexity is difficult. It requires a conscious cultural paradigm shift from the software development team to avoid and remove faults that lead to security vulnerabilities. And as a key point of the talk, a development team should, at a minimum, know how much complexity it can handle.

Ritchey then concluded that complexity does impact security and that the complexity found in code is increasing, at a plausible rate of 2x every 5 to 8 years. The foundational code usually contributes the majority of the vulnerabilities reported. The ability to prevent vulnerability rates from increasing is tied to the ability to either limit complexity or improve how we handle it. The speaker (who calls himself an optimist) believes that the shift from software as a product to software as a service is good for security, since it will promote sound software maintenance and move the industry away from adding features just to sell new versions.

References

  1. Coverity, Inc. Scan Open Source Report 2008. Available at http://scan.coverity.com/.
  2. Shin, Y. and Williams, L.: Is complexity really the enemy of software security? In: 4th ACM workshop on Quality of protection, pp. 47—50. ACM, New York, NY, USA.
  3. Ozment, A. and Schechter, S.: Milk or Wine: Does Software Security Improve with Age? In: 15th USENIX Security Symposium, pp. 93—104. Usenix, Berkeley, CA, USA.
  4. National Institute of Standards and Technology. National Vulnerability Database. Available at http://nvd.nist.gov.

Symposium Summary: Unsecured Economies Panel

A panel summary by Kripa Shankar.

Panel Members:

  • Karthik Kannan, Krannert School of Management, Purdue University
  • Jackie Rees, Krannert School of Management, Purdue University
  • Dmitri Alperovitch, McAfee
  • Paul Doyle, ProofSpace
  • Kevin Morgan, Arxan Technologies

Adding a new dimension to the CERIAS 10th Annual Security Symposium, five panelists with varied backgrounds came together on the final day to share their work and experiences on “Unsecured Economies: Protecting Vital IP”.

Setting the stage for this discussion was the report itself. “Together with McAfee, an international team of data protection and intellectual property experts undertook extensive research and surveyed more than 1,000 senior IT decision makers in the US, UK, Japan, China, India, Brazil and the Middle East regarding how they currently protect their companies’ digital data assets and intellectual property. A distributed network of unsecured economies has emerged with the globalization of many organizations, leaving informational assets even more at risk to theft and misuse. This report investigates the cybercrime risks in various global economies, and the need for organizations to take a more holistic approach to vulnerability management and risk mitigation in this ever-evolving global business climate.”

Karthik Kannan, Assistant Professor of Management Information Systems, CERIAS, Krannert School of Management, Purdue University, was the first to start the proceedings. He gave a brief overview of the above report, which was the product of collaborative research with Dr. Jackie Rees and Prof. Eugene Spafford. The motivation behind this work was that more and more information was becoming digital and traditional geographic boundaries were blurring. Information was being outsourced to faraway lands, and as a result protecting against leaks was becoming harder and harder. Kannan put forth questions like “How do perceptions and practices vary across economies and cultures?”, and cited an example from India where salary was not considered personal information, and was shared and discussed informally. To get answers to more such questions, a survey was devised. This survey was targeted at senior IT decision makers, Chief Information Officers and directors of various firms across the globe. The US, UK, Germany, Brazil, China and India were among the countries chosen, giving the survey the cultural diversity it needed. Adding more value to the survey was the variety of sectors: Defense, Retail, Product Development, Manufacturing and Financial Services. According to the results of the survey, the largest share of intellectual property (47%) originates from North America and Western Europe, and on average firms lost $4.6 million worth of IP last year. Kannan went on to explain how security was being perceived in developing countries, and also discussed how respondents reacted to security investment during the downturn. Statistics such as 42% of respondents saying that laid-off employees are the biggest threat caused by the economic downturn showed that insider threats were on the rise. The study put forth many case studies to show that data thefts by insiders tend to have greater financial impact, given insiders’ high level of data access, and pose an even greater financial risk to corporations.

Jackie Rees, also an Assistant Professor of Management Information Systems, CERIAS, Krannert School of Management, Purdue University, took over from where Kannan had left off and brought to light some of the stories that did not go into the report. Rees explained the reasons the various sectors store information outside the home country. While the Finance sector viewed it as safer to store data elsewhere, the IT, Product Development and Manufacturing sectors found it more efficient for the supply chain, and the Retail and Defense sectors felt better expertise was available elsewhere. Looking at how much these sectors were spending on security, 67% of the Finance industry said it was “just right”, while 30% of Retail felt it was “too little”. The other results seemed varied but consistent with our intuitions; however, all sectors seemed to agree that the major threat to deal with was their own employees. The worst impact of a breach was on the reputation of the organization. Moving on to the global scene, where geopolitical perceptions have become a reality in information security policies, Rees shared that certain countries are emerging as clear sources of threats to sensitive data. She added that, according to respondents, Pakistan is seen as a big threat by most industries, while China and Russia are also in the mix. Poor law enforcement, corruption and lack of cooperation in these economies were cited as some of the reasons for them emerging as threats.

Dmitri Alperovitch, Vice President of Threat Research, McAfee, began by expressing his concern that cybercrime is one of the headwinds hitting our economy. He pointed out that the economic downturn has resulted in less spending on security, and as a result increased vulnerabilities and laid-off employees are now the serious threats. Elaborating, he added that most of the vulnerabilities are exploited by insiders who not only know what is valuable, but also know how to get it. Looking back, a worm such as Melissa, named after the attacker’s favorite stripper, seems to have had far less malicious intent than the threats of today, where virtually all attacks are financially motivated and tied to money laundering. Citing examples, Alperovitch told us stories of an organization in Turkey that was recently caught for credit and identity theft, of members of law enforcement being kidnapped, and of how Al-Qaeda and other terrorist groups were using such tools to finance their activities. Alperovitch stressed that this threat model is not understood by the industry, and hence the industry is not well protected.

Paul Doyle, Founder, Chairman & CEO of ProofSpace, began by thanking CERIAS and congratulating the researchers at McAfee for their contributions. Adding a new perspective to the discussion, Doyle proposed that there has not been enough control over data. Data moves across the supply chain, but control does not move with it. Referring to the previous day’s discussion on cloud computing, where it was pointed out that availability is a freebie, Doyle said the big challenge here is handling the integrity of data. Stressing the point, he added that data integrity is the lowest common denominator, and the least understood area in security as well. How do we understand when a change has occurred? In the legal industry, there is a threat factor in the form of a cross-examining attorney. What gives us certainty in other industries? We have not architected our systems to handle the legal threat vector. Systems lack the controls and auditability needed for provenance and ensured integrity. The trust anchor of time has to be explored: “How do we establish the trust anchor of time, and how can confidentiality tools help in increasing reliability?” are important areas to work on.

Kevin Morgan, Vice President of Engineering, Arxan Technologies, began with an insight into how crime evolves in perfect synchrony with the socio-economic system. Every single business record is accessible in the world of global networking, and access enables crime. Sealing enterprise perimeters has failed, as there is no perimeter any more. Thousands and thousands of nodes execute business activity, and most of those nodes (like laptops and smart phones) are mobile, which in turn means that data is mobile and perimeter-less. Boundary protection is not the answer. We have to assume that criminals have access to enterprise data and applications. Assets, data and applications must be intrinsically secure, and the keys protecting them must be secure too. Technology can help a great deal in raising the bar for criminals, and the recent trends are really encouraging.

After the highly informative presentations, the panel opened up for questions for the next hour. A glimpse of the session can be found in the transcript of the Q&A session below.

Q&A Session: A transcript snapshot

Q: We are in the Midwest; no one is going to come after us. What should I, as a security manager, consider doing? How do you counter the perception that organizations in “remote” locations are not subject to attack?

  • Alperovitch: You are in cyberspace, and if you have valuable information you will be targeted. Data manipulation is what one has to worry about the most.
  • Morgan: Form Red teams, perform penetration tests and share the results with the company.
  • Doyle: Employ allies and make sure you are litigation ready. Build a ROI model and lower total cost of litigation.

Q: CEOs consider cutting costs. They cut bodies. One of the biggest threats to security is letting the people go. It’s a paradox. How do we handle this?

  • Kannan: We have not been able to put a dollar value to loss of information. Lawrence Livermore National Lab has a paper on this issue which might be of interest to you.
  • Rees: Try to turn it into a way where you can manage information better by adding more controls.

Q: How do we stress our stand on why compliance is important?

  • Doyle: One of our flaws as a professional community is that we are bad at formulating business cases. We have to take a leaf out of the book of Kevin (of Cisco), who formulates security challenges as business proposals. To quote an analogy: at the end of the day it is the brakes and the suspension that decide the maximum safe speed of an automobile, not the engine or the aerodynamics. The question is: how fast can we go safely? Hence compliance becomes important.

Q: Where do we go from here to find out how data is actually being protected?

  • Kannan: Economics and behavioral issues are more important for information security. We need to define these into information security models.
  • Rees: Governance structure of information must also be studied.
  • Alperovitch: The study has put forth those who may be impacted by the economy. We need to expose them to the problem. Besides we also need to help law enforcement get information from the private sector as the laws are not in place. We also need to figure out a way to motivate companies to share security information and threats with the community.
  • Doyle: Stop thinking about security and start thinking about risk and risk management. Model return-reward proposition in terms of risk.
  • Morgan: We need to step up as both developers and consumers.

Q: The $4.6 million estimate. How was it estimated?

  • Rees: We did a rolling average across the respondents, keeping in mind the assumption that people underestimate problems.

Q: Was IP so integral to the business model of any company that its loss caused the company to go bust?

  • Rees: We did not come across any direct examples of firms that tanked and fell because of IP loss.

Q: Could you suggest new processes to enforce security of data?

  • Doyle: We need to find ways from the other side. If we cannot stop them, how do we restrict and penalize them using the law?

Q: The infrastructure at Purdue and in the US has been around for a long time, and we have adapted and evolved to newer technologies. However, other older organizations and developing countries still run older systems, and that actually seems to be helping them, as they need to be less bothered with new-age threats. What’s your take on that?

  • Kannan: True. We spoke to the CISO of a company in India. His issues were much less as it was a company with legacy systems.
  • Alperovitch: There is a paradigm shift in the industry. Security is now becoming a business enabler.

Symposium Summary: Distinguished Lecture

A summary written by Nabeel Mohamed.

The main focus of the talk was to highlight the need for “information-centric security” over the existing infrastructure-centric security. It was an interesting talk, since John Thompson backed his thesis with real statistics.

Following are some of the trends he pointed out from their research:

  • Explosive growth of information: Digital content in organizations grows by about 50% every year.
  • Most of the confidential/sensitive information or trade secrets of companies are in the form of unstructured data such as emails, messages, blogs, etc.
  • The growth of malicious code in the marketplace outpaces that of legitimate code.
  • Attackers have found ways to get around network protection and get at sensitive/confidential information, leaving hardly any trace most of the time. Attackers have also changed their motivation; they no longer seek big press, and they want to hide every possible trace of their attacks.
  • The threat landscape has changed markedly over the last ten years. Ten years ago there were only about five viruses/malicious attacks a day, but now it’s a staggering 15,000 a day.
  • Research conducted by the Ponemon Institute asked laid-off employees if they left with something from the company, and 60% said yes. John thinks that the figure could be even higher, as some employees may not be willing to disclose it.

These statistics show that data is becoming more important than ever before. Given the above trends, he argued that protecting infrastructure alone is not sufficient and that a shift in the paradigm of computing and security is essential. We need to change the focus from infrastructure to information.

He identified three elements in the new paradigm:

  1. It should be risk-based.
  2. It should be information centric.
  3. It should be well managed, over a well-managed infrastructure.

John advocated adopting a risk-based, policy-based approach to managing data. A typical organization today has strong policies on how to manage the infrastructure, but not an equally strong set of policies to manage the information that is so critical to the business itself. He pointed out that it is high time organizations assess the risk of losing or leaking the different types of information they hold and devise policies accordingly. We need to quantify the risk and protect the data that could cause the most damage if compromised. Identifying what we most want to protect is important, as we cannot protect everything adequately.

While the risk assessment should be information-centric, security cannot be achieved through encryption alone. Encryption can certainly help protect data, but what organizations need is a holistic approach in which management (of data, keys, configurations, patches, and so on) is a critical aspect.

He argued that it is impossible to secure systems without knowledge of their content and without good policies on which to base organizational decisions. He reiterated that “you cannot secure what you do not manage”. To reinforce the claim, he pointed out that 90% of attacks could have been prevented had the systems that came under attack been managed well (the Slammer attack, for example). Management involves having proper configurations and applying critical updates, which most of the vulnerable organizations failed to do. In short, well-managed systems could mitigate many of the attacks.

Towards the end of his talk, he shared his views for better security in the future. He predicted that “reputation-based security” solutions to mitigate threats would augment current signature-based anti-virus mechanisms. In his opinion, reputation-based security produces a much more trusted environment by knowing users’ past actions. He argued that this approach would not create privacy issues if we change how we define privacy and what is sensitive in an appropriate way.

He raised the interesting question: “Do we have a society that is sensitive to and understands what security is all about?” He insisted that unless we address the societal and social issues related to security, technology alone is not sufficient to protect our systems. We need to create a society aware of security and create an environment for students to learn computing “safely”. This will lead us to embed safe computing into day-to-day life. He called for a national approach to security and law enforcement, saying that it is utterly inappropriate to handle data breach notification on a state-by-state basis. He also called for action to create an information-based economy where all entities share information about attacks, and to take an information-centric approach to security. He mentioned that Symantec is already sharing threat information with other companies, but federal agencies are hardly sharing any threat information. We need greater collaboration between public and private partnerships.

Symposium Summary: Fireside Chat

A panel summary by Utsav Mittal.

Panel Members:

  • Eugene H. Spafford, CERIAS
  • Ron Ritchey, IATAC
  • John Thompson, Symantec

It’s an enlightening experience to listen to some of the infosec industry’s most respected and seasoned professionals sitting around a table to discuss information security.

This time it was Eugene Spafford, John Thompson and Ron Ritchey. The venue was the Lawson Computer Science Building. The event was a fireside chat, part of the CERIAS 10th Annual Security Symposium.

Eugene Spafford started the talk by stating that security is a continuous process, not a goal. He compared security with naval patrolling. Spaf said that security is all about managing and reducing risks on a continuous basis. According to him, a lot of stress is placed nowadays on data leakage. This is undoubtedly one of the major concerns today, but it should not be the only concern. When people focus more on data leakage than on addressing the core of the problem, which is the insecure design of systems, they get attacked, which gives rise to an array of problems. He further added that the losses from cyber attacks are comparable to the losses incurred in Hurricane Katrina, yet not much is being done to address this problem. This is partly because losses from cyber attacks, except for a few major ones, occur in small amounts that aggregate to a huge sum.

With regards to the recent economic downturn, Spaf commented that many companies are cutting down on the budget of security, which is a huge mistake. According to Spaf, security is an invisible but vital function, whose real presence and importance is not felt unless an attack occurs and the assets are not protected.

Ron Ritchey stressed the issues of data and information theft. He said that the American economy is primarily a design-based economy. Many cutting-edge products are researched and designed in the US by American companies, and then manufactured in China, India and other countries. The fact that the US is a design economy underscores the importance of information security for US companies and the need to protect their intellectual property and other information assets. He said that attacks are getting more sophisticated and targeted, and malware is being carefully socially engineered. He also pointed out that there is a need to move from signature-based malware detection to behavior-based detection.

John Thompson arrived late as his jet was not allowed to land at the Purdue airport due to high winds. John introduced himself, in a cool way, as the CEO of a ‘little’ company named Symantec in Cupertino. Symantec is a global leader in providing security, storage and systems management solutions; it is one of the world’s largest software companies, with more than 17,500 employees in more than 40 countries.

John gave some very interesting statistics about the information security and attack scene these days. He said that about 10 years ago, when he joined Symantec, the company received about five new attack signatures each day. Currently, this number stands at about 15,000 new signatures each day, with an average attack affecting only 15 machines. He further added that the attack vectors change every 18-24 months, and new techniques and technologies are being used extensively by criminals to come up with new and challenging attacks. He mentioned that attacks today are highly targeted, intelligently socially engineered, more focused on covertly stealing information from a victim’s computer, and silent in covering their tracks. He admitted that due to the increasing sophistication and complexity of attacks, it is getting more difficult to rely solely on signature-based attack detection, and he stressed the importance of behavior-based detection techniques. With regards to the preparedness of government and law enforcement, he said that law enforcement is not skilled enough to deal with these kinds of cyber attacks. He said that in the physical world people have natural instincts against danger. This instinct needs to be developed for the cyber world, which can be just as dangerous, if not more so.