Keynote: Howard Schmidt (Keynote Summary)

Howard Schmidt, Special Assistant to the President and Senior Director for Cyber Security, Office of the U.S. President

Morning Keynote Address, April 4, 2012.

Summary by Keith Watson

In the introduction, Professor Spafford mentioned many of the roles that Howard Schmidt has had over his many years in the field. He specifically highlighted Mr. Schmidt’s service to the nation.

He also indicated that things in information security are not necessarily better now than when Howard last attended the CERIAS Symposium in 2004, though that was not Howard’s fault.

Howard Schmidt began his keynote address by thanking the staff and faculty associated with CERIAS for their efforts. Mr. Schmidt disagreed with Spafford’s opening comment about things not being better since his last visit. “The system works,” he said. It is fraught with issues that we have to manage, but there are many things we can do online today that we could not do twenty years ago. We have bigger threats and more vulnerabilities due to increased accessibility, but the system works, and we have to make it work better.

In 2008 when then Senator Obama visited Purdue, he talked about emerging technologies and cybersecurity. He stated, “Every American depends — directly or indirectly — on our system of information networks. They are increasingly the backbone of our economy and our infrastructure; our national security and our personal well-being.” We take technology infrastructure for granted, and we must ensure that it continues to be available.

One of the issues discussed in the government today is reducing the likelihood that new generations of victims are created. We need cybercrime prevention; then law enforcement agencies have a better opportunity to scale up and deal with the issue. Currently, law enforcement can only focus on the most egregious crimes. The FBI is moving cybercrime up its priority list and is looking at cybercrime internationally.

An estimated $8 trillion was exchanged over wired and wireless networks last year. Online shopping increased even in a down economy.

The President has promised to make cyber infrastructure a strategic national asset. He has called on all of us to look ahead and design and build a stronger infrastructure.

Howard related a story about writing code on a TI-99/4A to aim his antenna for Earth-Moon-Earth (EME) communications in his ham radio hobby. He sat down with expert developers to talk about buffer overrun issues. The question the developers had was, “Why would anyone do that?” Because they can.

The President created the Office of the Cybersecurity Coordinator in a unique way. The Office is part of both the National Security Council and the National Economic Council. Mr. Schmidt has dual roles: addressing security issues and ensuring that the system remains open. If specific expertise is needed from other government agencies, those experts can be brought in to assist. Setting strategy and policy is a major effort of the Office. It is also responsible for execution.

The FBI Director has identified the primary and high-level actors in the cyber world:

  1. Foreign intelligence services. They are no longer breaking into buildings and doing surveillance. We have to protect our cyber infrastructure from them.

  2. Terrorist groups. They are interested in critical infrastructure and how to attack it.

  3. Organized crime. They see cyberspace as a business opportunity. Some hacker groups are loosely organized but working together to disrupt the infrastructure.

Mr. Schmidt outlined several programs and initiatives of his office.

Questions/Answers:

Question: What is your vision for Continuous Monitoring?

Answer: It is possible to be FISMA-compliant and still insecure. The creation of the reports required by the law takes time and effort away from actually protecting the infrastructure. The goal now is to use continuous monitoring to deal with issues in real time.

Question: What are the challenges in getting service providers to allow third-party identifiers?

Answer: We hope that there are multiple drivers for federated IDs. One is a market driver for business: companies can reduce costs and lower risks by accepting trusted identifiers. We hope that innovators address some of the technical challenges. Finally, as consumers, we have to demand better IDs.

Question: Are we at the point where we need to create a new agency responsible for cybersecurity?

Answer: No, it is not necessary. What we need is coordination, not another branch of government. The Office of the Cybersecurity Coordinator is the right model to coordinate activities across government.

Security Fireside Chat (Summary)

Summary by Christine Task.

The fireside chat was an open discussion among several prominent figures in the security world. The conversation covered a broad range of topics, with each participant contributing a unique insight and perspective. The summary below collects just the main points for easy review.

Present were (in seating order):

Dr. J.R. Rao, Manager of the Internet Security Group at IBM Research (abbreviated below as IBM)

Howard A. Schmidt, Cybersecurity Coordinator of the Obama Administration, Office of the U.S. President (abbreviated below as GOV)

Dr. Eugene Spafford, Executive Director of Purdue CERIAS (abbreviated below as SPAF)

Sam Curry, Chief Technology Officer of the Identity and Data Protection business unit and Chief Technologist for RSA, The Security Division of EMC (abbreviated below as RSA)

The first question addressed was: Why do commercial products still fail to adopt basic security practices (such as separation of privilege, limited connectivity, and minimization of function), even though their importance and efficacy have been well understood for decades?

RSA: Product designers aren’t security experts; security is usually added as an afterthought and considered an interruption to progress. Although there’s some market pressure for more secure products, there is incredible pressure to be first to release a new product, and the long-term outlook gets forgotten. If contracts included penalties for developers who shipped obviously vulnerable products or did not properly integrate basic security measures, the balance might be better.

IBM: Security is definitely an afterthought in most product design. On the other end of the scale, though, high-assurance ‘ivory tower’ systems exist, but are incredibly expensive to build. One aspect of convincing commercial interests to integrate security policies into their development is finding a good balance among what is effective, efficient, and economically feasible. Currently, companies with web-facing applications who are concerned about security often use off-the-shelf products to perform source-code scans. Unfortunately, these aren’t as helpful as they might be, even as afterthoughts: they often produce a flood of output, with little to indicate which faults are actually important, and as a result much of their advice may be disregarded.

SPAF: Some fixes are obvious and simple, like languages which prevent buffer overflows. Why aren’t they in use? The vast majority of people don’t make use of the explosion of features in their gadgets: why don’t product developers practice minimization of features? The problem is that there is basically no liability for security flaws. Potentially, we need to consider penalties for software companies whose security performance is extremely negligent.
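Spafford’s point about safer languages can be seen in a minimal sketch (Python here, purely as one example of a bounds-checked language, not anything from the talk): the classic off-by-one write that causes a buffer overrun in C fails loudly instead of silently corrupting memory.

```python
# An 8-element buffer, and an off-by-one write one slot past the end --
# the classic mistake behind many buffer overruns.
buffer = [0] * 8

try:
    buffer[8] = 42  # index 8 is out of bounds for indices 0..7
except IndexError:
    # A bounds-checked runtime rejects the write instead of letting it
    # overwrite adjacent memory, turning a vulnerability into an
    # ordinary, visible bug.
    print("out-of-bounds write rejected")
```

In a memory-unsafe language the same store would succeed and scribble over whatever happened to live next to the buffer, which is exactly the class of flaw such languages eliminate by construction.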

GOV: Companies aren’t completely unaware of security concerns; delegation of privileges is much more widespread than it used to be. The difficulty may be that companies don’t understand which security policies are applicable to their products (“it’s secure, it has a password!”). Customers need to demand secure products, or else there’s no market pressure for companies to improve their records. A concern about government regulations, managing security from the top down, is that introducing lawyers limits innovation, and we can’t afford to have an economic disadvantage in the global economy. However, the “Power of Procurement” is a very valuable tool. The government penalizes its contractors/suppliers for obvious security flaws in the products they provide, and this forces higher standards to be adopted within those companies, which helps the standards spread out into the technology ecosystem. There has been visible progress in the past decade.

Next, Spafford asked about the possible worst-case consequences of our slow adoption of good security practices: Is a catastrophic event, a “cyber-security Pearl Harbor,” possible?

RSA: Every new technology brings concerns like this, and generally we prepare and the threat doesn’t come to pass. Of more concern are less glamorous, slower threats, which we are not defending against: like the involvement of organized crime in technical spheres.

GOV: We actually have been developing tools for a long time, within the DOD, to protect against catastrophic attacks, and we’re working on making those tools available for law enforcement and civilians now as well. What’s more difficult is protecting against these more long-term, subtle threats. Law enforcement has been trained to do computer forensics on localized, physical computers. How do they adapt when an intrusion investigation can easily become a global affair?

IBM: One of these subtle threats is intellectual property loss. It doesn’t take much to remove a company’s competitive edge, and that loss can eventually destroy the company. The FBI has been helpful in tracking IP threats throughout the world, but there are clearly still problems. Commercial tech developers are extremely worried about the security measures which protect their IP, and this may be a good vector for encouraging them to adopt better security practices generally.

This was followed by a slightly more personal question from Dr. Spafford, “What keeps Dr. Rao (IBM) up at night?”

IBM: Intellectual property loss; existing products aren’t sufficient protection. How quickly can an effective approach be developed and adopted?

GOV: A similar issue: The government was able to greatly reduce global issues with money laundering through diplomacy with other countries who were blindly enabling it for their own personal or national benefit. We’re hoping to form a similar global coalition to reduce IP theft: an agreement such that if someone steals a product you’ve invested deeply in developing, and pushes their version out the door before you, there will be sanctions. There won’t be a market for the pirated product. Also, note that although the CEOs of companies may be concerned about IP protection, the structure of companies often leaves no one actually in charge of managing it: auditors are concerned about financial books rather than security.

RSA: In fact, the CFOs and audit committees have their own language, and aren’t likely to learn a separate language for security. For example, the word “risk” means very different things to the two groups. If security professionals want to be successful, they need to learn to speak business language; they can’t allow themselves to be separated into a pool of technology talent and kept away from the overall workings of the company.

This prompted the general question: How does a company or a government manage security concerns in a multi-national environment?

GOV: We work diplomatically with other countries on our common cyber-security issues, and our common desire to be able to safely support multi-national companies who have concerns about IP protection.

IBM: We sell defensive products in 176 countries, never products to be used for offensive purposes. We never align with any government against any other.

RSA: We’re in an interesting situation as a multi-national company: we actually work with many different governments and thus have personnel with security clearances in a variety of countries. We use a “pools of trust” system to make certain that sensitive information stays segregated within the company.

The speakers then responded to three questions which had been previously submitted by audience members:

How do we deal with the fact that the critical infrastructure we need to protect is often owned by a variety of small regional businesses?

GOV: Again, the power of procurement allows the government to help encourage high standards of security for the products which these smaller companies use.

IBM: The national labs and IBM have worked together with regional utilities to roll out an extremely secure, well-designed smart grid system. This is another way in which private-public partnerships can improve security generally.

SPAF: However, the government can’t cover every small utility. Really effective new security is often prohibitively expensive for these small businesses. We need to find ways for them to break needed improvements into a sequence of small, gradual changes and amortize the costs over time.

RSA: Even large utilities have very small IT departments, and often a large age and cultural gap between the old staff and the new tech experts. The two groups don’t communicate well, and incredibly valuable knowledge is being lost as people retire. This endangers the security of the entire system. Is there any way we can change the model/organization of these institutions to prevent this?

Will users, rather than the corporations they deal with, ever have direct control over their own privacy?

GOV: This is very important, and it needs to happen sooner rather than later. Unfortunately, we’ve already gone a long way down the wrong path, and it may be very difficult to get back.

Nine years ago, Dr. Spafford collaborated on a list of the [Grand Challenges for Cyber-security](https://www.cerias.purdue.edu/assets/pdf/bibtex_archive/01264859.pdf). What progress has been made?

SPAF: Progress has been made against epidemic attacks, such as flash worms. Now we’re dealing with slower penetration by bot-nets, and we’re getting better at fighting those as well. There is considerable work left to be done, in general, though.

IBM: There is industry inertia, but active work is being done on these.

RSA: These are very useful rallying points, things we should continue to work on. Curry recalled his favorite question ever, from a German reporter at an RSA conference: “When will we solve this security thing?” It’s all, always, a work in progress. Right now, it’s very important that existing security is made effortless for users, so it’s commonly adopted.

GOV: We actually have a hard time comparing the costs and prosecution rates of these cyber-attacks to the costs of physical attacks, such as burglaries. Only 3% of cyber-attacks were prosecuted (in a recent year), but what percentage of burglaries are prosecuted? What’s the relative cost? In general, we need to educate people about simple ways of defending themselves.

In conclusion:

SPAF: To achieve widespread adoption, security needs to be made effortless and economical. We can’t hope to succeed by telling people what not to do. We need to build security into products so that no choice is necessary: so users aren’t even aware it’s there.

Panel #3: Securing Mobile Devices (Panel Summary)

Tuesday, April 3, 2012

Panel Members:

  • Saurabh Bagchi, Purdue
  • David Keppler, MITRE
  • Jeremy Rasmussen, CACI

Panel summary by Robert Winkworth.

The panel was moderated by Keith Watson, CERIAS, Purdue University.

In light of its unprecedented growth, wireless mobile communications remains a major focus of security research. The stated purpose of this panel was to address the challenges in securing data and processing, limiting communication to designated parties, protecting sensitive data from loss of device, and handling new classes of malware.

Professor Bagchi opens the discussion with these key points and predictions:

  • 3G routing often circumvents institutional barriers and filters.
  • Information is leaking from one application to another within the device.
  • More anti-malware software packages are sold now. This will increase.
  • Virulent code will spread by near-field technologies, such as Bluetooth.
  • It is becoming more lucrative to commit unauthorized remote monitoring.
  • Encryption for mobile services will improve in the future.
  • Behavior-based detection will become more popular.
  • New features are often rushed to market before being functionally secure.

MITRE’s David Keppler joins the discussion with these thoughts:

  • Mobile devices are single-user devices, and are highly personalized.
  • On the device, we are separating apps rather than users.
  • Contacts, social network data, banking info, etc. are stored in mobiles.
  • Locking down devices can reduce productivity.
  • Users like to have one device for many different actions.
  • A single compromised device can enable a threat against many network users.
  • Mobiles are “always connected”, and that brings security implications.

CACI’s Jeremy Rasmussen contributes:

  • DoD facilities are still trying to prevent mobile activity on premises.
  • New proposals would extend popular connectedness to government workers.
  • Policy is lagging behind what technology provides.
  • Everything needed, even for NSA standards, is available as free software.
  • Vouching for a unit is vouching for every combination of apps it can run.
  • The US government struggles greatly to keep pace with technology.

The audience submits questions:

Attendant: “What will it take to make mobiles as secure as desktops?”

David: “I would argue that the vulnerabilities of a handheld are actually no worse than those of a laptop. A proper risk assessment should be done for each. Expect that exploits will always be possible, but invest for them accordingly.”

Saurabh: “Protocols and architecture need to be standardized. This will be helpful to developers. And we need openness in standards.”

Attendant: “Does it seem inevitable that Android will allow lower-level access to the hardware in the future?”

Jeremy: “Yes, and that can benefit the user, who really should unlock the device and install a personalized solution. We must have root access to the phone to get better security. An app cannot protect the user from system abuses that occur at a lower level than the app.”

David: “I agree. What we must do is break the current security in order to rebuild it in a more robust way. There are also some underlying market issues at work here. Commercial products are unfortunately vendor-specific, but need to be standardized. How can this happen where there is DRM?”

Attendant: “What are the key differences in user experience between desktop and mobile?”

Saurabh: “Energy consumption, bandwidth, and limitations in the user interface.”

David: “Users trust mobiles MORE rather than less than their desktops. They have not grasped the magnitude of the mobile threat.”

Keith: “What advice would you have for CSOs and CIOs as they face these threats?”

Saurabh: “CSOs and CIOs don’t ask me for advice! [laughter] What I would recommend, though, is strong isolation between applications, and a means to certify them before loading.”

David: “There are some utilities available that employers can have users run if they’re going to be on a private network. Some risk is inevitable, though. There is no perfect solution.”

Jeremy: “Yes—NAC (Network Access Control) used to be required for user devices before they’d be allowed on a corporate network. We need that for mobiles, but I don’t see how it’s possible; it can be circumvented so easily.”

Panel #2: Big Data Analytics (Panel Summary)

Tuesday, April 3, 2012

Panel Members:

  • William S. Cleveland, Purdue University
  • Marc Brooks, MITRE Corporation
  • Jamie Van Randwyk, Sandia National Laboratories
  • Alok R. Chaturvedi, Professor, Purdue University

Panel Summary by Nabeel Mohamed

The panel was moderated by Joel Rasmus, CERIAS, Purdue University.

A quick review on Big Data:

Big Data represents a new era in data analysis, in which the volume of data to analyze is so large that traditional database technologies and algorithms cannot handle it. The size of the data sets that need to be collected, stored, shared, analyzed, and/or visualized continues to grow, as information is produced at an unprecedented rate by ubiquitous mobile devices, RFID technologies, sensor networks, web logs, surveillance records, search queries, social networks, and so on. Increasing volume is only one challenge of big data; there are others. In fact, Gartner analyst Doug Laney defined the big data challenges/opportunities as the “3V’s”:

  1. Volume - the increasing volume of data, as mentioned above.

  2. Velocity - the time constraints on collecting, processing, and using the data. A traditional algorithm that can process a small data set quickly may take days to process a large one; for real-time needs such as national security, surveillance, and health care, taking days is no longer good enough.

  3. Variety - the increasing array of data types that need to be handled: all kinds of structured and unstructured data, including audio, video, images, transaction logs, web logs, web pages, emails, text messages, and so on.

Panel discussion:

First, each of the panelists gave their perspective and their experience with big data analytics.

William S. Cleveland, Shanti S. Gupta Professor of Statistics, Purdue University, after mentioning the challenges his research group has faced in handling large volumes of data, described their divide and recombine (D&R) approach, which parallelizes processing by dividing the data into small subsets and applying traditional numeric and visualization algorithms to each subset, exploiting the parallelism exhibited by the data itself. Cleveland described their tool, RHIPE, built on this concept; it is available to the public at www.rhipe.org. RHIPE is a merger of R, a free statistical analysis package, and Apache Hadoop, an open-source MapReduce framework.
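As a minimal illustration of the D&R idea (a Python sketch, not the actual RHIPE implementation; the sample mean serves as a toy analytic):

```python
from statistics import mean

def divide(data, num_subsets):
    """Divide step: split the data into roughly equal subsets."""
    return [data[i::num_subsets] for i in range(num_subsets)]

def divide_and_recombine(data, analytic, num_subsets=4):
    """Apply `analytic` to each subset independently, then recombine.

    In RHIPE the per-subset computations run as parallel Hadoop map
    tasks; here they run sequentially for clarity.
    """
    per_subset = [analytic(subset) for subset in divide(data, num_subsets)]
    return mean(per_subset)  # recombine step: average the subset results

data = list(range(1, 101))
print(divide_and_recombine(data, mean))  # 50.5 -- matches mean(data)
```

For analytics like the mean the recombination is exact; for more complex statistics the recombined estimate is an approximation, a trade-off D&R accepts in exchange for embarrassingly parallel processing.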

Marc Brooks, Lead Information Security Researcher, MITRE Corporation, focused mainly on anomaly detection in large data sets. He raised the question of how one can detect an anomaly without sufficient test data sets; in his opinion, creating such data sets is expensive. For that reason, Brooks sees a trend from supervised learning toward unsupervised learning, such as clustering. Most big data sources provide large amounts of unstructured data. We know how to handle structured data well, since we already have a schema for it; Brooks asked what the effective ways of handling unstructured data are, and he thinks there should be a fundamental change in the way we model such data. He also touched on what it takes to be a data scientist, which is becoming an attractive career path these days. He thinks the skill set is a mixture of software engineering, statistics, and distributed systems.
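To make the unsupervised idea concrete, here is a minimal sketch (plain Python; a robust-statistics stand-in for full clustering, with the data and threshold invented for illustration). No labeled attack data is needed: “normal” is estimated from the data itself, and points far from it are flagged.

```python
from statistics import median

def detect_anomalies(values, threshold=3.5):
    """Flag values whose modified z-score exceeds `threshold`.

    Unsupervised: normal behavior is estimated from the data itself.
    Median/MAD are used instead of mean/stdev so that large outliers
    cannot mask their own presence.
    """
    med = median(values)
    mad = median(abs(v - med) for v in values)  # median absolute deviation
    if mad == 0:
        return []  # no spread: nothing stands out
    return [v for v in values if 0.6745 * abs(v - med) / mad > threshold]

# Mostly typical request sizes, plus one wildly different value.
sizes = [200, 210, 195, 205, 198, 202, 207, 199, 5000]
print(detect_anomalies(sizes))  # [5000]
```

A real clustering approach (e.g., k-means over many features) generalizes the same principle: model the bulk of the data, then flag whatever sits far from every cluster.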

Jamie Van Randwyk, Technical R&D Manager, Sandia National Laboratories, started off by noting that “big data” is a relative term: for different organizations it means different sizes and complexities, especially regarding the volume of data that qualifies as big. Van Randwyk mentioned that while commercial entities such as Amazon, Microsoft, and Rackspace handle the big data needs of industry, Sandia mainly focuses on US government agencies. He observed that we use Hadoop and other technologies to perform analytics and visualization on large volumes of data, yet we still don’t know how to secure the data in these big data environments. Van Randwyk and his team deal mainly with cyber data, which is mostly unstructured, and he pointed out the challenge of analyzing large volumes of unstructured data due to the lack of a schema.

Alok R. Chaturvedi, Professor, Krannert Graduate School of Management, Purdue University, began with the idea that one has to collect as much information as possible from multiple sources and make the actual information stand out. Chaturvedi briefly explained his group’s big data analytics work involving real-time monitoring of multiple markets and multiple assets; a real-world challenge is that data is often inconsistent and fragmented. They build behavioral models based on data feeds from sensors, news feeds, surveys, and political, economic, and social channels, and based on these models they perform macro market assessment by region in order to spot investment opportunities. Chaturvedi thinks that big data analytics will continue to play a key role in such analysis.

After the initial perspective short talks by each panelist, the floor was open to the questions from the audience.

Q: Is behavioral modelling effective? What are the challenges involved?

A: The panelists identified two ways in which behavior can change: adversarial activities, and perturbation of the data or the business itself. It is important to understand these two aspects and build behavioral models accordingly. If a behavioral model does not keep up with the changes, it will be less effective at identifying the behaviors one is looking for. Some of the challenges involved are deciding what metrics to use, defining those metrics, understanding the context (data perturbation vs. malicious activity), and keeping the model updated. It is also important to attribute the correct causality to an event; for example, 9/11 was due to a security failure, not anything else.

Q: Do you need to have some expertise in the field in order to better utilize big data technologies to identify anomalies?

A: Yes. Big data analytics will point to some red flags, but you need to be knowledgeable in the subject matter in order to dig deeper and get more information.

Q: Is it practical to do host-based modeling using big data technologies?

A: Yes, but you have to restrict your domain of monitoring. For example, it may not be practical to do host-based monitoring for the whole Purdue network.

Q: How do you do packet-level monitoring if the data is encrypted?

A: Cleveland is of the view that one cannot do effective packet-level monitoring if the data is encrypted. In their work, they assume that the packets are transmitted in cleartext.

Q: To what extent has intelligent, automated response been worked out? Can it be done without human intervention?

A: Even with big data analytics, there will be false positives. Therefore, we still need a human in the loop in order to pinpoint incidents accurately. These people should have backgrounds in computer security, algorithms, analysis, etc.

A challenge with current big data technologies like Hadoop is that it is still difficult to do near-real-time analysis.

Q: (panel to audience) What are your big data problems?

A: (An audience member) Our problem is scalability. There is nothing off the shelf that we can buy to meet our need; we have to put a lot of effort into building these systems by putting various components together. Instead of spending time defending against attacks, we have to spend a lot of time on operational tasks.

Q: Is it better to have a new framework for big data for scientific data?

A: It is not the science per se that you have to look at; you have to look at the complexity and size of the data in order to decide. From an operational perspective, a definition or framework may not be important, but from a marketing perspective it may be; for example, defining the size of the data set could be useful.

Q: We want to manage EHRs (electronic health records) for 60 million people. Can these people be re-identified using big data technologies?

A: Even EHR data conforming to safe-harbor rules, with the 18-19 specified elements removed, may be re-identified. Safe-harbor rules are not sufficient, nor are they necessary: they will protect most people, but not all, and you can protect privacy even without safe harbor. This is a very challenging problem, and CERIAS has an ongoing research project on it.

Q: Have you seen adversaries intentionally trying to manipulate big data so that they go undetected? Specifically, have you seen adversaries that damage the system slowly to stay below the detection threshold, or that damage it very quickly to overwhelm the system?

A: We have seen that adversaries understand your protocols, whether your packets are encrypted or not, etc., so that they can behave like legitimate users. I have heard anecdotal stories of data manipulation in banks and other financial institutions, but can’t point to any specific incident.

Q: Oftentimes we have to reduce the scope when many parameters are to be analyzed, due to the sheer volume of data. How do you ensure that you still detect an anomaly (no false negatives)?

A: You have to analyze all the data; otherwise, the reduction may result in false negatives.

Panel #1: Securing SCADA Systems (Panel Summary)

Tuesday, April 3, 2012

Panel Members:

  • Hal Aldridge - Sypris Electronics
  • William Atkins - Sandia National Laboratories
  • Jason Holcomb - Lockheed Martin, Energy and Cyber Services
  • Steven Parker - Energy Sector Security Consortium
  • Lefteri Tsoukalas - Purdue University

Panel Summary by Matt Levendoski

The panel was moderated by Charles Killian, Computer Science, Purdue University.

Dr. Hal Aldridge, Director of Engineering at Sypris Electronics, opened the day’s first panel on the currently popular topic of SCADA security. Dr. Aldridge first presented his current research interest: defining who takes true ownership of, and responsibility for, the security of our nation’s backbone infrastructure, our SCADA and control systems. An interesting question he posed: what if the responsible party doesn’t have a well-defined background in security?

Dr. Aldridge further delved into smart grids and the fact that they are everywhere. He observed how sobering it is to consider how much code runs the control systems of an automobile; in some respects, cars contain more code than a variety of our current fighter jets. He also teased the concept of an Internet-based coffee maker. All concepts aside, these systems have their downsides, which appear in the form of security problems. Dr. Aldridge closed by saying that he greatly appreciates the interdisciplinary stance of CERIAS and how it allows for great innovation in industry and in current academic research.

William Atkins, a Senior Member of Technical Staff at Sandia National Laboratories, followed with his perspective on the distinction between SCADA and control systems. He focuses on general computing systems security, and more precisely he introduced the term “cyber-physical systems.” He noted the recent trend calling for these systems to be interoperable, because customers don’t want to be locked into a single vendor for their solutions. He further stressed that this topic is vague and largely unknown, which has generated a lot of media attention, most notably around the Stuxnet worm.

William further addressed current security trends as they relate to control systems. These systems are changing from a manual, analog approach to a more automated, digital methodology: we want our systems to do more yet require less. This trend tends to bring unforeseen consequences, especially when these systems reach an unknown, inoperable state. Additionally, the hypothetical attacks being posed to the public are becoming a reality: attackers can now purchase or acquire the hardware online via surplus sales, eBay, or the like.

William closed with his perspective on SCADA security and how the odds are asymmetrically stacked in favor of the offense versus the defense. Essentially, security tools can themselves get in the way of security: the Stuxnet worm is a great example, in that it exploited the privileged access level of anti-virus software to carry out a lower-level attack.

Jason Holcomb is a Senior Security Consultant at Lockheed Martin in Energy and Cyber Services. He opened his portion of the panel with an interesting spin on how he got involved with SCADA security: he inadvertently introduced a denial of service in the SCADA system he was working on, which he then had to remediate.

Jason presented Lockheed’s current approach to security threats within SCADA systems. Their current research and solutions aim to bring some of the advantage back to the defense, a notable contrast to the outlook William Atkins presented earlier. Jason then introduced the following Cyber Kill Chain:

  • Reconnaissance – Gather information. Names, emails, employee info, etc
  • Weaponization – Create malware, malicious document, webpage etc
  • Delivery – Deliver the malware, e.g., via an email hyperlink
  • Exploitation – Exploit a vulnerability to gain access to assets
  • Installation – Install on assets
  • Command and Control – Create channel of communication back to attacker
  • Actions on Objectives – Adversary performing their objectives
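The ordering of the stages above is the whole point of the kill-chain idea: the earlier an intrusion is detected and broken, the less the attacker accomplishes. A minimal sketch of that idea follows; the stage names come from the list above, but the indicator strings and the `earliest_stage` helper are hypothetical illustrations, not Lockheed Martin’s actual tooling:

```python
from enum import Enum

class KillChainStage(Enum):
    RECONNAISSANCE = 1
    WEAPONIZATION = 2
    DELIVERY = 3
    EXPLOITATION = 4
    INSTALLATION = 5
    COMMAND_AND_CONTROL = 6
    ACTIONS_ON_OBJECTIVES = 7

# Hypothetical mapping of observed indicators to the stage they evidence.
INDICATOR_STAGE = {
    "employee-list scraping": KillChainStage.RECONNAISSANCE,
    "phishing email link": KillChainStage.DELIVERY,
    "zero-day triggered": KillChainStage.EXPLOITATION,
    "backdoor installed": KillChainStage.INSTALLATION,
    "beacon to external host": KillChainStage.COMMAND_AND_CONTROL,
    "data exfiltration": KillChainStage.ACTIONS_ON_OBJECTIVES,
}

def earliest_stage(indicators):
    """Return the earliest kill-chain stage evidenced by the indicators.

    Breaking the chain at its earliest observed stage limits what the
    adversary can accomplish downstream.
    """
    stages = [INDICATOR_STAGE[i] for i in indicators if i in INDICATOR_STAGE]
    return min(stages, key=lambda s: s.value) if stages else None
```

For instance, if both a phishing link and a command-and-control beacon are observed, the earliest defensible point is the Delivery stage.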

Steven Parker is the Vice President of Technology Research and Projects with the Energy Sector Security Consortium. Steven stated that when it comes to control systems and SCADA, we do not necessarily need to solve the hard problems but should focus on easy solutions. He then compared the security industry to the diet industry: the diet industry has dietitians, we have CISSPs; they have nutritional labeling, we have software assurance; everyone wants a no-effort weight-loss program, while security wants an easy solution for everything; and the diet industry has a surgical procedure called gastric bypass, while the security industry has regulations and compliance. He closed with the notion that many of the challenges are not necessarily technical; they include economic strategies, human interactions, public policy, and legal issues.

Lefteri Tsoukalas is a Professor of Nuclear Engineering at Purdue University. Prof. Tsoukalas jumped right in with the statement that energy markets are currently undergoing a phase transition. Demand is no longer dampened by high prices, as resources have shifted from abundance to scarcity. This is why energy allocation is key: we need to use resources when energy prices are lower rather than during peak-cost timeframes. Prof. Tsoukalas also suggested that we take the same perspective as Europe and look into alternative resources. At this point we are not sitting as comfortably on our supply of energy resources as we were, say, 100 years ago.

Q&A Session

Question 1: There is a lot of research in SCADA/Control Systems. How do we adapt our research to be more applicable to Control Systems?

Answers/Discussion:

  • Turn problem away from keeping attackers out and focus on other aspects.
  • Looking at domain specific research.
  • Don’t limit research to a very specific area but rather apply it across all platforms.
  • It’s not an issue that systems are attached to the Internet; the real issue is that we need better control of these systems in both the physical and cyber worlds.
  • Looking from the console perspective things may be fine, but sometimes they aren’t. We can’t always rely on the digital tools.
  • Understanding the business is critical for research.
  • Developing methods for evolved systems.
  • Resilience is key, protect privacy and confidentiality.

Question 2: How do we get a handle on global regulations?

Answers/Discussion:

  • A lot can be shared that doesn’t involve personal or corporate data.
  • Here is where the offense has the advantage over the defense: the offense does not care about regulations, whereas the defense has to.
  • Discussion was diverted to a more local level and the differences and difficulties with sharing data across large and small companies and how smaller companies tend to be more agile from this perspective.

Question 3: What skills do students and staff need to be effective in this area?

Answers/Discussion:

  • Good communication, an understanding of business requirements, and a wide range of experience and skills.
  • There are more security job openings than there are experts to fill them.
  • Technical experience, but also good social-engineering skills.
  • With core fundamental concepts in place, you can be trained to flourish in this domain.
  • Consider acquiring a physics skillset for operating in control/SCADA systems.

Question 4: What type of attacks have you actually experienced?

Answers/Discussion:

  • This question was diverted for confidential and security reasons.

Further discussion was taken from the following perspective:

  • Be careful with internal use of thumb drives and similar media. Attackers don’t always know what they are looking for; they simply collect data until they find something of interest.

Opening Keynote: Arthur W. Coviello, Jr. (Keynote Summary)

Tuesday, April 3, 2012

Keynote summary by Gaspar Modelo-Howard

The State of Security

Arthur W. Coviello, Jr., Chairman, RSA, The Security Division of EMC

Mr. Coviello opened his keynote with a quote from Nicholas Negroponte: “Internet is the most overhyped, yet underestimated phenomenon in history”. This statement, Mr. Coviello argues, is still true today. And to determine the state of security, one does not have to look beyond the state of the Internet.

The growth of the Internet has driven the evolution of computing over the last few decades. Computing has gone through radical transformations: from its early days with mainframes, to personal computers, later to networks in the 80s, and then to the rise of the Internet and the World Wide Web in the mid 90s. We are currently experiencing a confluence of technologies and trends (cloud computing, big data, social media, mobile, consumerization of IT) that makes clear the next transformation of computing is well underway and is creating new challenges for security. Coviello contended that the past evolution of IT infrastructure gives clear signals of the fast and deep changes security will continue to experience. As an example, in just a couple of years the IT industry has moved from 1 exabyte of data to 1.8 zettabytes, from the iPod to the iPad, from 513M to over 2B Internet users, from speeds of 100kbps to 100Mbps, and from AOL to Facebook (which, counting its users as a population, would be the 3rd largest country in the world).

Coviello then used an interesting analogy to explain the impact in security of the continuous growth of the Internet, and therefore the need to better empower security. Imagine that the Internet is a highway system that is experiencing an exponential growth in the number of cars that use it. The highway system then needs to increase the number of lanes of existing roads, add new roads, and provide better ways for cars to access the system. But all this growth also increases the number and complexity of accidents on the roads. Then, security needs to grow accordingly to better manage (prevent, detect, and respond) the new scenario of potential accidents.

Looking at the security world, things have also changed dramatically over the years. Not long ago there were tens of thousands of viruses and their corresponding signatures, whereas now there are tens of millions. Organized crime and spying online are very real threats today that were not really happening in 2001, making the scenario far more difficult for security practitioners trying to protect their networks. Stuxnet opened a new threat era for security; we have long since moved past the times of script kiddies. The new breed of attackers includes: (1) non-state actors, like terrorists and anti-establishment vigilantes; (2) criminals, who act like a technology company, expanding their market around the world to distribute their products and services and running sophisticated supply chains; and (3) nation-state actors, who are stealthy and sophisticated, difficult to detect, well-resourced, and efficient.

Coviello briefly explained the high-profile breach experienced by RSA in 2011. They were attacked by two advanced persistent threat (APT) groups. From the steps taken, it is very clear that a lot of research on the company was done before the attacks. Phishing email was used to get inside the networks, with messages sent to a carefully selected group of RSA employees. The messages included an Excel attachment containing a zero-day exploit (an Adobe Flash vulnerability), which installed a backdoor when triggered. The attackers knew what they wanted, and went low and slow. The attack went on for two weeks, with RSA staying two to three hours behind the attackers’ moves. The attackers were able to exfiltrate information from the networks, but RSA ultimately determined that the attack produced no loss to the company. As for the experience, Coviello acknowledged that it is still not a good idea for a security company to get breached.

We are past the tipping point where the physical and virtual worlds could be separated. Additionally, the confluence of technologies and trends is creating more ‘open’ systems, which challenges the security industry because open systems are more difficult to secure than closed systems, each under a single domain. We need to secure what, in a way, cannot be controlled. It is then not difficult to explain what has happened recently in terms of breaches. In 2011, in what others have labeled the ‘Year of the Security Breach’, many high-profile attacks hit big organizations like Google, Sony, RSA, PBS, BAH (Booz Allen Hamilton), and DigiNotar, and governmental entities such as the Japanese Parliament and the Australian Prime Minister.

Coviello argued that vendors and manufacturers must stop the linear approach used in the security industry of adding layer after layer of security control mechanisms. Security products should not be silos. We need to educate computer users while keeping in mind that people make mistakes; after all, we are human. Our mindset must change from simply playing defense, as perimeter protection alone does not work. Also, security practitioners and technologists must show an ability for big-picture thinking and good people skills.

We need to get leverage from all security products, hence the need to move away from the security-silo architecture. Fortunately, the age of big data is arriving in the security world. Coviello offered a definition of big data: collecting datasets from numerous sources, at large scale, and producing actionable information by analyzing them. The security objective is then to reduce the window of vulnerability for all attacks. The age of big data should also promote the sharing of information, an area where the industry currently fails: organizations do not work together to defend against attacks.

Mr. Coviello calls for the creation of multi-source intelligence products. They must be risk-based, as there are different types of risks and should consider the different vulnerabilities, threats, and impacts affecting each organization. The intelligence products should be agile, having deep visibility of the protected system. They should detect anomalies in real time and the corresponding responses should be automated in order to scale and be deployed pervasively. Unfortunately today, systems are a patchwork of security products, focusing only on compliance. Finally, the intelligence products should have contextual capabilities. The ability to succeed against attacks depends on having the best available information, not only security logs. Such information should come from numerous sources, not only internals.

The Q&A session included several interesting questions after the stimulating talk. The first asked about possible impediments to achieving the goals outlined in the talk. Coviello pointed out three potential roadblocks. First, a lack of awareness of the impact of a security situation at the top board of the organization; top management should understand that security problems are the responsibility of the whole company, not just the IT department. Second, ignoring the requirement to follow a risk-based approach when making security decisions and developing strategies. Third, it is important that security programs grow as organizations increasingly rely on their IT systems.

A question was asked regarding the asymmetric threat that security practitioners face and what can be done about it. Coviello pointed out the need to work from risk analysis in order to reduce the potential risks faced by organizations. It should be understood that digital risk cannot be reduced any more than physical risk can, so organizations should get more sophisticated in their analytics, following a risk-based approach.

A member of the audience pointed out that several federal cybersecurity policies are based on the concept of defense in depth, a concept not driven by risk, which ultimately might raise costs for organizations required to comply with policies and regulations. Coviello agreed that if a risk-based approach is not followed, security programs might not be cost-effective. He also mentioned that defense in depth is sometimes misunderstood: it is not merely a layering mechanism for implementing cybersecurity, and it should encompass information sharing among organizations and even countries. He offered an example, calling for ISPs to play a more aggressive role and work with organizations to stop the threat from botnets.

A final question was asked regarding the push by elected officials to use electronic voting, especially in small counties that might lack the resources to protect those systems. How do we make elected officials understand the risk of electronic voting when such authorities usually do not have the capability to secure the voting systems? Coviello sounded less than enthusiastic about electronic voting. More importantly, he said there is a need to aggregate security expertise and services so they can be outsourced to small and medium-sized organizations. The security industry should follow in the footsteps of the software and hardware industries, offering outsourced services and products.

Panel #4: Securing Web 2.0 (Panel Summary)

Wednesday, April 6, 2011

Panel Members:

  • Gerhard Eschelbeck, Webroot
  • Lorraine Kisselburgh, Purdue
  • Ryan Olson, Verisign
  • Tim Roddy, McAfee
  • Mihaela Vorvoreanu, Purdue

Panel Summary by Preeti Rao

The panel was moderated by Keith Watson, Research Engineer, CERIAS, Purdue University

Keith kick-started the panel with an interesting introduction to the term Web 2.0. He talked about how he framed its definition, gathering facts from Wikipedia, Google searches, comments and likes on Facebook, and tweets from Twitter, all while playing Farmville and poker on his Android phone!

All the panelists gave short presentations on Web 2.0 security challenges and solutions. These presentations introduced the panel topic from different perspectives - marketing, customer demands, industry/market analysis, technological solutions, academic research and user education.

Mihaela Vorvoreanu from Purdue University, who gave the first presentation, chose to use Andrew McAfee’s definition of Enterprise 2.0: a set of emerging social software collaborative platforms. She noted that the emphasis is on the word “platform” as opposed to “communication channels” because platforms are public and they support one-to-one communication which is public to all others, thus making it many-to-many communication.

She talked about the global study on Web 2.0 use in organizations, commissioned by McAfee Inc. and reported by faculty at Purdue University. The study defined Web 2.0 to include consumer social media tools like Facebook, Twitter, and YouTube, as well as Enterprise 2.0 platforms. It was based on a survey of over 1,000 CIOs and CEOs in 17 countries, with the sample balanced by country, organization size, and industry sector. The survey results were complemented with in-depth interviews with industry experts, analysts, and academics to get a comprehensive view of Web 2.0 adoption in organizations globally, its benefits, and its security concerns. While overall organizations reported great benefits and importance in using Web 2.0 in several business operations, the major concern was security, reported by almost 50% of the respondents. In terms of security vulnerabilities, social networking tools were reported as the top threat, followed by webmail, content sharing sites, streaming media sites, and collaborative platforms. Specific threats that organizations perceive from employee use of Web 2.0 included malware, viruses, information over-exposure, spyware, and data leaks. 70% of the respondents had security incidents in the past year, and about 2 million USD was lost due to security incidents. The security measures reported by organizations included firewall protection, web filtering, gateway filtering, authentication, and social media policies.

She presented a broad, global view of organizational uses, benefits and security concerns of Web 2.0.

Lorraine Kisselburgh from Purdue University continued to present the results from McAfee’s report. She discussed an interesting paradox that the study found.

Overall, there is a positive trend, with a significant adoption rate (75%) of Web 2.0 tools worldwide. There are also significant concerns among those who have not adopted the technology: 50% of non-adopters report security concerns, followed by productivity, brand, and reputation concerns. Not all tools have the same perceived value, or even the same concerns, risks, and threats. Social networking tools and streaming media sites are considered most risky: nearly half of the organizations banned Facebook, 42% banned IM, and 38% banned YouTube. Collaborative platforms and content-sharing tools are considered less risky, and their perceived value and usefulness is high compared to social tools. But surveys of the organizations that have adopted them report the realized value of social tools to be quite high, helpful in increasing communication, improving brand marketing, and so on. In fact, social tools realized greater value than webmail and similar tools.

So, the paradox is: social tools (social networking and streaming media sites) are considered highly risky from a security standpoint and are perceived as least valuable to organizations, yet they deliver great value among adopters.

This reflects the continuing tension between how the value of social media tools is perceived versus how it is realized by organizations. This is also in line with historical trends in adopting new, unknown, emerging technologies; email is one example. The tension also stems from where the technology is located and where risk must be addressed (internal tools versus external tools on the cloud), and from distinguishing organizational tools from people tools.

Tim Roddy from McAfee addressed Web 2.0 security from the standpoint of a buying organization, giving it a product-marketing perspective on selling web security solutions. He commented that initially people were concerned about malware coming into organizations through email. Now the model and dynamics have changed, and this influences how McAfee shapes its products and how customers use security solutions from a business standpoint. His comments focused on two areas: 1) stopping malicious software from coming in, and 2) having customizable controls for people using social media tools.

He pointed out that about three years ago, his customers were using McAfee products to block access to sites like Twitter and Facebook because they saw no business value in them. But periodic McAfee surveys show a dramatic change in this trend: organizations are now allowing access to these tools, driven in part by the younger generation of employees demanding access. Where three years ago a URL-filtering solution was used simply to block, for example, the entire social networking category, that has changed now that organizations allow access to those websites.

So, how do we allow safe productive access?

There is a dramatic increase and acceleration in malware; it is automated, targeted, and smarter now. Therefore web security efforts need to be proactive. Proactive security means not only stopping malware with signature analysis but also using effective behavioral analysis to break the chains and patterns of attacks. McAfee’s Gateway Anti-Malware strategies focus on these.

Secondly, organizations now allow access to social media tools, but no one filters the apps within those tools to make sure they are legitimate. For example, are the game apps on Facebook legitimate and secure? Such apps are one of the most common attack vectors. The solution is customizable controls. Industries, especially finance and healthcare, worry about data leakage. Say an employee sends his SSN through a LinkedIn message: can it be blocked or filtered? Security solutions are now bi-directional, proactively monitoring and filtering both what comes in as malware and what goes out as data leakage.
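The SSN example above is the classic data-leak-prevention case: scan outgoing text for patterns that look like sensitive data before it leaves the network. A minimal, hypothetical sketch of such an outbound filter (real gateway products are far more sophisticated than one regular expression):

```python
import re

# Hypothetical outbound filter: flag text that appears to contain a
# US Social Security number in the common NNN-NN-NNNN format.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def outbound_allowed(message):
    """Return False if the outgoing message appears to leak an SSN.

    A gateway could block or quarantine such a message, matching the
    LinkedIn-message scenario described above.
    """
    return SSN_PATTERN.search(message) is None
```

Production DLP systems combine many such detectors (credit card numbers, keywords, document fingerprints) with context, since a lone regex produces both false positives and misses reformatted data.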

Lastly, security concerns around mobile and handheld devices are growing. There is a great need to secure these devices, especially if they are corporately owned; they need the same level of regulation and must be compliant with corporate network standards.

Gerhard Eschelbeck from Webroot talked about why securing Web 2.0 is a big deal and how we got here.

The first generation of web apps was designed for static content to be displayed by the browser. All execution and processing happened on the server side, mostly over trusted content. There were no issues with client-side, in-browser execution, so the number of attacks was significantly smaller; the only worry then was protecting the servers. Now, the security concerns stem mainly from interactive content in Web 2.0. Fundamentally, the model changes from one-way data flow from server to client to a two-way interactive model. The browser has become part of the execution environment, and the billions of users’ browsers that are part of this big ecosystem are exposed to attacks.

There is a major shift from code execution purely on the server side to a distributed model of code execution using Ajax and interactive, dynamic client-side web page execution. While useful in many ways, it introduces new vulnerabilities, and this is the root cause of Web 2.0 security concerns.

He highlighted four areas of concerns:

  1. User-created, user-defined content, which is not trusted content
  2. Interactive features like mouse rollovers and popups, which bring a desktop look and feel to Web 2.0 applications but cause a significant amount of interaction between server and client, and with it more vulnerabilities
  3. Syndication of content and mashups of various sites
  4. Offline capabilities of some applications, which lead to storage of information on one of those billions of desktops

All these have led to increased security exposure points in turn leading to vulnerabilities.
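Concern #1 above, untrusted user-created content, is worth a concrete illustration: if a site interpolates user input directly into a page, the input can carry script that executes in every visitor’s browser (cross-site scripting). A minimal sketch of the standard mitigation, escaping before rendering; the `render_comment` helper is a hypothetical example, not any panelist’s product:

```python
import html

def render_comment(author, comment):
    """Render user-supplied text into an HTML fragment.

    Escaping before interpolation turns characters like < and > into
    entities, so user content is displayed as text rather than
    executed as markup or script.
    """
    return "<div class='comment'><b>{}</b>: {}</div>".format(
        html.escape(author), html.escape(comment))

fragment = render_comment("mallory", "<script>steal()</script>")
# The attempted script arrives as inert text: &lt;script&gt;steal()&lt;/script&gt;
```

Escaping at output time is only one layer; real deployments combine it with input validation and browser-side defenses, in line with the “technological safety belt” discussed later in the panel.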

Ryan Olson from Verisign talked about malware issues with Web 2.0. People are sharing a lot of personal information online that they weren’t sharing before. Access to people’s personal information has become easy, available to friends on social networks or even to anyone with access to a friend’s account. Many organizations have started using a security question and answer as a form of authentication beyond login and password, but answers to questions like a user’s mother’s maiden name or high school can easily be found on social networking sites, often without much authentication. In this way Web 2.0 offers more vectors for malware: it offers many ways of communicating with people and hence opens up many new entry points that now need monitoring. Earlier it was mostly email and IM, but now each of these social networks allows an attacker to send messages, befriend targets, and build trust. These tools provide additional avenues for social-engineering users into revealing information about themselves by exploiting the trust between users and their friends. A lot of malware succeeds purely through social-engineering attacks, befriending or enticing users and then extracting information. The primary solution to this problem is to educate people about the consequences of revealing personal information and the value of trust.

Questions from audience and discussions with the panel:

Keith Watson: How much responsibility should be held with the Web 2.0 providers (organizations like Facebook, Twitter) in providing secure applications? How much responsibility should be held with the users and educating them about safe usage? Is there a balance between user education and application provider responsibility?

Discussions:

TR: Just like any application provider, the companies do have a lot of responsibility, but educating users is equally important. Users are putting so much information out on the Web (for example: “Oh, I am at the airport”). People should be made to realize how much to share, and what.

RO: It should be a shared responsibility. It is the market that drives Web 2.0 to become more secure. For example, the competition between social network providers to offer a malware-free, secure application drives everything: if one social network is not secure enough, users will simply migrate to the next one. In this way the market will continue to put pressure on users, and in turn on providers, to make secure applications.

LK: While it has to be a shared responsibility, it also has to do with recognizing the value of social media tools and encouraging its participation in businesses. Regarding user education, what we have found in some privacy research is that understanding the audience of these tools - who has access, what are they accessing, to whom are you disclosing, and being able to visualize who is listening helps the users in deciding what and how much information to disclose. Framing this through technology, system design would be helpful from an educational standpoint.

MV noted that there could be unintended, secondary audience always listening. She took a cultural approach to explain/understand social media tools. Each tool may be viewed as a different country – Facebook is a country, Twitter is another country. Just like how people from one country aren’t familiar with another country’s culture, and they may use travel guidebooks, travel information for help, users of social media tools need to be educated about the different social media tools and their inherent cultures.

GE: While the tourism and travel comparison is good, it doesn’t quite work in the cyberworld, because the cyberworld is different: there is no differentiation anymore between dark and bright corners, and even a site that “looks” safe might be the staging ground of an awful attack. The educational element is important, but the technological safety belt is much needed. Securing Web 2.0 is also hard because the server-side component usually belongs to the provider while the clients and browsers are with the people. What matters is how we provide browser protection to users and reduce Web 2.0 attacks.

Brent Roth: What are your thoughts on organizations adopting mechanisms or models like the NoScript add-on for Firefox?

Discussions:

RO: This model works really well for people with some security knowledge or background, but it doesn’t work for the average user. We need smarter models for the general public that make the good-versus-bad decisions for the user, acting as a safety belt.

TR: Websites get feeds and ads. While some may be malicious, they also drive revenue. McAfee’s solutions block the parts of sites and pages that could be malicious; behavioral analysis techniques help. It has to be a granular solution.

RO: If all scripts are blocked, then what about the advertisers? If we block all advertisers, the Internet falls apart, because they drive the revenue. Yes, a lot of malware comes from ads and scripts, but you cannot just block everything.

Malicious-script analytics and risk profiling need to be done. The last line of defense is always at the browser. User education is as important as having a technological safety belt to secure Web 2.0.

Panel #3: Fighting Through: Mission Continuity Under Attack (Panel Summary)

Tuesday, April 5, 2011

Panel Members:

  • Paul Ratazzi, Air Force Research Laboratories
  • Saurabh Bagchi, Purdue
  • Hal Aldridge, Sypris Electronics
  • Sanjai Narain, Telcordia
  • Cristina Nita-Rotaru, Purdue
  • Vipin Swarup, MITRE

Panel Summary by Christine Task

In Panel #3: “Fighting Through: Mission Continuity Under Attack”, each of the six panelists began by describing their own perspective on the problem of organizing real-time responses and maintaining mission continuity during an attack. They then addressed three questions from the audience.

Paul Ratazzi offered his unique insight as the technical advisor for the Cyber Defense and Cyber Science Branches at the Air Force Research Laboratory in Rome, NY. He noted that military organizations are necessarily already experienced at “guaranteeing mission essential functions in contested environments” and suggested that the cyber-security world could learn from their general approach. He divided this approach into four stages: Avoid threats (including hardening systems, working on information assurance, and minimizing vulnerabilities in critical systems), survive attacks (develop new, adaptive, real-time responses to active attacks), understand attacks (forensics), and recover from attacks (build immunity against similar future attacks). Necessary developments to meet these guidelines are improved understanding of requirements for critical functions (systems engineering) and real-time responses that go beyond our current monitor/detect/respond pattern. As a motivation for the latter, he gave the example of a fifth generation fighter, nicknamed a ‘flying network’. When its technological systems are under attack, looking through the log file afterwards is “too little, too late”.

Dr. Saurabh Bagchi of CERIAS and the Purdue School of Electrical and Computer Engineering described an innovative NSF-funded research project offering real-time responses to attacks on large-scale, heterogeneous distributed systems. These systems involve a diverse array of third-party software and often present a wide variety of vulnerabilities to an attacker. Additionally, attacks across these systems can spread incredibly quickly using trust relationships and privilege escalation, eventually compromising important internal resources, so any practical reaction must occur in machine time. Dr. Bagchi’s research chose the following strategies: use Bayesian inference to estimate which components are currently compromised, and from that information estimate which are most likely to be attacked next; focus monitoring efforts on the components perceived to be at risk; and use knowledge of the distributed system to estimate the severity of the attack in progress and respond appropriately with real-time containment steps, such as randomizing configurations or restricting access to resources. Finally, he emphasized the importance of learning from each attack: long-term responses should abstract the main characteristics of the attack and prepare defenses suited to similar attacks in the future.
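The two core moves described above, updating the belief that a component is compromised and shifting monitoring toward its neighbors, can be sketched in a few lines. This is a toy illustration of the general idea, not Dr. Bagchi’s actual system; the probabilities, component names, and `focus_monitoring` helper are all assumptions for the example:

```python
def update_compromise_belief(prior, p_alert_if_comp, p_alert_if_clean, alert):
    """One Bayesian update of P(compromised) for a single component,
    given whether its monitor raised an alert this round."""
    if alert:
        num = p_alert_if_comp * prior
        den = num + p_alert_if_clean * (1 - prior)
    else:
        num = (1 - p_alert_if_comp) * prior
        den = num + (1 - p_alert_if_clean) * (1 - prior)
    return num / den

def focus_monitoring(beliefs, neighbors, threshold=0.5):
    """Pick the components to watch next: attacks spread along trust
    relationships, so neighbors of likely-compromised components are
    the most probable next targets."""
    hot = {c for c, p in beliefs.items() if p > threshold}
    return {n for c in hot for n in neighbors.get(c, [])} - hot
```

For example, a component with a 10% prior that triggers a fairly reliable alert (90% true-positive, 5% false-positive rate) jumps to roughly a two-thirds belief of compromise, and its downstream neighbors become the focus of monitoring.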

Dr. Sanjai Narain, a Senior Research Scientist in Information Assurance and Security at Telcordia Research, described his own work on distributed systems defense—a novel, concrete solution for the type of immediate containment suggested by Dr. Bagchi. Although the high-level abstraction of a network as a graph is relatively straightforward, the actual configuration space can be incredibly complex, with many variables to set at each node. ConfigAssure is an application that eliminates configuration errors by using SAT constraint solvers to find configurations that satisfy network specifications. For any given specification, there are likely many correct configurations. To successfully attack a network, an attacker must gain some knowledge of its layout (such as the location of gateway routers). By randomizing the network configuration among different correct solutions to the specification, the defender can prevent an attacker from learning anything useful about the network, while the users themselves remain unaware of any changes.
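ConfigAssure itself is built on formal SAT constraint solving; the toy sketch below (the routers, subnets, and constraints are invented for illustration and use brute-force enumeration rather than a real solver) shows the underlying idea of treating the specification as a set of constraints, finding the many configurations that satisfy it, and randomly hopping among them:

```python
# Toy moving-target sketch: enumerate configurations satisfying a
# "specification" and randomize among them. A real system such as
# ConfigAssure would use a SAT/constraint solver, not enumeration.
import itertools
import random

ROUTERS = ["gw", "r1", "r2"]   # hypothetical nodes; "gw" is the gateway
SUBNETS = range(4)             # each router is assigned a subnet index

def satisfies(cfg: dict) -> bool:
    """Invented specification: all subnets distinct, gateway not on subnet 0."""
    distinct = len(set(cfg.values())) == len(cfg)
    return distinct and cfg["gw"] != 0

# All correct configurations for the specification.
solutions = [dict(zip(ROUTERS, assign))
             for assign in itertools.product(SUBNETS, repeat=len(ROUTERS))
             if satisfies(dict(zip(ROUTERS, assign)))]

# Periodically jumping to a random correct configuration keeps the network
# valid for its users while making an attacker's recon data go stale.
current = random.choice(solutions)
```

Every configuration in `solutions` meets the specification, so switching among them changes nothing users depend on, yet any layout knowledge the attacker gathered (for example, which subnet the gateway sits on) quickly becomes useless.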

Dr. Cristina Nita-Rotaru, an Assistant Director of CERIAS and an Associate Professor in the Department of Computer Science at Purdue, introduced an additional concern with maintaining mission continuity: maintaining continuity of communication. She offered the recent personal example of having her credit cards compromised while traveling. She was informed of the problem very quickly by her credit card companies and was thus able to make a risk-assessment of the situation and form a reasonable response (disabling one card while continuing to use the less vulnerable one until she could return home). When an attack compromises the channels of communication themselves, for example by jamming wireless networks or otherwise taking out the network that would carry the warning, the information necessary to make a risk-assessment and form containment strategies is not available. Thus, when considering real-time reactions to attacks, it is important to make sure the communication network is redundant and resilient.

Dr. Hal Aldridge, the Director of Engineering at Sypris Electronics and previously a developer of unmanned systems for space and security applications at Northrop Grumman and NASA, discussed the utility of improving key-management systems to respond to real-time attacks. Key management systems that are agile and dynamic can help large organizations react immediately to threats. In a classic system with one or a few statically set secrets, the loss of a key can be catastrophic. A much more robust solution is a centralized cryptographic key management system that uses a large, accurate model of the system to quickly change potentially compromised keys, or to use key changes to isolate potentially compromised resources. He briefly described his work on such a system.
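The talk did not give implementation details; as a minimal sketch of the idea (the class, key names, and resources below are hypothetical, not Sypris's design), a centralized manager that models which resources hold which keys can rotate a suspect key for every holder at once, or revoke keys to cut a suspect resource out of the system:

```python
# Toy centralized key manager: the system model is a map from key IDs to
# the resources holding them, enabling two real-time reactions:
#   rotate()  - replace a possibly leaked key everywhere it is used
#   isolate() - revoke all keys held by a possibly compromised resource
# Illustrative sketch only.
import secrets

class KeyManager:
    def __init__(self):
        self.keys = {}      # key_id -> current key material
        self.holders = {}   # key_id -> set of resource names holding it

    def issue(self, key_id, resources):
        self.keys[key_id] = secrets.token_bytes(32)
        self.holders[key_id] = set(resources)

    def rotate(self, key_id):
        """React to a possibly leaked key: replace it for every holder."""
        self.keys[key_id] = secrets.token_bytes(32)
        return self.holders[key_id]   # resources that must be re-keyed

    def isolate(self, resource):
        """React to a possibly compromised resource: revoke its keys."""
        for held_by in self.holders.values():
            held_by.discard(resource)

km = KeyManager()
km.issue("link-key", ["sensor-a", "sensor-b"])
old = km.keys["link-key"]
km.rotate("link-key")      # suspected leak: all holders get fresh material
km.isolate("sensor-b")     # suspected compromise: cut sensor-b off
```

Because the manager holds an accurate model of key-to-resource relationships, both reactions are single operations rather than a manual hunt across the network, which is what makes the response fast enough to matter.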

Dr. Vipin Swarup, Chief Scientist for Mission Assurance Research in MITRE’s Information Security Division, emphasized one final very important point about real-time system defense: high-end threats are likely to exist inside the perimeter of the system. Our ability to prevent predictable low-end threats from entering the perimeter of our systems is reasonably good. However, we must also be able to defend against strategic, targeted, adaptive attacks which are able to launch from inside our security system. In this case, as the panel has discussed, the key problem is resiliency; we must be able to launch our real-time response from within a compromised network. Dr. Swarup summarized three main guidelines for approaching this problem: reduce threats (by deterring and disrupting attackers), reduce vulnerabilities (as Ratazzi described, understand system needs and protect critical resources), and reduce consequences (have a reliable response). Any real-time response strategy must take into account that the attacker will also be monitoring and responding to the defender, must be able to build working functionality on top of untrusted components, and must have a more agile response-set than simply removing compromised components.

After these introductions, there was time to address three questions to the panel [responses paraphrased].

“What time-scale should we consider when reconfiguring and reacting to an attack?”

Swarup: Currently we’re looking at attacks that flood a network in a day and require a month to clean up [improvement is needed]. However, some attacks are multi-stage and take considerable time to execute [Stuxnet]—these can be responded to on a human time scale.

Aldridge: It can take a lot of time to access all of the components in the network which need reconfiguring after an attack [some will be located in the ‘boonies’ of the network].

Bagchi: It can take seconds for a sensor to reset, while milliseconds are what’s needed.

“What are some specific attacks which require real-time responses?”

Aldridge: If you lose control of a key in the field, the system needs to eliminate the key easily and immediately.

Nita-Rotaru: When you are sending data on an overlay network, you need to be able to reroute automatically if a node becomes non-functional.

Narain: If you detect a sniffing attack, you can reroute or change the network-architecture to defend against it.

Ratazzi: Genetic algorithms can be used to identify problems at runtime and identify a working solution.

“What design principles might you add to the classic 8 to account for real-time responses/resiliency?”

Swarup & Nita-Rotaru: Assume all off-the-shelf mobile devices are compromised; focus on using them while protecting the rest of the system through partitioning and trust relationships, and by attempting to get trusted performance of small tasks over small periods of time in a potentially compromised environment. Complete isolation [from/of compromised components] is probably impossible.

Ratazzi & Bagchi: Minimize non-essential functionality of critical systems, focus on composing small systems to form larger ones, and use segmentation (separate tools and accesses for separate functions) where possible to reduce the impact of an attack.

Panel #2: Scientific Foundations of Cyber Security (Panel Summary)

Tuesday, April 5, 2011

Panel Members:

  • Victor Raskin, Purdue
  • Greg Shannon, CERT
  • Edward B. Talbot, Sandia National Labs
  • Marcus K. Rogers, Purdue

Panel Summary by Pratik Savla

Edward Talbot initiated the discussion by presenting his viewpoint on cyber security. He described himself as a seasoned practitioner in the field and highlighted his concerns: systems have become too complicated to provide any assurance of having no vulnerabilities. It is an asymmetrical problem. An intruder may need just one door to penetrate the system, but the person managing the system must guard a large number of different doors. Any digital system can be hacked, and any digital system that can be hacked will be hacked if there is sufficient value in doing so. Talbot used a fire-fighting analogy going back two centuries, when, on average, a U.S. city would be completely gutted and destroyed every five years. If the firefighters were asked about their immediate need, they would say more buckets were required. But if they were asked what to do to prevent this from happening again, they had no answer. Talbot placed this concern into three time-frames: near-term, mid-term and long-term. The first time frame involves the issue of what to do today to prevent this situation. The second emphasizes that it is important to be ahead of the game. The third involves the role of science; in the fire-fighting context, this meant the development of fire science programs in academia. To summarize, he pointed out that the thinking that gets one into a problem is insufficient to get one out of it.

Talbot quoted a finding from the JASON report on the science of cyber security, which stated that the highest priority should be assigned to the establishment of research protocols to enable reproducible experiments. Here, he stated that there is a science of cyber security. He concluded by comparing the scenario to being on the first step of a 12-step program (borrowing from Alcoholics Anonymous): stop trying to manage an unmanageable situation and instead develop a basis to rethink what one does.

Rogers focused on the question: Do we have scientifically based foundations that can help answer some of these questions in the form of research? Are we going in the right direction? This led to a fundamental question: how do we define a scientific foundation? What defines science? He highlighted common axioms or principles found across different disciplines: a body of knowledge, testable hypotheses, rigorous design and testing protocols and procedures, metrics and measurements, unbiased results and interpretation, informed conclusions, repeatability, and feedback into theory. The problems one comes across are the non-existence of natural laws, man-made technologies in constant flux, different paradigms of research (observational, experimental and philosophical), the lack of a common language, the limited reliability and reproducibility of metrics, differences in approach (applied versus basic), and the tendency to study symptoms as opposed to causes. Cyber security is informed by many disciplines, such as physics, epidemiology, computer science, engineering, immunology, anthropology, economics and the behavioral sciences.

The JASON report on the science of cyber security identified strategy areas, such as modeling and simulation, involving biological, decisional, inferential, medical and behavioral models that could be considered when placing the field on a scientific foundation. He emphasized that cyber security problems lend themselves to a science-based approach. He stressed that there will be a scientific foundation for cyber security only if it is done correctly, and only when one is conscious of what constitutes a scientific foundation. Even just-in-time, near-term and long-term solutions can be based on a scientific foundation.

He pointed out that currently the biggest focus is behavioral: in other words, how do we predict what will happen 20 years from now if employee ‘X’ is hired?

Shannon addressed the question: How do we apply the scientific method? Here, he presented the software engineering process and discussed its various components, describing the issues each one addresses. First, what data do we have? What do we know? What can we rely on? What can we stand on that is reasonably solid? Second, why do we have data that is prone to exploitation? He considered hypotheses such as a lack of technology, a lack of mature technology, a lack of education and a lack of capacity, and concluded that these hypotheses do not stand the test of data, since the data indicate we have always had these problems. He then offered alternative hypotheses involving market forces, people and networks. He stressed that solutions are needed based on what people and systems actually do, not what we wish they would do. The stumbling block here is the orthodoxy of cyber security: the illusion that just telling people to do the right thing and to use the right technology will solve the problem. It is analogous to an alchemist claiming that just by telling lead to turn to gold, it would become gold. He stressed that we need to understand what is going on and what is really possible. The key message was that a science built on data would involve much more than just theory.

Raskin took a more general view of cyber science by offering some of his thoughts on the subject. He said that he did not agree with the “American” definition of science, which defines it as a small sub-list of disciplines in which experiments can be run and immediate verification is possible; he considered it too narrow. He instead subscribed to the notion that any well-defined academic discipline is a science. He presented a schematic of the theory-building process, with components such as the phenomena (the purview of the theory), the theory, the methodology and the description (a general philosophical term for results). The theory is connected to the methodology, and a good theory indicates why it can help guide the methodology. He asked why we were not questioning what we were doing. His first thought related to the issue of data provenance, i.e., why are you doing what you are doing? His second thought focused on how we deal with the different sciences that are all part of cyber science. A mechanism that can help address this is rigorous application. He disagreed with the notion that combining two things without any import/export of sub-components leads to a worthy result. He stated that components such as data, theory and methods should be imported from the source field to the target field; only the problems of the source field should be excluded from being imported. He also emphasized forming a linkage between the two fields, source and target, through a common application. He concluded that without a theory, one does not know what one is doing or why one is doing it. This does not imply that no theory exists; on the contrary, anything that is performed has an underlying theory, even if one has no clue about that theory.

A question about complexity theory brought up an example of a bad scientific approach, wherein the researcher adds more layers of complexity or keeps changing the research question but never questions the underlying theory, which may be flawed.

Panel #1: Traitor Tracing and Data Provenance (Panel Summary)

Tuesday, April 5, 2011

Panel Members:

  • David W. Baker, MITRE
  • Chris Clifton, Purdue
  • Stephen Dill, Lockheed Martin
  • Julia Taylor, Purdue

Panel Summary by Nikhita Dulluri

In the first session of the CERIAS symposium, the theme of ‘Traitor Tracing and Data Provenance’ was discussed. The panelists spoke extensively about the various aspects relating to tracing the source of a given piece of data and the management of provenance data. The following offers a summary of the discussion in this panel.

With increasing amounts of data being shared among organizations such as health care centers, academic institutions, financial organizations and government agencies, there is a need to ensure the integrity of data so that decisions based on it are effective. Providing security for the data at hand does not suffice; it is also necessary to evaluate the source of the data for its trustworthiness. Issues such as which protection method was used, how the data was protected, and whether it was vulnerable to any type of attack during transit may influence how the user uses the data. It is also necessary to keep track of different types of data, which may be spread across various domains. The context of data usage, i.e., why a user might want to access a particular piece of data, or the intent of the data access, is also an important piece of information to keep track of.

Finding the provenance of data is important for evaluating its trustworthiness, but this may in turn pose a risk to privacy. In some systems, it may be important to hide the source of information in order to protect privacy. Also, data or information transfer does not necessarily happen on a file-to-file basis; the data might instead have been paraphrased. Data that has a particular meaning in one domain may mean something totally different in another. Data might also be given away by people unintentionally. The question then is how to trace back to the original source of information. A suggested solution was to pay attention to the actual communication, to move beyond the regions where we are comfortable, and to put a human perspective on the data, for that is how we communicate.

Scale is one of the major issues in designing systems for data provenance. The problem can be solved effectively for a single system, but the more one tries to scale it up, the less effective the system becomes. Deciding how much provenance is required is also not an easy question to answer, as one cannot assume one knows how much data the user will require. If the same amount of information as in the previous transaction were provided, one might end up providing more (or less) data than is required.

To answer the question of how to set and regulate policies regarding access to data, it is important to monitor rather than control the access. Policies imposed at a higher level are good if there is a reasonable expectation that people will act according to the policy. It is also important not to be completely open about what information will be tracked or monitored, since a determined attacker could use that information to find a way around it.

The issue of data provenance, and of building systems to manage it, is important in several different fields. In domains where conclusions are drawn from a set of data and any alteration of the data would change the decisions made, data provenance is of critical importance; the DoD, health care institutions, finance, control systems and the military are some examples.

To conclude, the problem of data provenance and of building systems to manage it is not specific to one domain or one type of data. If the problem can be solved effectively in one domain, the solution can be extended and modified to serve other domains as well.