A summary written by Nabeel Mohamed.
The main focus of the talk was to highlight the need for “information-centric security” over the existing infrastructure-centric security. It was an interesting talk, as John provided real statistics to support his thesis.
Following are some of the trends he pointed out from their research:
These statistics show that data is becoming more important than ever before. Given the above trends, he argued that protecting infrastructure alone is not sufficient and that a shift in the paradigm of computing and security is essential. We need to change the focus from infrastructure to information.
He identified three elements in the new paradigm:
John advocated adopting a risk-based, policy-based approach to managing data. A typical organization today has strong policies on how to manage its infrastructure, but a much weaker set of policies for managing the information that is so critical to the business itself. He pointed out that it is high time organizations assess the risk of losing or leaking the different types of information they hold and devise policies accordingly. We need to quantify the risk and protect the data that would cause the most damage if compromised. Identifying what we most want to protect is important, since we cannot protect everything adequately.
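To make the idea of quantifying risk concrete, here is a minimal sketch (my own illustration, not something presented in the talk) that ranks data assets by a simple likelihood-times-impact score; the asset names and numbers are invented for the example.

```python
# Minimal sketch (not from the talk): rank data assets by a simple
# likelihood-times-impact risk score so protection effort can be
# focused on the highest-risk information first.
# All asset names and numbers below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class DataAsset:
    name: str
    likelihood: float  # estimated probability of loss/leak per year (0..1)
    impact: float      # estimated damage if compromised, in dollars

    @property
    def risk(self) -> float:
        # Classic annualized-loss-style estimate: likelihood x impact.
        return self.likelihood * self.impact

assets = [
    DataAsset("customer payment records", likelihood=0.05, impact=5_000_000),
    DataAsset("internal meeting notes",   likelihood=0.30, impact=10_000),
    DataAsset("product source code",      likelihood=0.10, impact=2_000_000),
]

# Protect the highest-risk data first, since we cannot protect everything equally.
for asset in sorted(assets, key=lambda a: a.risk, reverse=True):
    print(f"{asset.name}: estimated annual risk ${asset.risk:,.0f}")
```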
While the risk assessment should be information-centric, encryption alone does not achieve security. Encryption can certainly help protect data, but organizations need to take a holistic approach in which management (of data, keys, configurations, patches, and so on) is a critical aspect.
He argued that it is impossible to secure systems without knowledge of their content and without good policies on which to base organizational decisions. He reiterated that “you cannot secure what you do not manage”. To reinforce the claim, he pointed out that 90% of attacks could have been prevented had the systems that came under attack been well managed (the Slammer attack, for example). Management involves maintaining proper configurations and applying critical updates, which most of the vulnerable organizations failed to do. In short, well-managed systems could mitigate many of the attacks.
Towards the end of his talk, he shared his views on better security in the future. He predicted that “reputation-based security” solutions to mitigate threats would augment current signature-based anti-virus mechanisms. In his opinion, reputation-based security produces a much more trusted environment by taking users’ past actions into account. He argued that this approach would not create privacy issues if we appropriately change how we define privacy and what is considered sensitive.
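As a rough illustration of what a reputation-based approach could look like (this is a toy sketch of the general idea, not Symantec’s actual mechanism), a file’s trust score might be derived from how widely it has been seen, how old it is, and how its source has behaved in the past; all weights and thresholds below are invented.

```python
# Toy sketch of reputation-based scoring (not Symantec's actual mechanism).
# The idea: instead of matching a signature, estimate trust from history,
# e.g. how widely a file has been seen, how long it has existed, and the
# past behavior of its source. Weights and thresholds are invented here.

def reputation_score(prevalence: int, age_days: int, source_bad_reports: int) -> float:
    """Return a trust score in [0, 1]; higher means more trustworthy."""
    prevalence_term = min(prevalence / 10_000, 1.0)   # widely seen files earn trust
    age_term = min(age_days / 365, 1.0)               # older files earn trust
    source_penalty = min(source_bad_reports * 0.2, 1.0)
    score = 0.5 * prevalence_term + 0.5 * age_term
    return max(0.0, score - source_penalty)

def verdict(score: float) -> str:
    # A brand-new, rarely seen file from a dubious source is treated with
    # suspicion even though no signature exists for it yet.
    return "allow" if score > 0.6 else "quarantine" if score < 0.2 else "warn"

print(verdict(reputation_score(prevalence=50_000, age_days=700, source_bad_reports=0)))  # allow
print(verdict(reputation_score(prevalence=3, age_days=1, source_bad_reports=2)))         # quarantine
```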
He raised the interesting question: “Do we have a society that is sensitive to and understands what security is all about?” He insisted that unless we address the societal and social issues related to security, technology alone is not sufficient to protect our systems. We need to create a society that is aware of security and an environment in which students can learn computing “safely”. This will lead us to embed safe computing into day-to-day life. He called for a national approach to security and law enforcement, saying it is utterly inappropriate to handle data breach notification on a state-by-state basis. He also called for an information-based economy in which all entities share information about attacks, and for an information-centric approach to security. He mentioned that Symantec is already sharing threat information with other companies, but federal agencies are hardly sharing any threat information. We need greater collaboration between the public and private sectors.
A panel summary by Utsav Mittal.
Panel Members:
It was an enlightening experience to listen to some of the infosec industry’s most respected and seasoned professionals sitting around a table to discuss information security.
This time it was Eugene Spafford, John Thompson, and Ron Ritchey. The venue was the Lawson Computer Science Building, and the event was a fireside chat held as part of the CERIAS 10th Annual Security Symposium.
Eugene Spafford started the talk by stating that security is a continuous process, not a goal. He compared security to naval patrolling. Spaf said that security is all about managing and reducing risks on a continuous basis. According to him, a lot of stress is placed nowadays on data leakage. This is undoubtedly one of the major concerns today, but it should not be the only one. When people focus on data leakage instead of addressing the core of the problem, the insecure design of systems, they get attacked, which gives rise to an array of problems. He further added that the losses from cyber attacks are comparable to the losses incurred in Hurricane Katrina, yet not much is being done to address this problem. This is partly because losses from cyber attacks, except for a few major ones, occur in small amounts that aggregate to a huge sum.
With regard to the recent economic downturn, Spaf commented that many companies are cutting their security budgets, which is a huge mistake. According to Spaf, security is an invisible but vital function whose real presence and importance are not felt until an attack occurs and assets are not protected.
Ron Ritchey stressed the issues of data and information theft. He said that the American economy is largely a design-based economy: many cutting-edge products are researched and designed in the US by American companies and then manufactured in China, India, and other countries. The fact that the US is a design economy underscores the importance of information security for US companies and the need to protect their intellectual property and other information assets. He said that attacks are getting more sophisticated and targeted, and that malware is being carefully socially engineered. He also pointed out that there is a need to move from signature-based malware detection to behavior-based detection.
John Thompson arrived late because his jet was not allowed to land at the Purdue airport due to high winds. He introduced himself, tongue in cheek, as the CEO of a ‘little’ company named Symantec in Cupertino. Symantec is a global leader in providing security, storage, and systems management solutions; it is one of the world’s largest software companies, with more than 17,500 employees in more than 40 countries.
John gave some very interesting statistics about the current information security and attack landscape. He said that about 10 years ago, when he joined Symantec, the company received about five new attack signatures each day. Currently, that number stands at about 15,000 new signatures each day, with the average attack affecting only 15 machines. He further added that attack vectors change every 18-24 months, and that criminals are extensively using new techniques and technologies to come up with new and challenging attacks. Attacks today are highly targeted, intelligently socially engineered, focused on covertly stealing information from victims’ computers, and silent in covering their tracks. He admitted that, due to the increasing sophistication and complexity of attacks, it is getting more difficult to rely solely on signature-based detection, and he stressed the importance of behavior-based detection techniques. With regard to the preparedness of government and law enforcement, he said that law enforcement is not skilled enough to deal with these kinds of cyber attacks. He said that in the physical world people have natural instincts against dangers; this instinct needs to be developed for the cyber world, which can be just as dangerous, if not more so.
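To illustrate the contrast with signature matching, here is a toy sketch of behavior-based detection (again my own illustration, not any product’s logic): instead of looking for a known pattern, the monitor accumulates a suspicion score from the actions a process takes at runtime. The event names, weights, and threshold are invented.

```python
# Toy sketch of behavior-based detection (illustrative only, not a real product's logic).
# Rather than matching a known signature, accumulate a suspicion score from the
# actions a process takes at runtime; the events and weights are invented.

SUSPICIOUS_WEIGHTS = {
    "writes_autorun_key": 3,       # persistence
    "disables_security_tool": 4,   # defense evasion
    "reads_browser_credentials": 4,
    "connects_to_new_domain": 2,
    "deletes_event_logs": 3,       # covering tracks, as mentioned in the talk
}

def suspicion(events: list[str]) -> int:
    # Sum the weights of observed suspicious actions; unknown events score zero.
    return sum(SUSPICIOUS_WEIGHTS.get(e, 0) for e in events)

observed = ["connects_to_new_domain", "reads_browser_credentials", "deletes_event_logs"]
score = suspicion(observed)
print("block process" if score >= 7 else "monitor")   # prints: block process
```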
A panel summary by Ashish Kamra.
Panel Members:
The panel discussion included a 5-10 minute presentation from each of the panelists followed by a question and answer session with the audience.
The first presentation was from Lorenzo Martino. Lorenzo defined cloud computing as ‘computing on demand’, with its prominent manifestations being high-performance computing and the use of virtualization techniques. He also included other forms of pervasive computing, such as body-nets, nano-sensors, and the intelligent energy grid, under cloud computing. The main security challenge identified by Lorenzo was coping with the complexity of the cloud environment as the number of nodes grows. Another security issue he identified was the decreasing knowledge of the locality of nodes, as well as of their trustworthiness, as the number of nodes increases. This, in turn, introduces issues of accountability and reliability. He outlined two main issues to be resolved in the context of a cloud computing environment: striking the right balance between network security and end-point security, and the lack of clarity on attack/risk/trust models in a cloud environment.
According to Keith, the best way to explain cloud computing to a layman is to think of cloud computing services as utilities, such as heat and electricity, that we use and pay for as needed. His main security concern in the cloud environment was the legal ramifications of the locality of data. For example, a company in the European Union (EU) might want to use Amazon cloud services, but it may not be legal if the data at the backend is stored in the United States (US), as EU and US data privacy laws differ. Another concern raised by Keith was the lack of standards among cloud computing services; because of insufficient standardization, verifying cloud services for compliance with regulations is very difficult. Finally, according to Keith, despite all the current security challenges in a cloud environment, the cloud will ultimately be widely adopted due to the lower costs associated with it.
Christoph reiterated that cloud services will become mainstream because of their flexibility (no need to worry about under- or over-provisioning of resources) and lower total cost of ownership. But the key point to understand is that the cloud computing paradigm is not for everyone and everything, as cloud computing means different things to different people based on their requirements. Citing the example of grid computing, Christoph highlighted an inherent lack of demand for security in the cloud: in grid computing, more than 95% of customers opt out of grid security services for various reasons. According to Christoph, the main challenge in cloud security is that the increased abstraction and complexity of cloud computing technology introduces potential security problems, as there are many unknown failure modes and unfamiliar tools and processes. He also added that cloud security mechanisms are not unlike traditional security mechanisms, but they must be applied to all components in the cloud architecture.
Security issues arising out of the sheer complexity of the cloud environment were also raised by Dennis. There are layers of abstraction in the cloud environment, with multiple layers of technologies, each with its own configuration parameters, vulnerabilities, and attack surface. There is also a lot of resource sharing, and the hosting environment is more tightly coupled. All such factors make articulating sound security policies for such an environment a potential nightmare. The questions that seem most difficult to answer in a cloud environment are how to achieve compliance, how trust is shared across different regulatory domains, and how much of the resources are shared or coupled. There is an inherent lack of visibility inside a cloud environment. A potential solution is to expose configuration visibility into the cloud for performing root cause analysis of problems. Also, maintaining smaller virtualization kernels as close as possible to the hardware, so that they can be more trustworthy and verifiable, will help address some of the security risks.
A recurring theme during the question-and-answer session was trust management in the cloud environment. The first question was how ‘trust anchors’ in the traditional computing environment apply to the cloud. Dennis replied that the same trust anchors, such as the TPM, apply, but the interplay between them is different in the cloud. There are some efforts to create virtual TPMs so that each virtual operating system gets its own view of trust, but how that affects integrity is still being worked out. Keith, however, was more skeptical of the ongoing work on trust anchor solutions because of the complexity of the provider stacks. Another trust-related question was why a normal user should entrust private information to a cloud provider. Dennis replied that users will be driven to cloud services by cost, and will use such utilities because of the savings passed on to them; the onus is thus on the cloud service providers to take care of security and privacy issues. Christoph added that, with respect to trust, the cloud is no different from other computing paradigms such as web services or grid computing: companies providing cloud services will need to answer trust issues for their own customers. The next question was on the legal and compliance hang-ups of the cloud with respect to the location of data; specifically, there were concerns about the chain of custody of data and about accountability. Dennis replied that legal and compliance requirements have not kept up, and cannot (yet) keep up, with changes in technology, but this also provides an incentive for providers to differentiate themselves on the basis of the auditing and legal support offered with their services. Christoph suggested that such concerns should be addressed in Service Level Agreements (SLAs), which then become binding on the service providers. Lorenzo pointed out that coming up with new regulations for the cloud environment will be a very difficult task, as even current regulations such as HIPAA have gray areas.
Another interesting question was whether outsourcing IT services to a cloud provider is a security win because of the expertise at the cloud provider. Dennis agreed that, at least with respect to managing configuration complexity, organizations are better off outsourcing their services to a cloud provider. Christoph pointed out that availability, as a security issue, is a big win in the cloud environment, since it comes almost for “free” with any cloud provider, whereas maintaining in-house 24x7 availability is a very resource-consuming task for most organizations.
There was a comment that there is a perception among customers that if data is outside the organization, it is not safe. In reply, Christoph reiterated that cloud computing is not for everyone: if data is so sensitive that it cannot be outsourced, then cloud computing is probably not the right answer. Strong SLAs can help mitigate many concerns, but cloud computing may still not be for everyone. Some related queries were on the safety of applications and on heightened insider threat concerns in the cloud environment. Dennis echoed those concerns and mentioned that people working in a cloud environment are audited very carefully, as they have a lot of leverage. Christoph added that there are techniques beyond auditing to mitigate insider threats, such as application firewalls, prohibiting sniffing, scanning the data out of the hosted images, intrusion detection on hosted images, and so forth; such controls can be articulated in the SLAs as well. Dennis gave the example that ESX with OVF-I can associate security controls in the models for each hosted instance, which also puts less burden on the application developers.
The next question was whether it is possible to keep applications local but still get the benefits of cloud security. Dennis replied that it is possible, but organizations need to be careful to apply security requirements consistently when using the cloud for only some things. It is also important to understand the regulations and the risks involved before doing so.
To the query as to what data can be put into the cloud without worries about security (public data), Keith replied that organizations need to have a classification system in place to figure out what is acceptable for public access.
Many of the questions that followed at this stage were related to general cloud computing matters, such as the sustainability of cloud computing, applications of cloud computing, the distinction between a hosting provider and a cloud provider, and so forth. This was expected, as the audience primarily consisted of people from academia who have yet to come to grips with the aggressive adoption of cloud services in industry.
Finally, there were two very interesting questions related to cloud security. The first was about the integrity of data in a cloud environment: how can data integrity be preserved, and legally proved, in a cloud environment? Christoph pointed out that data integrity is guaranteed as part of the SLAs. Dennis pointed out that many distributed systems concepts, such as the two-phase commit protocol, are applicable to the cloud environment; integrity issues may be temporal, as replication of data may not be immediate, but long-term integrity of the data is preserved. The second question was whether clouds are now a more appealing target for attackers, in the sense that they increase reward relative to risk. Dennis agreed that putting all our eggs in one basket is a big risk, and that such convergence will lead to a greater attack surface and give much greater leverage to attackers. But the underlying premise is that the benefits of cloud technology are amazing, so we have to work towards mitigating the associated risks.
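Since the two-phase commit protocol was cited as a distributed-systems concept that carries over to the cloud, a minimal in-memory sketch of the idea follows; it is purely illustrative, and real cloud storage systems implement replication and commit quite differently. The point is simply that a write becomes visible only after every replica has agreed to it, which is one way to think about the temporal integrity issues mentioned above.

```python
# Minimal in-memory sketch of the two-phase commit idea mentioned by the panel.
# Purely illustrative: a coordinator asks every replica to prepare, and only
# commits if all of them vote yes; otherwise everything is rolled back.

class Replica:
    def __init__(self, name: str):
        self.name = name
        self.committed = {}
        self.pending = None

    def prepare(self, key, value) -> bool:
        # A real replica would durably log the pending write before voting yes.
        self.pending = (key, value)
        return True

    def commit(self):
        key, value = self.pending
        self.committed[key] = value
        self.pending = None

    def abort(self):
        self.pending = None

def two_phase_commit(replicas, key, value) -> bool:
    # Phase 1: voting. Phase 2: commit only if every replica voted yes.
    if all(r.prepare(key, value) for r in replicas):
        for r in replicas:
            r.commit()
        return True
    for r in replicas:
        r.abort()
    return False

replicas = [Replica("us-east"), Replica("eu-west"), Replica("ap-south")]
print(two_phase_commit(replicas, "record-42", "v2"))            # True
print(all(r.committed["record-42"] == "v2" for r in replicas))  # True: replicas agree
```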
A panel summary by Jason Ortiz.
Panel Members:
There has been a lot of discussion recently surrounding the issue of standards and standard adoption. Many questions have been posed and openly debated in an attempt to find the correct formula for standards. When can a standard be considered a “good” standard, and when should that standard be adopted?
According to Dr. Pascal Meunier of Purdue University CERIAS, standard adoption should be based on what he calls transitive trust. Transitive trust indicates that an evaluation of the standard using criteria appropriate to the adopters has been done by an outside source. This ensures the standard applies to the adopter and that it has been evaluated or tested. Dr. Meunier says this allows for sound justification that a standard is appropriate. Unfortunately, most adoption and creation of standards are focused on assumptive trust, or simply knowing someone, somewhere did an evaluation.
Another concern surrounding the creation and adoption of standards raised during the panel discussion was: when standards interfere with economic development or technological progress, should they be adopted, even if they are well-tested, “good” standards? Tim Grance from NIST responded that, as of right now, standards are mostly voluntary recommendations, and they must be in accordance with the economic and technological desires of industry in order to be widely adopted and widely accepted. There are very few punishments for not following standards, and thus there must exist other motivations for industries to spend time and money implementing them.
Along with this, the audience posed a question about the practical use of a standard. Even if a partner does decide to comply with a standard, there is no easy method of ensuring they actually understand the standard or have the same interpretation of it as other partners. Simply establishing a mutual understanding of a standard within an industry poses another obstacle that requires time and resources.
As a result, “good” standards may never be used in practice if they are too costly to implement, and the standards currently in use may be out of date, flawed, or simply untested. This discussion leads to the question of which is better: a standard that is known to be flawed, or no standard at all? There is no clear answer, as there is evidence supporting both sides.
An argument for the idea that a standard is better than no standard (even if it is a flawed or insecure standard) is that, in this scenario, at least the flaw will be known, recognized, and consistent throughout the industry. However, others point out that this could actually be detrimental, since every entity that has adopted the standard becomes vulnerable to the standard’s flaws, as opposed to only a small number of them.
It is clear that industries need standards to follow in many scenarios. However, the difficult questions include when a standard is needed, when a specific standard should be adopted versus when it could reasonably be adopted, and whether or not a flawed standard is better than no standard at all.
[Note: the following is primarily about U.S. Government policies, but I believe several points can be generalized to other countries.]
I was editing a section of my website when I ran across a link to a paper I had forgotten that I wrote. I'm unsure how many people actually saw it then or since; I know it faded from my memory! Other than the CERIAS WWW sites and the AAAS itself, a Google search reveals almost no references to it.
As background, in early April of 2002, I was asked, somewhat at the last moment, to prepare a paper and some remarks on the state of information security for a forum on science in the wake of 9/11, Technology in a Vulnerable World. The forum was sponsored by the AAAS and held later that month. There were interesting papers on public health, risk communication, the role of universities, and more, and all of them are available for download.
My paper for the forum wasn't one of my better ones, in that I was somewhat rushed in preparing it. Also, I couldn't find good background literature for some of what I was writing. As I reread what I wrote, many of the points I raised still don't have carefully documented sources in the open literature. However, I probably could have found some published backup for items such as the counts of computer viruses had I spent a little more time and effort on it. Mea culpa; this is something I teach my students about. Despite that, I think I did capture most of the issues that were involved at the time of the forum, and I don't believe there is anything in the paper that was incorrect at that time.
Why am I posting something here about that paper, One View of Protecting the National Information Infrastructure, written seven years ago? Well, as I reread it, I couldn't help but notice that it expressed some of the same themes later presented in the PITAC report, Cyber Security: A Crisis of Prioritization (2005), the NRC report Towards a Safer and More Secure Cyberspace (2007), and my recent Senate testimony (2009). Of course, many of the issues were known before I wrote my paper -- including coverage in the NRC studies Computers at Risk: Safe Computing in the Information Age (1991), Trust in Cyberspace (1999) and Cybersecurity Today and Tomorrow (2002) (among others I should have referenced). I can find bits and pieces of the same topics going further back in time. These issues seem to be deeply ingrained.
I wasn't involved in all of those cited efforts, so I'm not responsible for the repetition of the issues. Anyone with enough background who looks at the situation without a particular self-interest is going to come up with approximately the same conclusions -- including that market forces aren't solving the problem, there aren't enough resources devoted to long-term research, we don't have enough invested in education and training, we aren't doing enough in law enforcement and active defense, and we continue to spend massive amounts trying to defend legacy systems that were never designed to be secure.
Given these repeated warnings, it is troubling that we have not seen any meaningful action by government to date. However, that is probably preferable to government action that makes things worse: consider DHS as one notable example (or several).
Compounding the problem, too many leaders in industry are unwilling to make necessary, radical changes either, because such actions might disrupt their businesses, even if such actions are in the public good. It is one of those "tragedy of the commons" situations. Market forces have been shown to be ineffective in fixing the problems, and will actually lead to attempts to influence government against addressing urgent needs. Holding companies liable for their bad designs and mistakes, or restricting spending on items with known vulnerabilities and weaknesses would be in the public interest, but too many vendors affected would rather lobby against change than to really address the underlying problems.
Those of us who have been observing this problem for so long are therefore hoping that the administration's 60 day review provides strong impetus for meaningful changes that are actually adopted by the government. Somewhat selfishly, it would be nice to know that my efforts in this direction have not been totally in vain. But even if nothing happens, there is a certain sense of purpose in continuing to play the role of Don Quixote.
Sancho! Where did I leave my horse?
Why is it that Demotivators® seem so appropriate when talking about cyber security or government? If you are unfamiliar with Despair.com, let me encourage you to explore the site and view the wonderfully twisted items they have for sale. In the interest of full disclosure, I have no financial interest or ties to the company, other than as a satisfied and cynical customer.
On a more academic note, you can read or purchase the NRC reports cited above online via the National Academies Press website.