The Center for Education and Research in Information Assurance and Security (CERIAS)


Posters & Presentations


Assured Identity and Privacy

Anonymity and Security in Genomic Datasets

Kristine Arthur, Dr. John A. Springer, Dr. Melissa J. Dark


Genomic datasets are currently used in a variety of research applications.  For example, findings in pharmacogenomics have enabled more accurate prescription drug dosages and lowered healthcare costs.  However, the privacy of individuals included within these datasets is a critical and challenging issue.  It is possible to link an individual’s identity to information extracted from a dataset.  Once a sequence is known, information such as predisposition to certain diseases and disorders can be learned.  This information could then be used to deny health coverage, affect consideration for employment, or increase healthcare costs.  Additionally, it could be used to adversely affect other biological family members, such as children.  While there have been legislative measures such as the Health Insurance Portability and Accountability Act (HIPAA) and the Genetic Information Nondiscrimination Act of 2008 (GINA), these measures alone are insufficient to guarantee protection.  An examination of the literature shows that very little research has been done to develop methods of protecting genomic data.  The aim of this project is to identify best practices from currently existing privacy methods.

Covert Channels in Combinatorial Games

Philip Ritchey, Vernon Rego


A general framework for exploiting covert channels in combinatorial games is presented. The framework is applicable to all combinatorial games, including Chess and Go, but is applied to the game of Tic-Tac-Toe for ease of experimental analysis. The security and capacity of the resulting covert channel are analyzed experimentally. By considering the ways in which a passive adversary can attempt to detect and neutralize the usage of the channel, it is shown that the passive adversary cannot distinguish games which contain hidden information from games which do not. It is also shown that, even by enforcing a perfect-play requirement, the adversary cannot reduce the capacity of the channel to zero in order to prevent covert communication. Additionally, the framework is shown to be generalizable to multiplayer games and games without perfect information by identifying covert channels in two other games.
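
As a concrete illustration of the idea, the sketch below (my own minimal example, not the poster’s actual framework) hides message bits in Tic-Tac-Toe by letting the sender choose among legal moves, with each choice encoding bits under a canonical ordering both parties share:

```python
# Hypothetical sketch: embedding hidden bits in Tic-Tac-Toe move choices.
# All names here are illustrative; the framework in the poster is more general.

def legal_moves(board):
    """Empty cells, in a canonical (row-major) order shared by both parties."""
    return [i for i, c in enumerate(board) if c == " "]

def embed_move(board, bits):
    """Pick the move whose index among legal moves encodes the next hidden
    bits; consumes floor(log2(#choices)) bits per move."""
    moves = legal_moves(board)
    k = len(moves).bit_length() - 1          # bits encodable this turn
    if k == 0:
        return moves[0], bits                 # forced move: nothing embedded
    chunk, rest = bits[:k], bits[k:]
    index = int(chunk.ljust(k, "0"), 2)       # pad if the message ran out
    return moves[index], rest

def extract_bits(board, move):
    """Receiver recovers the bits from the observed move."""
    moves = legal_moves(board)
    k = len(moves).bit_length() - 1
    if k == 0:
        return ""
    return format(moves.index(move), "0{}b".format(k))

board = list(" " * 9)
move, remaining = embed_move(board, "1011")
assert extract_bits(board, move) == "101"   # 9 legal moves -> 3 bits this turn
```

A forced move carries no bits, so per-turn capacity varies with the number of legal (or, under a perfect-play requirement, optimal) choices available.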

DBMask: Encrypted Query Processing over an Encrypted Database

Mohamed Nabeel,  Jianneng Cao, Mohamed Sarfraz, Elisa Bertino


Many organizations are moving their data to cloud-based relational databases because of the many economic benefits they offer. However, many other organizations are hesitant to move their sensitive data to the cloud due to security and privacy concerns. One such concern is the confidentiality of the data stored in a relational cloud. The most commonly utilized technique is to encrypt the data before uploading it to the cloud. However, naive encryption negates the benefits provided by a relational database and makes it difficult to control access to the data. In this work, we propose an approach that preserves the confidentiality of data stored in a relational database in the cloud while supporting relational operations over the encrypted data. Further, our solution supports fine-grained access control when executing SQL queries over encrypted relational data. Our approach does not require modifications to the relational database engine and hence can utilize any existing relational database available in the cloud. DBMask, our prototype system, demonstrates the feasibility of our approach on Amazon RDS, a public cloud-based relational database service.
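
One standard building block for evaluating SQL equality predicates over ciphertexts (offered only as an illustrative sketch; the abstract does not specify DBMask’s exact encryption layers) is to store a deterministic keyed token next to each randomized ciphertext, so the server can match tokens without seeing plaintext:

```python
import hashlib
import hmac
import os

# Illustrative sketch only: the column key, token scheme, and row layout are
# assumptions, not DBMask's actual design.

SEARCH_KEY = os.urandom(32)   # hypothetical per-column key, held by the client

def search_token(value: str) -> str:
    """Deterministic keyed token: equal plaintexts yield equal tokens, so the
    cloud database can evaluate WHERE col = ? over tokens alone."""
    return hmac.new(SEARCH_KEY, value.encode(), hashlib.sha256).hexdigest()

# Client-side rewrite of: SELECT * FROM staff WHERE dept = 'research'
encrypted_rows = [
    {"name_ct": "<ciphertext1>", "dept_tok": search_token("research")},
    {"name_ct": "<ciphertext2>", "dept_tok": search_token("finance")},
]
query_tok = search_token("research")
matches = [r for r in encrypted_rows if r["dept_tok"] == query_tok]
assert len(matches) == 1
```

The payload columns (`name_ct` above) would be encrypted with a conventional randomized cipher; only the token column is deterministic, which is the usual confidentiality/functionality trade-off in this family of designs.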

Differentially Private Grids for Geospatial Data

Wahbeh Qardaji, Weining Yang, Ninghui Li


We tackle the problem of constructing a differentially private synopsis for two-dimensional datasets such as geospatial datasets. The current state-of-the-art methods work by performing recursive binary partitioning of the data domains, and constructing a hierarchy of partitions. We show that the key challenge in partition-based synopsis methods lies in choosing the right partition granularity to balance the noise error and the nonuniformity error. We study the uniform-grid approach, which applies an equi-width grid of a certain size over the data domain and then issues independent count queries on the grid cells. This method has received no attention in the literature, probably due to the fact that no good method for choosing a grid size was known. Based on an analysis of the two kinds of errors, we propose a method for choosing the grid size. Experimental results validate our method, and show that this approach performs as well as, and often times better than, the state-of-the-art methods. We further introduce a novel adaptive-grid method. The adaptive grid method lays a coarse-grained grid over the dataset, and then further partitions each cell according to its noisy count. Both levels of partitions are then used in answering queries over the dataset. This method exploits the need to have finer granularity partitioning over dense regions and, at the same time, coarse partitioning over sparse regions. Through extensive experiments on real-world datasets, we show that this approach consistently and significantly outperforms the uniform grid method and other state-of-the-art methods.
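
A minimal sketch of the uniform-grid method described above, assuming the grid size is chosen on the order of m ≈ sqrt(n·ε/c) for a constant c (the exact formula and constant here are assumptions for illustration):

```python
import math
import random

def uniform_grid_synopsis(points, domain, epsilon, c=10.0):
    """Sketch of the uniform-grid method: pick the grid size from n and
    epsilon (m ~ sqrt(n*epsilon/c); c is an assumed constant), bucket the
    points, then add Laplace noise to each cell count."""
    n = len(points)
    m = max(1, round(math.sqrt(n * epsilon / c)))   # cells per dimension
    (x0, x1), (y0, y1) = domain
    counts = [[0] * m for _ in range(m)]
    for x, y in points:
        i = min(int((x - x0) / (x1 - x0) * m), m - 1)
        j = min(int((y - y0) / (y1 - y0) * m), m - 1)
        counts[i][j] += 1
    # Each point affects exactly one cell, so sensitivity is 1 and the noise
    # scale is 1/epsilon. Laplace(0, 1/eps) = difference of two Exp(eps).
    noisy = [[cnt + random.expovariate(epsilon) - random.expovariate(epsilon)
              for cnt in row] for row in counts]
    return m, noisy
```

A larger m lowers nonuniformity error but raises total noise error, which is exactly the balance the grid-size analysis targets.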

Digital Forensics Evidence Acquisition In Cloud Storage Service: Examining and Evaluating Tools and Techniques

Student: Gilchan Park / Advisor: Samuel Liles


Cloud computing has rapidly become the focus of public attention, and most leading information technology companies now provide a variety of services based on it. In particular, cloud storage services such as Dropbox, Microsoft SkyDrive, and Google Drive have become prevalent. As the number of cloud storage users grows, cloud storage has been identified in a range of literature as an emerging challenge for digital forensic researchers and practitioners. Because cloud services can be utilized in disparate areas, the identification and acquisition (also known as “preservation” in the digital forensic community) of potential evidence is difficult. Crimes involving cloud storage services have already occurred in many different forms, and their number will only grow; cloud computing security must therefore be strengthened. In this paper, I evaluate proposed forensic acquisition tools and discuss the limits of those existing tools. Furthermore, I suggest how forensic examiners, law enforcement, and the courts should evaluate confidence in evidence from the cloud environment in terms of forensic soundness.

Expanding Phish-NET: Detecting Phishing Emails Using Natural Language Processing

Student: Bryan R. Lee & Gilchan Park / Advisor: Julia M. Taylor


Phishing is one of the most potentially disruptive actions that can be taken on the internet. Stealing a user’s account information within a business network through a phishing scam can be an easy way to gain access to that network. Intellectual property and other pertinent business information could be at risk if a phishing attack succeeds. One of the most popular ways of carrying out a phishing attack is through email. Many businesses use typical spam filters, such as blacklist-based or URL-analysis techniques, to protect users from some potentially malicious emails, but these alone are not enough. There have been quite a few attempts at creating reliable, robust phishing email detection systems based on analyzing the content of the emails; for example, CANTINA, phishGILLNET, and Phish-Net are proposed methods for content-based phishing detection. Phish-Net is a phishing detection utility that analyzes three parts of an email to determine whether or not it contains a phishing attack: the header, the text, and any links the email contains. The purpose of this research is to expand the text-analysis portion of Phish-Net to determine whether it is possible to improve its email analysis capabilities. The text-analysis portion of Phish-Net takes into account actionable verbs that tempt the user into performing an action. In this study, the new algorithm includes not only actionable verbs but also other parts of speech, so that it can catch other actionable words in phishing emails.
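
To make the text-analysis idea concrete, here is a deliberately simplified sketch (the lexicon and the two-signal rule are illustrative assumptions, not Phish-Net’s actual algorithm) that flags emails pairing an actionable word with a link:

```python
import re

# Hypothetical simplification of a content-based detector: flag emails that
# combine an "actionable" word (verbs and, per the expanded algorithm, other
# parts of speech) with a URL. The lexicon below is made up for illustration.

ACTIONABLE = {"click", "verify", "confirm", "update", "login", "submit",
              "urgent", "immediately", "required", "suspended"}

def looks_phishy(text: str) -> bool:
    """True when the email both urges an action and supplies a link."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    has_action = bool(words & ACTIONABLE)
    has_link = "http://" in text or "https://" in text
    return has_action and has_link

assert looks_phishy("Please click here to verify your account: http://example.test")
assert not looks_phishy("Lunch meeting moved to noon tomorrow.")
```

A real system would replace the fixed lexicon with part-of-speech tagging so that novel actionable words are caught, which is the expansion this study investigates.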

Generalized Social Network Privacy

Christine Task, Chris Clifton


Many online services now allow you to designate friendship relations with other users, creating a quickly growing abundance of social network data-sets.  As social network analysts have raced to make use of these fascinating new data sources, privacy researchers have been simultaneously working to develop analysis techniques which protect individual privacy.  Their efforts have produced a diverse variety of privatization approaches.  But how exactly do these techniques affect privacy and how do they compare to each other?  This work is an initial step toward the development of a universal view of social network privatization.

Hardware/Software-in-the-loop Analysis of Cyberattacks on UASs

James Goppert, Andrew Shull, Nandagopal Sathyamoorthy, and Inseok Hwang


Unmanned aerial systems (UASs) have taken on a large role in military operations and there is considerable interest in expanding their use to commercial and scientific applications. Because of the dependence of these vehicles on computer systems, their high degree of autonomy, and the danger posed by a loss of vehicle control, it is critical that the proliferation of these vehicles be accompanied by a thorough analysis of their vulnerabilities to cyberattack.

Image Steganography Using Sudoku

Samuel Wagstaff, Vaishnavi Chandrasekaran


An image sharing scheme that encodes the secret image without altering the image size or requiring any additional information is proposed. The scheme ensures that the fidelity of the revealed secret data is distortion free and also possesses reversible characteristics providing for the retrieval of the host image as well.  The system uses the concept of Sudoku to conceal the shadow with reversibility. Given a secret image, the system derives shadows from the secret image and produces n shadow images. Given any t out of n shadow images along with the corresponding key values, the involved participants can losslessly reconstruct the secret and the original host image.
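
The “any t out of n shadow images reconstruct the secret” behavior is the classic (t, n)-threshold property. The poster’s Sudoku-based construction differs, but a minimal Shamir-style sketch over a prime field illustrates the threshold behavior being claimed:

```python
import random

# Threshold illustration only: the poster's Sudoku-based sharing scheme is a
# different construction; this shows just the (t, n) reconstruction property.

P = 2**13 - 1  # small prime field, illustrative only

def make_shadows(secret, t, n):
    """Hide `secret` in a degree-(t-1) polynomial; shadows are point values."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shadows):
    """Lagrange interpolation at x = 0 from any t shadows."""
    secret = 0
    for i, (xi, yi) in enumerate(shadows):
        num = den = 1
        for j, (xj, _) in enumerate(shadows):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shadows = make_shadows(1234, t=3, n=5)
assert reconstruct(shadows[:3]) == 1234     # any 3 of the 5 suffice
assert reconstruct(shadows[1:4]) == 1234
```

Fewer than t shadows reveal nothing about the secret, which is the same access structure the abstract describes for the shadow images.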

Layering Authentication Channels to Provide Covert Communication

Mohammed H. Almeshekah, Mikhail J. Atallah and Eugene H. Spafford


We argue the need for providing a covert back-channel communication mechanism in authentication protocols, discuss various practical uses for such a channel, and desirable features for its design and deployment. Such a mechanism would leverage the current authentication channel to carry out the covert communication rather than introducing a separate one. The communication would need to be oblivious to an adversary observing it, possibly as a man-in-the-middle. We discuss the properties that such channels would need to have for the various scenarios in which they would be used. Also, we show their potential for mitigating the effects of a number of security breaches currently occurring in these scenarios.

Privacy, Security and Forensics in Software as a Service (SaaS)

Susan Fowler


The issues of privacy and security are an important consideration for a forensic analyst. For example, if data capture compromises confidentiality or creates a security breach, the remaining steps may also be compromised or rendered useless. The considerations of private and public cloud environments, and the collection of data within them, add another angle. Additionally, as network forensics is an emerging discipline, the issue of privacy for both the provider and the consumer may not be well defined or established through policies and procedures. The main audience for this topic is investigators themselves, as obtaining data while maintaining privacy and security for both the provider and the customer is the primary goal. Secondary stakeholders are the providers and customers. While both would have concerns, providers would be especially concerned with security, privacy, and integrity during the data-capturing process. Depending on the terms of service in use, the software-as-a-service (SaaS) provider may be vulnerable to legal recourse should a customer experience loss or exposure of data to security breaches. The primary method of answering this question will be research into current forensic practices and tools, as well as policies and procedures currently in place with regard to SaaS privacy and security. While this list is not exhaustive, some of the issues to be discussed are: client-side, provider-side, and intermediary data collection with respect to data artifacts, collection, jurisdictional, and confidentiality considerations. Further, the two main deployment models, private and public clouds, will be discussed with regard to the research criteria posed. Keywords: privacy, security, network forensics, SaaS

Secure and Private Outsourcing to Untrusted Cloud Servers

Shumiao Wang, Mikhail Atallah


Storage and computation outsourcing to cloud servers has become an important research topic in recent years, due to the large volume of data that needs to be hosted at cloud servers and the intent to employ servers to perform computational work for clients. Our research focuses on developing secure and efficient protocols for outsourcing computational work and storage to untrusted cloud servers. In the poster, we describe two different kinds of secure outsourcing settings, present the research problems in each, and give our protocols.

Unlinkable Complex Identity Claims

Ruchith Fernando, Bharat Bhargava


We propose an identity management system that allows users to obtain certified identity claims from various identity providers. Users are able to use these certified identity claims for authentication and authorization with service providers, with a guarantee of unlinkability of transactions. Furthermore, users are able to use identity claims to efficiently satisfy complex authentication and authorization policies set by a service provider, with the same level of privacy guarantees. Finally, users are able to mix and match identity attributes from different identity providers in satisfying service provider policies.

End System Security

A Comprehensive Access Control System for Scientific Applications

Peter Baker, Jia Xu, Elisa Bertino


Web-based scientific applications have provided a means to share scientific data across diverse groups and disciplines for integrated research extending beyond the local computing environment.  But the organization and sharing of large and heterogeneous data pose challenges due to their sensitive nature. The main challenge is providing a robust authorization mechanism that prevents unauthorized access to data in scientific applications. In this paper we analyze the security requirements of scientific applications and present an authorization model that facilitates the organization and sharing of data without compromising their security. We also discuss the key components and implementation of the Computational Research Infrastructure for Science (CRIS), a web-based scientific application that currently deploys a partial set of the security requirements for scientific applications.

A Platform for Finding Attacks in Unmodified Implementations of Intrusion Tolerant Systems

Hyojeong Lee, Jeff Seibert, Endadul Hoque, Charles Killian, Cristina Nita-Rotaru


We present Turret, a platform for automatically finding performance attacks in unmodified implementations of intrusion tolerant systems. In performance attacks, malicious nodes deviate from the protocol when sending or creating messages, with the goal of degrading system performance. Turret assumes that the user provides the intrusion tolerant system binary, the format of messages sent by the system, and the metrics that measure its performance. Our platform leverages virtualization to run the user-specified operating system and intrusion tolerant system binary and uses a well-known network emulator to tunnel the network traffic. We ran Turret on 5 systems and found 29 performance attacks, 23 of which were not previously reported to the best of our knowledge. Turret was able to find these attacks in a matter of hours.

An Efficient Certificateless Cryptography Scheme without Pairing

Seung-Hyun Seo, Mohamed Nabeel, Xiaoyu Ding, Elisa Bertino


We propose a mediated certificateless encryption scheme without pairing operations. Mediated certificateless public key encryption (mCL-PKE) solves the key escrow problem in identity based encryption and the certificate revocation problem in public key cryptography. However, existing mCL-PKE schemes are either inefficient, because they utilize expensive pairing operations, or vulnerable to a partial decryption attack. In order to address these performance and security issues, in this poster we propose a novel mCL-PKE scheme. We implement our mCL-PKE scheme and a recent scheme, and evaluate their security and performance. Our results show that our algorithms are efficient and practical.

FiPS: A File Provenance System

Elisa Bertino


A file provenance system supports the automatic collection and management of provenance, i.e., the complete processing history of a data object. File-system-level provenance provides functionality unavailable in existing provenance systems. In this paper, we discuss the design objectives for a flexible and efficient file provenance system and then propose the design of such a system, called FiPS. We design FiPS as a thin stackable file system for capturing provenance in a portable manner. FiPS can capture provenance at various degrees of granularity, can transform provenance records into secure information, and can direct the resulting provenance data to various persistent storage systems.

KMAG: VMM-level Malware Detection via Kernel Data Access Profiling

Chung Hwan Kim, Dannie Stanley, Rick Porter, Dongyan Xu


Many malware attacks involve kernel data accesses.  Existing approaches to data-centric malware analysis monitor memory accesses at the binary level.  Binary-level analysis, however, is known to be slow and impractical for real-world systems.  In contrast, KMAG effectively performs kernel malware analysis at the VMM level.  We first generate attack profiles by analyzing accesses to kernel data, and then use the profiles to detect attacks that have the same or similar data access patterns while the system is running.  To monitor accesses to kernel data efficiently and transparently, we designed a page-level access detection mechanism built atop the KVM virtualization platform.  This mechanism leverages hardware-supported memory protection to mark the pages of interest as not accessible, and detects violations on those pages when the corresponding kernel objects are accessed in the guest virtual machine.

Motivation of Community Pharmacies to Use Biometric Authentication

Dr. Stephen Elliott, Dr. Melissa Dark, Dr. James Anderson, Dr. Dennis Depew


Using a grounded theory strategy, this study analyzed the experiences and perspectives of community pharmacists. The intent was to answer the question, “Why do community pharmacists consider using traditional authentication systems rather than biometric systems?” The study included individual interviews with 10 pharmacists from Hawaii and an online survey of 35 pharmacists across the United States. Based on the examination of the collected data, themes emerged from the interviews and survey through various analyses. The contribution of this study is the presentation of these themes, which are essential to the grounded theory strategy and to future research.

Secure Information Sharing and Access Control in PLM Systems

Rohit Ranchal, Bharat Bhargava


Modern enterprises operate in a global economy with their operations dispersed across internal processes and external partners. A partner can be a sub-contractor to whom work is outsourced. Enterprises collaborate with their partners through Product Lifecycle Management (PLM) systems for producing and delivering products and services to consumers. Each partner in a PLM system uses data, generates data and shares data with other partners, and all this collaboration contributes to producing and delivering the products or services. Shared data may contain highly sensitive information such as trade secrets, intellectual property, private organizational or personal information. In large enterprise systems, it is difficult to understand and track data dissemination. The main security challenge in PLM is the unauthorized disclosure and data leakage of information shared among the partners. Existing approaches ensure security within the domain of an organization and don’t address protection in a decentralized environment. We propose an approach for secure data dissemination using the Active Bundle scheme. This approach enables organizations to securely share information in their PLM steps and have control over its interaction in external domains.

Human Centric Security

Determining Authorship with Style

Lauren M. Stuart, Saltanat Tazhibayeva, Ji Hyeon Hong, Amy Wagoner, Julia M. Taylor, Victor Raskin


Stylometry is the characterization of a text’s author by capturing the style of writing as measurable features expressed in the text. There are three major applications for stylometry: authorship verification, authorship attribution, and deception detection. In authorship verification, the styles of a text and the body of work of the supposed author are matched. Stylometrics include the usages and frequencies of words, word categories, or some syntactic structures. Research questions include the use of current tools and techniques on different corpora of different size and origin, expansion of syntactic and possibly into semantic features, the exploration of style variance by topic rather than by author, and the performance tradeoffs in current tools.
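
A minimal sketch of the feature idea mentioned above: represent a text by relative frequencies of common function words and compare texts by cosine similarity for authorship verification (the word list and threshold are illustrative assumptions, not any specific tool’s configuration):

```python
import math
import re
from collections import Counter

# Illustrative sample of function words; real stylometric tools use far
# larger lists plus word-category and syntactic features.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is",
                  "was", "for", "it", "with", "as", "his", "on", "be"]

def style_vector(text):
    """Relative frequency of each function word in the text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    total = max(len(tokens), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Verification, sketched: compare a disputed text against a known sample and
# accept if similarity clears a tuned threshold (0.8 is an assumption).
known = style_vector("The cat sat on the mat, and it was happy in the sun.")
disputed = style_vector("The dog lay on the rug, and it was quiet in the shade.")
similar = cosine(known, disputed) > 0.8
```

Because function words are largely topic-independent, this family of features also bears on the research question above of separating style variance by topic from variance by author.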

Effective Risk Communication for Android Apps

Christopher Gates, Jing Chen, Ninghui Li, Robert W. Proctor


Due to the popularity and openness of the Android platform, it has become an attractive target for malicious and intrusive apps. Android relies on users to understand the permissions that an app is requesting and to base the installation decision on that list of permissions. This reliance has been shown to be ineffective, as most users do not understand or consider the permission information. We propose a solution that assigns a summary risk score to each app, and then investigate the impact of presenting risk information as well as the most effective way to present it.  We conduct three studies to evaluate our approach:  (1) an online study that presents the risk of an app in a simulated app-selection scenario and tracks participant behavior and selection under different scenarios;  (2) an in-person lab study to evaluate the effects of framing the score with positive (safety) or negative (risk) information;  (3) a final online study to evaluate the framing in a simulated app-selection setting.  Our results show that introducing risk score information has significant positive effects on the selection process and can also lead to more curiosity about security-related information.

Mutual Restraining Voting Involving Multiple Conflicting Parties

Dr. Xukai Zou, Yan Sui, Huian Li, Wei Peng, and Dr. Feng Li


Scrutinizing current voting systems, including existing e-voting techniques, one can discern that there exists a gap between casting secret ballots and tallying and verifying individual votes. This gap is caused either by the disconnection between the vote-casting process and the vote-tallying process, or by the opaque transition (e.g., due to encryption) from vote-casting to vote-tallying, and it damages voter assurance, i.e., the assurance of any voter that the vote he or she has cast is verifiably counted in the final tally. We propose a groundbreaking e-voting protocol that fills this gap and provides a fully transparent election. In this fully transparent internet voting system, the transition from vote-casting to vote-tallying is seamless, viewable, verifiable, and privacy-preserving. As a result, individual voters are able to verify their own votes and are technically and visually assured that their votes are indeed counted in the final tally; the public is able to verify the accuracy of the count; and political parties are able to catch fraudulent votes. All this is achieved while still retaining what is perhaps the core value of democratic elections: the secrecy of any voter's vote. The new protocol is the first fully transparent e-voting protocol that technologically enables open and fair elections and delivers full voter assurance, even for the voters of minor or weak political parties.

Threats, Vulnerabilities, and Security Controls in Cloud Computing

Hans Vargas Silva and Temitope Toriola


This project is a comparison of Cloud Computing Security policies. We explored threats, vulnerabilities, and controls in cloud computing, to gain a deeper understanding of cloud storage, accessibility, security, and threats. We examined security control models of three major cloud providers (Windows Azure, Google Drive, and Amazon Web Services S3).

Trust Management for Social Networks

Yefeng Ruan, Arjan Durresi


Inspired by the similarities between human trust and physical measurements, we have proposed a trust framework for social networks, including defining new trust metrics and their combinations, which captures both human trust level and its uncertainty, while being intuitive and user friendly. We show some great advantages of our framework, for example the trust view in an experiment was increased more than 2000 times. Based on our trust framework, we propose several security mechanisms, including filtering information on social networks and increasing the efficiency of advertisement and influence on social networks. Experiments on synthesized and real social networks validate the potential of our trust framework to develop trust based mechanisms for social networks.

Using Probabilistic Generative Models for Ranking Risks of Android Apps

Hao Peng, Chris Gates, Ninghui Li, Yuan Qi, Rahul Potharaju, Cristina Nita-Rotaru, Ian Molloy


One of Android’s main defense mechanisms against malicious apps is a risk communication mechanism which, before a user installs an app, warns the user about the permissions the app requires, trusting that the user will make the right decision. This approach has been shown to be ineffective, as it presents the risk information of each app in a “stand-alone” fashion and in a way that requires too much technical knowledge and time to distill useful information.  We introduce the notion of risk scoring and risk ranking for Android apps to improve risk communication, and identify three desiderata for an effective risk scoring scheme. We propose to use probabilistic generative models for risk scoring schemes, and identify several such models, ranging from the simple Naive Bayes to advanced hierarchical mixture models. Experimental results conducted using real-world datasets show that probabilistic generative models significantly outperform existing approaches, and that Naive Bayes models give a promising risk scoring approach.
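
The simplest of the models named above, Naive Bayes, can be sketched as follows: estimate each permission’s probability from a corpus of apps, then score a new app by how improbable its permission set is under the model (the toy corpus, permission names, and smoothing are assumptions for illustration):

```python
import math

# Made-up corpus of permission sets from (mostly benign) apps.
corpus = [
    {"INTERNET"}, {"INTERNET", "ACCESS_NETWORK_STATE"},
    {"INTERNET"}, {"INTERNET", "VIBRATE"},
    {"INTERNET", "SEND_SMS", "READ_CONTACTS"},
]
ALL_PERMS = sorted(set().union(*corpus))

def perm_prob(p, alpha=1.0):
    """Smoothed estimate of P(permission p is requested)."""
    return (sum(p in app for app in corpus) + alpha) / (len(corpus) + 2 * alpha)

def risk_score(app_perms):
    """Negative log-likelihood under independent (naive) permission model:
    the rarer the permission combination, the higher the score."""
    score = 0.0
    for p in ALL_PERMS:
        q = perm_prob(p)
        score -= math.log(q if p in app_perms else 1.0 - q)
    return score

# An app requesting rare, dangerous permissions scores higher than a typical one.
assert risk_score({"INTERNET", "SEND_SMS", "READ_CONTACTS"}) > risk_score({"INTERNET"})
```

Such a score is monotone in how atypical an app’s permission set is, which gives the comparability across apps that per-app permission lists lack.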

Network Security

Adversarial Testing of Wireless Routing Implementations

Endadul Hoque, Hyojeong Lee, Rahul Potharaju, Charles Killian, Cristina Nita-Rotaru


We focus on automated adversarial testing of real-world implementations of wireless routing protocols. We extend an existing platform, Turret, designed for general distributed systems, to address the specifics of wireless routing protocols. Specifically, we add functionality to differentiate routing messages from data messages, and support for wireless-specific attacks such as blackhole and wormhole, as well as routing attacks such as replay attacks. The extended platform, Turret-W, uses a network emulator to create reproducible network conditions and virtualization to run unmodified binaries of wireless protocol implementations. Using the platform on publicly available implementations of two representative routing protocols, we (re-)discovered 14 attacks and 3 bugs.

Fine-Grained Analysis of Packet Loss Symptoms in Wireless Sensor Networks

Bilal Shebaro, Daniele Midi, Elisa Bertino


Packet losses in a wireless sensor network can indicate possible attacks on the network. Detecting and reacting to such losses is thus an important component of any comprehensive security solution. However, in order to quickly and automatically react to a loss, it is important to determine its actual cause. In a wireless sensor network, packet losses can result from attacks affecting the nodes or the wireless links connecting them. Failure to identify the actual attack can undermine the efficacy of the attack responses. We thus need approaches to correctly identify the cause of packet losses. In this paper, we address this problem by proposing and building a fine-grained analysis (FGA) tool that investigates the causes of packet losses and reports the most likely cause. Our tool uses parameters, e.g., RSSI and LQI, transmitted with every received packet to profile the links between nodes and their corresponding neighborhoods. Through real-world experiments, we have validated our approach and shown that our tool is able to differentiate between the various attacks that may affect the nodes and the links.
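
As an illustration of the profiling idea, the sketch below keeps a per-link RSSI profile and attributes a loss episode to the link or the node depending on how far the signal deviates from that profile (the z-score threshold and the two-way classification are simplifying assumptions, not the FGA tool’s actual logic):

```python
from statistics import mean, stdev

# Illustrative sketch: profile each link's RSSI from received packets, then
# attribute a packet-loss episode to the link or to the node.

def profile(samples):
    """Running profile of a link: mean and sample standard deviation."""
    return mean(samples), stdev(samples)

def classify_loss(rssi_history, rssi_at_loss, z_threshold=3.0):
    """If RSSI at the time of loss deviates far from the link's profile,
    suspect a link-level attack (e.g., jamming); otherwise suspect the node."""
    mu, sigma = profile(rssi_history)
    z = abs(rssi_at_loss - mu) / sigma if sigma else 0.0
    return ("link-level attack suspected" if z > z_threshold
            else "node-level cause suspected")

history = [-61, -60, -62, -59, -61, -60, -62, -61]   # dBm, illustrative
assert classify_loss(history, -90) == "link-level attack suspected"
assert classify_loss(history, -61) == "node-level cause suspected"
```

A real tool would combine several parameters (RSSI, LQI, neighborhood reports) rather than a single threshold, but the separation of link symptoms from node symptoms is the same.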

Increasing Network Resiliency by Optimally Assigning Diverse Variants to Routing Nodes

Andrew Newell, Daniel Obenshain, Tom Tantillo, Cristina Nita-Rotaru, and Yair Amir


Networks with homogeneous routing nodes are constantly at risk as any vulnerability found against a node could be used to compromise all nodes. Introducing diversity among nodes can be used to address this problem. With few variants, the choice of assignment of variants to nodes is critical to the overall network resiliency.  We present the Diversity Assignment Problem (DAP), the assignment of variants to nodes in a network, and we show how to compute the optimal solution in medium-size networks. We also present a greedy approximation to DAP that scales well to large graphs. Our solution shows that a high level of overall network resiliency can be obtained even from variants that are weak on their own.
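
A greedy heuristic in the spirit of the paper’s DAP approximation can be sketched as follows; note that the objective here (avoiding same-variant neighbors) is a simpler proxy for the paper’s actual objective of network connectivity under variant compromise:

```python
def greedy_assign(graph, num_variants):
    """graph: {node: set(neighbors)}. Give each node the variant least used
    among its already-assigned neighbors, visiting high-degree nodes first."""
    assignment = {}
    for node in sorted(graph, key=lambda n: -len(graph[n])):
        used = [0] * num_variants
        for nb in graph[node]:
            if nb in assignment:
                used[assignment[nb]] += 1
        assignment[node] = min(range(num_variants), key=lambda v: used[v])
    return assignment

# Toy network: a triangle plus a pendant node. With only 2 variants, one
# same-variant edge inside the triangle is unavoidable.
net = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
variants = greedy_assign(net, 2)
# Count edges whose endpoints share a variant (lower is better).
conflicts = sum(variants[u] == variants[v] for u in net for v in net[u]) // 2
assert conflicts == 1
```

The example also shows why assignment matters with few variants: the same variant pool can yield very different resiliency depending on which nodes share a variant.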

Secure Big Data Computations in the Cloud

Julian Stephen, Patrick Eugster


The shift to cloud-based computing is a paradigm change that offers considerable financial and administrative gains. But governmental and business institutions wanting to tap into these gains are concerned with security issues prevalent in cloud-based systems today. The cloud offers many new vulnerabilities and at the same time is dominated by new kinds of applications, which calls for new security solutions. Intuitively, byzantine fault tolerant (BFT) replication has many benefits to enforce integrity and support privacy in clouds. But BFT systems are not at all suited for typical “data-flow processing” cloud applications which analyze large amounts of data in a parallelizable manner: indeed, existing BFT solutions focus on replicating single monolithic servers, whilst data-flow applications consist of several different stages, each of which may give rise to multiple components at runtime to exploit cheap hardware parallelism; similarly, BFT replication hinges on comparison of redundant outputs generated, which in the case of data-flow processing can represent huge amounts of data. We present a system that secures computations being run in the cloud by leveraging BFT replication coupled with variable-degree clustering, sampling/hashing, and separation of duty, to achieve a parameterized tradeoff between fault tolerance and overhead in practice.
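
One ingredient named above, comparing huge replicated outputs via hashing rather than shipping the full data, can be sketched as follows (the simple majority-vote rule is an illustrative simplification of actual BFT agreement):

```python
import hashlib

def digest(chunks):
    """Digest of a replica's (potentially huge, streamed) output."""
    h = hashlib.sha256()
    for c in chunks:
        h.update(c.encode())
    return h.hexdigest()

def vote(replica_outputs):
    """Accept the output whose digest a strict majority of replicas produced;
    replicas compare fixed-size digests instead of full data-flow outputs."""
    digests = [digest(o) for o in replica_outputs]
    winner = max(set(digests), key=digests.count)
    if digests.count(winner) * 2 <= len(digests):
        raise RuntimeError("no majority; fault suspected")
    return replica_outputs[digests.index(winner)]

good = ["part1", "part2"]
faulty = ["part1", "tampered"]
assert vote([good, good, faulty]) == good   # faulty replica is outvoted
```

Hashing reduces the comparison cost from the size of the data to the size of a digest, which is what makes output voting plausible for data-flow workloads.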

Self-organizing self-adaptive network through differential elasticity

Prof. Fatma Mili


Redundancy has always been an essential ingredient of networks, as it provides fault-tolerance.  By uniformly deploying more nodes than the minimal requirement, distributed redundancy helps maximize tolerance to “mechanical” node failures, such as failure due to power depletion, but it does not provide adequate tolerance to malicious attacks.  Malicious attacks, by design, are often targeted at the most vulnerable or most critical resources of a system, and against such targeted attacks these traditional measures of redundancy prove inadequate.  In sensor networks, because of the large amount of inherent redundancy, the most serious threats are those attacking critical paths in the network in an attempt to break them, thus disrupting the overall function of the network.  In this paper we define a set of graph properties that characterize the level of vulnerability of specific links. We use these properties to define a bio-inspired model of self-organization and adaptive reorganization that imparts networks with resilience in the face of a variety of scenarios, from simple power depletion to targeted malicious attacks.  The proposed concepts of differential connectivity and differential elasticity help us realize the objective of self-organizing nodes in a self-aware dynamic environment.

Policy, Law and Management

Applying the OSCAR Forensic Framework to Investigations of Cloud Processing

Bryan Lee and Dr. Sam Liles


This paper examines the OSCAR network forensic investigation model and attempts to apply it to the realm of Infrastructure as a Service cloud computing. A forensic investigation in the cloud faces many difficulties that must be taken into account, such as jurisdiction, owing to the way information is stored in the cloud, and the chain of evidence and the trust that can be placed in evidence from the cloud. Although a forensic model of investigation cannot solve all the problems faced within an investigation of this type, a standard model can help to ensure that all investigations of this variety are handled in the same manner. This can provide a standard level of trust in evidence discovered during an investigation of this nature and provide a standard way to deal with problems such as jurisdiction throughout the course of these investigations. The OSCAR investigation model appears to be a fairly robust model when compared with other models, such as Martini and Choo’s implementation of the NIST model for use in cloud forensics.

Approaches for Acquiring Data from the Flash Memory of a Cellular Phone

Chandrika Silla, Dr. Sam Liles


Digital evidence kept in the flash memory of a cellular phone can be extracted using forensic tools. Forensic tools use two approaches to extract data: the logical approach is a bit-by-bit copy of the flash memory using the file system or protocol of the chip provider, while the physical approach is a bit-by-bit copy of the entire physical memory. Each method has its own advantages and disadvantages.

Canada’s Cyber Warfare Capabilities

Bryan Lee and Dr. Sam Liles


This paper discusses Canada and its ability to wage cyber warfare. Several definitions of cyber warfare are presented and discussed, as well as the motives and potential actors behind a cyber attack. Several definitions of cyberspace are also discussed in order to provide a context for the domain of cyber warfare. A case is then made for why anyone should care about cyber warfare. Cyber attacks are a threat to a nation’s security. Cyberspace must be considered a fourth domain of war, with the other three domains being land, air, and sea. There are many dangers within cyberspace that can affect individuals, corporations, and nation-states. Canada’s cyber warfare capabilities are then examined. Both offensive and defensive capabilities are considered, with the focus of much of the research being on defensive capabilities. Canada recently released a cyber security strategy which is discussed in detail. Furthermore, capabilities of several government organizations are examined. Finally, a comparative assessment of Canada’s capabilities within cyberspace is given. Canada’s capabilities are found to be less than adequate to defend against a cyber engagement by an enemy nation-state. However, it is likely that many nation-states would be unable to defend against such an engagement from a knowledgeable and timely attacker.

Cloud Forensics: An Investigation into Imperfect Virtualization

Eric Katz, Dr. Samuel Liles


Cloud computing is becoming more popular, and companies are quickly adopting cloud strategies as a cost-saving measure. Unfortunately, this also means that they are putting information onto cloud servers and devices without realizing the security implications. Depending on how the instantiations are made, data from previous virtual machines could be located in unallocated sectors or within the RAM of a different running instance.

Cyber Conflict Capability Assessment: Islamic Republic of Iran

Jake Kambic, Dr. Samuel Liles


The purpose of this study was to perform a topical OSINT analysis of Iran’s capability to engage in cyber conflict. The capabilities were assessed on an ordinal Likert-type scale which seeks to independently grade a nation-state’s cyber capabilities in a general way. The metrics used were intended to gauge both the offensive and defensive resources available to a country within the cyber domain.

Cyber Warfare Capabilities Analysis: Brazil

Eric Katz, Dr. Samuel Liles


Brazil is an emerging market country, characterized by fast economic growth, increased foreign investment, and international political clout. Brazil has the 6th largest economy in the world and is trying to find its place as a global power. As such, it is going through many growing pains as it becomes a world leader. Some of these pains include trying to protect its own cyber infrastructure while leveraging its cyber power as an effective military tool.

Cyber Warfare Capabilities of Brazil

Mary Horner


The last several years have been essential for the formation of a national strategy for cybersecurity in Brazil.  There are many vulnerabilities and threats in the information society and the Brazilian government has undertaken efforts to strengthen the security of society and interests of the state. Cybersecurity has been characterized as an increasingly strategic role of government and essential to maintenance and preservation of the critical infrastructure in Brazil. Cybersecurity focuses on both prevention and repression of cyber attacks. The federal government has established standards and defined requirements for implementation of methods for federal agencies. Cyber defense is characterized by effective actions of the military aimed at defense or offensive attacks. The Ministry of Defense has established specific requirements for the Armed Forces to focus their efforts on cyber defense in Brazil.

Forensic Evidence in Apache’s CloudStack

Will Ellis, Dr. Sam Liles


Apache’s CloudStack allows service providers to create Infrastructure as a Service (IaaS) solutions.  The open source nature of the software is a draw for organizations wanting to provide low cost solutions either in the form of private clouds or reselling a cloud infrastructure to their clients.  As services such as CloudStack grow and compete with offerings from Amazon, Microsoft, or Google there will be a growing need to gather forensic evidence from cloud environments.  Law enforcement is able to request information from the major vendors, but an open source platform brought online to solve the needs of a small organization poses a significant problem for local law enforcement.  The study will attempt to provide a compendium of forensic evidence locations in Apache’s CloudStack environment.  This will enable law enforcement agencies to know where and what to look for in the environment, should they encounter an instance during the course of their investigations.

France’s Cyber Defense Capabilities

Kristine Arthur


In the 2011 release of Information System Defence and Security: France’s Strategy, it is stated that the nation’s strategic objectives are to: become a world power in cyber defense, strengthen the cyber security of critical national infrastructures, safeguard France’s ability to make decisions through the protection of information related to its sovereignty, and ensure security in cyberspace.  From these goals, it is clear that the focus of French cyber policies is defensive.  Therefore, it would be of interest to policymakers as well as information security specialists to answer the question “What cyber defense capabilities and policies is France integrating into its military doctrine?”

International Legal Implications for Cloud Computing

Mary Horner


Cloud computing is a rapidly growing global technological resource. This unique resource has raised many concerns about personal data security and confidentiality. It is important to understand how data will be protected if it is controlled by a third party.  The issue of jurisdiction is a global concern and this paper presents the international legal implications of cloud computing. Many laws from the United States and the European Union are discussed to explain how the international community will conduct investigations in the cloud environment.

Israel: An Assessment of Cyber Capabilities

Will Ellis, Dr. Sam Liles


Waging war in the cyber domain is not a new concept. Controlling an enemy’s command and control dates back to Roman times, from the interception of simple rotation ciphers to modern attacks targeting a country’s uranium enrichment plants.  While the cyber domain is not universally defined, there is little doubt it is being used as an adjunct to the land, sea, and air domains to achieve strategic goals.  Currently, Israel is an ally in the Middle East, though this causes political tension in the region.  This assessment of open source intelligence serves to analyze the cyber capabilities of the nation.

Open-source analysis of the cyber warfare capability of North Korea

Tyler Jensen, Dr. Samuel Liles


This research provides an intelligence analysis, using information from publicly available sources, of the capability of North Korea to wage cyber war.  Areas of closest consideration include infrastructure, past cyber attacks, the availability of human actors to engage in cyber war, the quality of education, and the quality of hardware and software available to the military.  A close look at each area provides a big-picture view of the type and effect of future attacks that North Korea could use to engage in cyber war against the United States or its allies.

Petroleum Cyber Conflict: An Open Source Analysis of Entities’ Means, Motive, and Opportunity for Industry Manipulation

Kristine Arthur, William Ellis, Mary Horner, Tyler Jensen, Kyle Johansen, Jacob Kambic, Eric Katz, Bryan Lee, Marcus Thompson, Samuel Liles


Recent events of national significance within the oil and gas industry have brought to light both the question of defining threat sources and that of plausibly attributing known events to a threat source. The unprecedented rise in events raises the question of whether this is incidental to the continued advancement of technology and awareness, or suggests an ongoing conflict which may escalate. To that end, this report seeks to aggregate relevant events, present criteria for outlining threat origins, and determine the likelihood that the incidents are related, as well as whether any observed correlation points to a persistent aggressor or simply circumstantial coincidence. The purpose of this analysis is to provide decision makers with a clearer idea of the current security outlook and the apparent causes for concern. All events and options should be considered cautiously and as empirically as possible.

SaaS Incident Response: Evidence Provenance in a Cloud Service

Jake Kambic, Dr. Samuel Liles


The purpose of this project was to analyze the origins of evidence in a cloud service, specifically targeting the Software as a Service (SaaS) business model. Due to the high volatility of cloud services, their abstract nature, and the physically dispersed infrastructure upon which they are based, full forensic collection and analysis in the cloud is not realistically feasible. However, techniques for gathering evidence that can produce reasonably accurate results do exist. For this reason, an analysis of incident response in the cloud was undertaken, with the express purpose of identifying where evidence is located in a SaaS cloud environment and determining the level of effort required to acquire that evidence.

UK Leads World Cyber Race

Kyle Johansen


The United Kingdom is leading by example. It is one of the few nations that have a well-defined National Cyber Strategy. Its access, quality, and willingness to spend show other nations what can be done with enough determination.

Prevention, Detection and Response

Chemical Restoration of Damaged Hard Drives

Brian Curnett, Talin Darian, Sean McCarthy, Kevin Wojcik


Currently there are very few viable methods of recovering data from a damaged hard drive, and even fewer that are economically feasible. Creating a standard, repeatable technique for extracting data from a hard drive that has sustained salt, debris, or smoke damage would benefit many stakeholders. We are developing a chemical means of cleansing a hard drive of superficial impurities in order to recover the information contained within it. The methods we have developed allow this procedure to be performed outside a traditional clean room.  To create this system in an economically viable manner, principles of organic chemistry, reverse osmosis filtration, and vacuum evaporation were utilized.

Forensic Implications of Apple’s New Fusion Drive

Shruti Gupta


Mac computers are increasingly being seen at crime scenes today, and Apple has introduced the new Fusion Drive. It is important for law enforcement to be able to examine a computer containing this new form of storage. However, the forensic implications of the Fusion Drive have not yet been discussed.  This research explores possible changes that the Fusion Drive might bring to the forensic analysis procedures for Mac computers.

PostgreSQL anomaly detector

Bilal Shebaro, Asmaa Sallam, Ashish Kamra, Elisa Bertino


We propose to demonstrate the design, implementation, and capabilities of an anomaly detection (AD) system integrated with a relational database management system (DBMS). Our AD system is trained by extracting relevant features from the parse-tree representation of SQL commands, and then uses the DBMS roles as the classes for a Bayesian classifier. In the detection phase, the classifier chooses the role with the maximum a posteriori probability; if it does not match the role associated with the SQL command, an alarm is raised. We have implemented such a system in the PostgreSQL DBMS, integrated with the statistics collection and query processing mechanisms of the DBMS. During the demonstration, our audience will be given the choice of training our system using either synthetic role-based SQL query traces based on probability sampling, or their own set of training queries. In the subsequent detection mode, the audience can test the detection capabilities of the system by submitting arbitrary SQL commands. We will also allow the audience to generate arbitrary workloads to measure the overhead of the training phase and the detection phase of our AD mechanism on the performance of the DBMS.
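The detection idea can be sketched as a toy naive Bayes classifier with DBMS roles as classes. The featurization below (command keyword plus table name pulled from a token split) is a deliberately simplified stand-in for the parse-tree features the real system extracts, and all role and query names are hypothetical.

```python
from collections import Counter, defaultdict

def featurize(sql):
    """Toy feature: (command keyword, table name). Real systems use parse trees."""
    tokens = sql.lower().split()
    table = tokens[tokens.index("from") + 1] if "from" in tokens else tokens[-1]
    return (tokens[0], table)

class RoleClassifier:
    def __init__(self):
        self.role_counts = Counter()
        self.feature_counts = defaultdict(Counter)

    def train(self, role, sql):
        f = featurize(sql)
        self.role_counts[role] += 1
        self.feature_counts[role][f] += 1

    def predict(self, sql):
        f = featurize(sql)
        total = sum(self.role_counts.values())
        def score(role):
            prior = self.role_counts[role] / total
            # Laplace smoothing so unseen features get nonzero likelihood.
            likelihood = (self.feature_counts[role][f] + 1) / (
                self.role_counts[role] + len(self.feature_counts[role]) + 1)
            return prior * likelihood
        return max(self.role_counts, key=score)

    def check(self, claimed_role, sql):
        """True if the predicted role matches the claimed one; False raises an alarm."""
        return self.predict(sql) == claimed_role

clf = RoleClassifier()
for _ in range(20):
    clf.train("analyst", "SELECT * FROM sales")
    clf.train("admin", "DELETE FROM users")

print(clf.check("analyst", "SELECT * FROM sales"))
print(clf.check("analyst", "DELETE FROM users"))
```

An analyst session issuing a DELETE typical of the admin role is flagged, which is exactly the role-mismatch alarm the abstract describes.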

Predicting Failures in Distributed Cloud-Based Systems

Sebastian Moreno, Andrew Newell, Rahul Potharaju, Cristina Nita-Rotaru, and Jennifer Neville


Distributed cloud-based systems consist of a set of geographically distributed routers organized in an overlay network, which promises to deliver high-quality networking services to customers (e.g., packet delivery within 200 ms to/from any client). To meet this requirement, the overlay network needs to be functional 24/7. Even a minor failure, such as a routing path that goes down for a couple of seconds, can negatively impact the performance of the system. However, to date there are few methods to predict or adaptively prevent failures in these distributed systems. In this poster, we present an analysis of 2 TB of distributed system log files to identify example “failures” (i.e., signatures) that can be used to develop automated prediction methods via machine learning. Although the majority of the log information reflects normal behavior, we were able to characterize an important “outage” type of event in which a significant number of customers jointly drop or change configurations in the network. Based on this pattern definition, we identified several new examples of outage problems in the data. With this new set of training examples, we are now working on automated methods to discriminate among different types of failures and predict possible outages ahead of time, before they lead to large-scale failures.

pSigene: Webcrawling to Generalize SQLi Signatures

Gaspar Modelo-Howard, Fahad Arshad, Chris Gutierrez, Saurabh Bagchi, Yuan Qi


Intrusion detection systems (IDS) are an important component of effectively protecting computer systems. Misuse detection is the most popular approach to detecting intrusions, using a library of signatures to find attacks. The accuracy of the signatures is paramount for an effective IDS, yet today’s practitioners rely on manual techniques to improve and update those signatures. We present a system, called pSigene, for the automatic generation of intrusion signatures by mining the vast amount of public data available on attacks. It follows a four-step process to generate the signatures: it first crawls attack samples from multiple public cybersecurity web portals; a feature set is then created from existing detection signatures to model the samples, which are grouped using a biclustering algorithm that also yields the distinctive features of each cluster; finally, the system automatically creates a set of signatures using regular expressions, one for each cluster. We tested our architecture for the prevalent class of SQL injection attacks and found our signatures to have true and false positive rates of over 86% and 0.03%, respectively, and compared our findings to other SQL injection signature sets from popular IDS and web application firewalls. Results show our system to be very competitive with existing signature sets.
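To make the final step concrete, here is one hand-written regex of the kind such a pipeline might emit for a cluster of UNION-based SQL injection samples. The pattern below is an illustrative stand-in, not a signature actually generated by pSigene.

```python
import re

# Matches UNION [ALL] SELECT with the spacing tricks (plus signs, inline
# comments) commonly used to evade naive keyword filters.
SIG_UNION_SELECT = re.compile(
    r"union(\s|\+|/\*.*?\*/)+(all(\s|\+)+)?select",
    re.IGNORECASE,
)

samples = [
    "id=1 UNION SELECT username,password FROM users",  # classic injection
    "id=1 union/**/select 1,2,3",                      # comment-obfuscated
    "id=42",                                           # benign request
]
hits = [bool(SIG_UNION_SELECT.search(s)) for s in samples]
print(hits)
```

One generalized expression per cluster covers many obfuscated variants of the same attack family, which is what lets a small signature set reach the reported detection rates.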

Security Analysis for Cyber-Physical Systems against Stealthy Cyber Attacks

Cheolhyeon Kwon and Inseok Hwang


Security of Cyber-Physical Systems (CPS) against cyber attacks is an important yet challenging problem. Since most cyber attacks happen in erratic ways, it is difficult to describe them systematically. In this paper, instead of identifying a specific cyber attack model, we focus on analyzing the system’s response during cyber attacks. Deception attacks (or false data injection attacks), which are performed by tampering with system components or data, are not of particular concern if they can be easily detected by the system’s monitoring system. However, intelligent cyber attackers can avoid detection by carefully designing their attacks. Our main objective is to investigate the performance of such stealthy deception attacks from the system’s perspective. We investigate three kinds of stealthy deception attacks, classified according to the attacker’s ability to compromise the system. Based on information about the dynamics of the system and existing hypothesis testing algorithms, we derive necessary and sufficient conditions under which the attacker can perform each kind of attack without being detected. Finally, we illustrate the threat of these cyber attacks using an Unmanned Aerial Vehicle (UAV) navigation example.
