The Center for Education and Research in Information Assurance and Security (CERIAS)


Posters & Presentations


Assured Identity and Privacy

Challenges in Biometric Usability

Michael Brockly, Thomas Cimino, Chris Clouser, Stephen Elliott, Jacob Hasselgren, Rob Larsen, Kevin O’Connor, Tyler Veegh


The purpose of this research is to explore the current challenges of the biometric community. With the use of biometric technologies increasing rapidly, it is important to develop potential solutions to these issues across a variety of modalities. From the analysis of earlier Human-Biometric Sensor Interaction models, the students in the lab have identified many areas for potential improvements which include the device, operator, and user.

Differential Identifiability

Jaewoo Lee and Chris Clifton


A key challenge in privacy-preserving data mining is ensuring that a data mining result does not inherently violate privacy. ϵ-Differential Privacy appears to provide a solution to this problem. However, there are no clear guidelines on how to set ϵ to satisfy a privacy policy. We give an alternative formulation, Differential Identifiability, parameterized by the probability of individual identification. This provides the strong privacy guarantees of differential privacy, while letting policy makers set parameters based on the established privacy concept of individual identifiability.

Fine-Grained Encryption-Based Access Control for Big Data

Mohamed Nabeel, Elisa Bertino


Big Data technologies are increasingly used to store and/or analyze personally identifiable information (PII) and other sensitive data. To comply with various regulations and organizational policies, such data needs to be stored encrypted, and access to it needs to be controlled based on the identity attributes of users. A simple solution is to use an efficient symmetric key encryption scheme. However, this requires sharing many keys with various entities in the system, increasing the risk of key leakage; further, when the user membership changes, these symmetric keys need to be re-issued, incurring a high overhead. A better solution is to utilize attribute-based encryption (ABE) techniques. While ABE provides fine-grained access control for encrypted data, it requires expensive pairing operations, and attribute revocation is inefficient. Having identified the strengths and weaknesses of these solutions, we propose a novel approach using attribute-based group key management. Unlike the direct application of symmetric key encryption, keys are not stored in the system; they are dynamically derived when data is to be decrypted. Our approach is an order of magnitude more efficient than the ABE-based approach, as ours is based on symmetric key encryption and broadcast group key management. The main bottleneck in our approach is the key generation operation. We utilize the MapReduce framework to improve the performance of key generation by generating intermediate keys during the Map phase and the final key during the Reduce phase. We demonstrate our approach using Hadoop, a popular Big Data platform. The data blocks stored at DataNodes are encrypted, and the public information required to derive the keys is stored as part of the metadata in the NameNode. Encryption is performed at the granularity of HDFS (Hadoop Distributed File System) blocks.
If the group membership changes before new blocks are appended to a file, a new symmetric key and new public information are generated to encrypt those blocks. Highlights of our approach are that the symmetric keys are neither stored nor transmitted, and that the evolution of the encryption is transparent to the Clients and to the JobTracker, which performs MapReduce tasks on Clients' data.
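The derive-on-demand idea can be sketched in a few lines. This is an illustrative stand-in, not the actual attribute-based group key management construction: here a user-held secret credential is combined with the public information stored in the NameNode metadata to derive a block key, so keys are never stored or transmitted, and publishing fresh public information after a membership change rekeys newly appended blocks. All names and values are hypothetical.

```python
import hashlib
import hmac

def derive_block_key(user_secret: bytes, public_info: bytes) -> bytes:
    """Derive a symmetric block key on demand; the key itself is never stored.
    (Simplified stand-in: in the real scheme, only users whose attributes
    satisfy the policy can combine their secret with the public info.)"""
    return hmac.new(user_secret, public_info, hashlib.sha256).digest()

# Rekeying on membership change: publish new public info; blocks appended
# afterwards are encrypted under a key derivable only from the new info.
secret = b"user-attribute-credential"      # hypothetical user credential
info_v1 = b"block-group-1:public-token"    # stored in NameNode metadata
info_v2 = b"block-group-2:public-token"    # published after a membership change

k1 = derive_block_key(secret, info_v1)
k2 = derive_block_key(secret, info_v2)
assert k1 != k2 and len(k1) == 32          # fresh 256-bit key per epoch
```

The derivation is deterministic, so any authorized holder of the secret recomputes the same key from the public metadata without any key ever crossing the network.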

Impact of Henry System of Classification on the Entropy of Fingerprint Images

Vandhana Chandrasekaran, Stephen J Elliott, Elisa Bertino, Matt Young


Research question – Do fingerprint images classified according to the Henry system of fingerprint classification exhibit a statistically significant difference in the amount of entropy? This is a follow-on study from Young (2007).

Is This Hardcopy an Original?

Shriphani Palakodety, Aravind K. Mikkilineni, Mikhail Atallah, Edward J. Delp


This paper deals with the forensic problem of determining whether a document has been scanned and re-printed, as opposed to being the original printout. We consider two cases: (i) the adversary did no deliberate alteration of the contents of the document between the scanning and the second printing; (ii) the adversary carries out one or more deliberate alterations to the document, such as changing a date, name, or dollar amount. In the latter case, we also want to determine the locations of the alteration(s). We would like the document to inherently carry the evidence of its own originality and of any illicit alterations that can be carried out. We first present a framework to determine document originality based on an existing embedding scheme for laser-printed documents. We follow up with an adaptation to this framework of techniques from combinatorial group testing.

Issues in Cyber Forensics

Francis Ripberger (Advisor: Dr. Marcus Rogers)


This study hopes to identify the issues preventing the Cyber Forensic field from maintaining validity in its procedures, software, and expert witnesses, in addition to discovering the general needs of the field's practitioners.

Every day, new issues are discovered, while existing issues continue to plague the field. These issues range from simple regulations on how to conduct a forensic procedure, to a lack of certifications, to shortcomings in judicial law that prevent rules and guidelines from being properly enforced. And while individuals engrossed in the field are each aware of one problem or another, many do not know the underlying issues. As a result, research is often conducted based on the researcher's experience rather than on the field's needs. Some have studied issues in localized regions, but no one has performed a national study.

This study aims to address these issues. It will recruit volunteer cyber forensic analysts from each state's law enforcement, along with students and professors from universities that offer a cyber forensics program. All participants will complete a three-round Delphi study. From their responses, a list of categorized issues will be generated. In addition, the top five issues in each category, as well as the top ten overall, will be identified.

Once the results have been properly organized and reviewed, they will be made available for public use.

Privacy Preserving Tatonnement

John Ross Wallrabenstein, Chris Clifton


Leon Walras’ theory of general equilibrium put forth the notion of tatonnement as a process by which equilibrium prices are determined. Recently, Cole and Fleischer provided tatonnement algorithms for both the classic One-Time and Ongoing Markets with guaranteed bounds for convergence to equilibrium prices.
However, in order to reach equilibrium, trade must occur outside of equilibrium prices, which violates the underlying Walrasian Auction model. We propose a privacy preserving protocol for the One-Time Market that follows naturally from the algorithms of Cole and Fleischer, allowing buyers and sellers to jointly compute the equilibrium prices by simulating trade outside of equilibrium. The protocol keeps utility functions of all parties private, revealing only the equilibrium price.
Finally, we show that our protocol is inherently incentive compatible, such that no party has an incentive to lie about their inputs.

Privacy through Identity Management

Bharat Bhargava


The migration of web applications to Cloud computing platforms has raised concerns about the privacy of sensitive data belonging to the consumers of cloud services. Traditional security tokens used to access cloud services, such as username/password, are prone to phishing attacks and hence do not provide complete security. In this work we propose to extend Microsoft's CardSpace identity management tool to include more robust security tokens based on the zero-knowledge proof concept. These security tokens take the form of SAML tokens supported by Windows Communication Foundation (WCF) and hence are interoperable with existing security platforms.

Private Anonymous Messaging

Ruchith Fernando, Bharat Bhargava


We identify a set of requirements for broadcasting messages among a set of untrusted peers connected to an entity. We propose a scheme for those peers to obtain the entity's messages from each other using a pull mechanism in which the contacts' identities are kept anonymous. We propose a modification to the hierarchical identity-based encryption scheme of Boneh et al. in which a peer can request and obtain messages of a common peer from another peer while remaining anonymous. This scheme further allows changes in group composition using public data.

Public Population Information in Differential Privacy

Christine Task, Prof. Chris Clifton


Privatized queries which satisfy the strict requirements of differential privacy include randomized noise calibrated to cover up the impact any arbitrary individual could have on the query results.  An attacker viewing these results will only be able to learn aggregate information about the data-set, as the noise prevents any single individual from having a detectable effect on the results.  Not all queries can be effectively privatized in this fashion: if an arbitrary individual could potentially have a very large effect on the results of a query, then the requisite noise may be so large as to eliminate the utility of the results.  However, adding sufficient noise to obscure a truly ‘arbitrary’ individual is unnecessary if the data-set is sampled from a limited population publicly known to satisfy certain constraints. This is a simple but powerful insight; many queries which are too sensitive to be privatizable in general can be successfully privatized in constrained populations.  We prove that reducing noise based on public population information does not weaken privacy, and describe a selection of sensitive queries which become usefully privatizable within constrained populations.
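The noise-reduction insight can be illustrated with the standard Laplace mechanism for a mean query. This is a simplified sketch, not the paper's construction: `lo` and `hi` stand for publicly known population bounds, and the sensitivity of the mean, and hence the noise scale, shrinks as those bounds tighten.

```python
import math
import random

def laplace_noise(scale):
    # Inverse-CDF sampling of Laplace(0, scale), stdlib only.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_mean(values, lo, hi, epsilon):
    """epsilon-differentially-private mean when every record is publicly
    known to lie in [lo, hi]. The sensitivity of the mean is (hi - lo)/n,
    so tighter public bounds directly reduce the required noise."""
    n = len(values)
    sensitivity = (hi - lo) / n
    return sum(values) / n + laplace_noise(sensitivity / epsilon)

# Hypothetical cohort publicly known to have ages in [18, 35]: the noise
# is calibrated to a 17-unit spread rather than an arbitrary individual.
random.seed(7)
ages = [22, 25, 19, 31, 28, 24, 33, 20]
print(round(private_mean(ages, 18, 35, epsilon=1.0), 2))
```

With a wide, uninformed range such as [0, 120] the same query would need roughly seven times the noise, which is the utility gap the constrained-population analysis recovers.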

Query Processing in Private Data Outsourcing Using Anonymization

Ahmet Erhan Nergiz, Chris Clifton


We propose a model supporting privacy-preserving data manipulation for private data outsourcing. This builds on the model of anatomization, where identifying and sensitive information are separated, and linked only in groups such that the probability of a particular sensitive value belonging to a particular individual is below a threshold; the information needed to join the identifying and sensitive information is encrypted with a key known only to the client/data owner. By exposing data where possible, the server can perform value-added services such as data analysis while being unable to violate privacy constraints.

We show how data can be queried in this model. The key contribution of this work is a relational query processor that minimizes the client-side computation while ensuring the server learns nothing violating the privacy constraints.

Resilient and active authentication and user-centric identity ecosystems

Yan Sui, Xukai Zou


Existing proxy-based authentication approaches have problems (e.g., non-binding credentials, susceptibility to theft and dictionary attacks, burden on end-users, re-use risk). Biometrics, which authenticates users by intrinsic biological traits, has arisen to address these drawbacks. However, a biometric is irreplaceable once compromised and leaks sensitive information about the human user behind it. In this research, we propose a usable, privacy-preserving, secure biometrics-based identity verification and protection system. Specifically, we propose a novel biometric authentication token called a Bio-Capsule (BC), which is generated by a secure fusion of the user's biometrics and a (selected) reference subject's biometrics. The fusion process preserves biometric robustness and accuracy in the sense that the BC can be used in place of the original user's biometric template without sacrificing the system's acceptability for the same user or its distinguishability between different users. This research has further potential applications: a user-centric identity ecosystem (a highly resilient, privacy-preserving, revocable, interoperable, and efficient user-centric identity verification and protection ecosystem) and an active authentication system (a provably secure, privacy-preserving, biometric active authentication system supporting continuous and non-intrusive authentication).

RFID Applications of Embedded Processing and Zero-Knowledge Proof

Robert Winkworth


Radio frequency identification has enabled exciting new prospects in wireless ID services, particularly for personnel.  Unfortunately, the earliest applications relied upon acquiescent mechanisms that accepted and returned data with no intermediary processing.  The advent of “smart” cards made possible an on-chip revolution in the use of contact identification devices.  What I explore here is the extension of these embedded systems to their non-contact counterparts (namely RFID).  The processing layer is of utmost importance to us in the study of privacy controls, as it opens the discussion on how personnel ID in general may evolve from storage devices that may be intercepted and duplicated into cryptographically sound logic devices that can resist common attacks and directly participate in the decisions concerning disclosure.  This brings us to one of the most vital aspects of the research, the zero-knowledge proof.  Frequently RFID is applied as a convenient means of answering questions about discrete state (from a comfortable distance).  Wherever possible, we wish to answer those questions without compromising user privacy in the process.  The integration of RFID and embedded systems allows us to perform proofs based on internal comparisons and calculations rather than irrevocable release of the data.  This is the security of the embedded systems used widely in electronic commerce with the convenience of near-field communications.  It holds the promise of a better all-around user experience, with better protections, more control, and easier methods of performing common objectives.

Trust, Empathy, Social Identity, and Contribution of Knowledge within Patient Online Communities

Jing Zhao, Ph.D., Kathleen Abrahamson, Ph.D., RN; Department of Consumer Sciences and Retailing, Purdue University, West Lafayette, IN 47907


Objectives: With the development of internet technology, more and more people utilize patient online communities (POCs) to seek useful health information and empathetic support. POCs are a unique type of virtual community and have attracted increasing research interest. Although there are critical factors for developing and maintaining a successful POC, few studies examine the influence of trust, empathy, and social identity on members' contribution of information and knowledge within the POC context. We aim to examine how trust and social identity influence empathy, which in turn motivates individuals to contribute knowledge within a POC, and how social identity also directly affects POC members' knowledge contribution.
Design: This study examines the impacts of trust and social identity on knowledge contribution through the mediating effect of empathy. The direct relationship between social identity and knowledge contribution is also included in this model.
Measurements: An online survey was conducted in three health-related online communities. A confirmatory factor analysis was performed and a structural equation model was constructed to test the proposed model.
Results: Results indicate that trust and the development of a sense of social identity within the community are necessary antecedents to the development of empathy, which in turn influences members' willingness to contribute personal knowledge, experience, or information. Social identity also directly motivates members to contribute knowledge. In contrast to other studies that have emphasized the importance of providing tools that make information seeking more efficient, the findings of our study highlight the importance of trust, empathy, and a sense of group cohesiveness within online health settings in motivating members to contribute knowledge and support to other participants in the POC.

Understanding Malware through Classification and Machine Learning

Cory Q. Nguyen, Rui M. Esteves, Dr. Thomas J. Hacker, Dr. J. Eric Dietz


This study applies the concepts of machine learning and techniques of classification to help us better understand malware, its behavior, and developing trends. In studying malware's behavior and characteristics, we not only gain a better understanding of what an unknown sample does and what its purpose is, but also a better idea of how to respond to the threat. It is important to accurately classify and identify unknown malware in order to reduce response time.

Currently, our lab has collected a total of 579,342 samples, of which 74,191 are unique. These samples are processed through an automated analysis engine whose output reports are then refined, formatted, and indexed into a central database. Once the raw data is uploaded, specific attributes are selected to be processed for descriptive and predictive modeling.

We have used clustering algorithms, specifically a mixture of partitional and hierarchical clustering; however, our attribute list continues to grow in dimension, so we are currently assessing how to shorten it. Previously, our attribute list comprised more than 59,000 attributes. We have recently shortened it to 23 attributes, but the accuracy and feasibility of these 23 attributes have yet to be evaluated.

End System Security

Gatling: Automatic Attack Discovery in Large-Scale Distributed Systems

Hyojeong Lee, Jeff Seibert, Charles Killian and Cristina Nita-Rotaru


In this paper, we propose Gatling, a framework that automatically finds performance attacks caused by insider attackers in large-scale message-passing distributed systems. In performance attacks, malicious nodes deviate from the protocol when sending or creating messages, with the goal of degrading system performance. We identify a representative set of basic malicious message-delivery and lying actions and design a greedy search algorithm that finds effective attacks consisting of a subset of these actions. While lying malicious actions are protocol-dependent, requiring knowledge of the format and meaning of messages, Gatling captures them without needing to modify the target system by using a type-aware compiler. We have implemented and used Gatling on six systems (a virtual coordinate system, a distributed hash table lookup service and application, two multicast systems, and one file-sharing application) and found a total of 41 attacks, each taking from a few minutes to a few hours to find.
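The greedy search over malicious actions can be sketched abstractly. Here `measure_perf` is a hypothetical callback that runs (or simulates) the system under a candidate set of malicious actions and returns a performance score, lower meaning more degraded; none of these names come from Gatling itself.

```python
def greedy_attack_search(actions, measure_perf, budget=3):
    """Greedily build an attack: at each step, add the single malicious
    action that degrades measured performance the most; stop when no
    remaining action helps or the action budget is exhausted."""
    chosen = []
    best_score = measure_perf(chosen)
    for _ in range(budget):
        candidate = min(actions, key=lambda a: measure_perf(chosen + [a]))
        score = measure_perf(chosen + [candidate])
        if score >= best_score:      # no action degrades performance further
            break
        chosen.append(candidate)
        best_score = score
    return chosen

# Toy performance model: "drop" and "delay" each hurt throughput,
# "lie_low" does nothing; the search should pick the harmful pair.
def toy_perf(attack):
    return 100 - 40 * ("drop" in attack) - 25 * ("delay" in attack)

print(greedy_attack_search(["lie_low", "drop", "delay"], toy_perf))
# → ['drop', 'delay']
```

The greedy structure is what keeps the search tractable: it evaluates one action at a time against the live system instead of enumerating all subsets of actions.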

Motivation of Pharmacies to use Biometric Authentication

Dr. Stephen Elliott


The purpose of this research is to survey and interview pharmacists who have experienced biometric authenticators in a pharmacy setting, as well as those who have not, to answer the question: why would pharmacists prefer traditional authentication systems over biometric systems despite the security issue of memorability? Understanding the pharmacists' thoughts, motivations, influences, decisions, and attitudes toward using biometric systems may identify possible variables for developing a theory.

Process Implanting: A New Active Introspection Framework for Virtualization

Zhongshu Gu, Zhui Deng, Dongyan Xu, Xuxian Jiang


Previous research on virtual machine introspection proposed the "out-of-the-box" approach of moving security tools out of the guest operating system. However, compared to the traditional "in-the-box" approach, it remains a challenge to obtain a complete semantic view due to the semantic gap between the guest VM and the hypervisor.

In this paper, we present Process Implanting, a new active VM introspection framework, to narrow the semantic gap by implanting a process from the host into the guest VM and executing it under the cover of an existing running process. With protection and coordination from the hypervisor, the implanted process can run with a degree of stealthiness and exit gracefully without leaving a negative impact on the guest operating system. We have designed and implemented a proof-of-concept prototype on KVM that leverages hardware virtualization. We also propose and demonstrate application scenarios for Process Implanting in the area of VM security.

Human Centric Security

NL IAS Marches On

V. Raskin, J. M. Taylor


Natural language processing (NLP) has many applications for information assurance and security (IAS). We present a walkthrough of the analysis of NL texts by Ontological Semantic Technology, using an example that exhibits multiple forms of ambiguity.

Secure Communication among Robots and Humans with a Human Voice

Ji Hyeon Hong, Julia Taylor, Eric Matson, Victor Raskin


Because firefighting situations are urgent, demand is increasing for firefighting robots that communicate with humans in natural language: it is efficient in that a commander does not need to be trained to operate a controller. However, natural-language communication raises a security issue. Firefighting robots are supposed to operate according to commands from an authorized commander, yet it is difficult for a robot to verify authentication from a human voice alone. It is nonetheless important to keep the communication between a human (a commander, not an adversary) and a firefighting robot secure, given the potential for a person to set a fire on purpose and attempt to hinder firefighting tasks. Thus, I am currently researching how to realize secure and accurate communication in a human language.

Social group and identity management across social networking sites

Geovon Boisvenue


This poster presents findings of a research study that uses in-depth qualitative interviewing to identify the strategies social media users employ to manage their online identity (self-presentation) and various social groups. This research project is supported by Verisign Co.

Trust Framework for Social Networks

Arjan Durresi


We are developing a trust framework for social networks. The framework includes new metrics for trust evaluation and its confidence. We consider trust assessments similar to physical measurements; therefore, we apply the measurement theory of errors to trust evaluation. We have tested our framework with real datasets from social networks, and our results confirm the validity of our approach. Furthermore, we show significant advantages of our framework; for example, the trust view in one experiment increased more than 2000 times. Our framework can be used to build security mechanisms for social networks, including filters against DDoS and untrusted information, detection of cliques of attackers, and much more.
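A minimal sketch of the measurement-theory analogy, under our own simplifying assumption that each trust opinion carries a standard error and that independent opinions combine by inverse-variance weighting, as repeated physical measurements do (the framework's actual metrics may differ):

```python
def combine_trust(measurements):
    """Combine (trust_value, std_error) pairs as in the theory of errors:
    inverse-variance weighting yields the combined trust estimate and a
    smaller combined error, i.e., higher confidence, exactly as when
    aggregating repeated physical measurements of one quantity."""
    weights = [1.0 / (err ** 2) for _, err in measurements]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, measurements)) / total
    error = (1.0 / total) ** 0.5
    return value, error

# Three opinions about one party; the most precise one dominates,
# and the combined error is smaller than any individual error.
v, e = combine_trust([(0.8, 0.1), (0.7, 0.2), (0.9, 0.05)])
assert e < 0.05
```

Aggregating more opinions always shrinks the combined error, which mirrors how the framework's confidence metric grows with corroborating evidence.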

Network Security

Create Moving Target Defense in Static Networks by Learning from Botnets

Feng Li, IUPUI;  Xukai Zou, IUPUI; Wei Peng, IUPUI


Network disruptive attacks, such as Distributed Denial-of-Service (DDoS) attacks, routing attacks, and man-in-the-middle attacks, are a major impediment to the development of networks. The static nature of network configuration enables adversaries attacking these networks to effectively discover and disrupt network resources remotely. As with many other security threats, the root cause is the static or relatively stable state of the system, which can be easily exploited by attackers. A recent consensus on game-changing themes in cyber security points to Moving Target Defense (MTD). MTD aims to change the uneven cost balance between attackers and defenders caused by static systems. Unfortunately, what to do and how to do it are currently unclear for MTD in network systems, even though there have been several recent innovative security research attempts on host and software dynamics to increase the attacker's cost. Thus, this project addresses this new yet challenging issue of MTD in static network systems.

Although research on MTD in static networks is in its infancy, attackers have already accumulated valuable experience and ideas in this area. The history of the botnet, a collection of compromised nodes (computers, also known as bots) connected through a network, vividly represents the evolution from static to "moving" networks. Therefore, this project starts with a thorough investigation of the moving techniques used in recent botnets. The objective of this research is to design an innovative moving target defense framework that will improve resiliency and harden existing static networks by learning from recent botnets. This framework will make the static network move to the disadvantage of the attackers by increasing an attacker's uncertainty, difficulty, and cost in network disruptive attacks.

Multi-Path Overlay Routing to Improve Latency while Tolerating Intrusions

Andrew Newell, Endadul Hoque, and Cristina Nita-Rotaru


Many online services require better latency guarantees than the general internet infrastructure offers, and previous work has shown this is achievable with overlay networks. We consider mission-critical network services that need to maintain these latency guarantees despite outages or intrusions of overlay nodes. Our approach is to route data along multiple node-disjoint paths to ensure data delivery along at least one path. We consider a new routing problem that optimizes the worst path out of a chosen set of node-disjoint paths. We formulate a Mixed Integer Program to solve this optimization problem, and we experiment on a real-world dataset to test the feasibility of our solution.

Newton Meets Vivaldi: Securing Virtual Coordinates by Enforcing Physical Laws

Jeff Seibert, Sheila Becker, Cristina Nita-Rotaru, Radu State


Virtual coordinate systems (VCS) provide accurate estimations of latency between arbitrary hosts on a network, while conducting a small number of actual measurements and relying on node cooperation. While these systems have good accuracy under benign settings, they suffer a severe decrease in effectiveness when under attack by compromised nodes acting as insider attackers. Previous defenses mitigate such attacks by using machine learning techniques to differentiate good behavior (learned over time) from bad behavior. However, these defense schemes have been shown to be vulnerable to advanced attacks that make the schemes learn malicious behavior as good behavior.

We present Newton, a decentralized VCS that is robust to a wide class of insider attacks. Newton uses an abstraction of a real-life physical system, similar to that of Vivaldi, but in addition uses safety invariants derived from Newton's laws of motion. As a result, Newton does not need to learn good behavior and can tolerate a significantly higher percentage of malicious nodes. We show through simulations and real-world experiments on the PlanetLab testbed that Newton is able to mitigate all known attacks against VCS while providing better accuracy than Vivaldi, even in benign settings.
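To make the Vivaldi abstraction concrete, here is a simplified 2-D spring-relaxation step together with an invariant check of the flavor Newton enforces. The speed-bound invariant below is our illustrative stand-in; the actual invariants in Newton are derived from the laws of motion and are more carefully constructed.

```python
import math

def vivaldi_step(pos, peer_pos, rtt, delta=0.25):
    """One Vivaldi spring-relaxation step in 2-D: move along the line
    to the peer in proportion to the latency prediction error."""
    dx = pos[0] - peer_pos[0]
    dy = pos[1] - peer_pos[1]
    dist = math.hypot(dx, dy) or 1e-9
    err = rtt - dist                     # positive: coordinates too close
    ux, uy = dx / dist, dy / dist
    return (pos[0] + delta * err * ux, pos[1] + delta * err * uy)

def accept_report(prev_peer_pos, new_peer_pos, elapsed, max_speed=50.0):
    """Illustrative safety invariant: reject a peer's reported coordinate
    if it implies motion faster than max_speed units per second, the way
    Newton rejects updates that violate physical laws."""
    moved = math.hypot(new_peer_pos[0] - prev_peer_pos[0],
                       new_peer_pos[1] - prev_peer_pos[1])
    return moved <= max_speed * elapsed

# A node at the origin measures a 10 ms RTT to a peer predicted at
# distance 5, so it moves away from the peer to stretch the spring.
p = vivaldi_step((0.0, 0.0), (3.0, 4.0), rtt=10.0)   # → (-0.75, -1.0)
```

Because the invariant is checked locally against physics rather than against learned behavior, a frog-boiling attacker cannot retrain it with slowly drifting lies.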

Over-the-Air Penetration Testing

Eric Katz, Bryan Lee, Richard Mislan


The purpose of this study is to determine whether it is possible to penetrate a mobile device from another device that is on the same cellular network. The study will concentrate on the Android platform and focus on attempts to penetrate the most popular applications for the platform. For this study, we will be testing some of the top free Android Play Store applications, such as Facebook for Android and Skype for Android. The purpose will be to see if it is possible to gather pertinent information ranging from contacts and messages to a full forensic image of the device from the target. Previous research in this area has involved man-in-the-middle techniques that require the target device to connect to hardware controlled by the attacker, which then forwards the information to the cellular network. This means that special equipment and the target phone are required in order to carry out the attack. If a mobile-to-mobile attack is possible, all that is needed is a phone that is able to connect to the network the target is on and any scripts and software created for the exploit. This could be a very useful technique in areas where pertinent information is passed over cellular networks, such as a drug trafficking ring or terrorist cell.

Privacy-Preserving and Efficient Friend Recommendation in Social Networks

Bharath K. Samanthula,  Lei Cen,  Wei Jiang,  Luo Si


Friend recommendation is a well-known application in many social networks and has been studied extensively in the recent past. However, with growing concerns about users' privacy, there is a strong need to develop privacy-preserving friend recommendation methods for social networks. In this paper, we propose two novel methods to recommend friends for a given user by using the common-neighbors proximity measure in a privacy-preserving manner. The first method is based on the properties of an additive homomorphic encryption scheme and also utilizes a universal hash function for efficiency. The second method protects source privacy by randomizing the message-passing path, and recommends friends accurately. In addition, we empirically compare the efficiency and accuracy of the two methods. The proposed protocols act as a trade-off among security, accuracy, and efficiency; thus, users can choose between them depending on application requirements.
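The underlying proximity measure is simple to state in code. This sketch compares salted hashes of friend IDs rather than raw IDs to hint at the privacy flavor; the paper's actual protocols rely on additive homomorphic encryption and anonymized message routing, not on this hashing shortcut.

```python
import hashlib

def masked(uid, salt=b"shared-salt"):
    # Hypothetical stand-in for the universal hash: mask an ID so that
    # equal friends still collide but raw identities are not exchanged.
    return hashlib.sha256(salt + uid.encode()).hexdigest()

def common_neighbors_score(friends_a, friends_b):
    """Common-neighbors proximity: the more mutual friends two users
    share, the stronger the recommendation between them."""
    return len({masked(u) for u in friends_a} & {masked(u) for u in friends_b})

score = common_neighbors_score({"carol", "dave", "erin"},
                               {"dave", "erin", "frank"})
assert score == 2   # dave and erin are the mutual friends
```

Ranking candidate pairs by this score is the non-private baseline that both proposed protocols reproduce without revealing the friend lists themselves.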

Resource Mapping on Hybrid Testbeds

Wei-Min Yao, Jiahong Zhu, Sonia Fahmy


The integration of real hardware and systems increases the fidelity of a testbed, but limits its scalability. Scaling techniques, such as virtualization and real-time simulation, can increase the scale of experiments conducted on the testbed. Manual mapping of the experimental scenario onto the testbed resources and scaling techniques is intractable for large experiments.

We propose a method to automatically map testbed resources according to user-specified requirements. Our mechanism identifies suitable scaling techniques for parts of the input experimental topology and adjusts their resource requirements according to the goal of the experiment. New network features can be easily incorporated to support a wide diversity of network experiments and testbed resources.

Secure Configuration of Intrusion Detection Sensors for Changing Enterprise Systems

Gaspar Modelo-Howard, Jevin Sweval, Saurabh Bagchi


Current attacks on distributed systems involve multiple steps, as attackers usually take multiple actions to achieve their goals. Such attacks are called multi-stage attacks (MSAs), and their ultimate goal is to compromise a critical asset of the victim. An example would be compromising a web server, then carrying out a series of intermediary steps (such as compromising a developer's box via a vulnerable PHP module and connecting to an FTP server with the gained credentials) to ultimately connect to a database where user credentials are stored. Current detection systems are not capable of analyzing this multi-step attack scenario.
We present a distributed detection framework based on a probabilistic reasoning engine that communicates to detection sensors and can achieve two goals: (1) protect the critical asset by detecting MSAs and (2) tune sensors according to the changing environment of the distributed system monitored by the distributed framework. As shown in the experiments, the framework reduces the number of false positives that it would otherwise report if it were only considering alerts from a single detector and the reconfiguration of sensors allows the framework to detect attacks that take advantage of the changing system environment.

Securing Application-Level Topology Estimation Networks: Facing the Frog-Boiling Attack

Jeff Seibert, Cristina Nita-Rotaru, Radu State


Peer-to-peer real-time communication and media streaming applications optimize their performance by using application-level topology estimation services such
as virtual coordinate systems. Virtual coordinate systems allow nodes in
a peer-to-peer network to accurately predict latency between arbitrary nodes
without the need to perform extensive measurements. However, systems that leverage virtual coordinates as supporting building blocks are prone to attacks conducted by compromised nodes that aim at disrupting, eavesdropping on,
or tampering with the underlying communications.

Recent research proposed techniques to mitigate basic attacks (inflation,
deflation, oscillation) considering a single attack strategy model where
attackers perform only one type of attack. In this work we explore supervised
machine learning techniques to mitigate more subtle yet highly effective
attacks (frog-boiling, network-partition) that are able to bypass
existing defenses. We evaluate our techniques on the Vivaldi system against a more complex attack
strategy model, where attackers perform sequences of all known attacks against
virtual coordinate systems, using both simulations and Internet deployments.
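The prediction step that makes virtual coordinates useful can be sketched in a few lines: each node holds a low-dimensional coordinate, and the distance between two coordinates estimates the latency between the nodes. The coordinates below are made up for illustration; this is the generic idea, not Vivaldi's coordinate-update algorithm.

```python
# Illustrative sketch of latency prediction in a Vivaldi-style virtual
# coordinate system. Coordinates here are hypothetical example values.
import math

def predicted_latency(coord_a, coord_b):
    """Estimate RTT as the Euclidean distance between two coordinates."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(coord_a, coord_b)))

alice = (3.0, 4.0)   # hypothetical 2-D coordinate (units: ms)
bob = (0.0, 0.0)
rtt = predicted_latency(alice, bob)   # 5.0
```

A frog-boiling attacker exploits exactly this mechanism: by reporting coordinates that drift only slightly each round, it corrupts predictions gradually enough to stay under outlier-based defenses.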

Securing HARMS-based Communication between Heterogeneous Robots

Sherry Wei, Lauren M Stuart and Ji Hyeon Hong


*Secure communication in HARMS (Human, Agent, Robot, Machine, and Sensor) for Heterogeneous Robotic Teams
*Our aim is to command and control the robot via natural language
*Issues in communication in HARMS
Uncertain time delay
Uncertain data loss
Data transmission security problems

Simulation of Data Transmission in Network Environment

Di Jin


The last two decades have witnessed the development of the Internet, and it is undeniable that the Internet has shortened the distance among people. However, the requirements of data transmission vary across situations. This study is inspired by data transmission in metro systems, where there are two kinds of signals: control signals and video signals. For the former, the amount of data is small, but accuracy must be guaranteed. For the latter, data loss is allowed, but the transmission speed must be high enough to sustain video.
This project simulates such data transmission, which can be further applied in secure communication settings.
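The two traffic classes described above map naturally onto reliable versus best-effort delivery. The sketch below is a toy illustration of that split, not the project's simulator; the loss rate and retry limit are assumed values.

```python
# Hypothetical sketch of the metro system's two traffic classes:
# - control signals: small, must arrive correctly (retransmit until success)
# - video signals:   loss-tolerant, never retransmitted (keep the rate up)
import random

def send_control(payload, loss_rate=0.3, max_tries=10, rng=None):
    """Reliable delivery: retransmit over a lossy link until it gets through.

    Returns the number of attempts used.
    """
    rng = rng or random.Random(0)  # fixed seed so the demo is repeatable
    for attempt in range(1, max_tries + 1):
        if rng.random() > loss_rate:  # packet survived the link
            return attempt
    raise RuntimeError("control channel failed after max_tries")

def send_video(frames, loss_rate=0.3, rng=None):
    """Best-effort delivery: lost frames are simply dropped, never resent."""
    rng = rng or random.Random(1)
    return [f for f in frames if rng.random() > loss_rate]

attempts = send_control("brake-command")
kept = send_video(list(range(100)))  # roughly 70% of frames survive
```

The design point is the asymmetry: control traffic trades latency for correctness, while video traffic trades completeness for throughput.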

T-dominance: A Stealthy Propagation Strategy for Mobile Botnet

Wei Peng, Feng Li, Xukai Zou (@ IUPUI), Jie Wu (@Temple University)


Smartphone-based mobile computing is emerging as a preferred computing platform of the future. The mobile botnet, which has deep roots in the traditional Internet botnet and exploits unique characteristics of mobile computing in its propagation, is an imminent security threat to this emerging platform. In the spirit of “forewarned is forearmed”, we play devil’s advocate by proposing techniques that mobile botnets might use to circumvent defenses. The key ideas of our work are the novel concept of a stealthy botnet and the distinct strategies used in the two consecutive phases, herding and attack, in the lifetime of a stealthy botnet. A few points distinguish our research from previous work. 1) Based on proximity malware propagation, we propose the concept of botnet-level stealthiness and a novel structural property, T-dominance, for a mobile botnet; the T-dominance property is defined upon the mobility and social patterns of smartphone users. 2) We design a distributed algorithm that maintains the T-dominance structural property for a mobile botnet; the algorithm is localized and delay-tolerant in the sense that it maintains the structural property based solely on local and potentially outdated information.

Policy, Law and Management

Amazon Kindle Forensics

Marcus Thompson


The Amazon Kindle is becoming an increasingly popular e-book reader, yet documented forensic acquisition methods for the Kindle do not exist. This examination of the Kindle Keyboard is therefore important for law enforcement investigators who have seized a Kindle. This research explores possible forensic processes, including privilege escalation, and documents the locations of items of interest for investigations.

CERIAS Information Security Archive

Temitope Toriola, Eugene Spafford, Mike Focosi


A database of technical papers on information security and cyber security that were published in the early stages of the field but are still considered fundamental.

Cyber warfare as a form of conflict: Evaluation of models of cyber conflict as a prototype to conceptual analysis

Samuel Liles, Dr. Marcus Rogers, Dr. J. Eric Dietz, Dr. Dean Larson, Dr. Victor Raskin


This research states, “Given the unstructured domain of cyber warfare knowledge, a specific model will allow experts to produce a concept map significantly more detailed than absent the model.” Experts were solicited in a variety of venues to map cyber warfare using a concept-mapping process and provide a deeper understanding of the concept. Two technology-centric models were given to groups of experts to assist them in explaining elements of cyber conflict; one group was given only the cyber warfare question, with no specific model as guidance. The groups were then compared to see whether either of the models had better explanatory power per the experts’ responses.

Identity Policy Choices as Propagation of Structure

M. Dark


The debate over how identifying information is used online has a focus in the decisions over identity policies employed in online communities. The now-common decision—between anonymous, pseudonymous, and “real name” handles—has implications for interaction not only within the community in question but outside of it. We explore these in a framework for the explanation of how individual decisions form and are formed by existing social structures.

Minding our ISPs and ICTs: A Model of the Policy Challenges to Alleviate the Digital Divide

Michael R. Brownstein


The digital divide still exists at a time when technological innovation and e-governance are becoming popular. This poster recommends fostering more competition among ISPs and the tech firms that make ICTs; in doing so, costs will be lowered, making access more affordable. Also, by involving lawmakers in the technology development process, there can be fruitful regulation as well as more appropriate technological diffusion.

Motorola Xoom Examination

Justin Tolman


This research seeks a forensically sound method of acquiring and analyzing data from the Motorola Xoom Android tablet running the Ice Cream Sandwich operating system.

The primary focus is on the needs of the law enforcement community.

Risk Assessment in an information centric world: Threats, vulnerabilities, countermeasures and impacts (a work in progress)

Samuel Liles


What is this project trying to answer?
How can risk be analyzed across the domain of information technology using metrics based on empirical evidence, so that mitigation decisions are evidence-based and decision processes draw on the best available information?

Prevention, Detection and Response

A Robust One Class Bayesian Approach for Masquerade Detection

Qifan Wang, Luo Si


Masquerade attacks are a serious computer security problem that can cause significant damage. Much previous research was based on two-class training, which collects data from multiple users to train one self (i.e., regular) model and one non-self (i.e., abnormal) model for each user. Two-class learning methods for masquerade detection can generate accurate results but demand data from all users, which may not be available in many practical applications. On the other hand, one-class learning methods build a model for each user by utilizing only his/her own data. One-class methods are more practical, but they suffer from the limited amount of training information available from a single user. To address this data sparsity issue, we propose a robust one-class Bayesian approach for masquerade detection. The new method explicitly accounts for model uncertainty by integrating out the unknown model parameters to generate robust results, whereas previous one-class methods use only a single point estimate of an optimal model. We derive the full analytical solution of the predictive distribution over all possible model parameters. A set of experimental results demonstrates that the proposed approach outperforms most previous one-class approaches to masquerade detection.
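The difference between a point estimate and integrating out parameters can be seen in a toy Beta-Bernoulli model. This is a deliberate simplification for illustration, not the authors' model or its analytical solution.

```python
# Toy illustration of the Bayesian idea: instead of a maximum-likelihood
# point estimate of how often a user performs some action, integrate over
# all parameter values under a Beta prior, yielding the posterior
# predictive. Simplified sketch; not the poster's actual method.

def point_estimate(successes, trials):
    """ML point estimate -- what single-model methods effectively use."""
    return successes / trials

def posterior_predictive(successes, trials, alpha=1.0, beta=1.0):
    """P(next observation = 1), integrating out p under a Beta(a, b) prior.

    For the Beta-Bernoulli pair this integral has the closed form below.
    """
    return (successes + alpha) / (trials + alpha + beta)

# With sparse data the point estimate is overconfident (exactly 0 here,
# so any future occurrence looks "impossible"), while the predictive
# distribution keeps probability mass on events not yet observed.
mle = point_estimate(0, 3)              # 0.0
robust = posterior_predictive(0, 3)     # (0 + 1) / (3 + 2) = 0.2
```

This is the same robustness-to-sparsity argument the abstract makes: averaging over parameter uncertainty avoids brittle zero-probability predictions from a single user's limited data.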

Closing the Pandora’s Box: Defenses for Thwarting Epidemic Outbreaks in Mobile Adhoc Networks

Rahul Potharaju, Endadul Hoque, Cristina Nita-Rotaru, Saswati Sarkar, Santosh S. Venkatesh


With the advent of Google’s Android, the world has observed a dramatic increase in the number of wireless devices with complex capabilities running open-source OSes. While the openness of these OSes motivates developers, it also introduces a new propagation vector for mobile malware. In this paper, we model the propagation of mobile malware using the theory of epidemiology and study the problem as a function of the underlying mobility models. We define the optimal approach to healing an infected system with a set of static healers as the T-COVER problem and show that it is NP-hard. We then propose two families of healer protocols: a time-optimized randomized version that is simple to implement, and an energy-optimized profile-based version that enables a healer to “learn” about its proximity to make better decisions. We show through extensive simulations using the NS-3 simulator that, despite lacking knowledge of the future, our healer-based protocols perform reasonably well compared to an oracle-based solution.
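The epidemiological modeling mentioned above can be illustrated with a textbook discrete-time SIR model. The infection and healing rates below are illustrative assumptions; the poster's mobility-dependent model is more elaborate.

```python
# Hedged sketch: a discrete-time SIR (Susceptible / Infected / Recovered)
# model of the kind used to study malware propagation. Parameters are
# illustrative, not the poster's.

def simulate_sir(s, i, r, beta=0.3, gamma=0.1, steps=100):
    """Advance the population fractions (s, i, r) through `steps` rounds.

    beta:  contact-driven infection rate (proximity malware spread)
    gamma: recovery rate (e.g., devices cleaned by healer nodes)
    """
    for _ in range(steps):
        new_infections = beta * s * i
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
    return s, i, r

# Start with 1% of devices infected; the total population is conserved.
s, i, r = simulate_sir(0.99, 0.01, 0.0)
```

In this framing, healer protocols effectively raise gamma (or target it where infection density is highest), which is what drives the epidemic toward extinction.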

Co-tenant Application Security on Mobile Devices

Pelin Angin, Bharat Bhargava


The proliferation of mobile devices has made them increasingly exposed to security risks, with attackers shifting their targets towards these devices due to their increased use in daily life. This work focuses on the security risks that current mobile device architectures are prone to in terms of leakage of sensitive information due to multiple applications sharing the same resources (i.e., application co-tenancy). We propose a virtualization-based approach to isolate security-critical applications from others running on the same mobile device: security-critical applications run in their own virtual machines, which separates their data space from that of other applications on the same device.

Computer Forensics at NIST

Dr. Paul Black, Dr. James Lyle, Mr. Douglas White


An introduction to three computer forensics efforts at the National Institute of Standards and Technology (NIST): Computer Forensic Tool Testing (CFTT), the National Software Reference Library (NSRL), and Computer Forensics Reference Data Sets (CFReDS).

Data Hiding in Cell Phones

Kyle Johansen, Marvin Michels, Marcus Rogers, Detective Paul Huff


As cell phone use increases, more and more everyday data is stored on phones, and there is therefore a greater need to access that data. Certain tools, such as the Cellebrite UFED and the Susteen Datapilot, claim universal data-collecting ability: the state of the data supposedly does not matter to these devices, and the manufacturers assert that normal, hidden, or deleted content can be extracted. We intend to test these claims by using data-hiding techniques common to computers and evaluating the results.

Finding the Story in the TweetStack: Mining Spatio-temporal Clusters for Event Correlation and Visualization

Rahul Potharaju, Andrew Newell, Cristina Nita-Rotaru


In recent years, social media activity has reached unprecedented levels. Hundreds of millions of users now participate in online social networks and forums, subscribe to microblogging services, or maintain blogs. Twitter, in particular, is currently the major microblogging service, with more than 175 million subscribers. Twitter users generate short text messages, called tweets, to report their current thoughts and actions, comment on breaking news, and engage in discussions. This work presents time-series-based clustering of real-time stream data as a precursor to applying sophisticated natural language processing or machine learning techniques. First, we show that entities related in the physical world can be clustered together merely based on the structure of their timelines, even without the aid of heavy natural language processing techniques. Second, by converting the inherent timeline structure into a symbolic representation, we intend to cluster the time series of different words to obtain an initial set of clusters that can then be analyzed further using alternate techniques.
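The symbolic-representation step can be illustrated with a SAX-style discretization. This is a generic sketch: the three-letter alphabet and the breakpoints are assumptions for illustration, not the authors' choices.

```python
# Illustrative sketch: convert a word's mention-count timeline into a short
# symbolic string so that timelines can be compared and clustered.
# Alphabet size and breakpoints here are assumed, not from the poster.
import statistics

def to_symbols(series, alphabet="abc"):
    """Z-normalize the series, then map each value to a symbol by bins."""
    mean = statistics.fmean(series)
    std = statistics.pstdev(series) or 1.0   # avoid divide-by-zero
    z = [(v - mean) / std for v in series]
    breakpoints = [-0.43, 0.43]  # roughly equiprobable bins under N(0, 1)
    return "".join(alphabet[sum(v > b for b in breakpoints)] for v in z)

# Two words whose mention counts rise and fall together get the same
# string, even though their absolute volumes differ -- so string equality
# (or string distance) can seed the clustering.
burst_a = to_symbols([1, 1, 9, 9, 1, 1])    # "aaccaa"
burst_b = to_symbols([2, 2, 20, 20, 2, 2])  # "aaccaa"
```

Because z-normalization removes scale, a niche hashtag and a mainstream one that spike during the same event map to the same symbolic shape, which is exactly what timeline-structure clustering needs.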

Insider Threat Mitigation Framework

Victor Raskin, Melissa J. Dark, Simon Slobodnik


In military cyber operations, Mission Oriented Risk Design Analysis (MORDA) (Buckshaw et al. 2005) is used to carry out risk assessment of adversary action. MORDA has been used in operations since 1999 in various missions. It is a systemic and comprehensive model for risk, vulnerability, and cost assessment.
A methodology of such scope is lacking in the world of insider threat mitigation. Mechanisms exist to detect, and sometimes predict, insider threats; however, models with the scope of MORDA are not used with the insider threat in mind.
Events that lead to an insider becoming malicious are rarely viewed as deterministic because full information is not available for computation. Therefore, identifying an individual as “high risk” in terms of malicious activity has historically fallen to humans rather than any automated system.
We propose using MORDA in conjunction with a reasoner based on a Dynamic Bayesian Network (DBN), as used by Greitzer et al. (2009), and Bishop et al.’s (2010) Unifying Policy Hierarchy to evaluate the malicious insider threat comprehensively.
This assures a systemic approach to evaluating the impact of adversary action, specifically that of a malicious insider. A systemic approach with costs attached will allow the issue of a malicious insider to be addressed from a business as well as a security viewpoint. This will contribute to insider threat detection and prevention mechanisms rather than “after-the-fact” response.

Modeling and Simulating the Cost and Impact of Cyber Attacks : Malware Threats

Cory Q. Nguyen,  Dr. J. Eric Dietz


This study’s goal is to model and simulate the impact and cost a malware threat has on an organization and its subsidiaries. The model’s purpose is to measure both the direct and indirect cost and impact of a given malware. Currently, a model of a campus is being developed.

Given the ability to visualize and reasonably assess the impact of a given malware and the potential cost incurred, an organization can have a better grasp of its current ability to prevent, address, and mitigate future potential threats. Additionally, it would have a clearer picture of the future implementations and action items needed to manage the risk responsibly. Simply put, the ability to model and simulate a given malware threat against a specific organization provides the knowledge and insight needed to develop a thorough and reliable risk management system.

Secure Sensor Network SUM Aggregation with Detection of Malicious Nodes

Sunoh Choi, Gabriel Ghinita, and Elisa Bertino


In-network aggregation is an essential operation which reduces communication overhead and power consumption of resource-constrained sensor network nodes. Sensor nodes are typically organized into an aggregation tree, whereby aggregator nodes collect data from multiple data source nodes, and perform a reduction operation such as sum, average, minimum, etc. The result is then forwarded to other aggregators higher in the hierarchy toward a base station (or sink node) that receives the final outcome of the in-network computation. However, despite its performance benefits, aggregation introduces several difficult security challenges with respect to data confidentiality, integrity and authenticity. In today’s outsource-centric computing environments, the aggregation task may be delegated to a third party that is not fully trusted. In addition, even in the absence of outsourcing, nodes may be compromised by a malicious adversary with the purpose of altering aggregation results.
To defend against such threats, several mechanisms have been proposed, most of which devise aggregation schemes that rely on cryptography to detect that an attack has occurred. Although they prevent the sink from accepting an incorrect result, such techniques are vulnerable to denial-of-service if a compromised node alters the aggregation result in each round. Several more recent approaches also identify the malicious nodes and exclude them from future computation rounds. However, these incur high communication overhead, as they require flooding or other expensive communication models to connect individual nodes with the base station. We propose a flexible aggregation structure (FAS) and an advanced ring structure (ARS) topology that allow secure aggregation and efficient identification of malicious aggregator nodes for the SUM operation. Our scheme uses only symmetric key cryptography, outperforms existing solutions, and guarantees that the aggregate result is correct and that malicious nodes are identified.
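The in-network aggregation described above can be sketched as follows. This toy version shows only the communication-saving idea; it omits the cryptographic integrity checks and the FAS/ARS structures that are the poster's actual contribution, and the topology and readings are made up.

```python
# Simplified sketch of in-network SUM aggregation over a tree: each
# aggregator sums its children's partial results before forwarding, so the
# sink receives one value instead of one message per sensor node.
# Integrity/authenticity protections from the poster are omitted here.

def aggregate_sum(node, readings, children):
    """Recursively combine this node's reading with its subtree's sums."""
    total = readings.get(node, 0)  # aggregators may have no reading
    for child in children.get(node, []):
        total += aggregate_sum(child, readings, children)
    return total

# Tiny hypothetical topology: sink <- {agg1 <- {s1, s2}, agg2 <- {s3}}
children = {"sink": ["agg1", "agg2"], "agg1": ["s1", "s2"], "agg2": ["s3"]}
readings = {"s1": 5, "s2": 7, "s3": 4}
total = aggregate_sum("sink", readings, children)   # 5 + 7 + 4 = 16
```

The security problem is visible in the sketch: if `agg1` lies about its partial sum, the sink has no way to notice without the cryptographic checks the poster adds, which is why detecting and identifying malicious aggregators matters.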

Security Auditing in Service Oriented Architecture

M. Azarmi , P. Angin, B. Bhargava, N. Ahmed , A. Sinclair


Service-oriented architectures (SOA) introduce new security challenges not present in single-hop client-server architectures, due to the involvement of multiple service providers in a service request. Beyond these additional security threats, the interactions of independent service domains could violate service policies or SLAs. We provide an efficient solution for auditing service invocations in SOA. This solution is transparent to the services, which allows using legacy services without modification. Moreover, we are investigating the transition to cloud computing.

The Evolution of Agricultural Pest Control and Food Security

Lisa Liu


Food security is one of the critical issues in homeland security. Just as with the infrastructure, the electric grid, or the Internet, it is the primary responsibility of homeland security authorities to protect the food we grow from any eventuality that would hurt the consumer, be it a direct act of contamination or an unintended side effect of a production practice. This research deals with the latter from a long-term historical perspective, demonstrating that protective measures must be undertaken now to prevent considerable damage to the food supply later.

Trustworthy Data from Untrusted Databases

Rohit Jain, Sunil Prabhakar


Ensuring the trustworthiness of data retrieved from a database is of utmost importance to users.  The correctness of data stored in a database is defined by the faithful execution of only valid (authorized) transactions. In this paper we address the question of whether it is necessary to trust a database server in order to trust the data retrieved from it. The lack of trust arises naturally if the database server is owned by a third party, as in the case of cloud computing. It also arises if the server may have been compromised, or there is a malicious insider. In particular, we reduce the level of trust necessary in order to establish the authenticity and integrity of data at an untrusted server. Earlier work on this problem is limited to situations where there are no updates to the database, or all updates are authorized and vetted by a central trusted entity. This is an unreasonable assumption for a truly dynamic database, as would be expected in many business applications, where multiple clients can update data without having to check with a central server that approves of their changes.

We identify the problem of ensuring trustworthiness of data on an untrusted server in the presence of transactional updates that run directly on the database, and develop the first solutions to this problem. Our solutions also provide indemnity for
an honest server and assured provenance for all updates to the data. We implement our solution in a prototype system built on top of Oracle with no
modifications to the database internals. We also provide an empirical evaluation of the proposed solutions and establish their feasibility.

Unmanned Aerial Systems Cyberattack Identification and Analysis

James Goppert, Andrew Shull, Inseok Hwang


Unmanned aerial vehicles have taken on a very large role in military operations and there is considerable interest in expanding their use to commercial and scientific applications. Because of the dependence of these vehicles on computer systems, their high degree of autonomy, and the danger posed by a loss of vehicle control, it is critical that the proliferation of these vehicles be accompanied by a thorough analysis of their vulnerabilities to cyberattack.
