The Center for Education and Research in Information Assurance and Security (CERIAS)

Presentations and Posters

Presentations

Distinguished Lecture by John Thompson

Morning Keynote Address by Ron Ritchey

Fireside Chat: Ron Ritchey, Eugene Spafford, John Thompson

Panels

Project Posters

A Design for Securing Data Delivery in Mesh-Based Peer-to-Peer Streaming

Jeff Seibert, Xin Sun, Cristina Nita-Rotaru and Sanjay Rao
The proliferation of P2P streaming overlay services on the public Internet raises questions about their susceptibility to attacks. Mesh-based overlays have emerged as the dominant architecture for P2P streaming. We provide a taxonomy of the implicit commitments made by nodes when peering with others. We show that when these commitments are not enforced explicitly, they can be exploited by malicious nodes to conduct attacks that degrade the data delivery service. We propose a secure design for mesh-based P2P streaming that protects against data delivery attacks. We evaluate our design with real-world experiments on the PlanetLab testbed and show that it is effective: even when 30% of the nodes are malicious attackers, our design mitigates what would otherwise be an 80% loss of data and restores performance to normal levels.
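
As an illustration of the commitment-enforcement idea, here is a minimal sketch (hypothetical, not the authors' implementation) that scores down and evicts a peer which advertises chunks it never delivers:

    # Hypothetical sketch of enforcing one implicit peering commitment:
    # a peer that advertises chunks but repeatedly fails to deliver them
    # is scored down and eventually dropped from the mesh.
    class PeerMonitor:
        def __init__(self, max_broken=3):
            self.broken = {}
            self.max_broken = max_broken

        def record_promise(self, peer, delivered):
            if not delivered:
                self.broken[peer] = self.broken.get(peer, 0) + 1

        def should_drop(self, peer):
            return self.broken.get(peer, 0) >= self.max_broken

    mon = PeerMonitor()
    for chunk in range(4):
        mon.record_promise("peer17", delivered=False)  # advertises, never sends
    print(mon.should_drop("peer17"))  # -> True: evict and re-peer elsewhere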

Analyzing Protection Quality of Security-Enhanced Operating Systems

Hong Chen, Ninghui Li, Ziqing Mao
Recently, several Mandatory Access Control protection systems, e.g., SELinux and AppArmor, have been proposed to enhance the security of operating systems. We propose an approach to analyze and compare the quality of protection of these protection systems. We introduce the notion of vulnerability surfaces under attack scenarios as the measurement of protection quality, and implement a tool for computing the vulnerability surfaces. We use our tool to analyze and compare SELinux and AppArmor in several Linux distributions.
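
A minimal sketch of the vulnerability-surface notion, under the simplifying assumption that a protection policy reduces to a graph of permitted information flows (the domain names and flows below are invented, not actual SELinux or AppArmor policy):

    # Model a MAC policy as a directed graph of permitted information
    # flows between domains, then measure which privileged domains an
    # attacker could reach from a given attack scenario.
    from collections import deque

    flows = {  # edge: source domain can feed input into target domain
        "remote_attacker": ["httpd_t", "sshd_t"],
        "httpd_t": ["php_t"],
        "php_t": ["mysqld_t"],
        "sshd_t": ["user_shell_t"],
    }
    privileged = {"mysqld_t", "user_shell_t"}  # domains worth protecting

    def vulnerability_surface(start):
        """Privileged domains reachable from an attack scenario."""
        seen, queue = {start}, deque([start])
        while queue:
            for nxt in flows.get(queue.popleft(), []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen & privileged

    # A smaller surface under one policy than another suggests
    # stronger protection for the same attack scenario.
    print(vulnerability_surface("remote_attacker"))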

BioAPI Java Project

Preeti Rao, Ashwin Mohan, Shimon Modi, Keith Watson
The BioAPI Consortium developed the biometric application programming interface (BioAPI) for the implementation of software that is platform and device independent. Version 2.0 of the BioAPI standard (ISO/IEC 19784-1:2005) has a reference implementation in the C programming language. This project offers a reference implementation of the BioAPI 2.0 specification written in the Java programming language using object-oriented design techniques.

Biometrics and Identity Management in Healthcare Applications

C. Blomeke, S. K. Modi, Ph.D., E. Bertino, Ph.D., & S. J. Elliott, Ph.D.
The long-term objective of this research is to provide a model for the delivery of identity management services to a diverse range of healthcare applications. This is a challenge due to various regulations and requirements that are unique to the healthcare system. Healthcare applications require a strong form of authentication, one that goes beyond the capabilities of passwords (based on what you remember) and tokens (based on what you have). Healthcare applications are now considering biometrics as a potential authentication methodology. Biometrics is the automated recognition of humans based on biological and behavioral traits, such as fingerprints. This research proposes the development of an identity management model and, in doing so, provides the healthcare system with knowledge about the efficacy of biometrics in various healthcare applications.

Centralized Monitoring in Poly^2

Keith Watson, Anurag Jain
The Poly^2 Project is a research project in security architecture. The goal of this project is to isolate tasks on dedicated processors. These processors have the minimum capabilities to support their specific task. Dynamic provisioning of tasks provides redundancy and capacity management. For the initial design, we applied good security design principles to achieve these goals. The design incorporates separation of network services onto multiple computing systems and strict control of the information flow between the systems and networks. This allows us to build reliability and redundancy into the platform while increasing overall trust. Additionally, we create minimized, application-specific operating systems. The operating system will only provide the minimum set of needed services and resources to support a specific application or network service. This customization will increase the difficulty in attacking and compromising the system. To manage the individual systems and services in this design, a management system will be created to allow administrators to quickly provision new and additional network services.

Centralized Monitoring in Poly^2 automates the collection of events on the network and within the application nodes, as well as the dynamic provisioning of services. System events in the form of raw log data and network events in the form of intrusion and anomaly detection logs are collected by the Poly^2 security server and filtered to capture critical information. That information is passed to the Poly^2 administration server, where actions on the configuration of Poly^2 are determined. This project is under design and development on the current implementation of the Poly^2 framework.
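
A toy sketch of the filtering step described above; the patterns, log lines, and event format are invented placeholders rather than the actual Poly^2 implementation:

    # Reduce raw system and IDS log lines to the critical events that
    # the administration server acts on.
    CRITICAL_PATTERNS = ("segfault", "authentication failure", "PORTSCAN")

    def filter_events(log_lines):
        for line in log_lines:
            if any(p in line for p in CRITICAL_PATTERNS):
                yield {"raw": line, "action_needed": True}

    logs = [
        "httpd[812]: GET /index.html 200",
        "sshd[913]: authentication failure for root from 10.0.0.7",
        "snort: PORTSCAN detected from 10.0.0.7",
    ]
    for event in filter_events(logs):
        print(event)  # forwarded to the Poly^2 administration server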

Compliance Auditing in Data Stream Management Systems

Rimma V. Nehme and Elisa Bertino
Recent improvements in location-based technologies and the drop in prices of sensor devices have spurred a new wave of stream-based applications, such as location-based services, geo-social networking and ubiquitous healthcare. These applications often rely on highly privacy-sensitive information (such as physical location, activity and health state) and introduce a potential for serious abuses, including privacy violations, discrimination and unwanted services. In our research work, we tackle the problem of compliance auditing on infinite data streams, where users may specify the streaming (sensitive) data subject to disclosure review. We will describe a framework, called StreamAudit, for determining whether a Data Stream Management System (DSMS) is adhering to its data disclosure policies at all times. In StreamAudit the audit can be initiated by: (i) users sending their real-time data with audit metadata embedded into the data streams, and (ii) administrators formulating continuous audit expressions on the server. StreamAudit continuously monitors executing queries, returning all queries (deemed “suspicious”) that accessed the specified data during their execution. The overhead of our approach on query processing is small, involving primarily the logging of information about each query along with other minor annotations. To compress audit information for infinite data streams, StreamAudit employs a “tilted time frame” data structure to enable “approximate” compliance answers for the distant past.
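
A rough sketch of the “tilted time frame” idea, assuming a simple multi-level bucket scheme (the actual StreamAudit structure may differ): recent audit counts are kept exact while older counts are progressively merged, bounding memory on an infinite stream.

    class TiltedTimeFrame:
        # Each level holds at most `slots` buckets; an overfull level
        # rolls its two oldest buckets, merged, up to the coarser level.
        def __init__(self, levels=3, slots=4):
            self.frames = [[] for _ in range(levels)]
            self.slots = slots

        def add(self, count):
            self.frames[0].append(count)
            for lvl in range(len(self.frames) - 1):
                if len(self.frames[lvl]) > self.slots:
                    merged = self.frames[lvl].pop(0) + self.frames[lvl].pop(0)
                    self.frames[lvl + 1].append(merged)

        def totals(self):
            return [sum(f) for f in self.frames]

    ttf = TiltedTimeFrame()
    for per_minute in [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]:
        ttf.add(per_minute)  # e.g., suspicious-query count per minute
    print(ttf.totals())      # recent counts exact, older ones aggregated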

Defeating Cross-Site Request Forgery Attacks with Browser-Enforced Authenticity Protection

Ziqing Mao, Ninghui Li, Ian Molloy
A cross-site request forgery (CSRF) attack occurs when a user’s web browser is instructed by a malicious webpage to send a request to a vulnerable web site, resulting in the vulnerable web site performing actions not intended by the user. CSRF vulnerabilities are very common, and the consequences of such attacks are serious. We recognize that CSRF attacks are an example of the confused deputy problem, in which the browser is viewed by websites as the deputy of the user but may be tricked into sending requests that violate the user’s intention. We propose Browser-Enforced Authenticity Protection (BEAP), a browser-based mechanism to defend against CSRF attacks. BEAP infers whether a request reflects the user’s intention and whether an authentication token is sensitive, and strips sensitive authentication tokens from any request that may not reflect the user’s intention. The inference is based on information about the request (e.g., how the request is triggered and crafted) and heuristics derived from analyzing real-world web applications. We have implemented BEAP as a Firefox browser extension, and show that BEAP can effectively defend against CSRF attacks without breaking existing web applications.
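
A simplified sketch of the BEAP heuristic (the real system is a Firefox extension; the intent test below is a reduced approximation, and the header handling is illustrative):

    from urllib.parse import urlparse

    def same_origin(a, b):
        pa, pb = urlparse(a), urlparse(b)
        return (pa.scheme, pa.hostname, pa.port) == (pb.scheme, pb.hostname, pb.port)

    def filter_request(method, target_url, referrer, user_typed, headers):
        """Strip sensitive tokens when user intention is doubtful."""
        intended = user_typed or (referrer and same_origin(referrer, target_url))
        # Cross-origin GETs are ubiquitous (links, images), so only
        # non-GET cross-origin requests are treated as sensitive here.
        if not intended and method != "GET":
            headers = {k: v for k, v in headers.items()
                       if k.lower() not in ("cookie", "authorization")}
        return headers

    hdrs = {"Cookie": "session=abc123", "User-Agent": "demo"}
    print(filter_request("POST", "https://bank.example/transfer",
                         "https://evil.example/page", False, hdrs))
    # -> {'User-Agent': 'demo'}: the session cookie never leaves the browser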

Evading Client Honeypots

Jason Ortiz, Ankur Chakraborty, Pascal Meunier
There is growing awareness of the use of client honeypots to thwart client-side attacks. In response, malicious attackers have begun to implement techniques designed to evade detection by such automated systems. They attempt to evade client honeypots both actively and passively, using the techniques described on this poster.

Filter Selection Schema for Improved Face Recognition

I. J. Jun, Ph.D., S. K. Modi, Ph.D., & S. J. Elliott, Ph.D.
Face recognition is a convenient method of authentication because facial characteristics are easy to capture. However, recognition algorithms use fixed parameters, such as face area, and many images remain uncorrected under this approach. The proposed schema adapts to changing illumination conditions by selecting among several filtering algorithms according to the illumination condition identified in the sample image.
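
One way the selection step might look, as a hypothetical sketch keyed on simple brightness statistics (the thresholds and filter names are invented):

    import numpy as np

    def select_filter(image):
        """image: 2-D array of gray levels in [0, 255]."""
        mean, std = image.mean(), image.std()
        if mean < 60:
            return "gamma_brighten"          # underexposed
        if mean > 190:
            return "gamma_darken"            # overexposed
        if std < 25:
            return "histogram_equalization"  # low contrast
        return "no_filter"                   # acceptable illumination

    dark = np.full((64, 64), 40, dtype=np.uint8)
    print(select_filter(dark))  # -> gamma_brighten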

Impact of Training on Biometric System and User Performance

E. P. Kukula, Ph.D., R. W. Proctor, Ph.D., & R. E. Thamerus
Increasingly sophisticated biometric methods are being used for a variety of applications in which accurate authentication of people is necessary. Because all biometric methods require humans to interact with a device of some type, effective implementation requires consideration of human factors issues. One such issue is the training needed to use a particular device appropriately. The purpose of this study is to examine the impact that training methods have on biometric usability and performance results. Previous research has shown that biometric devices have usability, ergonomic, and design issues which have an impact on the performance of the entire biometric system. This research looks specifically at the effectiveness of a poster-based instructional method for training users on a 10-print fingerprint sensor, the usability issues of that method, and the effects of self-determined versus fixed durations of instruction.

Improving Fingerprint Sensor Interoperability using Sensor Agnostic Image Transformation

S.K. Modi, Ph.D., A. Mohan, Prof. S.J. Elliott, Ph.D.
The increased use of fingerprint recognition systems has brought the issue of fingerprint sensor interoperability to the forefront. Fingerprint sensor interoperability refers to the process of matching fingerprints collected from different sensors. Variability in the fingerprint image is introduced due to differences in acquisition technology and interaction with the sensor. The effect of sensor interoperability on the performance of minutiae-based matchers was examined in this research. Fingerprints from 190 participants were collected on nine different fingerprint sensors, which included optical, capacitive, and thermal acquisition technologies and touch and swipe interaction types. Analysis of the original fingerprint images showed a higher rate of errors when matching fingerprints from multiple sensors. A sensor-agnostic image transformation method is being developed to reduce the number of errors that arise due to interoperability of fingerprint sensors.

Indiana DOC Legacy Image Quality and Performance Assessment

G. Hales, Graduate Researcher & S. J. Elliott Ph.D.
In recent times it has become apparent that data sharing capabilities across state departments and law enforcement agencies are an issue, especially in terms of tracking, monitoring, and identifying persons of interest. There is a need to assess the image capture process, as well as sharing capabilities, and to incorporate commercially available facial recognition technology to reduce errors in identifying persons of interest. The objective of this project is to evaluate legacy face images, assess and standardize the image capture process across Indiana Department of Corrections (DOC) agencies, integrate facial recognition to link face databases, and integrate mobile devices in law enforcement vehicles for face recognition. This research will lead to improvements in the efficiency and quality of the face image capture process in Indiana’s DOC facilities and BMV branches and facilitate image sharing capabilities across Indiana state agencies.

Integration of COBIT, Balanced Scorecard & SSE-CMM as a strategic Information Security Management (ISM) framework

James E. Goldman, Suchit Ahuja
The purpose of this study is to explore the integrated use of Control Objectives for Information Technology (COBIT) and Balanced Scorecard (BSC) frameworks for strategic information security management. The goal is to investigate the strengths, weaknesses, implementation techniques, and potential benefits of such an integrated framework. This integration is achieved by “bridging” the gaps, or mitigating the weaknesses, inherent in one framework using the methodology prescribed by the second framework. Thus, integration of COBIT and BSC can provide a more comprehensive mechanism for strategic information security management – one that is fully aligned with business, IT and information security strategies. The use of the Systems Security Engineering Capability Maturity Model (SSE-CMM) as a tool for performance measurement and evaluation can ensure the adoption of a continuous improvement approach for successful sustainability of this comprehensive framework. There are some instances of similar studies conducted previously:

• metrics-based security assessment using ISO 27001 and SSE-CMM (Information Security and Ethics: Social and Organizational, 2004)
• mapping of processes for effective integration of COBIT and SEI-CMM (IT Governance Institute, 2005)
• mapping of COBIT with ITIL and ISO 27002 for effective management and alignment of IT with business (IT Governance Institute, 2008)

The factor that differentiates this research study from the previous ones is that none of the previous studies integrated BSC, COBIT and SSE-CMM to formulate a comprehensive framework for strategic information security management that is aligned with business, IT and information security strategies.

Integrity of Graphs Without Leaking

Ashish Kundu, Elisa Bertino
Secure data sharing in multi-party data sharing environments over third-party distribution frameworks requires that both integrity and confidentiality of the data be assured. Digital signature schemes are commonly used for integrity verification of data. However, no such technique exists for graphs, even though graphs are one of the most widely used data organization structures. Techniques exist for directed acyclic graphs and trees, which are restricted forms of graphs. Such techniques are integrity-preserving (binding) but not confidentiality-preserving (hiding), which leads to leakage of sensitive information during integrity verification. The recently proposed structural signature scheme for trees is both binding and hiding; however, it is not suitable for graphs. In this paper, we propose a signature scheme for graph structures which is provably binding and hiding. The proposed scheme is based on the structure of the graph as defined by depth-first graph traversals. Graphs are structurally different from trees in that they have four types of edges: tree, forward, cross, and back-edges. The fact that an edge is a forward-edge, a cross-edge or a back-edge conveys information that is sensitive in several contexts. Moreover, back-edges pose a more difficult problem than the one posed by forward- and cross-edges, primarily because back-edges add bidirectional properties to graphs. We prove that the proposed technique is both binding and hiding. While providing such strong security guarantees, our signature scheme is also efficient: for DAGs, it incurs $O(n)$ (linear) cost for the computation, storage and distribution of structural signatures, and for cyclic graphs it incurs $O(n+d)$ cost, where $n$ is the number of nodes and $d$ is the maximum number of back-edges incident on a node in the graph.
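
The scheme builds on the standard depth-first classification of directed edges; the sketch below illustrates that classification only, not the signature construction itself:

    # Label each edge of a directed graph as a tree, forward, back, or
    # cross edge using DFS discovery and finish times.
    def classify_edges(graph):
        disc, fin, edges, clock = {}, {}, {}, [0]

        def dfs(u):
            clock[0] += 1; disc[u] = clock[0]
            for v in graph.get(u, []):
                if v not in disc:
                    edges[(u, v)] = "tree"
                    dfs(v)
                elif v not in fin:
                    edges[(u, v)] = "back"     # ancestor still open
                elif disc[u] < disc[v]:
                    edges[(u, v)] = "forward"  # finished descendant
                else:
                    edges[(u, v)] = "cross"
            clock[0] += 1; fin[u] = clock[0]

        for node in graph:
            if node not in disc:
                dfs(node)
        return edges

    g = {"a": ["c", "b"], "b": ["d"], "d": ["a", "c"], "c": []}
    print(classify_edges(g))
    # d->a is a back-edge (it closes a cycle); d->c is a cross-edge.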

Investors’ Perceptions of Information Security Incidents and Short-term Profitable Investment Opportunities

Ta-Wei “David” Wang, Karthik Kannan, Jackie Rees
This paper investigates investors’ perceptions of the impact of security breaches on a firm’s future performance and uncertainty. The results show that, from the sophisticated investors’ perspective, security incidents do not affect a firm’s future performance and uncertainty. Different perceptions among investors regarding security incidents also provide profitable short-term investment opportunities.

Mandatory Access Control for Experiments with Malware

Jacques Thomas, Pascal Meunier, Patrick Eugster, Jan Vitek
Traditionally, malware is analyzed by executing it in a virtual machine (VM). The VM is used to protect the host system from the malware and to prevent the malware from escaping outside of the analyst’s system. This approach presents two problems: the malware can attack the VM to escape, or it can refuse to exhibit its malicious behavior after detecting that it is confined in a VM. We are investigating the use of SELinux as a confinement mechanism for experiments with malware. The Type Enforcement mechanisms provided by SELinux can be used to confine a VM, to prevent escape from it, and to run malware directly on the host system, without a VM.

Modeling and Integrating Background Knowledge in Data Anonymization

Tiancheng Li, Ninghui Li, Jian Zhang
Recent work has shown the importance of considering the adversary’s background knowledge when reasoning about privacy in data publishing. However, it is very difficult for the data publisher to know exactly the adversary’s background knowledge, and existing work cannot satisfactorily model background knowledge or reason about privacy in the presence of such knowledge. This paper presents a general framework for modeling the adversary’s background knowledge using kernel estimation methods. This framework subsumes different types of knowledge (e.g., negative association rules) that can be mined from the data. Under this framework, we reason about privacy using Bayesian inference techniques and propose the skyline (B, t)-privacy model, which allows the data publisher to enforce privacy requirements to protect the data against adversaries with different levels of background knowledge. Through an extensive set of experiments, we show the effects of probabilistic background knowledge in data anonymization and the effectiveness of our approach in both privacy protection and utility preservation.
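
A toy illustration of the kernel-estimation building block, assuming a one-dimensional sensitive attribute and a Gaussian kernel (the paper's framework is more general):

    import numpy as np

    def kernel_estimate(samples, query_points, bandwidth=1.0):
        """Gaussian kernel density estimate at each query point."""
        samples = np.asarray(samples, dtype=float)
        norm = len(samples) * bandwidth * np.sqrt(2 * np.pi)
        return [np.exp(-0.5 * ((x - samples) / bandwidth) ** 2).sum() / norm
                for x in query_points]

    ages_with_disease = [34, 36, 35, 60, 61]  # hypothetical published data
    print(kernel_estimate(ages_with_disease, [35, 50, 60]))
    # Density peaks near 35 and 60: a smoothed stand-in for what an
    # adversary with approximate background knowledge would infer.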

Polymorphing Software by Randomizing Data Structure Layout

Zhiqiang Lin, Ryan D. Riley, Dongyan Xu
We will discuss a new software polymorphism technique that randomizes the program data structure layout. This technique generates different layouts for a program’s data structure definitions, diversifying the software compiled from the same program source code. It can thwart data structure-based program signature generation systems and can also mitigate attacks that rely on knowledge of data structure layouts, such as kernel rootkit attacks. We have implemented our polymorphism technique on top of an open source compiler collection, gcc-4.2.4, and applied it to a number of programs. Experimental results show that our data structure randomization can achieve software code diversity (with a rough instruction difference of 10%), cause false positives in a state-of-the-art data structure signature generation system, and provide diverse kernel data structures to mitigate a variety of kernel rootkit attacks.
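
A source-level caricature of the layout-randomization idea (the actual work operates inside gcc-4.2.4, not on source text; the struct below is invented):

    import random
    import re

    def randomize_struct(src, seed):
        """Shuffle the field lines of a simple C struct definition."""
        header, body, footer = re.match(
            r"(struct \w+ \{\n)((?:.*\n)*?)(\};)", src).groups()
        fields = body.splitlines(keepends=True)
        random.Random(seed).shuffle(fields)  # same seed -> same layout
        return header + "".join(fields) + footer

    task = """struct task {
        int pid;
        char comm[16];
        void *next;
    };"""
    print(randomize_struct(task, seed=42))
    # Each build seed yields a different field order, hence different
    # byte offsets, so offset-based signatures and rootkits miss.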

Procedure Based Classification of Environments

D. Surabattula and S. J. Landry
Procedures are followed in many different environments. Depending on the importance of achieving a positive outcome and the consequences of a negative outcome, operators in different environments behave differently and adopt different strategies for following procedures. Based on these concepts, a two-dimensional space has been identified. The purpose of this research is to validate that two-dimensional space.

Provenance-Based Confidence Policy Management in Data Streams

Hyo-Sang Lim, Yang-Sae Moon, and Elisa Bertino
In this paper we address the problem of enforcing the confidence policy in data stream management systems (DSMSs for short). The confidence policy is a novel notion that supports the concept of confidence in data management and query processing; more specifically, a confidence policy restricts access to query results by specifying the minimum confidence level required for a certain task. To deal with confidence policies over data streams, we propose a provenance-based framework for enforcing them in DSMSs. As the measure of confidence in network nodes and data items, we use trust scores obtained from data provenance as well as data values. We first discuss how to identify the data items belonging to each single event among all the data items that arrive at servers from various sources. We then propose a framework for computing trust scores. There is an inter-dependency between network nodes and data items in computing their trust scores: the trust score of a network node affects the trust scores of its related data items, and conversely, the trust score of a data item affects the trust scores of its related network nodes. To calculate trust scores, we use two similarity properties: data similarity inferred from data values and path similarity inferred from data provenance. Data similarity comes from the simple intuition that the more data items there are with similar values, the higher their trust scores. Path similarity comes from the observation that similar data values arriving over different paths may increase the trustworthiness of the corresponding data items.
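
A hypothetical sketch of the inter-dependent score computation, with invented update rules that capture the data-similarity and path-similarity intuitions:

    def compute_trust(items, iterations=20):
        """items: list of (value, provenance_path) tuples."""
        nodes = {n for _, path in items for n in path}
        node_trust = {n: 0.5 for n in nodes}  # neutral prior
        item_trust = [0.5] * len(items)
        values = [v for v, _ in items]
        for _ in range(iterations):
            for i, (v, path) in enumerate(items):
                # Path similarity: average trust of the nodes on the path.
                from_path = sum(node_trust[n] for n in path) / len(path)
                # Data similarity: fraction of items with agreeing values.
                agree = sum(1 for u in values if abs(u - v) <= 1) / len(values)
                item_trust[i] = 0.5 * from_path + 0.5 * agree
            for n in nodes:  # node trust from the items it carried
                related = [item_trust[i]
                           for i, (_, p) in enumerate(items) if n in p]
                node_trust[n] = sum(related) / len(related)
        return node_trust, item_trust

    readings = [(20.1, ["s1", "relay_a"]), (20.3, ["s2", "relay_a"]),
                (35.0, ["s3", "relay_b"])]  # s3 disagrees with the others
    print(compute_trust(readings))  # s3 and relay_b end up least trusted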

Removing the Blinders: Utilizing Data-Plane Information to Mitigate Adversaries in Unstructured Multicast Networks

David Zage, Charles Killian, and Cristina Nita-Rotaru
Numerous collaborative Internet applications, such as video conferencing and broadcasting, have benefited tremendously from multicast services. Multicast overlay networks were proposed as a viable application level multicast architecture to overcome the scarcity of native IP multicast deployments. Many of these networks utilize adaptivity mechanisms to increase performance and provide fault tolerance for end-to-end communication. While pushing functionality to end-systems allows overlay networks to achieve better scalability, it also makes them more vulnerable since end-nodes are more likely to be compromised than core routers. Thus, end-system overlay networks are more vulnerable to malicious inside attacks coming from an attacker or group of colluding attackers that infiltrate the overlay. In particular, attacks that exploit the adaptivity mechanisms can be extremely dangerous because they target the overlay construction and maintenance while requiring no additional communication bandwidth on the attacker side. Such attacks can allow an adversary to control a significant part of the traffic and further facilitate other attacks such as selective data forwarding, cheating, traffic analysis, and attacks against availability. This work presents a solution for mitigating the effect of malicious adversaries on adaptive overlay networks by aggregating and utilizing data-plane and control-plane information to determine the reliability and utility of received information.
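
A minimal sketch of the cross-checking idea: an advertised control-plane metric is rejected when it diverges too far from locally observed data-plane measurements (the 1.5x threshold is an invented placeholder):

    def trustworthy(advertised_bw, observed_bws):
        """observed_bws: recent throughputs measured from this peer."""
        if not observed_bws:
            return False  # no data-plane evidence yet
        mean = sum(observed_bws) / len(observed_bws)
        # Reject advertisements far better than anything actually seen.
        return advertised_bw <= 1.5 * mean

    print(trustworthy(10.0, [1.1, 0.9, 1.3]))  # -> False: likely lying peer
    print(trustworthy(1.2, [1.1, 0.9, 1.3]))   # -> True: consistent claim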

The Efficacy of Cross-Discipline Representations for ill-defined IAS Concepts

Steven Rigby, Melissa J. Dark, Marcus Rogers, J Ekstrom, Gary Bertoline
A universal problem for our society is the dramatic increase in the number of security threats, risks, and vulnerabilities to our nation’s computer systems, data, and infrastructure. Our future success depends upon the problem-solving and thinking abilities of professionals entering the Information Assurance and Security (IAS) field. These professionals will be faced with many problems that are complex, ill-defined, and multi-disciplinary in nature. But how do we, as educators, prepare these professionals to be successful? Are traditional approaches sufficient for the complexity they will face? This study examines ways of increasing learners’ expertise in ill-defined concepts through the use of varying types of representations. Of particular interest is to what extent the number and context of representations increase learners’ conceptual understanding of an ill-defined concept. Research suggests that the more varied the context of the representations presented to the learner, the greater the understanding. Through a quasi-experimental research methodology, students were assigned to one of three groups and given multiple representations of an ill-defined concept, “threat analysis”. After each treatment, students created concept maps of their understanding of threat analysis, which were then scored against an expert concept map using a rubric. Results show that multiple representations did increase conceptual understanding of an ill-defined concept from the first treatment to the second, with diminishing returns thereafter. The varying contexts of the representations were not a factor; however, the use of different instructional strategies did show a difference over the three treatments. Future research measuring “far transfer” of ill-defined concepts will be of benefit to the field.

The Influence of Force on Fingerprint Recognition Using Automated Data Capture

B. Senjaya, T.B. Lee, Ph.D., S.J. Elliott, Ph.D., S.K. Modi, Ph.D.
Fingerprint image quality has a positive effect on the performance of recognition systems. By improving the quality of fingerprint images, the performance of the system can be increased. Current automated fingerprint capture processes conduct quality analysis on fingerprints after capturing the fingerprint and prompt the user for additional fingerprints if the image does not conform to the quality criterion. The process of recapturing increases throughput time and the inconvenience faced by users. The objective of this research was to redesign the image capture process by identifying optimal force levels for initiating the capture operation. Seventy subjects interacted with an optical fingerprint sensor at several force levels to identify the force level that yielded the best fingerprint image quality and the fewest matching errors.
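
As a rough illustration of force-triggered capture, the sketch below waits for the applied force to enter a target band before capturing; the 5-9 N band is a made-up placeholder, not the study's finding:

    def capture_when_in_band(force_stream, low=5.0, high=9.0):
        for force in force_stream:  # e.g., load-cell samples during a press
            if low <= force <= high:
                return f"captured at {force:.1f} N"
        return "no capture: force never reached the target band"

    samples = [1.2, 3.4, 5.6, 7.8, 11.0]  # simulated press ramp-up
    print(capture_when_in_band(samples))   # -> captured at 5.6 N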

The Poly^2 Project

Keith Watson
The Poly^2 Project is a research project in security architecture. The goal of this project is to isolate tasks on dedicated processors. These processors have the minimum capabilities to support their specific task. Dynamic provisioning of tasks provides redundancy and capacity management. For the initial design, we applied good security design principles to achieve these goals. The design incorporates separation of network services onto multiple computing systems and strict control of the information flow between the systems and networks. This allows us to build reliability and redundancy into the platform while increasing overall trust. Additionally, we create minimized, application-specific operating systems. The operating system will only provide the minimum set of needed services and resources to support a specific application or network service. This customization will increase the difficulty in attacking and compromising the system. To manage the individual systems and services in this design, a management system will be created to allow administrators to quickly provision new and additional network services.

VeryIDX: A Privacy-Preserving Digital Identity Management System for Mobile Devices

Federica Paci, Ning Shang, Kevin Steuer Jr, Sam Kerr, Ruchith Fernando
The combined use of the Internet and mobile technologies is leading to major changes in how individuals communicate, conduct business transactions and access resources and services. In such a scenario, digital identity management (DIM) technology is fundamental for enabling transactions and interactions across the Internet. We propose to demonstrate VeryIDX, a system for the privacy-preserving management of users’ identity attributes on mobile devices.

Wireless Security Analysis

Prof. Anthony Smith, Sarath Geethakumar, Utsav Mittal, Ryan Poyar
Our research focuses on the wireless security vulnerabilities that exist in 802.11. We specifically analyze the surrounding area of Purdue University as a sample to estimate the overall wireless security posture in and around major cities within the United States. Some common wireless attacks will be demonstrated during the session.
