Center for Education and Research in Information Assurance and Security

A Digital Investigation Process Model

Area: Incident Detection, Response, and Investigation
Personnel: Brian Carrier, Eugene Spafford

View PDF

A Holistic Approach for Detecting System Intrusions

Area: Incident Detection, Response, and Investigation
Personnel: Greg Rice (PI)
The purpose of this work is to improve intrusion detection techniques by developing a more general framework and infrastructure for detecting system penetrations through information sharing. Unlike traditional misuse and anomaly detection architectures, we propose a novel approach in which information is gathered at multiple layers within the computing system and analyzed against a specified behavior pattern at each level. By gathering event data at the application, system, and network layers, we describe a new approach to intrusion detection that departs from previous work in specification-based intrusion detection systems. Using this approach, events intercepted at each layer within the IDS hierarchy can be correlated with their consequential effects in adjoining layers. By examining how events relate across adjacent layers within the computing system, future intrusion detection systems may gain the ability not only to detect deviant program activity outside a system's specified security profile but also to generate alerts for more complex attacks that produce inconsistent events between adjoining layers.
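
To make the cross-layer idea concrete, here is a small Python sketch of our own reading of the abstract (not the poster's implementation): a network-layer send with no plausible application-layer cause inside a time window is flagged, even though neither layer alone violates its specification. The event kinds and the one-second window are illustrative assumptions.

    from collections import namedtuple

    Event = namedtuple("Event", "t layer kind detail")

    def inconsistent(events, window=1.0):
        """Flag network sends with no application-layer cause within `window` seconds."""
        app_writes = [e for e in events if e.layer == "app" and e.kind == "write"]
        alerts = []
        for e in events:
            if e.layer == "net" and e.kind == "send":
                if not any(abs(e.t - w.t) <= window for w in app_writes):
                    alerts.append(e)   # traffic the application never produced
        return alerts

    trace = [
        Event(1.0, "app", "write", "HTTP response"),
        Event(1.2, "net", "send", "port 80"),       # consistent with the write
        Event(9.0, "net", "send", "port 31337"),    # no adjoining-layer cause
    ]
    print(inconsistent(trace))   # flags only the port 31337 send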

View PDF

A simple protocol for digital cash by elliptic curves

Area: Trusted Social and Human Interactions
Personnel: Shuo Shen (PI)
Digital cash is an important application of modern cryptology. Traditional cash spreads germs, and people can steal it from the owner. Checks and credit cards have an audit trail; you can't hide to whom you gave money. Digital cash can avoid all these defects. Many excellent protocols have been designed for digital cash that allow secure and untraceable transactions. In this poster, we construct a simple and efficient digital cash protocol using the mathematical theory of elliptic curves.
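
The elliptic curve arithmetic such a protocol rests on can be sketched in a few lines. The following Python sketch (our generic illustration with toy parameters, not the poster's protocol) implements affine point addition and double-and-add scalar multiplication over GF(p), the primitives from which blinding and coin validation are typically built.

    # Toy arithmetic for y^2 = x^3 + a*x + b over GF(p); parameters are
    # illustrative only, far too small for real security.
    def inv_mod(k, p):
        return pow(k, -1, p)   # modular inverse (Python 3.8+)

    def ec_add(P, Q, a, p):
        # Add two points; None represents the point at infinity.
        if P is None: return Q
        if Q is None: return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None                                     # P + (-P)
        if P == Q:
            m = (3 * x1 * x1 + a) * inv_mod(2 * y1, p) % p  # tangent slope
        else:
            m = (y2 - y1) * inv_mod(x2 - x1, p) % p         # chord slope
        x3 = (m * m - x1 - x2) % p
        return (x3, (m * (x1 - x3) - y1) % p)

    def ec_mul(k, P, a, p):
        # Double-and-add scalar multiplication k*P.
        R = None
        while k:
            if k & 1:
                R = ec_add(R, P, a, p)
            P = ec_add(P, P, a, p)
            k >>= 1
        return R

    p, a = 97, 2
    G = (3, 6)                  # a point on y^2 = x^3 + 2x + 3 mod 97
    d = 23                      # a secret scalar (e.g., the customer's key)
    print(ec_mul(d, G, a, p))   # public point: easy to compute, hard to invert

Unforgeability in such schemes then reduces to the hardness of recovering d from d*G, the elliptic curve discrete logarithm problem.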

View PDF

A Statistical Analysis of Image Quality on Live, Generated, and Fake Fingers

Area: Identification, Authentication and Privacy
Personnel: R. T. Katen (PI), Stephen Elliott

View PDF

Access Control Techniques and Methodologies for Grid Systems

Area: Identification, Authentication and Privacy
Personnel: Pietro Mazzoleni, Elisa Bertino, Elena Ferrari
In this poster we present access control techniques and methodologies for Grid systems.

Our approach supports fine-grained access control policies, through which the entities participating in a grid can specify their own sets of rules. We have identified five different types of authorization, and we are investigating the integration of access control with resource scheduling in order to achieve good performance.

The poster presents some of the open issues we aim to address, as well as preliminary results concerning a multi-level organization of authorization rules and the development of a flexible rule evaluation component integrated with the scheduler to support secure allocation of resources.

View PDF

Animal Identification By Retinal Imaging and Applications For Biosecurity

Area: Identification, Authentication and Privacy
Personnel: C.R. Blomeke (PI), C.P. Rusk, Ph.D., M.A. Balschweid, Ph.D., S.J. Elliott, Ph.D.

View PDF

Behavioral Authentication of Server Flows

Area: Enclave and Network Security
Personnel: J. Early (PI), C. Brodley, and C. Rosenberg
Understanding the nature of the information flowing into and out of a system or network is fundamental to determining if there is adherence to a usage policy. Traditional methods of determining traffic type rely on the port label carried in the packet header. This method can fail, however, in the presence of proxy servers that re-map port numbers or host services that have been compromised to act as backdoors or covert channels.

We present an approach to classifying server traffic based on decision trees learned during a training phase. The trees are constructed from traffic described using a set of features we designed to capture stream behavior. Because our classification of the traffic type is independent of port label, it provides a more accurate classification in the presence of malicious activity. An empirical evaluation illustrates that models of both aggregate protocol behavior and host-specific protocol behavior obtain classification accuracies ranging from 82% to 100%.
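
A hedged sketch of the idea follows; the feature names and the synthetic flows below are our assumptions for illustration, not the authors' feature set. The point is that the tree labels a flow from port-independent stream behavior.

    from sklearn.tree import DecisionTreeClassifier
    import numpy as np

    rng = np.random.default_rng(0)

    def synth_flow(proto):
        # [mean packet size, packets/sec, client-to-server byte ratio]
        if proto == "http":
            return [rng.normal(900, 120), rng.normal(40, 8), rng.normal(0.2, 0.05)]
        else:  # interactive ssh-like traffic: small packets, chatty both ways
            return [rng.normal(120, 30), rng.normal(15, 5), rng.normal(0.5, 0.1)]

    X = [synth_flow(p) for p in ["http"] * 200 + ["ssh"] * 200]
    y = ["http"] * 200 + ["ssh"] * 200

    clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
    print(clf.predict([[850, 38, 0.22]]))  # classified by behavior, not port label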

Can source code auditing software identify common vulnerabilities and be used to evaluate software

Area: Assurable Software and Architectures
Personnel: Pascal Meunier (PI), Jon Heffley
Software vulnerabilities are a growing problem (cf. MITRE's CVE, http://cve.mitre.org). Moreover, many of the mistakes leading to vulnerabilities are repeated often. Source code auditing tools could be a great help in identifying common mistakes, or in evaluating the security of software. We investigated the effectiveness of the auditing tools we could access, using the following criteria: the number of false positives, the number of false negatives by comparison to known vulnerabilities, and the time required to validate the warnings related to vulnerabilities. Some of the known vulnerabilities could not be found by any code auditor, because they were fairly unusual or involved knowledge not contained or codified in the source code. The coding problems that could be identified consisted of string format vulnerabilities, buffer overflows, race conditions, memory leaks, and symlink attacks. However, we found it extremely time-consuming to validate warnings for the latter four types, because the number of false positives was very high and because it was not easily apparent whether they were real vulnerabilities. These required that the code be audited locally, by people familiar with the code, and carefully inspected to see if the values could be manipulated in such a way as to produce malicious effects.

However, the string format vulnerabilities were much easier to recognize. In small- and medium-scale projects, the open source program Pscan was useful in finding a mix of coding style issues that could potentially enable string format vulnerabilities, as well as actual vulnerabilities. The limitations of Pscan were more obvious in large-scale projects like OpenBSD, as more false positives occurred. Clearly, auditing source code for all vulnerabilities remains a time-consuming process, even with the help of the current tools, and more research is needed in identifying and avoiding other common mistakes.
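
To illustrate what a format-string checker in the spirit of Pscan looks for, here is a toy Python sketch of our own: printf-family calls whose argument list contains no string literal. Real tools parse C properly; this regex heuristic only conveys the idea and is deliberately crude.

    import re

    CALL = re.compile(r'\b(printf|fprintf|sprintf|snprintf|syslog)\s*\(([^;]*)\)')

    def suspicious_calls(c_source):
        hits = []
        for lineno, line in enumerate(c_source.splitlines(), 1):
            for m in CALL.finditer(line):
                # crude heuristic: flag calls with no string literal argument,
                # i.e., a format string under the caller's control
                if '"' not in m.group(2):
                    hits.append((lineno, m.group(0)))
        return hits

    code = ('void log_user(char *name) { printf(name); }  /* vulnerable */\n'
            'void ok(char *name) { printf("%s", name); }  /* safe */')
    for lineno, call in suspicious_calls(code):
        print(f"line {lineno}: possible format string bug: {call}")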

View PDF

CERIAS Educational Outreach

Area: Security Awareness, Education and Training
Personnel: Matt Rose (PI)
This poster highlights recent CERIAS Educational Outreach activities, including faculty and curriculum development, continuing education efforts, train-the-trainer material, and self-paced learning options.

View PDF

Computer Aided Forensics

Area: Incident Detection, Response, and Investigation
Personnel: Igor Nai Fovino (PI)
Law enforcement investigators of computer crimes must deal with several different types of criminal conduct and with apparently unrelated evidence. It is unlikely that a single group of investigators has all the skills needed to correctly interpret all the evidence collected and to understand all the intricacies of the different technologies involved in a non-trivial case. We propose a system for supporting investigators that aims at:

* producing reusable knowledge about investigations
* structuring evidence support for prosecution hypotheses
* guiding less skilled detectives during evidence collection

The prosecution objective is to prove an accusatory "theorem" on the basis of the evidence collected. In general, every criminal offence can be decomposed into a number of necessary conditions to be verified in order to establish guilt. Each condition should be checked by a number of typical tests, as experience in the domain suggests. The evidence support provided by the performed tests can then be evaluated to assess the global soundness and completeness of the inquiry.
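
The "accusatory theorem" decomposition can be made concrete with a small sketch of our own (not the proposed system): an offence decomposes into necessary conditions, each backed by tests contributing evidence scores in [0, 1], and overall support is the weakest link. The conditions and scores below are invented for illustration.

    from dataclasses import dataclass, field

    @dataclass
    class Condition:
        name: str
        test_scores: list = field(default_factory=list)  # evidence from tests

        def support(self):
            # a condition is only as strong as its best supporting test
            return max(self.test_scores, default=0.0)

    def inquiry_soundness(conditions):
        """All conditions are necessary, so overall support is the minimum."""
        return min(c.support() for c in conditions)

    offence = [
        Condition("unauthorized access occurred", [0.9, 0.7]),
        Condition("suspect controlled the source host", [0.6]),
        Condition("intent can be established", [0.4, 0.5]),
    ]
    print(f"weakest-link support: {inquiry_soundness(offence):.2f}")  # 0.50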

View PDF

Computer Criminal Taxonomy: A Critical Analysis

Area: Security Awareness, Education and Training
Personnel: Kathryn Seigfried (PI)
Despite the rise in computer-related crimes throughout the last decade, there have been only limited empirical attempts to profile computer criminals. In 1995, the FBI developed the Computer Crime Adversarial Matrix (CCAM). Unfortunately, the CCAM is now out of date and full of statistical and methodological problems. This study is a critical analysis of the CCAM model. The findings will be combined with other classification models (Rogers, 2000; Taylor, 1990), and the results will be used to develop the foundation for a new computer criminal taxonomy with better empirical and statistical support.

Detecting Stepping Stones in the presence of timing perturbations

Area: Enclave and Network Security
Personnel: Linfeng Zhang (PI)
Network attackers often use stepping stones to attain anonymity. When the contents of packets are encrypted, it is difficult for victims to find out who the real attacker is, and timing analysis is a capable method for doing so. However, timing analysis is vulnerable if the attackers introduce random delays or reorder the packets at the stepping stones.

We built a Linux environment to run experiments on timing perturbations and to gather a large body of data that will be useful for future work on detecting stepping stones in the presence of timing perturbations.
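
A hedged sketch of the underlying signal (our illustration, not the experimental setup): if a host relays a connection, the inter-packet delays of the incoming and outgoing flows stay correlated even when payloads are encrypted, and deliberate random delays weaken exactly that signal.

    import numpy as np

    rng = np.random.default_rng(1)

    incoming = rng.exponential(0.2, 500)              # inter-packet delays (s)
    relayed = incoming + rng.normal(0, 0.01, 500)     # relay adds small jitter
    perturbed = incoming + rng.exponential(0.2, 500)  # attacker-added delays
    unrelated = rng.exponential(0.2, 500)             # independent flow

    def corr(a, b):
        return np.corrcoef(a, b)[0, 1]

    print(f"relayed flow:   r = {corr(incoming, relayed):.2f}")    # near 1.0
    print(f"perturbed flow: r = {corr(incoming, perturbed):.2f}")  # degraded, ~0.7
    print(f"unrelated flow: r = {corr(incoming, unrelated):.2f}")  # near 0.0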

DLLT: Distributed Link List Traceback

Area: Enclave and Network Security
Personnel: Basheer Al-Duwairi
Denial of Service (DoS) attacks represent a major threat to the availability of Internet services. Identifying the sources of these attacks is considered an important step toward a DoS-free Internet. In this poster, we propose a new scheme, called Distributed Link-List Traceback (DLLT), which combines the good features of probabilistic packet marking (PPM) and hash-based traceback. The scheme is based on a novel concept called a distributed link list (DLL), in which we keep track of some of the routers involved in forwarding a given packet by establishing a temporary link between them in a distributed manner. DLL follows a "store, mark, and forward" approach. A single marking field is allocated in each packet. Any router that decides to mark the packet stores the current IP address found in the marking field, along with the packet ID, in a special data structure called the marking table maintained at the router; it then marks the packet by overwriting the marking field with its own IP address, and forwards the packet as usual. Any router that decides not to mark the packet simply forwards it. Our studies show that the proposed scheme requires a small number of packets and an adjustable amount of memory, while offering a high attack-source detection percentage.
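
A toy simulation of the "store, mark and forward" step, based on our reading of the abstract rather than the authors' code: each marking router records the previous marker's address keyed by packet ID, forming a distributed linked list that the victim can later walk back toward the source. Addresses and the marking probability are illustrative.

    import random

    class Router:
        def __init__(self, addr):
            self.addr = addr
            self.marking_table = {}  # packet_id -> previous marking-field value

        def forward(self, packet, mark_prob=0.5):
            if random.random() < mark_prob:
                self.marking_table[packet["id"]] = packet["mark"]  # store
                packet["mark"] = self.addr                         # mark
            return packet                                          # forward

    def traceback(packet, routers_by_addr):
        """Walk the distributed link list from the victim back upstream."""
        path, addr, pid = [], packet["mark"], packet["id"]
        while addr is not None:
            path.append(addr)
            addr = routers_by_addr[addr].marking_table.get(pid)
        return path

    random.seed(7)
    routers = [Router(f"10.0.0.{i}") for i in range(1, 8)]
    pkt = {"id": 42, "mark": None}
    for r in routers:                 # packet traverses the router chain
        pkt = r.forward(pkt)
    print(traceback(pkt, {r.addr: r for r in routers}))  # markers, newest first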

View PDF

Effects of Illumination Changes on the Performance of Geometrix FaceVision 3D FRS

Area: Identification, Authentication and Privacy
Personnel: Eric Kukula (PI), Stephen Elliott

View PDF

Effects of Template Aging on Facial Recognition

Area: Identification, Authentication and Privacy
Personnel: J.R. Kitchel (PI), Stephen Elliott

View PDF

Electronic Assertive Community Treatment

Area: Identification, Authentication and Privacy
Personnel: Jacob Schroeder, Abhilasha Bhargav, Maryam Abdul Karim, Khurram Kazi, Serkan Unal
Mission Statement: To develop a mobile communications solution that will effectively enable a 10-member PACT team, working in an urban or rural setting, to constantly communicate with each other.

Project Overview: The Assertive Community Treatment (ACT) team works with mentally ill people, and its volunteers are very mobile in their work. The project enables them to modify and access client data efficiently, which decreases paperwork and leaves more time for the client. Electronic Assertive Community Treatment (eACT) is a project that brings together the ACT team and their patients through efficient wireless communication.

View PDF

Evaluation Methods for Internet Security Technology

Area: Enclave and Network Security
Personnel: Dr. Catherine Rosenberg, Dr. Carla Brodley, Dr. Sonia Fahmy, Roman Chertov, Bryon Gloden, Chris Kanich, Jacky Lee, Yu-chun Mao, Kevin Robbins, Rupak Sanjel
This poster introduces the multi-organization project funded by the National Science Foundation and the Department of Homeland Security entitled "Evaluation Methods for Internet Security Technology". Its main objective is to develop scientifically rigorous testing frameworks and methodologies for representative classes of network attacks and defense mechanisms.

View PDF

Evaluation of Biometric Implementers to Investigate a Viable Return On Security Investment Technique

Area: Risk Management, Policies and Laws
Personnel: R.C. Skinner (PI), P.J. Duparcq, Ph.D., S.J. Elliott, Ph.D., M.J. Dark, Ph.D.
Within an enterprise-level organization's IT system, upper-level management ultimately approves the new investments and technologies to be deployed or tested. One of the key factors that management takes into account is the overall effect on the bottom line, or the return on investment. Organizations use various techniques to determine this factor; however, the process has not been standardized. The term ROSI (Return on Security Investment) was developed a couple of years ago, with notable contributions from the University of Idaho (using network intrusion detection systems) and from Stanford, MIT, and @stake (developing secure software engineering). The term attracted interest from a variety of sources; however, the question of how to demonstrate an ample return on security investment is still being asked today. There are multiple technologies that allow an organization to secure its IT system. Biometric security is one such technology; however, it has been a hard technology to adopt, owing to a variety of factors including the cost to implement, a lack of standards, and a lack of large-scale published deployments. According to Ernst & Young's 2003 Global Information Security Survey, which had responses from over 1,400 organizations, "nearly 60% of organizations say they rarely or never calculate ROI for information security spending."
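
One commonly cited ROSI formulation (our assumption for illustration; the poster investigates whether such techniques are viable rather than endorsing one) expresses the return as the risk exposure a control removes, net of the control's cost, relative to that cost:

    def rosi(ale, mitigation_ratio, control_cost):
        """Return on security investment as a fraction of the control's cost.

        ale: annual loss expectancy before the control is deployed.
        mitigation_ratio: fraction of that exposure the control removes.
        """
        risk_reduced = ale * mitigation_ratio
        return (risk_reduced - control_cost) / control_cost

    # Toy numbers: $500k expected annual loss; a biometric control mitigates
    # 30% of it and costs $100k/year to run.
    print(f"ROSI = {rosi(500_000, 0.30, 100_000):.0%}")   # 50%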

View PDF

FFT-ECM by Division Polynomials for Factoring

Area: Cryptology and Rights Management
Personnel: Zhihong Li (PI), Sam Wagstaff
Factoring large numbers is very useful in cryptography: one can break certain ciphers, such as the RSA cryptosystem, if one can factor large numbers. In this poster we develop a new integer factoring algorithm similar to ECM (the elliptic curve method). The difference is that this algorithm uses division polynomials and an FFT (fast Fourier transform) to compute multiples of many points simultaneously. The ordinary ECM has little chance of factoring an RSA public key and breaking the cipher; this algorithm has a much greater chance of factoring a number of the size of RSA public keys currently used.
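
For background, here is a minimal sketch of the ordinary ECM this work extends (our toy rendering, not the poster's algorithm): pick a random curve and point modulo n, multiply the point by many small scalars, and when a modular inverse fails, the gcd that caused the failure divides n. The division-polynomial/FFT idea replaces the one-point multiplication with many points at once.

    import math, random

    class FoundFactor(Exception):
        pass

    def inv_mod(k, n):
        g = math.gcd(k % n, n)
        if g != 1:
            raise FoundFactor(g)   # failed inverse: this gcd divides n
        return pow(k, -1, n)

    def ec_add(P, Q, a, n):
        # Point addition mod n; None is the point at infinity.
        if P is None: return Q
        if Q is None: return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % n == 0:
            return None
        if P == Q:
            m = (3 * x1 * x1 + a) * inv_mod(2 * y1, n) % n
        else:
            m = (y2 - y1) * inv_mod(x2 - x1, n) % n
        x3 = (m * m - x1 - x2) % n
        return (x3, (m * (x1 - x3) - y1) % n)

    def ec_mul(k, P, a, n):
        R = None
        while k:
            if k & 1:
                R = ec_add(R, P, a, n)
            P = ec_add(P, P, a, n)
            k >>= 1
        return R

    def ecm(n, bound=100, tries=200):
        for _ in range(tries):
            # random curve through a random point (b is implicit and unused)
            x, y, a = (random.randrange(1, n) for _ in range(3))
            P = (x, y)
            try:
                for k in range(2, bound):
                    P = ec_mul(k, P, a, n)   # accumulate smooth multiples
                    if P is None:
                        break
            except FoundFactor as e:
                if e.args[0] < n:
                    return e.args[0]
        return None

    print(ecm(43 * 47))   # a toy semiprime; typically prints 43 or 47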

View PDF

Fingerprint Image Quality Evaluation: Elderly and Younger Populations

Area: Identification, Authentication and Privacy
Personnel: Nathan Sickler (PI), Stephen Elliott

View PDF

Fingerprint Sensor Degradation

Area: Identification, Authentication and Privacy
Personnel: Chris Hernandez (PI), Stephen Elliott

Flexible Data Collection and Reduction System for Integrated Attribution

Area: Enclave and Network Security
Personnel: Yongping Tang (PI)
Integrated attack attribution is based on the analysis and correlation of traffic data. We provide a system to monitor and gather traffic data on the network interfaces. Given the large quantity of data that must be saved, a data reduction system greatly helps to conserve disk space. Our data reduction uses a flexible mechanism to compress the traffic data with adaptive arithmetic coding, exploiting the characteristics of the traffic data to construct a proper model.

View PDF

Foundational and Applied Research in Access Control

Area: Assurable Software and Architectures
Personnel: Mahesh Tripunitara, Ninghui Li
We discuss components of our work in access control here at CERIAS. Our work includes the following:

- A theory for gauging the expressive power of an access control model, and the use of that theory to compare the expressive power of different models,
- Analysis of safety, availability, and mutual exclusion properties of access control models, specifically the hierarchical role-based access control model,
- Administrative models for hierarchical role-based access control that satisfy both pragmatic and formal requirements. The administrative model we are building scales well with large numbers of users and roles, supports complex role hierarchies, lends itself to automation, and has favorable complexity properties in deciding safety, availability, and mutual exclusion, and
- The relationship of hierarchical role-based access control to other models, such as the HRU model (DAC) and trust management languages.
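
A small sketch of our own illustrating the kind of question such analyses decide: given a role hierarchy and user-role assignments, which permissions can a user ever acquire? The roles and permissions below are invented; "senior" roles inherit the permissions of the junior roles they dominate.

    from collections import deque

    hierarchy = {            # role -> junior roles it dominates (assumed example)
        "admin": ["engineer", "auditor"],
        "engineer": ["employee"],
        "auditor": ["employee"],
        "employee": [],
    }
    perms = {"employee": {"read"}, "engineer": {"deploy"}, "auditor": {"audit"}}
    user_roles = {"alice": ["admin"], "bob": ["auditor"]}

    def authorized_perms(user):
        """Collect permissions reachable through the role hierarchy (BFS)."""
        seen, granted = set(), set()
        queue = deque(user_roles.get(user, []))
        while queue:
            role = queue.popleft()
            if role in seen:
                continue
            seen.add(role)
            granted |= perms.get(role, set())
            queue.extend(hierarchy.get(role, []))
        return granted

    print(authorized_perms("alice"))              # {'read', 'deploy', 'audit'}
    print("deploy" in authorized_perms("bob"))    # False: a safety-style check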

Graphical Authentication Scheme

Area: Identification, Authentication and Privacy
Personnel: Yefei Li (PI)
Currently, most picture-based authentication methods are vulnerable to shoulder-surfing attacks. The only project known so far to counter this problem is Graphical Password by L. Sobrado at Rutgers University, but their scheme still has a drawback: it needs to store the password in plaintext (or in some representation of plaintext). With some modification, our scheme can counter shoulder-surfing attacks without saving the password in plaintext. Our scheme is also resistant to statistical attack, even if the attacker has an unlimited ability to shoulder-surf the user's login sessions.

View PDF

Intrusion Detection System Research Group

Area: Incident Detection, Response, and Investigation
Personnel: IDS Group, Blake Matheny
Intrusion detection systems are becoming standard as a protection device for large companies. The market for such systems is large and growing; however, there is as yet no rigorous testing method to determine their effectiveness. This poster presents some of the previous work and suggests improvements.

View PDF

Implementation of Hand Geometry in Recreational Sports Center

Area: Identification, Authentication and Privacy
Personnel: Chris Hernandez (PI), Stephen Elliott, M. Huddard

View PDF

Information Security Management

Area: Risk Management, Policies and Laws
Personnel: M. R. Averbukh, J. R. Kitchel, M. J. Dark, Ph.D.

View PDF

K-12 Outreach

Area: Security Awareness, Education and Training
Personnel: Teri Schmidt (PI)
The CERIAS K-12 Outreach Program's mission is threefold: to increase the security of K-12 information systems, to integrate information security as a discipline into the K-12 curriculum, and to raise parent and community awareness of information security issues through K-12 schools. Building upon existing work, the program will continue to grow and fulfill its mission through collaboration with K-12 schools in the state of Indiana, outreach entities on the Purdue University campus and nationwide, Indiana Service Centers, and CERIAS sponsors.

Key Pre-Distribution Schemes for Wireless Sensor Networks

Area: Enclave and Network Security
Personnel: Zhen Yu (PI)
Sensor networks pose security and privacy challenges when deployed for various distributed control and monitoring applications. Encrypting sensor communications is required, especially in a hostile environment; the challenge is how to bootstrap secure communications among sensors. Compared to general key management schemes, key pre-distribution is a promising approach. We investigated some existing key pre-distribution schemes, proposed a framework for these schemes, and defined metrics useful for their design and evaluation in wireless sensor networks.
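
The best-known scheme in this class, random key pre-distribution (Eschenauer and Gligor), can be sketched briefly; this is our own toy rendering, and the pool and ring sizes are illustrative assumptions. Each sensor is pre-loaded with a random subset ("key ring") of a large key pool, and two neighbors can communicate securely exactly when their rings intersect.

    import random

    POOL_SIZE, RING_SIZE, NODES = 1000, 50, 200
    pool = range(POOL_SIZE)
    rings = [set(random.sample(pool, RING_SIZE)) for _ in range(NODES)]

    def shared_key(i, j):
        common = rings[i] & rings[j]
        return min(common) if common else None   # any agreed key id works

    # Estimate connectivity: fraction of node pairs sharing at least one key.
    pairs = [(i, j) for i in range(NODES) for j in range(i + 1, NODES)]
    connected = sum(1 for i, j in pairs if shared_key(i, j) is not None)
    print(f"pairwise connectivity: {connected / len(pairs):.2f}")  # ~0.93 expected

The design trade-off the metrics capture: larger rings raise connectivity but also raise the fraction of links an attacker compromises by capturing one node's keys.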

View PDF

Metrics Based Security Assessment (MBSA): Combining the ISO 17799 Standard with the Systems Security

Area: Security Awareness, Education and Training
Personnel: Jim Goldman (PI), V.R. Christie
This research introduces the Metrics Based Security Assessment (MBSA) as a means of measuring an organization's information security maturity. It argues that the historical (i.e., first- through third-generation) approaches used to assess and ensure system security are not effective, and it therefore combines the strengths of two industry-proven information security models, the ISO 17799 standard and the Systems Security Engineering Capability Maturity Model (SSE-CMM), to overcome their inherent weaknesses. Furthermore, the authors expect that the use of information security metrics will enable information security practitioners to measure their efforts in a more consistent, reliable, and timely manner. Such a solution will allow a more reliable qualitative measurement of the return achieved through given information security investments. Ultimately, the MBSA will give professionals an additional, more robust self-assessment tool for answering management questions such as: "How secure are we?"

View PDF

National Plant Diagnostic Network

Area: Incident Detection, Response, and Investigation
Personnel: Eileen Luke (PI), Jim Pheasant
Following September 11, America's attention and resources were refocused on homeland security. While emphasizing the security of facilities such as airports, tourist attractions, and major public buildings, Congress also recognized the vulnerability of the nation's agricultural systems. On June 12, 2002, the President signed into law the Agricultural Bioterrorism Protection Act of 2002. The Act covers both animal and plant production and directed the Secretary of Agriculture to develop a network linking plant and animal disease diagnostic facilities across the country. The National Plant Diagnostic Network (NPDN) focuses on the plant disease and pest aspects of the program.

The NPDN's mission is to enhance national agricultural security by quickly detecting introduced pests and pathogens. This will be achieved by creating a functional nationwide network of public agricultural institutions with a cohesive, distributed system to quickly detect deliberately introduced, high-consequence biological pests and pathogens in our agricultural and natural ecosystems, providing the means for quick identification and establishing protocols for immediate reporting to appropriate responders and decision makers. The network comprises Land Grant University plant disease and pest diagnostic facilities across the United States. Lead universities have been selected and designated as regional centers representing five regions across the country: Cornell University, Michigan State University, Kansas State University, the University of Florida at Gainesville, and the University of California at Davis.

The National Agricultural Pest Information System (NAPIS), located at Purdue University, has been designated as the central repository for archiving select data collected from the regions. NAPIS maintains information from the Cooperative Agricultural Pest Survey (CAPS), a network of state agricultural organizations and universities that survey for invasive species. As part of the NPDN, NAPIS will expand its collection of data on plant diseases and other pests. The system will provide a national perspective on agricultural pests through dynamic maps and reports of plant pest distribution. Currently the pest information system houses 1.3 million records on more than 3,800 organisms, and that number will grow significantly as the plant diagnostic network centers start feeding information into the national database in spring 2004.

The establishment of the network will ensure that all participating Land Grant University diagnostic facilities are alerted to possible outbreaks and introductions and are technologically equipped to rapidly detect and identify pests and pathogens. This will be accomplished by establishing an effective communication network among regional experts, developing harmonized reporting protocols with the national diagnostic network participants, and cataloging pest and disease occurrences for inclusion in the national database. The NPDN national database at Purdue University will provide summary reports, distribution maps, pattern analysis, and data sets for use in other studies.

View PDF

Natural Language Information Assurance and Security

Area: Identification, Authentication and Privacy
Personnel: Victor Raskin (PI), Katrina Triezenberg

Network-based Integrated Attribution for Multistage Attacks

Area: Incident Detection, Response, and Investigation
Personnel: Wei Wang
In this project we aim to develop integrated attribution techniques to find the origin of multi-stage attacks. By combining both passive and active approaches, we will merge multiple solutions against origin concealment into a single framework. In the first step, we will develop flexible data correlation techniques and an upgradable language-based architecture. Furthermore, we will apply data fusion and efficient data reduction techniques to explore correlation information for complex attack scenarios.

View PDF

Privacy Preserving Data Mining

Area: Identification, Authentication and Privacy
Personnel: E. Bertino, I. Nai Fovino and L. Parasiliti Provenza
Securing data against intruders who attack implicit sensitive information is an open research problem. In order to make a publicly available system secure, we must ensure not only that private sensitive data have been trimmed out, but also that certain inference channels have been blocked. Moreover, the need to make a system as open as possible, to the degree that data sensitivity is not jeopardized, calls for techniques that account for the disclosure control of sensitive data. In this context, a European project (CODMINE) was developed with the following objectives: (a) investigation of new techniques for privacy-preserving data mining; (b) implementation and testing of newly proposed techniques for privacy-preserving data mining; and (c) specification of an evaluation framework in which to compare all the techniques on a common platform.

View PDF

Privacy Preserving Data Mining on Vertically Partitioned Data

Area: Cryptology and Rights Management
Personnel: Jaideep Vaidya, Chris Clifton
Jaideep Vaidya is a Ph.D. candidate working with Prof. Chris Clifton in the Department of Computer Sciences at Purdue University. He received his B.E. degree in Computer Engineering from the University of Mumbai, India in 1999 and his M.S. degree from Purdue in 2001. His research interests lie in the areas of data mining, databases and security.

View PDF

Privacy-Preserving Distributed Data Mining on Horizontally Partitioned Data

Area: Identification, Authentication and Privacy
Personnel: Murat Kantarcioglu, Chris Clifton

View PDF

Proactive Defenses Against DDoS and Worm Attacks

Area: Enclave and Network Security
Personnel: Kihong Park (PI), Hyojeong Kim, Ali Selcuk, Bhagya Bethala, Humayun Khan, Wonjun Lee

View PDF

Promoting Memorability and Security of Passwords Through Sentence Generation

Area: Identification, Authentication and Privacy
Personnel: Bik-Lam Belin Tai, Kim-Phuong L. Vu, Abhilasha Bhargav, and Robert W. Proctor (PI)
Much of our personal information flows through the Internet as we receive online services or make online transactions, making it essential to have reliable security systems to protect against information theft, denial of service, and fraud. The most commonly used method for authentication and identification is the username-password combination. However, this method is weak because users select passwords that are easy to remember and, consequently, easy to crack. We evaluated a sentence generation method designed to improve recall and security. The sentence generation method produced crack-resistant passwords when users were instructed to embed a digit and a special character into the sentence (and password). However, the requirement of including a digit and special character also came at a cost in the memorability of the password. An analysis of errors identified three areas of research that may yield techniques promoting better recall of passwords generated with this method.
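
A hedged sketch of a sentence-generation mnemonic, as a generic illustration of the technique (the study's exact instructions may differ): the user invents a sentence, keeps the first letter of each word, and embeds a digit and a special character to harden the result.

    def sentence_password(sentence, digit="4", special="!"):
        initials = "".join(word[0] for word in sentence.split())
        # embed the digit and special character mid-string rather than at the
        # end, since predictable placement is easier to crack
        mid = len(initials) // 2
        return initials[:mid] + digit + special + initials[mid:]

    phrase = "my dog Rex chases squirrels every single morning"
    print(sentence_password(phrase))   # mdRc4!sesm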

View PDF

Quantifying the Vulnerability Likelihood of Software Artifacts

Area: Assurable Software and Architectures
Personnel: Rajeev Gopalakrishna
To Err is Human: as long as programming involves human activity, software vulnerabilities will exist. Mistakes leading to vulnerabilities are repeated often. Certain types of mistakes are more common than others. An analysis of vulnerabilities in widely deployed software artifacts suggests that mistakes in coding lead to nearly 72 percent of the vulnerabilities. While the areas of software engineering, software quality, and software reliability have been studied extensively over the last three decades, they are mostly concerned with assuring the usability of software under normal conditions. Accidental failures resulting from accidental faults are their subject of concern and not malicious attacks resulting from vulnerability exploits. Software security has to deal with both accidental and malicious failures resulting from faults introduced either accidentally or deliberately. It is this differentiating factor of "intention" that makes software security a challenging task.

A software vulnerability is an instance of an error in the specification, development, or configuration of software such that its execution can violate the security policy. As such, any assurance about the security properties of a software artifact should ultimately translate into a quantitative assessment of the vulnerabilities in that artifact. Detecting the presence of vulnerabilities in a software artifact is therefore important, and there are several checkers for doing so. However, such checkers approach the detection of vulnerabilities in a binary manner: a vulnerability is either present or absent. This binary treatment, coupled with the fact that the checkers make approximations to keep the analysis decidable, tractable, and scalable (program analysis in languages such as C, C++, and Java is undecidable in the general case), leads to both false negatives and false positives. Furthermore, a vulnerability detected by such checkers can be classified as a false positive only by manual verification (by auditing the source code, for instance), unless *all* the vulnerabilities present in the analyzed program are known a priori. More importantly, such checkers look only for specific program properties that lead to vulnerabilities. They generally ignore aspects of syntax and style and focus only on the semantics of a program.

Given that program analysis is only approximate, it is essential to perceive vulnerability detection in a probabilistic manner. Also, because programming is still primarily a human activity, factors of syntax and style affect the psychological complexity of programming which is widely believed to influence the occurrence of errors leading to faults (or vulnerabilities) and failures. Furthermore, if the past is any indicator of the future, statistical analysis of the historical data on the vulnerabilities in a software artifact may help predict future trends of the vulnerabilities in the software artifact. However, there is currently no technique or methodology to probabilistically characterize the "vulnerability likelihood" of a software artifact by considering historical data and measuring different aspects of syntax, semantics, and style. Generating such measurements would provide a novel way of quantitatively assessing software assurance. And suggesting ways to improve upon the obtained metrics could result in software of better quality and higher assurance.

View PDF

Rights Protection for Relational Data and Sensor Streams

Area: Cryptology and Rights Management
Personnel: Radu Sion (PI), Mike Atallah, Sunil Prabhakar
Information, as an expression of knowledge, is probably the most valuable asset of humanity today. By enabling relatively cost-free, fast, and accurate access channels to information in digital form, computers have radically changed the way we think and express ideas. As increasingly more of it is produced, packaged, and delivered in digital form in a fast, networked environment, one of its main features threatens to become its worst enemy: zero-cost verbatim copies. The inherent ability to produce duplicates of digital Works at virtually no cost can now be misused, e.g., for illicit profit. This dramatically increases the need for effective rights protection mechanisms. Different avenues are available, each with its advantages and drawbacks. Enforcement by legal means is usually ineffective unless augmented by a digital counterpart such as Information Hiding. Digital Watermarking deploys Information Hiding as a method of Rights Protection to conceal an indelible "rights witness" (watermark) within the digital Work to be protected. The soundness of such a method relies on the assumption that altering the Work in the process of hiding the mark does not destroy the value of the Work, and that it is difficult for a malicious adversary (Mallory) to remove or alter the mark beyond detection without destroying the value of the Work. The ability to resist attacks from such an adversary (mostly aimed at removing the embedded watermark) is one of the major concerns in the design of a sound watermarking solution.

With the notable exception of software watermarking, the overwhelming majority of research efforts have been invested in the framework of multimedia data (e.g., images, video, and audio). In this work, I analyze digital watermarking from a higher-level, domain-independent perspective. I propose a theoretical model [Sion et al., IEEE ITCC 2002] and in [Sion et al., SPIE 2004] ask: are there any limitations to what watermarking can do? What are they, and when can they be reached? I then propose, design, and analyze watermarking solutions for (i) numeric sets [Sion et al., IWDW 2002], (ii) numeric relational data [Sion et al., SIGMOD 2003, ICDE 2004], (iii) categorical data [Sion, ICDE 2004], (iv) streams [Sion et al., under review], and (v) semi-structures [Sion et al., IWDW 2003, NSF EIA-9903545]. I have also explored the ability to hide information in natural language text [Atallah et al., IHW02, Springer-Verlag] and developed a text tamper-proofing proof-of-concept [Naval Research grant N00014-02-1-0364/2002].
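
To convey the flavor of keyed watermarking for numeric relational data, here is a hedged sketch of our own, in the spirit of such schemes rather than the author's exact algorithm: a secret key decides which tuples carry a mark and which bit value they hide, so Mallory cannot strip the mark without guessing the key or damaging the data's utility. GAMMA and the key are illustrative parameters.

    import hmac, hashlib

    KEY = b"owner-secret"
    GAMMA = 3        # roughly 1 tuple in GAMMA is marked (assumed parameter)

    def keyed(pk, salt):
        digest = hmac.new(KEY, f"{salt}:{pk}".encode(), hashlib.sha256).digest()
        return int.from_bytes(digest[:4], "big")

    def embed(rows):
        """rows: list of (primary_key, integer_value); returns a marked copy."""
        out = []
        for pk, value in rows:
            if keyed(pk, "select") % GAMMA == 0:      # the key picks the tuple
                bit = keyed(pk, "bit") & 1            # the key picks the bit
                value = (value & ~1) | bit            # hide it in the LSB
            out.append((pk, value))
        return out

    def detect(rows):
        matches = total = 0
        for pk, value in rows:
            if keyed(pk, "select") % GAMMA == 0:
                total += 1
                matches += (value & 1) == (keyed(pk, "bit") & 1)
        return total and matches / total              # near 1.0 if marked

    data = [(i, 1000 + 7 * i) for i in range(100)]
    print(f"detection score: {detect(embed(data)):.2f}")   # 1.00 on marked data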

View PDF

Secure Interoperation in a Multi-Domain Environment

Area: Identification, Authentication and Privacy
Personnel: Basit Shafiq, James Joshi, Elisa Bertino, Arif Ghafoor
With the increase in information and data accessibility, there is a growing concern for the security and privacy of data. Numerous studies have shown that unauthorized access, in particular by insiders, constitutes a major security problem for enterprise application environments, highlighting the need for robust access control management systems. This problem can be magnified in a collaborative environment where distributed, heterogeneous, and autonomous organizations interoperate with each other. Collaboration in such a diverse environment requires integration of local policies to compose a global security policy for controlling information accesses across multiple domains. Integration of the security policies local to the collaborating domains entails various challenges regarding reconciliation of semantic differences, secure interoperability, containment of risk propagation, and policy management. An access control model that can uniformly represent the policies of the individual domains is desirable. Such a model should allow interoperation and information sharing among multiple domains while guaranteeing that inter-domain data accesses do not violate the underlying policies of the constituent domains.

View PDF

Secure Outsourcing of Some Applications

Area: Identification, Authentication and Privacy
Personnel: Jiangtao Li, Mike Atallah

View PDF

Secure Spread: Providing a Secure Infrastructure for Collaborative Applications

Area: Trusted Social and Human Interactions
Personnel: Y. Amir, Y. Kim, C. Nita-Rotaru (PI), J. Stanton, Gene Tsudik

View PDF

Skin and Environmental Temperature Effects on Fingerprint Image Quality

Area: Identification, Authentication and Privacy
Personnel: Jeremy M. Morton, Nathan C. Sickler, S.J. Elliott, Ph.D.

View PDF

Social use and encryption practices on wireless (802.11) networks in Lexington, Kentucky. A GPS/GIS

Area: Trusted Social and Human Interactions
Personnel: Sorin Matei (PI)
The present study analyzes the early-stage diffusion of 802.11 networks in a relatively small and self-contained American urban area: Lexington, Kentucky. Our methodology takes advantage of a novel method of monitoring and locating wireless networks, which combines GPS location of wireless access points, Geographic Information Systems (GIS) identification of the land parcels on which the access points are found, and data provided by the US Census Bureau about the neighborhoods in which the networks are found. The goal of the study is to estimate the relative proportion of residential vs. business networks and the patterns of geographic diffusion, and to explain, at the macro level of analysis, using neighborhoods as cases, what socio-demographic factors account for the emergence of such networks. Preliminary results indicate that of the 776 wireless access points identified, 66% were located on residential, 21% on business, and 13% on publicly owned land parcels. We also found that more than two-thirds of all the networks identified are "open", not using WEP encryption. In addition, there are only small differences in encryption rates among residential, business, and public networks. Although, at a rate of 24%, business networks are the most likely to be encrypted, this is not statistically different from the encryption rates observed on residential (18%) or public (17%) networks.

Classic and spatial statistical analysis results indicate that, overall, access points tend to be more numerous in areas where a larger proportion of the residential population works in a professional field (law, medicine, education, etc.) and in the mixed-use areas, situated closer to the urban core.

Analyzing the same issue by looking separately at the specific predictor factors for residential, business or public access points we found that all three are more likely to be present where single households are more prevalent or where family households are scarcer (relatively speaking). In addition, residential access points are more likely to be found in suburban neighborhoods, business networks in the downtown areas, and public networks in higher population density neighborhoods.

Re-examining the encryption issue at the neighborhood level, we found that encrypted networks are more likely to occur in the higher populated, downtown areas.

One of the most interesting findings is that wireless networks, like other "hip" technologies, live up to their fame as "agents of freedom," since single households are more likely to adopt them. Singles are culturally more in tune with the idea of "personalized technologies" and enjoy fashionable devices, since they are younger and have more disposable income. The presence of business and public networks in the higher-density, central areas of the city suggests, on the other hand, that newer communication technologies tend to concentrate in the urban core, which is indirect evidence that space has not "died," as some researchers suggest, and that geographic clustering still matters. Environments with substantial pre-existing business and financial investment are still preferred for the deployment of newer technologies.

These findings are tentative and give only a glimpse into how wireless network technologies evolve. We will pursue this line of research in new locations and with improved sampling methodologies. More importantly, we will repeat the monitoring exercise in Lexington to better understand the evolution of this technology over time.

View PDF

The Effects of Varying Illumination Levels on FRS Algorithm Performance

Area: Identification, Authentication and Privacy
Personnel: Eric Kukula (PI), Stephen Elliot

View PDF

The Embedded Sensors Project

Area: Incident Detection, Response, and Investigation
Personnel: Keith Watson, Ali Yilmaz Kumcu, Sarika Agarwal, Christopher Kois
The Embedded Sensors Project is an advanced research project in intrusion detection. Internal sensors are placed at critical points inside the operating system, network stack, and application source code to detect attacks. The sensors are relatively small pieces of source code that detect attacks against the system. Information collected by each sensor is passed to an internal sensor support framework for logging and reporting. Sensors are difficult for attackers to circumvent, are tamper-resistant, provide both host and network attack detection, consume few system resources, and provide real-time detection. We plan to expand upon the initial research efforts and make the source code available to other researchers and experimenters. We also plan further research into designing sensors that detect unknown attacks. To demonstrate the usefulness of the embedded sensor concept, we plan to make the sensor support framework very portable so that other systems and applications can incorporate sensors.
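
A hedged sketch of the internal-sensor idea (our illustration; the project instruments C systems, not Python): a few lines of detection logic embedded at a critical point in the application itself, reporting to a shared logging framework instead of running as a separate monitor process.

    import logging

    logging.basicConfig(level=logging.WARNING)
    framework = logging.getLogger("sensor-framework")

    def sensor(check, message):
        """Wrap a critical function with an embedded detection check."""
        def wrap(fn):
            def guarded(*args, **kwargs):
                if check(*args, **kwargs):
                    framework.warning("sensor fired in %s: %s", fn.__name__, message)
                return fn(*args, **kwargs)
            return guarded
        return wrap

    @sensor(lambda path: ".." in path, "possible directory traversal")
    def read_config(path):
        with open(path) as f:        # the protected critical operation
            return f.read()

    # read_config("../../etc/passwd") would log a warning before proceeding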

View PDF

The Poly^2 Project

Area: Assurable Software and Architectures
Personnel: Keith Watson
The Poly^2 Project is a research project in security architecture. The goal of this project is to secure critical network services and to provide reliability and redundancy for those services. For the initial design, we applied well-established security design principles to achieve these goals. The design incorporates separation of network services onto multiple computing systems and strict control of the information flow between the systems and networks. This allows us to build reliability and redundancy into the platform while increasing overall trust. Additionally, we create minimized, application-specific operating systems: each operating system provides only the minimum set of services and resources needed to support a specific application or network service. This customization increases the difficulty of attacking and compromising the system. To manage the individual systems and services in this design, a management system will be created to allow administrators to quickly provision new and additional network services. In conjunction with the SODA project, we are proposing a new research project to analyze various computing architectures and their strengths in different contexts.

View PDF

The Psychology of Hackers and Virus Writers: A Comparative Analysis

Area: Risk Management, Policies and Laws
Personnel: Marc Rogers (PI), Sarah Gordon, Jia Liu, Natalie Dove-Smoak, Erin Johnson
This study was exploratory and designed to add to the growing body of knowledge in the field of deviant computer behavior analysis. The study examined certain characteristics of individuals engaged in virus writing/releasing and hacking behaviors. Introductory psychology students took part in the study as part of their course requirements for experimental credits (N = 283). The students filled out a battery of self-report questionnaires that measured their frequency of deviant computer activity (Rogers, 2001), big-five factor traits (Goldberg, 1992), exploitive manipulative behavior (Altemeyer, 1995), and morality (Hladkyj, 2002). The general finding was that there were significant differences in certain personality traits and characteristics among self-reported virus writers, hackers, and the general public. Four of the five hypotheses were supported. Additional exploratory analyses and the implications for future research are also discussed.

View PDF

The Reverse Loophole of HIPAA Security Compliance

Area: Risk Management, Policies and Laws
Personnel: Gram Ludlow (PI), Rich Skinner, Dan Hadaway
This project investigates the relationship between large and small healthcare providers in relation to the HIPAA security rule. Contained within the rule is a provision that, to be HIPAA compliant, a healthcare provider must ensure that all parties the organization does business with are also compliant. The deadline for compliance for large providers is April 2005, and the deadline for small clinics is April 2006. These two factors create a "reverse loophole" effect: for large entities to comply and continue their current business practices, they must somehow convince the small clinics they do business with to become HIPAA compliant a year earlier than those clinics are otherwise obligated. Our project explores the dynamics of this relationship in order to develop means for large healthcare providers to address this potential problem. When the security rule was finalized, additional requirements were placed on healthcare providers; without a plan or method for addressing non-compliant business partners, large healthcare organizations are at risk for at least the one-year window. One of the goals of this project is to find a way to reduce or eliminate that risk. The research team will interview officials from two hospitals and several small clinics, including CIOs, IT managers, compliance officers, doctors, practice managers, and end users. That information, combined with the knowledge of HIPAA possessed by the consulting firm infotex, will be used to create a report that will be presented to healthcare organizations.

View PDF

The State of Risk Assessment Practices in Information Security: An Exploratory Investigation

Area: Risk Management, Policies and Laws
Personnel: Jackie Rees (PI), Jonathan P. Allen
Risk in Information Systems Security can be defined as a function of a given threat source's exercising a particular vulnerability, and the resulting impact of that adverse event on the organization. Risk management is the process of identifying and assessing risk and taking steps to reduce it to an acceptable level, given the costs involved in doing so. The objective of this research is to assess the current state of practice in conducting risk assessments for information security policy management and to make recommendations to managers to improve their risk assessment activities. We also wish to provide researchers with direction for areas requiring further study. Results from the exploratory survey of U.S. headquartered firms indicate that while overall, respondents are not completely dissatisfied with their risk assessment choices, better models and techniques for risk assessment for information security planning and management are required.

View PDF

The United States Military Academy

Area: Security Awareness, Education and Training
Personnel: Aaron J. Ferguson (PI)

View PDF

Transactional Data Analysis and Visualization Lab

Area: Trusted Social and Human Interactions
Personnel: Chris Clifton, Ananth Iyer, Reha Uzsoy, Richard Cho, Wei Jiang, Eirik Herskedal, Murat Kantarcioglu, Jaideep Vaidya
Sharing capacity across decentralized service providers can lead to improvements in logistics efficiency. However, sharing the information needed to determine whether capacity can be shared poses interesting problems. We focus on a problem faced by independent trucking companies that have separate pickup and delivery tasks: the identification of potential efficiency-enhancing task swaps between these companies. A second goal is to limit the information the parties must reveal in order to identify these swaps. By integrating state-of-the-art techniques from cryptography into an operations context, we present an algorithm that finds opportunities to swap loads without revealing any information except the loads swapped, along with proofs of its security. We then apply this algorithm to an empirical dataset from a large transportation company and present results suggesting significant opportunities to improve efficiency through Pareto-improving swaps.
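
A hedged sketch of the matching step only, as our own illustration: find Pareto-improving swaps, where each company's cost strictly drops. The secure protocol in the poster computes this without revealing the cost tables; here the cryptography is omitted and plain values are compared, and the costs are invented.

    # Cost tables are toy values; in the real protocol these stay private.
    cost_a = {"p1": 700, "p2": 300, "q1": 250, "q2": 900}   # company A's costs
    cost_b = {"p1": 350, "p2": 800, "q1": 600, "q2": 450}   # company B's costs
    a_tasks, b_tasks = {"p1", "p2"}, {"q1", "q2"}           # current assignments

    def pareto_swaps():
        swaps = []
        for ta in a_tasks:
            for tb in b_tasks:
                # trading ta for tb must strictly lower both parties' costs
                if cost_a[tb] < cost_a[ta] and cost_b[ta] < cost_b[tb]:
                    swaps.append((ta, tb))
        return swaps

    print(pareto_swaps())   # [('p1', 'q1')]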

View PDF

Trust-X: a Trust Negotiation Proposal

Area: Identification, Authentication and Privacy
Personnel: E. Bertino, E. Ferrari, A. C. Squicciarini
Trust negotiation is a promising approach for establishing trust in open systems like the Internet, where sensitive interactions may often occur among entities with no prior knowledge of each other. In this poster we present the work we have done in the trust negotiation research area. In particular, we give an overview of the system we have designed, named Trust-X. Trust-X is a comprehensive XML framework designed to efficiently support trust negotiations. The framework takes into account all aspects of a negotiation, from the specification of the profiles and policies of the involved parties to the determination of the strategy for succeeding in the negotiation. Key features of the system are efficient support for a number of approaches to carrying out a negotiation, as well as techniques for protecting privacy between the negotiating parties.

View PDF

Use of Information Technology by Primary Care Physicians in the U.S.

Area: Trusted Social and Human Interactions
Personnel: James Anderson, Marilyn Anderson
The objective of this study was to estimate the current and future level of use of information technology by primary care physicians in the U.S. Primary care physicians listed by the American Medical Association were contacted by e-mail and asked to complete a Web-based survey; a total of 2,145 physicians responded. Primary care physicians reported their use of electronic medical records, e-prescribing, point-of-care decision support tools, and electronic communication with patients. They also indicated their perceptions of the benefits of and barriers to implementation of information technology (IT) in their practices. These data were used to construct a computer simulation model, using STELLA, to predict physicians' future implementation of IT. The model predicts that primary care physicians are likely to implement electronic prescribing rapidly but are likely to be slower in adopting electronic communication with their patients.

View PDF

User Re-Authentication via Mouse Movements

Area: Incident Detection, Response, and Investigation
Personnel: Maja Pusara (PI), Carla Brodley
We present an approach to user re-authentication based on data collected from the computer's mouse device. Our underlying hypothesis is that one can successfully model user behavior on the basis of user-invoked mouse movements. Our implemented system raises an alarm when the current behavior of user X deviates sufficiently from the learned "normal" behavior of user X. To learn a model of normal behavior we explore supervised learning (learning to discriminate among the behaviors of the k users of the computer). Our empirical results show that although individuals who use the same set of applications can be differentiated via their mouse behavior, analyzing mouse movements alone is not sufficient for a stand-alone intrusion detection system. We conclude that user re-authentication via mouse movements can serve as a first line of defense in the hierarchical structure of an intrusion detection system.
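
A hedged sketch of the feature-extraction step such a detector needs (our illustration; the poster's exact features are not listed): summarize raw mouse events over an observation window into per-window statistics that a supervised learner can consume.

    import numpy as np

    def mouse_features(events):
        """events: list of (t, x, y) samples from one observation window."""
        t, x, y = (np.array(col, dtype=float) for col in zip(*events))
        dt = np.diff(t)
        dist = np.hypot(np.diff(x), np.diff(y))
        speed = dist / np.where(dt > 0, dt, np.nan)
        return {
            "mean_speed": np.nanmean(speed),
            "speed_var": np.nanvar(speed),
            "pause_ratio": np.mean(dist == 0),   # fraction of idle steps
            "path_length": dist.sum(),
        }

    window = [(0.00, 10, 10), (0.05, 14, 12), (0.10, 14, 12), (0.15, 25, 30)]
    print(mouse_features(window))
    # Feature vectors from many windows, labeled per user, then feed a
    # classifier that discriminates among the k users of the machine.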

View PDF

Whodunnit? - An intrusion analysis system

Area: Incident Detection, Response, and Investigation
Personnel: Sundararaman Jeyaraman, Mike Atallah
Intrusion analysis has traditionally been a very arduous and largely manual task. The reasons vary from insufficient logs to a lack of proper tools for analysing the available audit logs. In our work, we hope to build an intrusion analysis system that can efficiently provide precise answers to the questions most commonly asked by system administrators investigating an intrusion. We take the view that the failings of today's intrusion analysis tools can be largely attributed to insufficient auditing. In our work, we capture all the necessary dependency relationships between system events so that precise intrusion analysis is possible.

View PDF

WLAN Security Solution Selection Matrix

Area: Enclave and Network Security
Personnel: Anthony Scheller (PI), P.T. Rawles, J.E. Goldman
The purpose of this project was to create a Security Solution Selection Matrix for WLAN implementations using Luehrman's Real Options methodology. The methodology was used to evaluate the effectiveness of WLAN security techniques in the following environments: Small Office/Home Office (SOHO)/home, corporate, and public access. The matrix allows the security techniques to be compared with one another so that a WLAN user or network administrator can select the appropriate technique, or combination of techniques, to implement based on the individual's environment. Through this project, a method was developed for applying Real Options to a technology-assessment paradigm; this poster presents that approach.

View PDF

X-GTRBAC: An XML-based Framework for Distributed Access Control

Area: Identification, Authentication and Privacy
Personnel: Rafae Bhatti, James B.D. Joshi, Elisa Bertino, Arif Ghafoor
This research presents X-GTRBAC, a framework for distributed access control in open systems. The X-GTRBAC framework is based on an XML-based Generalized Temporal extension of the well-known Role Based Access Control model, and it provides fine-grained, context-aware access control. A salient feature of the framework is that it allows expressing context-based constraints on resource accesses in distributed systems using an XML-based grammar specification called X-Grammar, which is derived from the well-defined semantics of the underlying formal model. Hence, the framework enables efficient and secure interoperation of heterogeneous domains across an enterprise. It supports decentralization of policy administration tasks through the X-GTRBAC Admin model. A prototype implementation of X-GTRBAC v1.2 is available. Related publications can be found at: http://web.ics.purdue.edu/~bhattir/academics/main.htm

View PDF

XML security

Area: Identification, Authentication and Privacy
Personnel: E. Bertino, E. Ferrari, B. Carminati and L. Parasiliti Provenza
The increasing ability to interconnect computers through internetworking, and the use of the Web as the main means of exchanging information, have sped up the development of a new class of information-centered applications focused on the dissemination of XML data. Since the data managed by an information system are often highly strategic and sensitive, a comprehensive framework is needed for ensuring the main security properties when disseminating XML data. In our poster, we provide an overall overview of both the model and the system provided by Author-X for securing XML data, conceived especially for the push dissemination mode. Our model allows the specification of both access control and signature policies, in order to satisfy confidentiality, integrity, and authenticity requirements for both the receiving subjects and the information owners. The Author-X system provides ad hoc techniques for enforcing the specified security policies.