Assured Identity and Privacy

There is a tension between the increased confidence and granularity of authorization that better identification of online entities provides and the need to protect the privacy rights of individuals and organizations. This area includes research in role-based access control (RBAC), biometrics, pervasive surveillance (“Panoptic Effects”), privacy-protecting transformations of data, privacy-preserving data mining methods, privacy regulation (e.g., HIPAA and COPPA), oblivious multiparty computation, and trusted proxy research.

The Cloud’s DNA

Principal Investigator: John A. Springer, Ph.D.

When conducting research, life scientists rely heavily on clinically annotated specimens, and the most thorough and effective clinical annotations contain information found in the electronic health records (EHRs) of the human subjects participating in the scientists’ studies. The primary piece of legislation pertinent to EHRs is the Health Insurance Portability and Accountability Act (HIPAA, 1996). To protect the privacy of human subjects, HIPAA dictates differing levels of access to the information found in EHRs based on the roles that researchers play in a particular study; these levels vary from full access (including protected health information) to very limited (i.e., public) access. In the case of public access, the data must be de-identified according to criteria elucidated in the HIPAA legislation, and some of these criteria are stated in a general fashion to reflect the fluid nature of modern science. Because of these ambiguities, the complex measures often necessary to de-identify protected health information, and the risk of litigation and lost reputation, scientists rarely share their de-identified annotated data beyond their current study.
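
As an illustration of what de-identification involves in practice, here is a minimal sketch of Safe Harbor-style field suppression and generalization. The field names, the identifier list, and the generalization rules are simplified assumptions for illustration, not a complete implementation of HIPAA’s criteria.

```python
# Minimal sketch of Safe Harbor-style de-identification: suppress
# direct identifiers and generalize quasi-identifiers before release.
# Field names and the identifier list are illustrative; HIPAA's Safe
# Harbor rule enumerates 18 identifier categories in full.

DIRECT_IDENTIFIERS = {"name", "ssn", "mrn", "email", "phone", "address"}

def deidentify(record: dict) -> dict:
    out = {}
    for field, value in record.items():
        if field in DIRECT_IDENTIFIERS:
            continue  # drop direct identifiers entirely
        if field == "zip":
            out[field] = value[:3] + "00"  # keep only the 3-digit ZIP prefix
        elif field == "age":
            out[field] = min(int(value), 90)  # top-code ages over 89
        else:
            out[field] = value
    return out

sample = {"name": "J. Doe", "zip": "47907", "age": "92", "diagnosis": "T2DM"}
print(deidentify(sample))  # {'zip': '47900', 'age': 90, 'diagnosis': 'T2DM'}
```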

Unfortunately, this lack of sharing negatively impacts the reuse of experimental data beyond its current context, and in turn, this lack of reuse can adversely affect the translational impact of basic life sciences. In contrast to this constricting approach to the management of clinical annotations is the move in computing toward the “Cloud” wherein data are stored for easy retrieval and sharing. In our current study, we are surveying life scientists to ascertain their perceptions of a cloud-based approach to the management of their annotated data.

Health Insurance Portability and Accountability Act of 1996 (HIPAA). (1996). Retrieved July 10, 2009, from http://www.cms.hhs.gov/HIPAAGenInfo/Downloads/HIPAALaw.pdf

Undergraduate Research Project:

The undergraduate student will conduct a comprehensive literature review and analyze the large data repositories frequently used in the life sciences. One example is the Susan G. Komen Virtual Tissue Bank, the only repository in the world for normal breast tissue and matched serum, plasma, and DNA. Studying normal tissue accelerates research into the causes and prevention of breast cancer: to understand the evolution of the disease more deeply, it is necessary to compare abnormal, cancerous tissue against normal, healthy tissue. Student research projects include:

- Characterization of how these large data repositories handle the sensitivity and privacy of the information they store.
- Best practices for designing proteomic, genomic, and metabolomic databases to enable data sharing and reuse while managing privacy and security requirements.

Biometric Information Lifecycle Framework

Principal Investigators: Stephen Elliott; Shimon Modi; Keith Watson

The deployment and usage of biometric systems are increasing at a rapid rate as the technology matures and gains user acceptance. Large-scale civilian applications like the Registered Traveler and US-VISIT programs rely heavily on biometric systems as part of their authentication processes. Biometric systems are also deployed in commercial applications such as automated teller machines (ATMs) to replace or complement ATM cards. Securing the user’s biometric information is just as important as securing the biometric system. Improving the security of a biometric system does have a positive impact on securing biometric information, but securing the system does not imply that the information is also secure. The technology ecosystem needs to be analyzed in terms of its principal constituents: the biometric system, the biometric process, and the biometric information lifecycle. The concept of information lifecycle management has been under development for some time, but it has not been applied to biometric information. Biometric Information Lifecycle Management refers to a sustainable strategy for maintaining the confidentiality, integrity, and availability of biometric information and for developing policies for its use. The biometric information lifecycle comprises the following phases: creation, transformation, storage, usage, and disposition. This research is a work in progress that will define the biometric information lifecycle phases, create a taxonomy of attacks on those phases, and improve the security and management of biometric information.
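
One way to make the lifecycle concrete is to model the phases as a small state machine, as in the Python sketch below; the allowed transitions are assumptions made for illustration, not rules defined by this project.

```python
# Illustrative model of the biometric information lifecycle as a small
# state machine; the transition rules here are assumed for the sketch.
from enum import Enum, auto

class Phase(Enum):
    CREATION = auto()
    TRANSFORMATION = auto()
    STORAGE = auto()
    USAGE = auto()
    DISPOSITION = auto()

ALLOWED = {
    Phase.CREATION: {Phase.TRANSFORMATION},
    Phase.TRANSFORMATION: {Phase.STORAGE},
    Phase.STORAGE: {Phase.USAGE, Phase.DISPOSITION},
    Phase.USAGE: {Phase.STORAGE, Phase.DISPOSITION},
    Phase.DISPOSITION: set(),  # terminal: the data has been destroyed
}

def transition(current: Phase, nxt: Phase) -> Phase:
    """Enforce that biometric data only moves along permitted phases."""
    if nxt not in ALLOWED[current]:
        raise ValueError(f"illegal transition {current.name} -> {nxt.name}")
    return nxt

phase = transition(Phase.CREATION, Phase.TRANSFORMATION)
```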

Assessment of Indiana Dept. of Corrections Image Capture Process

Principal Investigators: Stephen Elliott; Shimon Modi; Eric Kukula

It has become apparent that data sharing capabilities across state departments and law enforcement agencies are an issue, especially for tracking, monitoring, and identifying persons of interest. There is a need to assess the image capture process as well as sharing capabilities, and to incorporate commercially available facial recognition technology to reduce errors in identifying persons of interest. The objective of this project is to evaluate legacy face images, assess and standardize the image capture process across Indiana Dept. of Corrections (DOC) agencies, integrate facial recognition to link face databases, and integrate mobile devices in law enforcement vehicles for face recognition. This research will lead to improvements in the efficiency and quality of the face image capture process in DOC facilities and BMV branches and will facilitate image sharing across state agencies.

A Novel Approach to Robust, Secured, and Cancellable Biometrics

Principal Investigator: Xukai Zou

Biometrics automatically identifies or verifies a person using physical, biological, and behavioral characteristics, including the face, iris, fingerprints, hand geometry, and voice. Compared to traditional identification and verification methods (such as paper documents, plastic ID cards, or passwords), biometrics is more convenient for users, reduces fraud, and can be more secure. Biometrics is becoming an important ally of security, intelligence, and law enforcement.

However, there are concerns about biometrics in daily-life applications, including security, privacy, and standards. Among them, the biggest concern is the security of the biometric data itself. Unlike traditional identity credentials, it is very hard, and sometimes impossible, to re-issue a person’s biometric data. If biometric data are compromised, for example through identity theft, the user loses control over them forever and, in effect, loses his or her identity.

Some researchers have proposed encrypting biometric data using standard methods such as the Advanced Encryption Standard (AES), the RSA public-key cryptosystem, and cryptographic hash functions. The main issue with these approaches is key management, which has been studied independently of biometrics. As a result, there is a lack of research on the dependence between biometrics and encryption, integrity, and key management, and on comprehensive mechanisms that combine authentication, encryption, data integrity, and key management.
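
As a point of reference, here is a minimal sketch of protecting a stored template with authenticated encryption, assuming the Python "cryptography" package is available. It deliberately leaves out key management, which is exactly the hard part the paragraph identifies.

```python
# Sketch: protecting a stored biometric template with authenticated
# encryption (AES-GCM via the "cryptography" package). Key management
# (the open problem discussed above) is out of scope here; the key is
# generated in place for illustration only.
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
import os

key = AESGCM.generate_key(bit_length=256)   # in practice: a managed key
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # 96-bit nonce, unique per template

template = b"example-biometric-feature-vector"  # stand-in for real features
ciphertext = aesgcm.encrypt(nonce, template, b"user-42")  # AAD binds identity

# Verification/matching requires decryption first:
recovered = aesgcm.decrypt(nonce, ciphertext, b"user-42")
assert recovered == template
```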

Recently, some biometric researchers have proposed cancellable biometrics, which allows a system to re-issue a biometric for a user. The key idea of cancellable biometrics is to distort the biometric image, signal, or features before matching. The distortion parameters can be easily changed, which provides the cancellable nature of the scheme.
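
To make the idea concrete, here is a sketch of one well-known family of cancellable transforms: a user-specific seeded random projection in the spirit of BioHashing. The dimensions, matching metric, and seed handling are illustrative assumptions, not the method proposed in this project.

```python
# Sketch of a cancellable transform: a seeded random projection of the
# feature vector. The seed is the "distortion parameter": re-issuing a
# template only means assigning a new seed; raw features are not stored.
import numpy as np

def cancellable_template(features: np.ndarray, seed: int, out_dim: int = 32):
    rng = np.random.default_rng(seed)
    projection = rng.standard_normal((out_dim, features.size))
    return (projection @ features > 0).astype(np.uint8)  # binarized template

raw = np.random.rand(128)              # stand-in for extracted features
enrolled = cancellable_template(raw, seed=1001)
probe = cancellable_template(raw + np.random.normal(0, 0.01, 128), seed=1001)
distance = np.count_nonzero(enrolled != probe)  # Hamming distance matching

revoked = cancellable_template(raw, seed=2002)  # "cancel" by changing the seed
```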

However, few if any researchers have combined encryption and cancellable biometrics to ensure the security of biometric data in storage, transmission, and identification. The simple, naïve approach is to put them together by designing a cancellable biometric method and then applying encryption. This approach does not take into account the characteristics of biometrics and would not be applicable in real-life scenarios.

In this project, we propose a robust, secured, and cancellable biometrics method that incorporates encryption and key management into the design of the cancellable biometric method to provide an optimal solution. The PIs are experts in biometrics, security, and network administration, all of which are essential for the success of this project.

Secure, Composable, & Scalable Framework for Trusted Collaborative Computing

Principal Investigator: Xukai Zou

Collaborative Computing (CC) is a critical application domain within the Internet environment. A few examples of CC are multi-party computation, collaborative defense, tele-medicine, and collaborative decision making. Participants in CC demand confidentiality, privacy, integrity, and controlled sharing of sensitive information. Moreover, CC environments involve many entities that are dynamic, heterogeneous, and distributed, and that can be hostile. Currently, CC uses the Internet as its underlying infrastructure, which by design is not secure and suffers from incessant attacks ranging from eavesdropping to vulnerability exploitation. Hence, the success of CC requires a reliable and secure framework built on top of the Internet to remedy some of its limitations. CC based on such an underlying framework can be termed Trusted Collaborative Computing (TCC). Thus, the long-term objective of this research is to develop a framework that will enable TCC. This framework consists of: (1) (group-oriented) secure and anonymous communication, (2) finely controlled data sharing, and (3) secure, composable, and scalable integration. The framework will effectively address the underlying challenges of secure communication and guaranteed access, anonymity, composability, interoperability, and scalability.

The core technique in the proposed TCC framework is the Access Control Polynomial (ACP), which was recently presented at and published in the proceedings of INFOCOM’08, one of the top international conferences in networking and security. The short-term, intensive summer work is to implement and evaluate this innovative ACP mechanism and related security modules. This work will significantly advance the long-term objective and support applications for external funding.
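
Based on the published description, the core ACP idea can be sketched as follows: hide the group key as the constant offset of a polynomial whose roots are field elements derived from authorized users’ secret IDs and a public nonce, so that only an authorized user can evaluate the polynomial back to the key. The Python below is a simplified illustration under assumed parameters (hash choice, prime modulus), not the implementation this summer work will produce.

```python
# Minimal sketch of the Access Control Polynomial (ACP) idea: hide a
# group key K in P(x) = prod(x - h(SID_i, z)) + K over a prime field.
# An authorized user evaluates P at h(SID_i, z), where the product term
# vanishes, recovering K. Parameters here are illustrative assumptions.
import hashlib, secrets

PRIME = 2**127 - 1  # an illustrative prime modulus

def point(sid: bytes, z: bytes) -> int:
    """Map a user's secret ID and a public nonce z to a field element."""
    return int.from_bytes(hashlib.sha256(sid + z).digest(), "big") % PRIME

def build_acp(sids, z, key):
    """Coefficients (ascending) of P(x) = prod(x - h(sid, z)) + key."""
    coeffs = [1]
    for sid in sids:
        r = point(sid, z)
        new = [0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):   # multiply by (x - r)
            new[i] += c * (-r)
            new[i + 1] += c
        coeffs = [c % PRIME for c in new]
    coeffs[0] = (coeffs[0] + key) % PRIME
    return coeffs

def extract_key(coeffs, sid, z):
    """An authorized user evaluates P at h(sid, z) via Horner's rule."""
    x, acc = point(sid, z), 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % PRIME
    return acc

z = secrets.token_bytes(16)
key = secrets.randbelow(PRIME)
poly = build_acp([b"alice-secret", b"bob-secret"], z, key)
assert extract_key(poly, b"alice-secret", z) == key
assert extract_key(poly, b"eve-secret", z) != key  # unauthorized user
```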

Trusted Medical Information System and Health Informatics

Principal Investigator: Xukai Zou

In December 2004, a US Marine is severely wounded during combat operations in Iraq. After receiving world-class treatment at Bethesda Naval Hospital and the Indianapolis VA medical center, the patient is able to carry on a normal civilian life in Indianapolis. Several months later, the veteran is involved in an accident and is transported by medevac to a non-VA trauma center in Indianapolis for care. The provider looks up the patient’s data using the Indiana Health Information Exchange, and the patient has a highly positive outcome. This outcome is possible only because critically important medical data were made available to the provider at the right time via a collaborative database shared by local hospitals. This scenario can happen only if VA hospitals can securely manage the sharing of data between themselves and non-VA health care facilities. To meet this need, the VA requires a security architecture that is highly secure, manageable, portable, scalable, granular to the record and field level, and, most importantly, cost-effective.

It is with great enthusiasm that we present the VISTALOCK security schema to the Department of Veterans Affairs. The scientists who invented this technology are offering the Department of Veterans Affairs the opportunity to collaborate with them by implementing this already developed and proven technology across the VA health care domain. The VISTALOCK security architecture, using TEGO technology, is designed to be flexible and adaptable to support the security needs of the VA and all of its national, regional, and local affiliates.

VISTALOCK addresses four major security functions needed in collaborative data exchange and sharing: Hierarchical Access Control (HAC), Secure Group Communication (SGC), Differential Access Control (DAC), and Secure Dynamic Conferencing (SDC). Built on cryptography and key management, it enforces confidentiality, integrity, authentication, and finely tuned authorized access to patient records with granularity down to the field and record level, and it provides scalability, efficiency, dynamics, flexibility, and transparency.

The VISTALOCK security system is a bolt-on security architecture that works alongside the existing systems it protects; it requires no changes to the VISTA database repository and acts as a security gateway for all VISTA data traffic between client and host. The VA will be able to apply best-of-breed technology to its security architecture by adding modular, portable security services to the VISTA/HealtheVET system. This enables the VA to continue HealtheVET development at full speed as planned while adding secure collaborative data sharing with external local health care facilities and practices.

Secure Video Stream Framework for Dynamic and Anonymous Subscriber Groups

Principal Investigator: Xukai Zou

Secure video content distribution is a key aspect in the deployment of Telepresence Services and Video on Demand, two critical applications for the ecosystem targeted by Cisco products. Efficient mechanisms and systems need to be developed to guarantee confidentiality and controlled access to a broad range of broadcast video streams. At the same time, an effective framework for secure video content distribution should also guarantee subscribers’ privileges to access video streams matching their respective subscription and on-demand requirements.

In this project, we will employ an innovative approach called the Access Control Polynomial (ACP, described above) to build a Secure Video Stream Framework for dynamic and anonymous subscriber groups. The framework will effectively address the underlying challenges of secure video stream broadcasting and guaranteed access, anonymity, dynamicity, granularity, and scalability.

Human Factors in Online Security and Privacy

Principal Investigator: Robert Proctor

This research focuses on human aspects of online security and privacy assurance. With respect to online security, we have performed task analyses of the procedures required to use different types of authentication methods (e.g., passwords, biometrics, tokens, smart cards) and determined the costs and benefits of the alternative methods. Although passwords are the weakest of the methods, they are the most pervasive and widely accepted form of authentication for many systems. Thus, we have performed experiments designed to identify techniques for improving both the security and memorability of passwords. With respect to privacy assurance, we have performed analyses on Web privacy policies to determine organizations’ privacy and security goals. We also conducted usability tests examining users’ comprehension of privacy policies, factors that influence users’ trust in an organization, and users’ ability to configure privacy agents to check machine-readable policies for an organization’s adherence to specific privacy practices. Because the methods for ensuring security and privacy involve human users, our goal is to improve the interaction between humans and the technical devices and interfaces employed in security- and privacy-related tasks.

Role Mining in Enterprise Access Control Systems

Principal Investigators: Ninghui Li; Elisa Bertino

Role-based access control (RBAC) has established itself as a well-accepted model for access control in many organizations and enterprises. The process of building an RBAC system is referred to as role engineering. According to a NIST report, role engineering is the costliest part of migrating to an RBAC implementation. The problem of role mining, which applies data mining techniques to construct RBAC systems from user-permission relations so as to minimize human effort, has attracted significant interest in the research community. This project aims at developing new role mining techniques to construct RBAC systems that are optimized with respect to some objective measure of “goodness”, such as the structural complexity of the system. Also, by taking user attributes into account, we try to construct RBAC systems through role mining such that the roles in those systems have semantic meaning. This overcomes a major weakness of existing role mining approaches, whose constructed roles lack meaning. Last but not least, we study the problem of building RBAC systems whose cost of future updates is minimal.
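
For intuition about the input and output of role mining, here is a minimal baseline that treats each distinct permission set in the user-permission relation as a candidate role; the usernames and permissions are hypothetical, and real role mining goes much further by optimizing objective measures such as structural complexity.

```python
# A minimal role-mining baseline: each distinct permission set in the
# user-permission relation becomes a candidate role. This only shows
# the problem's input/output shape, not an optimized mining algorithm.
from collections import defaultdict

user_perms = {
    "alice": frozenset({"read_ehr", "write_ehr"}),
    "bob":   frozenset({"read_ehr", "write_ehr"}),
    "carol": frozenset({"read_ehr"}),
}

roles = defaultdict(list)            # candidate role -> assigned users
for user, perms in user_perms.items():
    roles[perms].append(user)

for i, (perms, users) in enumerate(roles.items()):
    print(f"role_{i}: perms={sorted(perms)} users={sorted(users)}")
```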

Operating System Access Control

Principal Investigator: Ninghui Li

Most of today’s operating systems use Discretionary Access Control (DAC) as their primary access control mechanism. One key weakness of DAC is that it is susceptible to Trojan horse attacks. An attacker can create a malicious program as a Trojan horse, and a process running the Trojan horse program has the privileges of the user who runs it; the process can thus abuse these privileges and violate the intended DAC policy. For similar reasons, existing DAC mechanisms provide inadequate protection when software is buggy. When attackers are able to feed malicious inputs to buggy software, they may be able to exploit the bugs and take control of the process. From this point of view, buggy software becomes a Trojan horse when the attacker can feed inputs to it. Exploiting this weakness of DAC, attackers are able to execute malicious code under the privileges of legitimate users, compromising end hosts. Host compromise in turn leads to a wide range of other computer security problems. Computer worms propagate by first compromising vulnerable hosts and then spreading to other hosts. Compromised hosts may be organized under a common command and control infrastructure, forming botnets. Botnets can then be used to carry out attacks such as phishing, spamming, and distributed denial of service.

This project aims at developing Mandatory Access Control (MAC) techniques that enhance existing DAC mechanisms to prevent host compromise. The project differs in several important ways from previous projects with a similar goal. First, usability is treated as a top priority. The usability goals are as follows: configuring such a MAC system should be no more difficult than installing and configuring an operating system, and existing applications and common usage practices should remain usable. This results in design choices that trade off security for simplicity and in the introduction of novel exception mechanisms for the MAC rules. Second, the security objective is clearly defined and limited: to protect end hosts and user files against network attackers, malicious websites, and user errors. Third, the project closely integrates DAC and MAC rather than viewing them as disjoint components. For example, MAC labels for files are inferred from their DAC permissions.
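
To illustrate that last point, here is a minimal sketch of inferring an integrity label from DAC permission bits; the two-level label set and the specific rule are simplifications assumed for illustration, not the project’s actual labeling scheme.

```python
# Sketch of inferring a MAC label from DAC permissions: a file writable
# by parties other than its owner is labeled low integrity. The label
# set and rule here are assumptions for illustration only.
import os, stat

def infer_integrity_label(path: str) -> str:
    mode = os.stat(path).st_mode
    if mode & (stat.S_IWGRP | stat.S_IWOTH):  # group- or world-writable
        return "LOW"    # may have been modified by untrusted subjects
    return "HIGH"       # only the owner (and root) could have written it

# On a typical Unix system, /etc/passwd is mode 0644, so this prints HIGH:
print(infer_integrity_label("/etc/passwd"))
```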