The Center for Education and Research in Information Assurance and Security (CERIAS)

Presentations

Panel Presentations

Virtually Secure or Securely Virtual?

Wireless: Can You Secure Me Now?

Malware and Trojans and Intrusions…Oh My!

Finance & Healthcare: The Same; But Different

Poster Presentations

Assurable Software and Architectures

A Serious Form of Attack: SN2K - Software-based Need-to-Know (N2K) Attacks

Ashish Kundu

In this research, we identify a new form of attack called “software-based need-to-know attacks” (SN2K attacks), which is simple and inexpensive for an attacker to carry out, yet poses a serious threat to information security. The attack exploits the implicit trust model of the existing programming paradigm.

It is useful to observe the following: in a computation that involves a requestor of a service (the caller or client) and a provider of the service (a function or service), the provider defines the information it must receive from the requestor in order to provide the service. Interfaces (or function signatures) declare a set of parameters to specify this information requirement. This model was defined when program correctness, not security, was the major concern. As a result, it permits monopolistic behavior by providers, especially when sensitive data is involved. A provider can define the interface for a service so that it asks the client for extra information that is sensitive in nature. Such an interface leads to a serious security attack: it is serious because (1) of the simplicity and efficiency with which it can be carried out, simply by writing an imprecise interface that requests more information than needed, and (2) the client will almost always provide the data in order to receive the service.

The growth of software services (such as web services) provides a context in which such attacks can be carried out easily. We study the breadth and depth of this new attack using an attack tree and propose a technique to determine the degree of imprecision in an interface defined by a software module (e.g., a web service) as a measure of the “extra” information the provider requests. We further propose a technique for making interfaces more secure by making them more precise. Together, these two techniques can be used to analyze software services for SN2K attacks and to certify them in that context. Moreover, we call for a new paradigm of computing that eliminates such monopolistic behavior by the service provider.
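As a purely hypothetical illustration of the kind of imprecise interface the abstract describes (the service, parameter names, and pricing logic below are invented for this sketch, not taken from the poster), compare an interface that demands sensitive data it never uses with one that asks only for what the computation needs:

# Hypothetical illustration of an SN2K-style imprecise interface.
# The quote only depends on a ZIP code and a weight, yet the imprecise
# interface also demands sensitive fields the computation never uses.

def shipping_quote_imprecise(zip_code: str, weight_kg: float,
                             ssn: str, date_of_birth: str) -> float:
    """Imprecise interface: ssn and date_of_birth are ignored, but the
    client must still hand them over to obtain the service."""
    return 5.0 + 1.2 * weight_kg

def shipping_quote_precise(zip_code: str, weight_kg: float) -> float:
    """Precise interface: asks only for the information actually required."""
    return 5.0 + 1.2 * weight_kg

# A client that wants the quote will almost always supply whatever the
# interface declares, so the imprecise provider silently harvests data.
print(shipping_quote_imprecise("47907", 2.0, "123-45-6789", "1980-01-01"))
print(shipping_quote_precise("47907", 2.0))

The gap between the two signatures is, in spirit, the “degree of imprecision” the poster proposes to measure.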

Continuous Security Policy Enforcement in Streaming Environments

Rimma V. Nehme, Hyo-Sang Lim, Elisa Bertino

The management of privacy and security in data stream management systems (DSMSs) remains largely unaddressed to date. Unlike in traditional DBMSs, where access control policies are persistently stored on the server and tend to remain stable, in streaming applications the contexts, and with them the access control policies on the real-time data, may change rapidly. We propose a novel “stream-centric” approach in which security restrictions are not persistently stored on the server but are instead streamed together with the data. The data provider’s access control policies are expressed via security constraints called “data security punctuations” (dsps for short). Server-side policies are specified by administrators in the form of “continuous policy queries,” which emit query security constraints called “query security punctuations” (qsps for short). The advantages of our model include flexibility, dynamism, and speed of enforcement, since both data and query security punctuations are embedded inside the data streams. Administrators can specify complex context-aware authorization policy queries. At run time, continuous policy queries are evaluated, authorizations are produced, and the engine can enforce any context-aware policy automatically. Moreover, DSMSs can adapt not only to data-related but also to security-related selectivities, which helps reduce the waste of resources when few subjects have access to the data.
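A rough sketch of the idea of interleaving policy with data follows; the punctuation fields and the stream format are invented here for illustration and are not the poster’s actual encoding:

from dataclasses import dataclass
from typing import Union

@dataclass
class StreamTuple:
    source: str
    payload: dict

@dataclass
class DataSecurityPunctuation:
    # Hypothetical fields: which source the restriction applies to and
    # which roles may read the tuples that follow it in the stream.
    source: str
    allowed_roles: frozenset

StreamItem = Union[StreamTuple, DataSecurityPunctuation]

def filter_stream(stream, subject_role: str):
    """Enforce the most recent dsp seen for each source: a tuple is
    delivered only if the subject's role is currently authorized."""
    current_policy = {}
    for item in stream:
        if isinstance(item, DataSecurityPunctuation):
            current_policy[item.source] = item.allowed_roles
        elif subject_role in current_policy.get(item.source, frozenset()):
            yield item

stream = [
    DataSecurityPunctuation("patient-42", frozenset({"doctor"})),
    StreamTuple("patient-42", {"heart_rate": 71}),
    DataSecurityPunctuation("patient-42", frozenset({"doctor", "nurse"})),
    StreamTuple("patient-42", {"heart_rate": 95}),
]
print(list(filter_stream(stream, "nurse")))  # only the second reading passes

Because the restriction travels with the data, a policy change takes effect as soon as the next punctuation arrives, without any server-side policy store.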

Poly^2 Application Nodes

Keith Watson

The Poly^2 project advances the understanding of how to build secure and reliable system architectures for critical services in hostile network environments. A secure and reliable system architecture must provide only the required services, only to authorized users, and in time to be effective. The proposed architecture is based on widely acknowledged security design principles. The Poly^2 application nodes host the external network services.

Structural Execution Indexing and its Applications

Nick Summer, Bin Xin, and Dr. Xiangyu Zhang

Execution indexing uniquely identifies a point in an execution. Execution indices can show correlations between points in an execution and correspondence between points across multiple executions. They can be used to organize program profiles; they can precisely identify the point in a re-execution that corresponds to a point in an original execution; furthermore, they provide a precise basis for dynamic analysis of program properties.
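One way to picture an execution index, as a simplified sketch only (this illustration is not the authors’ exact construction), is to identify each executed statement by the nesting path of calls and loop iterations that encloses it:

# Simplified illustration: a point in an execution is identified by the
# stack of enclosing structures (calls, loop iterations) plus the
# statement reached, so the same static statement gets a distinct index
# in each dynamic context.

trace = []          # (index, statement) pairs
context = []        # current structural nesting path

def indexed(stmt):
    trace.append((tuple(context), stmt))

def f(n):
    context.append(("call", "f"))
    for i in range(n):
        context.append(("loop", i))
        indexed("s1")           # same statement, different index per iteration
        context.pop()
    context.pop()

f(2)
for idx, stmt in trace:
    print(idx, stmt)
# (('call', 'f'), ('loop', 0)) s1
# (('call', 'f'), ('loop', 1)) s1

Indices built this way can be compared across two runs of the same program, which is what makes them useful for aligning points between an original execution and a re-execution.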

Enclave and Network Security

“Won’t You Be My Neighbor?” Neighbor Selection Attacks in Mesh-based Peer-to-Peer Streaming

Jeff Seibert, David Zage, and Cristina Nita-Rotaru

P2P streaming has grown in popularity, allowing people in many places to benefit from live audio and television services. Mesh-based P2P streaming has emerged as the predominant architecture in real-world use because of its resilience to churn and node failures, scalability, and ease of maintenance. The proliferation of these applications on the public Internet raises questions about how they can be deployed in a secure and robust manner. Failing to address security vulnerabilities could facilitate attacks with significant consequences such as content censorship, unfair business competition, or external impact on the Internet itself. In this paper, we identify and evaluate neighbor selection attacks against mesh-based P2P streaming which allow insider attackers to control the mesh overlay formation and maintenance. We demonstrate the effect of the attacks against a mesh-based P2P streaming system and propose a solution to mitigate the attacks. Our solution is scalable, has low overhead, and works in realistic heterogeneous networks. We evaluate our solution using a mesh-based P2P streaming system with real-world experiments on the PlanetLab Internet testbed and simulations using the OverSim P2P simulator.

Attacks and Defense on Virtual Coordinate Based Routing in Wireless Sensor Networks

Jing Dong, Kurt Ackermann, Brett Bavar, Cristina Nita-Rotaru

Virtual coordinate system (VCS) based routing provides a practical, efficient and scalable means for point-to-point routing in wireless sensor networks. Several VCS-based routing protocols have been proposed in the last few years, all assuming that nodes behave correctly. However, many applications require deploying sensor networks in adversarial environments, making VCS-based routing protocols vulnerable to numerous attacks.

In this paper, we study the security of VCS-based routing protocols. We first identify novel attacks targeting the underlying virtual coordinate system. The attacks can be mounted with few resources, yet are epidemic in nature and highly destructive to system performance. We then propose lightweight defense mechanisms against each of the identified attacks. Finally, we evaluate experimentally the impact of the attacks and the effectiveness of our defense mechanisms using a well-known VCS-based routing protocol, BVR.

Behavior-Based Characterization of Peer-to-Peer (P2P) Traffic

Ruben Torres, Mohammad Hajjat, and Sanjay Rao

P2P traffic is dominant. But do we really understand it? P2P traffic behaves similarly to worms:
* Contacting many nodes
* High failure ratio
* Content prevalence: common substrings among many packets
Goal: Provide an accurate P2P traffic characterization based on an intrinsic understanding of P2P client behavior:
I. Selection of intuitive metrics to characterize P2P clients (a sketch of such per-host metrics appears after this list)
II. Understanding the distribution of the metrics
III. Using simple probabilities to characterize P2P node behavior
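The sketch below shows the kind of per-host metrics such a characterization might compute; the flow-record format, addresses, and values are invented for illustration and are not the authors’ dataset or metric definitions:

from collections import defaultdict

# Flow records: (src_host, dst_host, connection_succeeded)
flows = [
    ("10.0.0.5", "198.51.100.1", True),
    ("10.0.0.5", "198.51.100.2", False),
    ("10.0.0.5", "198.51.100.3", False),
    ("10.0.0.9", "203.0.113.7", True),
]

peers = defaultdict(set)      # distinct destinations contacted per host
attempts = defaultdict(int)   # connection attempts per host
failures = defaultdict(int)   # failed attempts per host

for src, dst, ok in flows:
    peers[src].add(dst)
    attempts[src] += 1
    failures[src] += 0 if ok else 1

for host in peers:
    failure_ratio = failures[host] / attempts[host]
    print(host, "distinct peers:", len(peers[host]),
          "failure ratio:", round(failure_ratio, 2))

Hosts that contact many distinct peers with a high failure ratio look worm-like, which is exactly the ambiguity the poster sets out to resolve.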

Device Independent Router Modeling

Roman Chertov, Sonia Fahmy, and Ness B. Shroff

Many popular simulation and emulation tools use high-level models of forwarding behavior in switches and routers, and provide little guidance on setting model parameters such as buffer sizes. Thus, a myriad of papers report results that are highly sensitive to the forwarding model or buffer size used. In this paper, we argue that measurement-based models for routers and other forwarding devices are crucial. We devise such a model and validate it with measurements from three types of Cisco routers and one Juniper router, under varying traffic conditions. The structure of our model is device-independent, but requires device-specific parameters. The compactness of the parameter tables and simplicity of the model make it versatile in high-fidelity simulations that preserve simulation scalability. We construct a profiler to infer parameter tables within a few hours. Our preliminary results indicate that our model can approximate different types of routers. Additionally, the results indicate that queue characteristics vary dramatically among the devices we measure, and that backplane contention must be modeled.

ReAssure: A Publicly Accessible, Safe, Virtualization-based Testbed for Logically Destructive Experiments

Pascal Meunier

The ReAssure testbed has reached maturity with version 1.0. It offers a safe and reproducible environment for experiments; storage and sharing of images or appliances; and flexible shared access to experimental PCs, including, if needed, complex network topologies. Features, usage scenarios, and a deployment of ReAssure at Northern Kentucky University are discussed.

Tackling the Memory Bottleneck Problem for Large-scale Network Simulation

Kihong Park, Hyojeong Kim

A key obstacle to large-scale network simulation over PC clusters is the memory bottleneck problem, where a memory-overloaded machine can slow down an entire simulation due to disk I/O overhead. Memory balancing is complicated by (i) the difficulty of estimating the peak memory consumption of a group of nodes during network partitioning (a consequence of per-node peak memory not being synchronized) and (ii) a trade-off with CPU balancing, whose cost metric depends on the total (as opposed to maximum) number of messages processed over time. In this paper we investigate memory balancing for large-scale network simulation, which admits solutions for memory estimation and balancing not available to small-scale or general discrete-event simulation.

First, we advance a measurement methodology for accurate and efficient memory estimation, and we establish a trade-off between memory and CPU balancing under the maximum and total cost metrics. We show that the performance gap depends on network topology and application traffic. Second, we show that combined memory-CPU balancing can overcome the performance trade-off (in general not feasible due to constraint conflicts), which we connect to a tendency of network simulation to induce correlation between the maximum and total cost metrics. Performance evaluation is carried out using benchmark applications with varying traffic characteristics (BGP routing, worm propagation under local and global scanning, and a distributed client/server system) on a testbed of 32 x86 machines running a measurement-enhanced DaSSF.
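A tiny worked example, with invented numbers, of why group peak memory is hard to estimate from per-node peaks and how the maximum and total cost metrics differ:

# Per-node memory usage sampled at three points in time (MB).
# Peaks are not synchronized: node A peaks at t0, node B at t2.
usage = {
    "A": [80, 30, 20],
    "B": [20, 40, 90],
}

# Naive estimate: sum of per-node peaks.
naive_group_peak = sum(max(samples) for samples in usage.values())   # 170

# Actual group peak: maximum over time of the summed usage.
actual_group_peak = max(sum(col) for col in zip(*usage.values()))    # 110

print(naive_group_peak, actual_group_peak)

# CPU balancing, by contrast, uses a *total* cost metric, e.g. the total
# number of messages a partition processes over the whole run:
messages = {"A": [1000, 400, 200], "B": [300, 300, 300]}
print({node: sum(m) for node, m in messages.items()})  # {'A': 1600, 'B': 900}

The maximum metric (memory) and the total metric (CPU) can favor different partitionings, which is the trade-off the poster analyzes.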

Identification, Authentication and Privacy

A Multi-Phased Approach to Steganography Detection

James Goldman, William Eyre, and Asawaree Kulkarni

Steganography is the art and science of hiding information inside other information. Free and inexpensive tools available on the Internet enable the hiding of information in static images, especially GIFs, JPGs, and PNGs. There is concern that terrorist and criminal organizations are using steganography to hide information in images on the Web and accessing that information in a manner that might confound inference-tracking efforts. In Phase 1 of the study, specific suspect sites designated by the NW3C and the Indiana State Police were scanned for steganography using signature-scanning techniques. Over one million URLs were scanned for steganography. Results of the scan are reported and analyzed. The Stego Method, a process-oriented model, is introduced. In Phase 2, a shift of research focus to host-system artifacts is recommended. Confiscated drives of criminal suspects are currently being scanned for artifacts of steganography-producing applications.

Bacterial Transfer from Biometric Devices

C.R. Blomeke, T. Walter, and S.J. Elliott

This poster outlines current research centered on the survivability and transferability of bacteria on fingerprint sensors and a hand geometry reader. The cleanliness of these devices, and their potential to serve as conduits for transferring bacteria, is of concern as biometric devices are deployed in a variety of public settings. The devices were intentionally contaminated and bacterial survivability was measured over a period of 60 minutes. The transfer studies required the devices to be intentionally contaminated; successive touches of the device with a gloved hand were then plated onto growth medium. The results indicate that the biometric devices show bacterial survivability and transfer rates similar to those of a common door handle.

Third-Party Grid-Data Integrity Verification

Mikhail J. Atallah, YounSun Cho, and Ashish Kundu

In the third-party model for the distribution of data, the trusted data creator or owner provides an untrusted distributor D with integrity verification (IV) items that are stored at D in addition to the data itself. When a user U requests data from D, the user is provided by D with that data and a (hopefully small) number of IV items that make it possible for U to verify the integrity of the data received. Most of the published work in this area uses the Merkle tree or variants thereof. For the problem of 2-dimensional range queries, the best published solutions require D to store O(n log n) IV items for a database of n items, and allow a user U to be sent only O(log n) of those IVs for the purpose of verifying the integrity of the data it receives from D (regardless of the size of U’s query rectangle). For data that is modeled as a 2-dimensional grid (such as GIS and image data), this paper shows that better bounds are possible: the number of IVs stored at D (and the time it takes to compute them) can be brought down to O(n), and the number of IVs sent to U for verification can be brought down to practically a constant. More precisely, for data modeled as an m × m 2-dimensional grid of n = m^2 cells, and with requests from users taking the form of arbitrary rectangular ranges, our solution stores O(n) IV items (generated in O(n) time) at D and requires only O(log(log* n)) IV items to be provided to U for the verification of its rectangular range; this is essentially constant because log(log* n) is less than 2 even for huge values of n. Moreover, in our scheme, D can also find these IV items in O(log(log* n)) (hence practically constant) time.
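To see why O(log(log* n)) is essentially constant, recall that log* n, the iterated logarithm, counts how many times a logarithm must be applied before the value drops to 1 or below. The quick check below (illustrative only, using base-2 iteration) shows how slowly it grows:

import math

def log_star(n) -> int:
    """Iterated logarithm: how many times log2 must be applied
    to n before the result drops to 1 or below."""
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count

for label, n in [("2^4", 2**4), ("2^16", 2**16), ("2^65536", 2**65536)]:
    print(label, "log* =", log_star(n))
# log* is 5 even for 2^65536, so log(log* n) is a tiny constant for any
# input a real system could ever store.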

Fingerprint Sensor Interoperability: Performance Evaluation of a Multi-Sensor System

Shimon Modi, S.J. Elliott, H. Kim, E. Bertino, and M.J. Dark

The distortions and variations introduced when acquiring fingerprint images propagate from the acquisition subsystem all the way to the matching subsystem. These variations ultimately affect the performance rates of the overall fingerprint recognition system. Fingerprint images captured using the same sensor technology during the enrollment and recognition phases introduce similar distortions, making it easier to compensate for them and reducing their effect on the performance of the overall fingerprint recognition system. However, when different fingerprint sensor technologies are used during the enrollment and recognition phases, an impact on performance is expected but unpredictable. The purpose of this study is to examine the effect of sensor-dependent variations and distortions, and of the characteristics of the sensor, on the interoperability matching error rates of minutiae-based fingerprint recognition systems. The study aims to achieve this by acquiring fingerprints from 200 participants across 9 different fingerprint sensors.

Memorability Issues Associated with Updating Passwords

Devin G. O’Brien, Robert W. Proctor, and Kim-Phuong L. Vu

Since many organizations require passwords to be updated frequently, interference from previous passwords may occur. This study investigated the memorability of 5 unique passwords for different accounts using a mnemonic technique that has been shown to be effective for producing secure passwords. Eighteen Purdue undergraduate students engaged in three cycles of generating, recalling, and updating passwords. Students took a long time to generate the first password, and their performance improved for subsequent passwords and updating sessions. Participants who forgot passwords did so after a long interval (2 days or 1 week) but not after a short 5-minute interval. When participants updated their passwords, they often did not create completely new passwords but modified old ones.

Printer and Sensor Forensics

Pei-Ju Chiang, Nitin Khanna, Aravind K. Mikkilineni, Maria V. Ortiz, Vivek Shah, Sungjoo Suh, George T.-C. Chiu, Edward Delp, and Jan P. Allebach

In today’s digital world, securing different forms of content is important for protecting copyright and verifying authenticity. One example is watermarking of digital audio and images. We believe that a marking scheme analogous to digital watermarking, but for documents, is very important. We describe the use of laser amplitude modulation in electrophotographic printers to embed information in various types of documents. In addition, we describe methods for forgery detection in scanned images.

Private Searching for Nearest Neighbors

Yinian Qi and Mikhail Atallah

We give efficient protocols for secure and private k-nearest neighbor (k-NN) search, when the data is distributed between two parties who want to cooperatively compute the answers without revealing to each other their private data. Our protocol for the single-step k-NN search is provably secure and has linear computation and communication complexity. Previous work on this problem had a quadratic complexity, and also leaked information about the parties’ inputs. We adapt our techniques to also solve the general multi-step k-NN search, and describe a specific embodiment of it for the case of sequence data. The protocols and correctness proofs can be extended to suit other privacy-preserving data mining tasks, such as classification and outlier detection.

Secure Similar Document Detection

Wei Jiang, Mummoorthy Murugesan, Chris Clifton, and Luo Si

Similar document detection plays an important role in many applications, such as file management, copyright protection, and plagiarism prevention. Existing protocols assume that the contents of files stored on a server (or multiple servers) are directly accessible. This assumption limits more practical applications, e.g., detecting plagiarized documents between two conferences whose submissions are confidential. We propose a novel scheme to detect similar documents between two entities when documents cannot be openly shared with each other. We also present experimental results to show the practical value of the proposed protocols.

Security Services for Healthcare Applications

Lorenzo D. Martino, Suchit Ahuja, and Elisa Bertino

The federal government has mandated an Electronic Medical Record (EMR) initiative that will convert paper medical records into electronic records for all citizens by 2012. This will pose a challenge to maintaining the security and privacy of medical data. In order to comply with regulatory laws such as HIPAA and GLBA, strict security and privacy controls must be adhered to.

Patients are now demanding more control over their own health records, and this has prompted researchers and healthcare organizations to develop the Personal Health Record (PHR). PHRs will provide patients with control over parts of their own medical information as well as the ability to control access to it. An example of a PHR is the recently launched Microsoft HealthVault application. PHRs, however, pose further security and privacy challenges for patients, healthcare organizations, and PHR vendors.

This NSF-sponsored research project proposes to apply Service-Oriented Architecture (SOA) and Software as a Service (SaaS) principles to security and privacy practices. This would ensure that security and privacy are controlled by services tied to a patient-centric, patient-customized access policy determined for his or her PHR. There are significant and complex issues, such as eConsent from the patient and the healthcare provider, access policy setting by the patient, adherence to regulatory compliance laws, and aligning the PHR service with the business functions of the vendor.

The research faces several challenges:
1) There are no standards for PHR data. Hence, establishing PHR data as a subset of EMR data is the first step.
2) There are no standards defining PHR privacy and security. HIPAA rules are very general and cannot be applied to specific technology components. Hence, alternative approaches to devise and govern PHR security and privacy must be researched.
3) The relationship between PHR data and the policies that govern access to the data must be established.
4) The application of SOA and SaaS principles to govern security and privacy policy must be investigated.

These and other issues make this project very complex and applicable in the real world.

Solicitation Token Authenticated Mail Protocol

Kurt Ackermann, Camille Gaspard, Ramana Kompella, and Cristina Nita-Rotaru

Email has grown into one of the dominant forms of communication in the 21st century. However, email systems were designed without security in mind, thus allowing attackers to abuse the system and send unsolicited email (or spam). Most current solutions to spam center on content-based filtering or domain blacklisting approaches, both of which are inaccurate and slow to adapt to the changing face of spam. Moreover, these schemes do not allow for the accountability of email address leakage, which would allow a user to know which untrustworthy parties divulged his address. We propose STAMP, the Solicitation Token Authenticated Mail Protocol, as a server-side solution to filter unsolicited mail from ever reaching the end-user's inbox, as well as allowing the user to revoke inbox access from solicited parties who prove to be untrustworthy with their email access. STAMP employs distributed access control, making use of transitive trust to reduce email solicitation overhead and allow the user's address book to grow organically through trusted entities. We implement a prototype of our scheme as an extensible mail filter plug-in for an industry standard mail server. We compare performance and server overhead of STAMP against a popular content-based filter and show that our scheme attains a 43% reduction in message delivery latency and achieves perfect message classification with a processing cost that is lower by more than 3 orders of magnitude.
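As a purely hypothetical sketch of how a server-side solicitation-token filter of this general kind could behave (this is not the STAMP specification; the class, field names, and revocation model are invented here):

import secrets

class TokenMailFilter:
    """Toy server-side filter: a sender presents a solicitation token
    that the recipient previously issued; revoking the token cuts off
    that sender's inbox access. Illustrative only."""

    def __init__(self):
        self.valid_tokens = {}     # token -> party it was issued to

    def solicit(self, party: str) -> str:
        token = secrets.token_hex(16)
        self.valid_tokens[token] = party
        return token

    def revoke(self, token: str) -> None:
        self.valid_tokens.pop(token, None)

    def accept(self, sender: str, token: str) -> bool:
        return self.valid_tokens.get(token) == sender

inbox = TokenMailFilter()
t = inbox.solicit("store@example.com")
print(inbox.accept("store@example.com", t))   # True: solicited mail
inbox.revoke(t)                               # the party proved untrustworthy
print(inbox.accept("store@example.com", t))   # False: access revoked

Because acceptance is a simple token lookup rather than content analysis, this style of filtering is what allows the processing cost to be far lower than content-based filtering.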

The MicroOppnet Testbed for Trust, Security, and Privacy Experiments in Heterogeneous Environments

VarunKrishna Kundoor, Vikash Achutaramaiah, Leszek Lilien, Zille Huma Kamal, and Ajay Gupta

Oppnets, or class 2 opportunistic networks, are a new paradigm and technology for Pervasive Computing and Collaborative Computing. Oppnets grow by opportunistically leveraging resources and services. They achieve their goals through the collaboration of helper nodes, which join an oppnet dynamically after being invited or ordered to help.

We present MicroOppnet v.2.3, a proof-of-concept and a small testbed for experimenting with oppnets. We describe the design and implementation of MicroOppnet v.2.3. In version 2.3, we integrate the following disjoint communication media: Bluetooth, wireless and wired Internet, a sensornet, and a cellular network. We are currently working on extending this version to a larger MicroOppnet v.3.0 which will additionally integrate RFID, IrDA and WiMax technologies.

MicroOppnet is useful for running experiments on trust, security, and privacy solutions in a highly heterogeneous environment, among other things for testing mechanisms for trust-based routing, authentication, masquerading, and privacy preservation.

Oppnets have a strong impact on Emergency Preparedness and Response (EPR). We illustrate with a scenario how even a small MicroOppnet can assist in rescuing workers trapped in an office building on fire.

Incident Detection, Response, and Investigation

A Game Theoretic Framework for Adversarial Learning

Murat Kantarcioglu, Bowei Xi, and Chris Clifton

Many data mining applications, ranging from spam filtering to intrusion detection, are faced with active adversaries. In all these applications, initially successful classifiers will degrade easily. This becomes a game between the adversary and the data miner: the adversary modifies its strategy to avoid being detected by the current classifier; the data miner then updates its classifier based on the new threats. In this paper, we investigate the possibility of an equilibrium in this seemingly never-ending game, where neither party has an incentive to change. Modifying the classifier causes too many false positives with too little increase in true positives; changes by the adversary decrease the utility of the false-negative items that aren’t detected. We develop a game theoretic framework where the equilibrium behavior of adversarial learning applications can be analyzed, and provide a solution for finding the equilibrium point. A classifier’s equilibrium performance indicates its eventual success or failure. The data miner could then select attributes based on their equilibrium performance, and construct an effective classifier.

Botnet Behavior Analysis

James E. Goldman, Sean C. Leshney, Bradley J. Nabholz, Deepak R. Nuli, and Nicklas R. Peelman

As part of current research into malware behavior, the Botnet Analysis Team is developing standardized architectures and processes with which to isolate, observe, and analyze botnets. Botnets are typically used for illegal activities, and are often made up of thousands of compromised computers. Botnet simulation will use a cluster of PCs configured with typical operating system and software configurations used by homes and businesses today.

Controlled Malware Behavior Analysis

James E. Goldman, Sean C. Leshney, Bradley J. Nabholz, Deepak R. Nuli, and Nicklas R. Peelman

The Malware Analysis Team is developing standardized architectures and processes with which to isolate, observe, analyze, contain, and eradicate malware of various types. Specialized tools will be developed to support the various phases of this mission. Of particular interest are complex trojans that can be installed on victims’ computers and used at will in the execution of a variety of crimes.

Verifying Case File Integrity in Mobile Phone Forensics

Sean Sobieraj and Rick Mislan

The goal of this project is to verify the methods implemented by various forensic software tools to protect the integrity of data obtained from mobile phones.

Risk Management, Policies and Laws

Cyber Warfare as a Form of Low Intensity Conflict

Samuel Liles and Marcus Rogers

Cyber warfare describes the technologies, techniques, concepts, and strategies of using computers as agents of conflict and combat. Hidden behind the ubiquity of computing and computer communications is the pervasive and strategic utilization of computers. Computers facilitate and make possible everything from the unmanned aerial vehicle (UAV) to the high-speed rapid response of troops within the network-centric warfare paradigm. Often maligned as non-kinetic and of limited strategic value, cyber warfare is the utilization of the computing and communications infrastructure to coordinate, communicate, and collaborate in the active pursuit of issues of national interest. Unlike other shifts in the paradigm of national power, only cyber warfare has the capability to empower insurgencies, destabilize economies, and negate national force projection strategies at little cost to the adversary, with a high likelihood of success and little risk to the actors.

Gandhigiri in the Infosphere

Vaibhav Garg and Melissa Dark

Gandhi is the father of modern thought on non-violence. His thought has been read by some as a political treatise, by some as a religious one, and by others as an economic theory for sustainable growth. The goal of this paper is to harmonize these different, sometimes contrasting, sometimes apparently conflicting Gandhian notions into a framework that helps us analyze our modern-day dilemmas in Information Ethics.

Nagios Installation: A Living Lab Project

Kevin Rickard, Lindsay Friddle, and Connie Justice

Experiential learning: installing a Nagios server.

Perceptions of Information Security and Privacy Risks

Fariborz Farahmand, Melissa Dark, Eugene Spafford, Sydney Liles, and Brandon Sorge

Although psychometric models have been developed and refined for measuring the perceived risk of new technologies, these models, or variations of them, have not been applied to understanding perceptions of information security and privacy risks. This work explores a research model for investigating information security and privacy risks.

Reading the Disclosures with New Eyes: Bridging the Gap between Information Security Disclosures and Incidents

Ta-Wei “David” Wang

This paper investigates the relationship between information security related disclosures in financial reports and the impacts of information security incidents through cross-sectional and cluster analysis. First, by drawing upon the theories of disclosures in the accounting literature, we examine the effect of the number of disclosures on stock price reactions to information security incidents from 1997 to 2006. Our findings suggest that first-time disclosed information security risk factors in financial reports can mitigate the impact of information security incidents on business value. Second, a cluster analysis is performed on the disclosures in financial reports before and after the incidents. The results demonstrate that companies react to information security incidents by disclosing additional and more specific risk factors in subsequent financial reports. A classification model is also built to classify disclosures based on stock price reactions to information security incidents. The model provides insights to help firms lower stock price reactions to such incidents through disclosures. This paper not only contributes to the literature in information security and accounting but also sheds light on how managers can evaluate their information security policies and convey information security practices more effectively to the investors.

Risk Assessment: A Living Lab Project

Lindsay Friddle, John Wikman, and Connie Justice

Performing Risk Assessments in an academic environment.

Software Properties and Behaviors

Pascal Meunier

Software has moved beyond the encoding of algorithms to enforcing moral, ethical, and legal values, implementing tactics, strategies, and essentially the will of designers, coders, organizational (e.g., corporate) entities, or even laws. Buyers, users, and communities incur risk due to the deployment of foreign or inappropriate behaviors. Because of code complexity, obfuscation, and emergent behaviors, I posit that the systematic study of software behaviors is an important, and sometimes the main, source of reproducible and objective information on what an artifact (including infrastructure) will and will not do. I contribute definitions of some desirable software properties useful in the context of studying the risks posed by software behaviors: software transparency, purity, obedience, and loyalty.

Trusted Social and Human Interactions

Design & Evaluation of the Human-Biometric Sensor Interaction Method

Eric Kukula, Stephen Elliott, Mathias Sutton, Niaz Latif, and Vincent Duffy

This research examines how humans interact with biometric devices in order to provide the biometrics community with a comparative evaluation method that uses ergonomics, usability, and image quality criteria as explanatory variables of performance based on the design of the sensor. Specifically, this research in Human-Biometric Sensor Interaction has four primary objectives. First, analyze the literature in the fields of biometrics, ergonomics, HCI, and usability to determine what influences the interaction between the human and the biometric device and which aspects of these fields can be applied to the design of biometric devices. Second, develop a conceptual model for the design of biometric devices, specifically swipe-based fingerprint recognition, and propose an evaluation method to assess the created form factors. Third, create two alternate swipe-based fingerprint form factors based on the conceptual model, drawing on the biomechanics and anthropometry of the hand and fingers, the biometric literature, and focus groups and interviews that gather personal perceptions and common interaction problems with swipe-based fingerprint recognition devices. Lastly, evaluate the commercially available and new form factor devices in a comparative performance evaluation using the proposed HBSI evaluation method. Results to date are presented in the poster.

Specification and Enforcement of Flexible Security Policy for Active Cooperation

Yuqing Sun, Bin Gong, Xiangxu Meng, Zongkai Lin, and Elisa Bertino

Interoperation and service sharing among different systems are becoming new paradigms for enterprise collaboration. To stay ahead in highly competitive environments, an enterprise should provide flexible and comprehensive services to partners and support active collaboration with partners and customers. Achieving such goals requires enterprises to specify and enforce flexible security policies for their information systems. Although the area of access control has been widely investigated, current approaches still do not support flexible security policies able to account for the different weights that typically characterize the various attributes of the requesting parties and transactions and that reflect the access control criteria relevant to the enterprise. In this paper we propose a novel approach that addresses such flexibility requirements while at the same time reducing the complexity of security management. To support flexible policy specification, we define the notion of restraint rules for authorization management processes and introduce the concept of impact weight for the conditions in these restraint rules. We also introduce a new data structure for encoding the condition tree, as well as a corresponding algorithm for efficiently evaluating conditions. Furthermore, we present a system architecture that implements the above approach and supports interoperation among heterogeneous platforms.

The Role of Information Technology in Providing Patient Safety

James G. Anderson, Rangaraj Ramanujam, Devon Hensel, and Marilyn Anderson

Data-sharing systems, in which healthcare providers jointly implement a common reporting system to promote voluntary reporting, information sharing, and learning, are emerging as an important regional, state, and national strategy for improving patient safety. This study analyzed reporting trends with data from 25 hospitals that were members of a regional data-sharing system. The results were used to develop a computer simulation model designed to assist hospitals in improving patient safety.

