The Center for Education and Research in Information Assurance and Security (CERIAS)

Posters and Presentations

A Multimodal Biometric System using Palm Vein and 10-print Fingerprint Sensors

Gregory Hales, Marvin Michels, Jonathan Hight, and Dr. Steven Elliott

Biometric authentication systems using a single modality perform well and are mostly universal. With heightened security concerns at airports, the Department of Homeland Security currently uses a 10-print fingerprint sensor to identify international travelers. Multimodal biometric systems allow for even better security and greater universality because, in this case, two modalities are captured simultaneously. Pairing the palm vein sensor with the 10-print sensor allows the user to interact with the 10-print sensor as they normally would, while the palm vein sensor, located underneath the palm, captures its sample at the same time. The benefits of this arrangement are that the palm vein sensor is contact-less and that the overall interaction does not change for the user. The researchers will collect data to determine usability factors as well as performance metrics compared to using the palm vein and 10-print sensors alone.

A Review of Computer Testbeds

Jason D. Ortiz, Pascal Meunier

Cyber security has become a prominent topic in computing research, and a multitude of security experiments are being conducted in a variety of computer testbeds. However, we must ask whether these experiments are occurring in an environment that will maximize the researchers' ability to reach their experimental goals. Each testbed currently used for security experimentation has unique properties that make it ideal for some experiments and not for others. The following is an evaluation of several well-known, frequently used computer testbeds, intended to help researchers choose the most appropriate testbed for their individual experimental goals.

A Selective Encryption Approach to Fine-Grained Access Control for P2P File Sharing

Aditi Gupta, Salmin Sultana, Michael Kirkpatrick and Dr. Elisa Bertino

Peer-to-peer (P2P) systems offer great resilience for distributed file sharing services. As the use of these services has grown, the need for fine-grained access control (FGAC) has emerged. Existing access control frameworks for P2P file sharing use an all-or-nothing approach that is inadequate for sensitive content that may be shared by multiple users. Instead, it would be desirable to provide a flexible mechanism that allows content owners to grant access to portions of files independently.

In this work, we propose a FGAC mechanism based on selective encryption techniques. Using this approach, the owner of a file specifies access control policies over various byte ranges in the file. The separate byte ranges are then encrypted with different keys, and users of the file receive only the keys for the ranges they are authorized to access. Our approach includes a key distribution scheme based on a public key infrastructure (PKI) and access control vectors. We have integrated our FGAC mechanism with the Chord structured P2P network. We also discuss relevant issues concerning the implementation and integration with Chord and present performance results for our prototype implementation.
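
As a rough illustration of the core idea, the sketch below encrypts each policy-controlled byte range under its own key and hands a user only the keys for the ranges they may read. It uses AES-GCM from the Python cryptography package purely as an example and omits the PKI-based key distribution and the Chord integration.

```python
# Illustrative sketch of selective (per-byte-range) encryption for FGAC.
# Assumes the 'cryptography' package; key distribution via PKI is omitted.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_ranges(data: bytes, ranges):
    """Encrypt each (start, end) byte range with its own key.

    Returns the encrypted segments and the per-range keys; a user is given
    only the keys for the ranges they are authorized to read.
    """
    segments, keys = [], []
    for start, end in ranges:
        key = AESGCM.generate_key(bit_length=128)
        nonce = os.urandom(12)
        ciphertext = AESGCM(key).encrypt(nonce, data[start:end], None)
        segments.append((start, end, nonce, ciphertext))
        keys.append(key)
    return segments, keys

def decrypt_authorized(segments, authorized_keys):
    """Decrypt only the segments for which the user holds a key."""
    out = {}
    for (start, end, nonce, ct), key in zip(segments, authorized_keys):
        if key is not None:
            out[(start, end)] = AESGCM(key).decrypt(nonce, ct, None)
    return out

# Example: two policies over one file; this user may read only the second range.
data = b"public header ... confidential payroll section ... public footer"
segments, keys = encrypt_ranges(data, [(0, 18), (18, 48)])
print(decrypt_authorized(segments, [None, keys[1]]))
```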

Automatic Enforcement of Multiple Policies in Healthcare Domain

Zahid Pervaiz, David Ferraiolo, Serban Gavrila, Arif Ghafoor

Enforcement of security policies on sensitive documents in an organizational setting is a critical task. This function can be performed manually by administrators, but that approach does not scale as the number of documents increases. Permission assignment can be automated by granting permissions on document types to roles, for example clinical documents to medical staff and financial documents to accounting staff. However, this becomes a complex task for fine-grained requirements, such as making sensitive clinical information available only to primary doctors.

We propose automatic enforcement of multiple fine-grained policies on clinical documents, based on security requirements, using semantic matching. The first step in this process is to construct a domain-specific ontology. Subsequently, attributes of clinical documents are extracted by data mining, and rule-based semantic matching is employed using role attributes. Such matching allows objects with sensitive healthcare information to be placed under a Discretionary Access Control (DAC) policy, under which sensitive documents are available only to the primary doctor treating the patient. Normal documents are assigned to all doctors using Role-Based Access Control (RBAC).

Currently, a secure healthcare system is being implemented using the Policy Machine (PM) architecture proposed by the National Institute of Standards and Technology (NIST). PM allows enforcement of arbitrary, organization-specific, attribute-based access control policies.

Automatic Reverse Engineering of Data Structures from Binary Execution

Zhiqiang Lin, Xiangyu Zhang, Dongyan Xu

With only the binary executable of a program, it is useful to discover the program’s data structures and infer their syntactic and semantic definitions. Such knowledge is highly valuable in a variety of security and forensic applications. Although there exist efforts in program data structure inference, the existing solutions are not suitable for our targeted application scenarios. In this paper, we propose a reverse engineering technique to automatically reveal program data structures from binaries. Our technique, called REWARDS, is based on dynamic analysis. More specifically, each memory location accessed by the program is tagged with a timestamped type attribute. Following the program’s runtime data flow, this attribute is propagated to other memory locations and registers that share the same type. During the propagation, a variable’s type gets resolved if it is involved in a type-revealing execution point or type sink. More importantly, besides the forward type propagation, REWARDS involves a backward type resolution procedure where the types of some previously accessed variables get recursively resolved starting from a type sink. This procedure is constrained by the timestamps of relevant memory locations to disambiguate variables re-using the same memory location. In addition, REWARDS is able to reconstruct in-memory data structure layout based on the type information derived. We demonstrate that REWARDS provides unique benefits to two applications: memory image forensics and binary fuzzing for vulnerability discovery.
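
The toy sketch below conveys the flavor of timestamped forward type propagation on an abstract trace. The locations, events, and type sink are hypothetical; the real REWARDS system analyzes x86 execution and also performs the backward type resolution described above, which is not shown here.

```python
# Toy illustration of timestamped forward type propagation in the spirit of
# REWARDS. The trace, locations, and type sink below are hypothetical; the real
# tool analyzes x86 binaries and also performs backward type resolution.
from dataclasses import dataclass, field

@dataclass
class TypeState:
    types: dict = field(default_factory=dict)   # location -> (type, timestamp)
    clock: int = 0

    def _stamp(self):
        self.clock += 1
        return self.clock

    def tag(self, loc, typ):
        """Record that loc holds a value of the given type at this instant."""
        self.types[loc] = (typ, self._stamp())

    def propagate(self, src, dst):
        """A move/copy lets dst share src's type (forward propagation)."""
        if src in self.types:
            self.types[dst] = (self.types[src][0], self._stamp())

state = TypeState()
state.tag("mem:0x1000", "unknown")           # a freshly accessed heap location
state.propagate("mem:0x1000", "reg:eax")     # eax now shares 0x1000's type
state.tag("reg:eax", "char*")                # resolved at a type sink (e.g., a known call)
print(state.types)   # timestamps help disambiguate later reuse of the same location
```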

Biometric Device Training and its Impact on Usability and User Performance

Stephen Cargo

The United States Department of Homeland Security currently requires a 10-fingerprint capture process at immigration checkpoints, implemented with a biometric fingerprint enrollment device. The current device has design deficiencies that lead to system usability issues, and users of the device at checkpoints come from many different countries around the globe and speak a number of different languages. Thus, there is a need for a universally understood instructional panel that can train all users, increasing device and system usability, user satisfaction, and overall system effectiveness. Problematic training methods and instruction panel designs were identified from previous research, and this information was used to design two new panels: a detailed version addressing all of the identified usability issues, and a simplified, minimalistic version designed with the additional concern of minimizing cognitive load. In the present study, these two designs (detailed versus minimalistic) are being tested and evaluated to determine which provides the greatest improvement over previous designs and the best overall usability of the biometric system.

CSRF Attacks against a Linksys Wireless Router

Ryan Poyar

CSRF attacks have been called the “sleeping giant” of web-based vulnerabilities. These attacks take advantage of the trust that a website has in a user’s browser. They can also take advantage of something inherent about the victim, such as the private network the victim is on or any other trust that the victim machine has. The CSRF attacks used in this research against wireless routers take advantage of all of these inherent properties of the victim’s machine; in doing so, an attacker can gain complete control of the wireless router. To help prevent this type of attack, browsers implement a Same Origin Policy (SOP), which only allows client-side code to interact with the origin (website) from which it came. Browsers also use a technique called DNS pinning to further prevent scripts from evading the SOP. On the other side of the fence, attackers attempt to use a technique called DNS rebinding to break the DNS pinning within the browser. This research explored the feasibility of performing these types of attacks using a number of different technologies and analyzed the consequences and significance of the attacks.

Cybersecurity: Aerospace System-of-Systems

Ethan Puchaty, Dr. Dan DeLaurentis

Assuring cybersecurity of mobile platforms such as unmanned aerial vehicles (UAVs) is a challenging problem in a dynamic environment. The System-of-Systems (SoS) group at Purdue seeks to meet these challenges by using an SoS approach to analyze and address cybersecurity issues for UAVs in both civil and defense applications. Current issues include protecting UAVs from cyberattacks in defense intelligence, surveillance, and reconnaissance (ISR) networks, as well as safely and securely integrating UAVs into the National Airspace System. In both cases, assuring cybersecurity is critical to maintaining a high level of net-centricity, network robustness, and overall SoS performance.

Dynamic Parallel Correlation Model for Intrusion Detection Alerts

Ayman E. Taha, Ismail Abdel Ghafar, Ayman M. Bahaaeldin, Hani M.K. Mahdi

Alert correlation is a promising technique in intrusion detection. It analyzes the alerts from one or more intrusion detection systems and provides a compact, summarized report and high-level view of attempted intrusions, which greatly improves security performance. A correlation component is a procedure that aggregates alerts according to certain criteria; the aggregated alerts may share common features or represent steps of pre-defined attack scenarios. Correlation approaches may consist of a single component or a comprehensive set of components. The effectiveness of a component depends heavily on the nature of the dataset analyzed, the order of the components strongly affects the performance of the correlation process, and not every component should be used for every dataset. This poster presents a dynamic parallel alert correlation model that improves the performance of the correlation process by dynamically selecting the proper components to use and their optimal order. The model minimizes the number of alerts processed by each component and the time required for the whole correlation process, regardless of the nature of the analyzed datasets.
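
The sketch below illustrates the general idea of dynamically choosing which correlation components to run and in what order, based on how much each is expected to shrink the alert stream. The components and the reduction estimate are placeholders, not the model's actual selection criteria.

```python
# Schematic sketch: run the correlation component expected to remove the most
# alerts first, and skip components that would not help on this dataset.
def dedup(alerts):
    """Collapse identical alerts (same signature, source, destination)."""
    return list({(a["sig"], a["src"], a["dst"]): a for a in alerts}.values())

def fuse_by_source(alerts):
    """Keep one alert per (signature, source) pair."""
    seen, out = set(), []
    for a in alerts:
        if (a["sig"], a["src"]) not in seen:
            seen.add((a["sig"], a["src"]))
            out.append(a)
    return out

def correlate(alerts, components, estimate_reduction):
    """Greedily apply the component with the highest estimated reduction next."""
    remaining = list(components)
    while remaining:
        best = max(remaining, key=lambda c: estimate_reduction(c, alerts))
        if estimate_reduction(best, alerts) <= 0:
            break                      # no remaining component is useful here
        alerts = best(alerts)
        remaining.remove(best)
    return alerts

alerts = [{"sig": "scan", "src": "10.0.0.1", "dst": "10.0.0.9"}] * 1000
estimate = lambda comp, alerts: len(alerts) - len(comp(alerts))  # crude estimate
print(len(correlate(alerts, [dedup, fuse_by_source], estimate)))
```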

Effects of Handedness on Fingerprint Quality and Performance

Mitch Mershon, Christy Blomeke, Shimon Modi, Stephen Elliott

As the most prevalent of the biometric technologies, fingerprint recognition has been thoroughly tested over the years. Many variables related to the user of the system, such as gender and age, have been shown to affect the quality and matching performance of fingerprint images. However, handedness, which is both a demographic and an interaction variable, has received little attention. This study compared NFIQ image quality scores and matching performance, measured through False Accept and False Reject Rates. Forty participants, 20 left-handed and 20 right-handed, presented six images from the index and middle fingers of each hand. Participants presented their fingers to three sensor technologies (capacitive touch, optical touch, and thermal swipe); to negate the effects of learning, each participant interacted with the fingerprint sensors in a random order. The results indicated no statistically significant difference in image quality based on handedness when comparing sensor technologies. However, when analyzed by both finger and sensor, significant differences were found in the right index finger on the capacitive sensor and the left middle finger on the optical sensor. The NFIQ distributions showed little to no variance between fingers on the same sensor; however, the means were closest to 1 for the thermal swipe sensor, and as the means shifted away from 1, the standard deviation noticeably increased. The matching error rates for the thermal swipe were all 0% except for the False Accept Rate (FAR) of the left hand of left-handed participants, which was 1.981%, all attributed to one participant. The optical touch rates were also all 0% except for a False Reject Rate (FRR) of 0.0589% for the left hand of right-handed individuals. The capacitive touch sensor showed consistent results, with the dominant hand outperforming the non-dominant hand in FAR and the non-dominant hand outperforming the dominant hand in FRR, though both differences were only thousandths of a percent.

Enforcing Spatial Constraints for Mobile RBAC Systems

Michael S. Kirkpatrick and Elisa Bertino

Multiple models have been proposed for spatially-aware extensions of role-based access control (RBAC). These models combine the administrative and security advantages of RBAC with the dynamic nature of mobile and pervasive computing systems. However, implementing systems that enforce these models poses a number of challenges, and as a solution we propose an architecture for designing such a system. The architecture is based on GEO-RBAC, an enhanced RBAC model that supports location-based access control policies by incorporating spatial constraints. Enforcing spatially-aware RBAC policies in a mobile environment requires addressing multiple challenges. The first is how to guarantee the integrity of a user’s location during an access request; our solution involves the use of Near-Field Communication (NFC) technology. Another is the need to verify that the user’s position continuously satisfies the location constraints. To capture these policy restrictions, we incorporate elements of the UCON_ABC usage control model in our architecture.
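
A minimal sketch of the kind of decision such a system makes is shown below: a permission is granted only if the requesting role holds it and an attested position (e.g., obtained via an NFC tap) falls inside the role's spatial extent. The role, region, and data model are illustrative assumptions, not the architecture's actual design.

```python
# Minimal sketch of a spatially-constrained RBAC check in the spirit of GEO-RBAC.
# Role extents, the point-in-region test, and the NFC "location proof" are
# illustrative placeholders.
from dataclasses import dataclass

@dataclass(frozen=True)
class Region:
    x0: float; y0: float; x1: float; y1: float
    def contains(self, x, y):
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

ROLE_EXTENTS = {"nurse": Region(0, 0, 50, 50)}   # role enabled only inside the ward
ROLE_PERMS   = {"nurse": {"read_patient_record"}}

def check_access(role, perm, location_proof):
    """Grant only if the role holds the permission AND the attested position
    lies inside the role's spatial extent; re-evaluate periodically to enforce
    the continuous (UCON-style) constraint."""
    x, y = location_proof
    return perm in ROLE_PERMS.get(role, set()) and ROLE_EXTENTS[role].contains(x, y)

print(check_access("nurse", "read_patient_record", (12.0, 30.0)))   # True
print(check_access("nurse", "read_patient_record", (80.0, 30.0)))   # False: outside extent
```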

Exploiting Overlays for Practical and Profitable Traffic Attraction

Jeff Seibert, Cristina Nita-Rotaru

The Internet is heading towards efficiency with the continued deployment of technologies such as Content Distribution Networks (CDN) and with the introduction of Peer-to-Peer (P2P) localization. These technologies reduce the amount of costly traffic that ISPs must send to other ISPs. While these advances will undoubtedly cause many residential ISPs to gain profit, the transit ISPs that carry traffic for them will actually lose revenue.

In this work, we identify different means by which transit ISPs can increase revenue by subverting CDN and P2P systems and thus causing traffic to flow through them on profitable paths. We evaluate how profitable such techniques are by collecting datasets from the popular BitTorrent P2P system and the world-wide Akamai CDN. We find that many ISPs are able to at least double their profitable P2P or CDN traffic, while some can increase it many more times.

Infection Quarantining for Wireless Networks Using Power Control

Rahul Potharaju, Cristina Nita-Rotaru, Saswati Sarkar, Santosh Venkatesh

Malware propagation is a serious threat in wireless networks. We investigate a defense strategy based on optimal control that quarantines the malware by reducing the communication range of mobile nodes. We further characterize how such a solution affects performance of a wireless network through simulations. This will lead to a better understanding and prediction of defense protocols that would reduce the speed of malware propagation within wireless networks.

Information Risk Management and IT Executives’ Structural Status in a Top Management Team

Juhee Kwon, Jackie Rees, Tawei Wang

We examine the effects of an IT executive’s structural status in Top Management Teams (TMTs) on information risk management. An organization’s increased dependency on IT has made it imperative for IT executives to adopt cross-functional roles across the enterprise. Therefore, IT executive representation and status in TMTs are necessary to strategically and operationally conduct liaison activities between IT units and other business units. However, there is little empirical research examining the effects of an IT executive’s structural status on information risk management. We employ conditional and unconditional logistic regressions to examine 1,462 firms from 2003 to 2008 with information breach reports and executive compensation data, augmented with IT internal controls information provided by external auditors. The results first demonstrate that high IT executive engagement and fair compensation based on behavior-based and outcome-based contracts are associated with reduced levels of both IT internal control weaknesses and reported information security breaches. Second, we find that outcome-based pay differences in TMTs have a significantly positive effect on effective information risk management, whereas IT executive turnover, a form of IT strategy discontinuity, has a significantly negative effect. As a comprehensive analysis spanning the accounting, human resources, and information systems literature, this study gives firms new insights into how to set IT executive compensation strategies as well as how to delegate authority and responsibility for ensuring the confidentiality, integrity, and availability of information assets.

JTAM: A Joint Threshold Administration Model

Ashish Kamra and Elisa Bertino

We propose a Joint Threshold Administration Model (JTAM) for performing certain critical and sensitive system operations such as user/role creation, user/role permission assignment, and so forth. The key idea is that a JTAM-enabled operation is incomplete unless it is authorized by at least k-1 additional administrators. We present the design details of JTAM based on a cryptographic threshold signature scheme. We have implemented JTAM in the PostgreSQL DBMS and demonstrate the execution of the JTAM-enabled CREATE USER SQL command. We also show how to prevent malicious modifications to JTAM-enabled operations.
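
The toy sketch below conveys the joint-administration idea with a simple endorsement counter: a sensitive operation stays pending until enough distinct administrators have approved it. JTAM itself realizes this with a cryptographic threshold signature scheme rather than a counter.

```python
# Toy sketch of jointly administered operations: a sensitive command remains
# pending until at least k - 1 additional administrators endorse it. JTAM
# realizes this with threshold signatures, not a plain approval counter.
class PendingOperation:
    def __init__(self, sql, initiator, k):
        self.sql = sql
        self.initiator = initiator
        self.k = k                      # endorsements required (initiator + k-1 others)
        self.endorsements = {initiator}

    def endorse(self, admin):
        if admin != self.initiator:
            self.endorsements.add(admin)

    def authorized(self):
        return len(self.endorsements) >= self.k

op = PendingOperation("CREATE USER alice", initiator="dba1", k=3)
op.endorse("dba2")
print(op.authorized())                  # False: still needs one more administrator
op.endorse("dba3")
print(op.authorized())                  # True: k - 1 = 2 additional endorsements received
```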

Network Security Support

Derril Lucci

The problem with most security tools is that they provide status information for a specific point in time, but no way to compare that status with previous states so that changes can be identified. Nmap is a network scanning utility that provides detailed information about the hosts and services available on a network. It is used to automatically map hosts and services at Purdue and at other universities as part of the Security Support Server Project (S3P). An ideal tool for comparing two Nmap outputs is ndiff, which reports the differences in a format similar to that of the Unix command diff. However, the output can be cumbersome when the differences are significant. The aim is to simplify this output so that the system administrator can quickly identify changes in the network over periods of one, seven, and thirty days. The summarized results can be displayed on a network security status webpage and also emailed to the administrator.
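
A minimal sketch of the summarization step is shown below. It assumes a simple diff-style text in which added lines begin with "+" and removed lines with "-"; actual ndiff output is richer, so the parsing here is only illustrative.

```python
# Sketch of condensing a diff of two scans into a short summary for a status
# page or email. Assumes a simple unified-diff-like text; real ndiff output
# differs in detail, so treat this as an illustration only.
def summarize_diff(diff_text):
    added   = [l[1:].strip() for l in diff_text.splitlines()
               if l.startswith("+") and not l.startswith("+++")]
    removed = [l[1:].strip() for l in diff_text.splitlines()
               if l.startswith("-") and not l.startswith("---")]
    return {"new": added, "gone": removed,
            "summary": f"{len(added)} additions, {len(removed)} removals"}

diff_text = """\
 host 128.210.0.5
+22/tcp open ssh
-23/tcp open telnet
"""
report = summarize_diff(diff_text)
print(report["summary"])          # "1 additions, 1 removals"
```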

Nudging the Digital Pirate

Matthew Hashim, Karthik Kannan, Duane Wegener

Anecdotal evidence from the software industry shows that digital pirates sometimes change their illegal behavior on a case-by-case basis and become paying customers of the same good they already possess. Pilot data for this project suggest that consumers are more likely (a significant mean shift) to remain a pirate rather than convert to a paying customer if the subject is reminded of their prior non-piracy behavior. Conversely, if a consumer is reminded of past piracy behavior, they are equally likely to pirate a new good in the future, consistent with the likelihood of piracy in the previously mentioned case. This suggests consumers may be lured away by the momentary temptation of piracy, overriding their larger goal of not being a pirate. This is a work in progress. Our continued objective is to explain the underlying decisions that subjects make as they are nudged away from piracy, and to reinforce policies that industry should use to encourage this behavior.

On the Practicality of Cryptographic Defenses Against Pollution Attacks in Wireless Network Coding

Andrew Newell, Jing Dong, Cristina Nita-Rotaru

Network coding introduced a new paradigm for designing network protocols for multi-hop wireless networks. Numerous practical systems based on network coding have been proposed in recent years demonstrating the wide range of benefits of network coding such as increased network throughput, reliability, and energy efficiency. However, network coding systems are inherently vulnerable to a severe attack, known as packet pollution, which presents a key obstacle to the deployment of such systems. Consequently, numerous schemes have been proposed to defend against pollution attacks. A major class of such defense mechanisms relies on cryptographic techniques.

We provide the first systematic evaluation of existing cryptographic techniques for defending against pollution attacks. We first classify the cryptographic schemes based on their underlying cryptographic primitives (signature-based, hash-based, and MAC-based), security basis (DLP over a multiplicative group, DLP over an ECC group, and PRF), and security steps (sign, verify, and combine). Then, we define a unifying metric framework to compare the schemes. Lastly, we perform detailed analytical and experimental evaluations of a representative set of the schemes. Our results show that all of the schemes examined have serious limitations: they incur prohibitive computation overhead or high communication overhead, or are insecure in the presence of multiple attackers. We conclude that while many cryptographic proposals for addressing pollution attacks exist, none of them is practical for use in wireless networks.

Optimal Finger Combinations in Multi-Finger Biometric Systems

Marvin Michels, Shimon Modi, Stephen Elliott

Single-finger fingerprint biometric systems require less time to verify a user’s identity than a ten-print fingerprint system, since a ten-print system has more fingers to verify. There is a need to understand how fingerprint recognition systems can balance the speed of a single-print system with the robustness of a ten-print system by using a combination of fingers. The goal of this research was to find the number of fingers that provides the best trade-off between verification speed and matching errors. In this study, images of all ten fingers were collected from seventy individuals, and the combinations of images were judged on matching performance using both the False Accept Rate (FAR) and False Reject Rate (FRR). The thumb, index, and middle fingers of both hands had the highest quality scores. All fingers had an FRR of 0.0% except for the right ring finger, which had an FRR of 0.5556%. The FAR was 0.0% in all cases except for the left index (0.524%) and the right ring (0.0087%). Ten combination levels were examined (1 finger, 2 fingers, up to 10 fingers), and the optimal combinations were identified for each level. The least desirable combinations had FRRs between 0.005% and 0.010%. The ideal number of fingers to use in a biometric system falls between four and six fingers, inclusive.
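
The sketch below shows how finger subsets could be scored exhaustively for such a trade-off. The per-finger error rates are placeholders loosely based on the figures above, and the AND-rule fusion with an independence assumption is an illustrative simplification, not the matching procedure actually used in the study.

```python
# Sketch of searching for the finger subset with the best error trade-off.
# Per-finger rates (fractions, not percentages) are placeholders; fusion is a
# simple AND rule under an independence assumption.
from itertools import combinations
from math import prod

FRR = {"R_index": 0.0, "R_middle": 0.0, "R_ring": 0.005556, "L_index": 0.0}
FAR = {"R_index": 0.0, "R_middle": 0.0, "R_ring": 0.000087, "L_index": 0.00524}

def fused(fingers):
    """AND-rule fusion: a genuine user is rejected if any finger is rejected;
    an impostor is accepted only if every finger is (independently) accepted."""
    frr = 1 - prod(1 - FRR[f] for f in fingers)
    far = prod(FAR[f] for f in fingers)
    return frr, far

for k in range(1, len(FRR) + 1):
    # Score each k-finger combination by a crude combined error (FRR + FAR).
    best = min(combinations(FRR, k), key=lambda c: sum(fused(c)))
    print(k, best, fused(best))
```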

Partitioning Network Experiments for the Cyber-Range

Wei-Min Yao, Sonia Fahmy

Understanding the behavior of large-scale systems is challenging, but essential when designing an Internet protocol or application. Since it is often infeasible or undesirable to perform experiments directly on the Internet, simulation, emulation, and testbed experiments are important techniques for researchers to investigate large-scale systems.

In this paper, we consider data-plane experiments without dynamic routing. We explore platform-independent mechanisms to partition a large network experiment into a set of smaller experiments that can be sequentially executed. Each of the smaller experiments is conducted on a given number of experimental nodes, e.g., the available machines on an emulation or real testbed. The results of the smaller experiments are integrated to approximate the results that would have been obtained from the original experiment. We propose the use of greedy minimum weight partitioning on a flow dependency graph, with mechanisms to identify uncongested links, and an iterative experimentation process in case of interactions among partitions. We quantify the loss of fidelity using ns-2 experiments on Rocketfuel topologies with different traffic mixes. We show that our techniques can preserve performance characteristics in cases where the network is lightly loaded or traffic is open-loop. In experiments with closed-loop traffic and many congested links, we find that a tradeoff exists between the simplicity of the partitioning and experimentation process, and the loss of experimental fidelity.
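
A much-simplified sketch of greedy weight-based partitioning is given below: each flow is placed in the part that adds the least dependency weight to the cut, subject to a capacity bound reflecting the available testbed nodes. The dependency-graph construction, uncongested-link detection, and iterative re-experimentation described above are omitted.

```python
# Much-simplified sketch of greedily splitting a flow dependency graph into
# parts that fit the available experimental nodes. Edge weights stand for how
# strongly two flows interact; the real technique does considerably more.
import math

def greedy_partition(nodes, edge_weight, num_parts):
    """Place each flow into the part that minimizes the weight of dependency
    edges cut, subject to a simple per-part capacity."""
    parts = [set() for _ in range(num_parts)]
    capacity = math.ceil(len(nodes) / num_parts)

    def added_cut(n, p):
        # Weight of edges from n to flows already assigned to *other* parts.
        return sum(edge_weight.get((n, m), 0) + edge_weight.get((m, n), 0)
                   for q in parts if q is not p for m in q)

    for n in nodes:
        candidates = [p for p in parts if len(p) < capacity]
        min(candidates, key=lambda p: added_cut(n, p)).add(n)
    return parts

flows = ["f1", "f2", "f3", "f4"]
weights = {("f1", "f2"): 5, ("f3", "f4"): 4, ("f2", "f3"): 1}
print(greedy_partition(flows, weights, num_parts=2))   # cuts only the weight-1 edge
```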

Preserving Privacy in Policy-Based Content Dissemination

Mohamed Nabeel, Ning Shang, Elisa Bertino

We propose a novel scheme for selective distribution of content, encoded as documents, that preserves the privacy of the users to whom the documents are delivered and is based on an efficient and novel group key management scheme. Our document broadcasting approach is based on access control policies specifying which users can access which documents, or subdocuments. Based on such policies, a broadcast document is segmented into multiple subdocuments, each encrypted with a different key. In line with modern attribute-based access control, policies are specified against identity attributes of users. Our approach is the first to achieve the following three strong privacy and security guarantees in such a push-based system: (1) users are granted access to a specific document, or subdocument, according to the policies; (2) the document publisher does not learn the values of the identity attributes of users; and (3) the document publisher does not learn which policy conditions are verified by which users. The latter two guarantees together prevent even inferences about the values of identity attributes. Moreover, the key management scheme on which the proposed broadcasting approach is based is efficient in that it does not require sending the decryption keys to the users along with the encrypted document. Users are able to reconstruct the keys to decrypt the authorized portions of a document based on subscription information they have received from the document publisher. The scheme also efficiently handles new user subscriptions and revocation of subscriptions.

There is a real need for such a scheme, especially given government-backed plans to make health care records and other sensitive information available online. Further, it is critical to protect identity attributes, which encode privacy-sensitive information, in the face of increasing insider threats and third-party cloud computing initiatives.

Preventing Technology-Induced Errors in Healthcare: Usability Engineering

A.W. Kushniruk, E.M. Borycki, J.G. Anderson, M.M. Anderson

This poster describes how usability engineering can be used to study and predict technology-induced error in healthcare. The approach involves (1) small scale clinical simulations to assess the use of new information technology, (2) the analysis of data from the simulations to identify relationships between usability issues and the occurrence of errors, and (3) the construction of a computer simulation model to explore the potential impact of the technology under study over time and across user populations.

Printer Security and Forensics

Aravind K. Mikkilineni, Nitin Khanna, Edward J. Delp

Several methods exist for identifying the printer that produced a printed document. We have developed a system that performs printer identification using intrinsic signatures of the printers. Because an intrinsic signature is tied directly to the electromechanical properties of the printer, it is difficult to forge or remove, and these signatures are shown to be robust against several attack models. In addition, we have developed methods for embedding information into a text document using extrinsic signatures. Using these techniques we have been able to embed over 12 kbits of information imperceptibly into a page of 12-point text.

Privacy in Sociotechnical Realms - Semantic Network Analysis of Discourses

Preeti Rao, Dr. Lorraine Kisselburgh, Purdue University

Ubiquitous 21st-century technologies create an increasingly transparent society, leading some to suggest that the meanings of privacy are less relevant in these sociotechnical realms, particularly among millennial youth. The design of sociotechnical systems is clearly influenced by shifting social norms and values of privacy, and discursive research provides unique insights into the shifting meanings of privacy in 21st-century sociotechnical realms. The goal of this project is to provide an empirical examination of the discourses about privacy’s meaning to young adults, particularly college-aged youth, using semantic network analysis methods. In particular, we examine whether the values of privacy differ by gender and whether they shift developmentally during the college years. Participants (N=61) were 33 female (54%) and 28 male (46%) undergraduate students (average age 20.17, SD = 1.32). In-depth, semi-structured interviews were conducted using open-ended questions and hypothetical scenarios. A total of 28 hours of recorded text were transcribed, yielding 371 pages of single-spaced text. Analysis of the text was conducted using Leximancer semantic network software to generate conceptual maps, knowledge pathways, and concept co-occurrences. Results indicate that privacy is meaningful to young adults, yet in varied ways: males more typically articulate privacy in material and informational terms, while females tend to articulate privacy in relational terms. A shift in the meaning of privacy is also seen across young adults’ developmental phases: younger students showed more concern for relational aspects of privacy, whereas upperclassmen defined privacy in material and informational terms. Thus, as students near graduation, material consequences become more salient and shift concerns about privacy away from the relational issues more common in the early college years. This study expands understanding of discursive meanings of privacy in sociotechnical realms, providing key insights that can be incorporated into the design of privacy-enhanced sociotechnical systems. It also opens up a new perspective for technology and policy design that considers relational aspects of privacy alongside material privacy concerns.

PSAC: Privilege State-based Access Control

Ashish Kamra and Elisa Bertino

We propose an access control model specifically developed to support fine-grained decision semantics for the results of access control decisions. To achieve such semantics, the model introduces the concepts of privilege states and orientation modes in the context of a role-based access control system. The central idea in our model is that privileges assigned to a user or role have a state attached to them, resulting in a privilege state-based access control (PSAC) system. In this paper, we present the design details and a formal model of PSAC tailored to database management systems (DBMSs). PSAC has been designed to also take into account the role hierarchies that are often present in the access control models of current DBMSs. We have implemented PSAC in the PostgreSQL DBMS and discuss the relevant implementation issues. We also report experimental results concerning the overhead of access control enforcement in PSAC; these results confirm that our design and algorithms are very efficient.
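
The sketch below illustrates the general flavor of attaching a state to each privilege so that the access decision can be richer than a plain grant/deny. The state names and their semantics are hypothetical examples, not PSAC's actual state set or orientation modes.

```python
# Illustrative sketch of attaching a state to each (role, privilege) pair so the
# decision outcome is finer than grant/deny. The states below are hypothetical
# examples chosen for illustration only.
from enum import Enum

class PrivState(Enum):
    GRANT   = "grant"     # allow the action
    TAINT   = "taint"     # allow the action but flag/audit it
    SUSPEND = "suspend"   # temporarily withhold the privilege
    DENY    = "deny"      # refuse the action

privileges = {("analyst", "SELECT ON payroll"): PrivState.TAINT,
              ("analyst", "UPDATE ON payroll"): PrivState.SUSPEND}

def decide(role, priv):
    """Return both the yes/no outcome and the privilege state behind it."""
    state = privileges.get((role, priv), PrivState.DENY)
    allowed = state in (PrivState.GRANT, PrivState.TAINT)
    return allowed, state

print(decide("analyst", "SELECT ON payroll"))   # (True, PrivState.TAINT)
print(decide("analyst", "UPDATE ON payroll"))   # (False, PrivState.SUSPEND)
```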

Reliability on the Poly^2 network

Ankur Chakraborty

Availability is often not a primary concern for frameworks meant to provide security, and Poly^2 is one such framework. It provides a hardened foundation, based on secure design principles, for running mission-critical services. While the primary focus of Poly^2 until now has been fault isolation, we are now attempting to add recovery as well.

Risk Management in a Strategic Information Security Management (ISM) framework

James Goldman & Suchit Ahuja

An integrated framework for strategic Information Security Management (ISM) was developed in Phase I of this research study. This integrated framework has been presented at the IEEE Symposium on Security & Privacy (2009, Oakland, CA) and the 4th International Workshop on Business/IT Alignment & Interoperability (2009, Amsterdam). The framework addresses organizational information security requirements as well as alignment between business, IT, and information security strategies. This is achieved via the integrated use of the Control Objectives for Information Technology (COBIT) and Balanced Scorecard (BSC) frameworks, in conjunction with the Systems Security Engineering Capability Maturity Model (SSE-CMM) as a tool for performance measurement and evaluation, in order to ensure a continuous improvement approach for successful sustainability.

The lack of a flexible risk management methodology that can be integrated with an organization's Enterprise Risk Management (ERM) processes was identified as a major gap in the strategic ISM framework. To address this issue, the potential use of the Risk IT framework is proposed. Risk IT is an emerging risk management framework (from ISACA) that is based on COBIT processes. It consists of three domains: Risk Governance (RG), Risk Evaluation (RE), and Risk Response (RR). Risk IT focuses on business objectives, top-down traceability, performance maturity, and flexibility for integration. This study highlights areas of potential integration between Risk IT and the previously designed strategic ISM framework, thereby addressing the risk management gap while increasing the framework's strategic value and improving its usability for ISM purposes.

Role Mining and Policy Misconfigurations

Ian Molloy, Ninghui Li, Jorge Lobo, Yuan Qi, Luke Dickens

There has been increasing interest in automatic techniques for generating roles for role-based access control, a process known as role mining. Most role mining approaches assume the input data is clean and attempt to optimize the RBAC state. We examine role mining with noisy input data and suggest dividing the problem into two steps: noise removal and candidate role generation. We introduce an approach that uses (non-binary) rank-reduced matrix factorization to identify noise and experimentally show that it is effective at identifying noise in access control data. User and permission attributes can further be used to improve accuracy. Next, we show that our two-step approach is able to find candidate roles that are close to the roles mined from noise-free data. This method performs better than mining the noisy data directly and offers the administrator increased control over the noise removal and candidate role generation phases. We note that our approach is applicable outside role engineering and may be used to identify errors or predict missing values in any access control matrix.
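
As a simplified illustration of the noise-removal step, the sketch below applies a truncated SVD (one form of rank-reduced factorization) to a 0/1 user-permission matrix and flags entries where the low-rank reconstruction strongly disagrees with the observed assignment. The paper's actual (non-binary) factorization and its attribute-based refinements differ from this.

```python
# Sketch of flagging likely noise in a user-permission matrix via a rank-k
# approximation (truncated SVD through numpy). Simplified illustration only.
import numpy as np

def flag_noise(UP, rank, threshold=0.5):
    """Return positions where the low-rank reconstruction disagrees strongly
    with the observed 0/1 assignment, i.e., candidate noise entries."""
    U, s, Vt = np.linalg.svd(UP.astype(float), full_matrices=False)
    approx = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
    return np.argwhere(np.abs(approx - UP) > threshold)

# Three users share the same role-like permission block; the extra grant at
# (2, 3) does not fit the block structure and stands out in the rank-1 model.
UP = np.array([[1, 1, 1, 0],
               [1, 1, 1, 0],
               [1, 1, 1, 1]])
print(flag_noise(UP, rank=1))   # [[2 3]]
```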

The Impact of Instructional Methods on the Biometric Data Collection Agent

Benny Senjaya, Stephen Elliott, Eric Kukula

Biometric system deployments can be classified in two ways: attended and non-attended. This study focuses only on attended systems, which rely on the skills and knowledge of a data collection agent to operate the user interface as well as the biometric device during the capture process; non-attended systems rely only on the subject to provide the sample(s) without any supervision from an agent. An example of a data collection agent is military personnel using a mobile iris recognition system to identify people of interest in areas of conflict. The objective of this study is to evaluate the impact of instructional methods on the agent collecting data with a mobile iris recognition system. Participants will serve as the agent, will be screened at the beginning of the study to assess their learner type (verbal or visual), and will then be randomly given one type of instructional method for training. Time will be recorded during the data collection process and used as the measurement for the study. This research aims to provide insight into the following two questions:

  1. Does the instructional method have any effect on the training?
  2. Does one type of learner perform better in the data collection process given a particular instructional method?

TIAMAT: A Tool for Interactive Analysis of Microdata Anonymization Techniques

Chenyun Dai, Gabriel Ghinita, Elisa Bertino, Ji-Won Byun, Ninghui Li

Releasing detailed data (microdata) about individuals poses a privacy threat due to the presence of quasi-identifier (QID) attributes such as age or ZIP code. Several privacy paradigms have been proposed that preserve privacy by placing constraints on the values of released QIDs. However, in order to enforce these paradigms, data publishers need tools to assist them in selecting a suitable anonymization method and choosing the right system parameters. We developed TIAMAT, a tool for the analysis of anonymization techniques that allows data publishers to assess the accuracy and overhead of existing anonymization techniques. The tool performs interactive, head-to-head comparison of anonymization techniques, as well as QID change-impact analysis. Other features include collection of attribute statistics, support for multiple information loss metrics, and compatibility with commercial database engines.

Using Forms for Information Hiding and Coding in Electrophotographic Documents

Maria V. Ortiz Segovia, George T.-C. Chiu, Jan P. Allebach

Common forensic tasks such as verifying the ownership, authenticity, and copyright of a document can be accomplished through the use of imperceptible marks or signatures inserted during the printing process. Prior research has investigated techniques to embed and recover extrinsic information from electrophotographic text documents and halftone images, but in the absence of suitable halftone patches or text characters, another strategy for embedding signatures in the document is needed. In this study, we propose using the frames or borders that surround the contents of security documents such as bank statements, event tickets, and boarding passes. This new embedding context broadens the embedding domain and also offers the possibility of using error-correcting coding techniques from the area of communications. Experimental results show that the addition of coding methods to the embedding scheme improves the embedding capacity and provides more robustness to our system.

Using User-Specific Behavioral Patterns to Describe Users in a Closed Web Environment

Jurica Seva, Ji Won Kim, Robert W. Proctor

The purpose of this experiment is to investigate specific behavioral patterns in a closed web environment, for example “cnn.com”. The goal is to determine whether there is a user-specific behavioral pattern and whether that pattern could be used in a web environment to identify an individual. Participants were asked to sit in front of a computer and browse through the information presented on cnn.com. The following data were recorded: date, time, milliseconds between actions, the active application, the window name, the system message, the mouse coordinates, and the distance between the current and previous mouse/cursor positions. The experiment consists of 10 sessions of approximately 30 minutes each.
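
A minimal sketch of one logged record, matching the fields listed above, might look like the following; the field names, types, and example values are illustrative assumptions rather than the study's actual logging schema.

```python
# Minimal sketch of one logged interaction event corresponding to the fields
# listed above. Field names, types, and example values are illustrative only.
from dataclasses import dataclass

@dataclass
class InteractionEvent:
    date: str                  # e.g. "2010-03-22"
    time: str                  # e.g. "14:05:07"
    ms_since_last_action: int
    active_application: str
    window_name: str
    system_message: str
    mouse_x: int
    mouse_y: int
    distance_from_last: float  # cursor distance from the previous position

event = InteractionEvent("2010-03-22", "14:05:07", 180, "firefox",
                         "CNN.com - Breaking News", "mouse_down",
                         412, 287, 53.6)
print(event)
```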

Web-Based Malware Propagation

James E. Goldman, Cory Q. Nguyen

The Internet is becoming an increasingly popular attack vector used by cyber criminals to infect computers for malicious purposes. It is estimated that over 10% of legitimate websites are infected with malware. The purpose of studying web-based malware is to understand how malware propagates through the web and the techniques and tools cyber criminals use to successfully infect computers. By understanding the varying attack vectors, the appropriate detection and prevention mechanisms can be employed to eliminate or reduce the threat of malware infections.

Successful web-based malware propagation consists of three elements: Propagation Vector, Propagation Apparatus, and Propagation Technique. For malware to propagate, it requires an infrastructure, or “pipeline,” through which the malware propagation vehicle can travel. The propagation vehicle is made up of the tools and techniques that bring a victim to the malicious site and successfully infect their computer. The absence of any one of these three elements renders web-based malware impotent.

The ultimate goal of this study is to eliminate and prevent the successful propagation of web-based malware. However, cyber criminals currently have the advantage due to the lack of detection tools and understanding of web-based malware propagation.

