CERIAS - Center for Education and Research in Information Assurance and Security

Purdue University - Discovery Park

Posters & Presentations 2015


Assured Identity and Privacy

A Taxonomy of Privacy-protecting Tools to Browse the World Wide Web

Kelley Misata, Raymond Hansen, Baigan Yang


There is growing public concern about big data and intelligence surveillance of unsuspecting Internet users, and an increase in public conversation about what privacy really means in the digital realm. Although technologies have been developed to help the general public protect their privacy, average users find these tools complex and difficult to decipher. This research aims to cut through some of these complexities by reviewing six publicly recognized technologies promoted as helping users protect their privacy while browsing the web. The scope is deliberately broad in order to touch on the important aspects of each technology, including its promises, privacy realities, technical construction, ease of use, and the drawbacks average users should be aware of before using it.

Data Spillage in Hadoop Clusters

Joe Beckman, Tosin Alabi, Dheeraj Gurugubelli


Data spillage is the undesired transfer of classified information onto an unauthorized compute node or storage medium. The loss of control over sensitive and protected data can become a serious threat to business operations and national security (NSA Mitigation Group, 2012). We seek to understand whether classified data leaked, through user error, into an unauthorized Hadoop Distributed File System (HDFS) can be located, recovered, and removed completely from the server.

Deception in Computing - Where and how it has been used

Jeffrey Avery, Christopher Gutierrez, Mohammed Almeshekah, Saurabh Bagchi, Eugene H. Spafford


Deception is defined as “presenting an altered view of reality” and has been used by mankind for thousands of years to influence others’ behavior and decision making. More recently, deception has also been applied to computing in a variety of areas, such as human-computer interaction and digital communities. This work surveys different areas of computing to determine where and how they use deception. One area we study in particular is how deception is applied to security practices. This work also shows that while security is a growing field, deceptive practices have not been as readily adopted to improve defense.

FIDO Password Replacement: Spoofing a Samsung Galaxy S5 and PayPal Account Using a Latent Fake Fingerprint

Rylan Chong, Chris Flory, Jim Lerums, David Long, Prof. Melissa Dark, and Prof. Chris Foreman


Fingerprints are the most common biometric means of authentication. This project determined whether the Samsung Galaxy S5 and PayPal FIDO Ready implementation was vulnerable to latent fake fingerprint spoofing using Brown’s (1990) and Smith’s (2014) approaches. Latent fake fingerprints could allow an illegitimate user access to secure information.


Melissa Dark


The INSuRE project is an attempt to pilot and scale a sustainable research network that:

1. Connects institution-level resources, University enterprise systems, and national research networks;
2. Enables more rapid discovery and recommendation of researchers, expertise, and resources;
3. Supports development of new collaborative science teams addressing new or existing research challenges;
4. Exposes and engages graduate students in research activity of national priority at participating institutions;
5. Provides development and sharing of tools that support research; and
6. Facilitates evaluation of research, scholarly activity, and resources, especially over time.

Malware in Medical Devices

Susan Fowler


Health care facilities are increasingly adopting computers and medical devices into patient care regimens and therapies. Medical devices have become popular for many purposes, including prolonged managed care through implantable medical devices (IMDs). Wireless communications are becoming popular for these IMDs, as well as for networking medical devices in a clinical setting. Along with these advances in technology, security and privacy must be considered to ensure patient privacy and safety. Malware can be introduced in many of the same ways traditional computer systems suffer compromises, with wireless technology compounding these vulnerabilities. Regulations and practices must recognize these threats to the security, availability, and privacy of both health care entities and patients. Keywords: Medical device, malware, information security

Monitoring DBMS Activity for Detecting Data Exfiltration by Insiders

Elisa Bertino, Lorenzo Bossi, Syed Rafiul Hussain, Asmaa Sallam


Data represents one of the most important assets of an organization. The undesired release (exfiltration) of sensitive or proprietary data outside the organization is one of the most severe threats posed by insider cyber-attacks. A malicious insider who has the proper credentials to access organizational databases may, over time, send data outside the organization’s network through a variety of channels, such as email, file transfer, web uploads, or specialized HTTP requests that encapsulate the data. Existing security tools for detecting cyber-attacks focus on protecting the boundary between the organization and the outside world. While such tools may be effective in protecting an organization from external attacks, they are less suitable when data is transmitted from inside the organization to the outside by an insider who has the proper credentials to access, retrieve, and transmit it. The “Monitoring DBMS Activity for Detecting Data Exfiltration by Insiders” (MDBMS) project is a research effort to develop mechanisms that detect and counter efforts by insiders to extract and exfiltrate sensitive data from government and enterprise systems.
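The core detection idea, flagging retrievals that deviate sharply from a user's historical access pattern, can be sketched as follows. This is an illustrative toy, not the MDBMS project's actual mechanism; the profile statistics (mean and standard deviation of result sizes) and the z-score threshold are assumptions made for the example.

```python
from statistics import mean, stdev

def build_profile(history):
    """Summarize a user's past query result sizes (rows returned)."""
    return {"mean": mean(history), "stdev": stdev(history)}

def is_anomalous(profile, result_size, z_threshold=3.0):
    """Flag a query whose result size deviates strongly from the profile."""
    spread = profile["stdev"] or 1.0  # avoid division by zero
    z = (result_size - profile["mean"]) / spread
    return z > z_threshold

profile = build_profile([120, 95, 110, 130, 105, 115])
print(is_anomalous(profile, 118))     # False: a typical retrieval
print(is_anomalous(profile, 50_000))  # True: a bulk-extraction-sized result
```

In a real monitor this profile would of course cover many more signals (tables touched, time of day, destination of subsequent transfers); result-set size alone is used here only to keep the sketch short.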

Privacy-Enhancing Features of Identidroid

Daniele Midi, Oyindamola Oluwatimi, Bilal Shebaro, Elisa Bertino


As privacy is a major concern for mobile systems today, network anonymizers are widely available on smartphone systems such as Android. However, in many cases applications are still able to identify the user and the device by means other than the IP address. Our work provides two solutions that address this problem by providing application-level anonymity. The first solution shadows sensitive data that can reveal the user’s identity. The second solution dynamically revokes Android application permissions associated with sensitive information at run-time. In addition, both solutions offer protection from applications that identify their users through traces left in the application’s data storage or by exchanging identifying data messages. We developed IdentiDroid, a customized Android operating system, to deploy these solutions, and built IdentiDroid Profile Manager, a profile-based configuration tool for setting different configurations for each installed Android application.
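The first solution's shadowing idea can be sketched in a few lines. This is a conceptual illustration only, not IdentiDroid code; the package names, profile format, and identifier values are hypothetical, and deriving a stable per-app fake value from a hash is just one possible design choice.

```python
import hashlib

# Hypothetical per-application profiles: which identifiers to shadow.
PROFILES = {
    "com.example.maps": {"shadow": {"IMEI", "ANDROID_ID"}},
    "com.example.notes": {"shadow": set()},
}

# Hypothetical device identifiers.
REAL_IDENTIFIERS = {"IMEI": "356938035643809", "ANDROID_ID": "9774d56d682e549c"}

def get_identifier(app, name):
    """Return the real identifier, or a shadow value if the app's profile says so."""
    if name in PROFILES.get(app, {}).get("shadow", set()):
        # A stable but unlinkable fake value per (app, identifier) pair,
        # so the app still works but cannot track the real device.
        return hashlib.sha256(f"{app}:{name}".encode()).hexdigest()[:16]
    return REAL_IDENTIFIERS[name]

print(get_identifier("com.example.notes", "IMEI"))  # real value
print(get_identifier("com.example.maps", "IMEI"))   # shadowed value
```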

Private Information Retrieval

Michael Kouremetis, Craig West


Private Information Retrieval (PIR) is an important subject in the field of information retrieval. PIR allows a client to retrieve a record from a database server without revealing to the server which record was retrieved. The goal of our project is to implement a Private Information Retrieval proof of concept utilizing a robust protocol by I. Goldberg (Goldberg’s Protocol). By implementing a proof of concept, we will examine the underlying structures and cryptographic protocols used in Private Information Retrieval. With a greater understanding of PIR and its underlying protocols, we could potentially help develop systems that need privacy-preserving queries, an extension beyond simple index retrieval.
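The flavor of PIR can be shown with a toy two-server XOR scheme (information-theoretic PIR with two non-colluding servers). This is not Goldberg's robust protocol, which additionally tolerates misbehaving servers, but it illustrates the underlying structure: each server sees a uniformly random query and learns nothing about which index the client wants.

```python
import secrets

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def query(n, index):
    """Client: build two random-looking bit vectors whose XOR selects one record."""
    q1 = [secrets.randbelow(2) for _ in range(n)]
    q2 = list(q1)
    q2[index] ^= 1  # the two queries differ only at the target index
    return q1, q2

def answer(db, q):
    """Server: XOR together the records selected by the query bits."""
    acc = bytes(len(db[0]))
    for rec, bit in zip(db, q):
        if bit:
            acc = xor(acc, rec)
    return acc

db = [b"rec0", b"rec1", b"rec2", b"rec3"]
q1, q2 = query(len(db), 2)
print(xor(answer(db, q1), answer(db, q2)))  # b'rec2'
```

Because every record except the target appears in both answers (or in neither), XORing the two answers cancels everything but the requested record, while each individual query is uniformly random.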

The Deep Web: An Exploratory Study of Social Networks

Rachel Sitarz and Kelly Cole


The purpose of the current study was to investigate the reasons one would use an anonymous .onion social network. The study surveyed users on various Tor social networks (n=200) through an unstructured, open-ended questionnaire. Data were analyzed using a thematic analysis method. The top five themes and demographics were recorded and presented below.

End System Security

Car Hacking: Determining the Relative Risk of Vehicle Compromise

David Hersh


In recent years, cars have gone through a technological renaissance, with each generation containing more features than the previous one. One feature becoming increasingly common is built-in wireless connectivity, such as Bluetooth, Wi-Fi, and 3G. While this added functionality benefits the consumer, it also opens a new avenue of attack for hackers and criminals. Unlike a personal computer, a hacked car carries much higher potential negative consequences. If an adversary can wirelessly exploit a car, they may be able to eavesdrop on conversations, turn off warning lights, and even control brakes and steering. Although multiple groups of researchers have shown that there are major security problems in common consumer vehicles, there is little experimental research on vehicle security. To encourage further research in this area, this work introduces a methodology for assessing the relative risk level of a vehicle (i.e., the risk associated with adding specific features to a vehicle and how they are implemented).

Data Confidentiality and Integrity

Scott Carr, Mathias Payer


The root cause of most security vulnerabilities is memory corruption. Previous research focused on preventing the memory corruptions attackers use to change a program’s intended control-flow. As these protections become more refined and widely deployed, attackers will resort to non-control data attacks. Non-control data attacks do not divert the intended control-flow, but simply read or write data in unintended ways by abusing a temporal or spatial memory safety error or a type error. A recent example is the Heartbleed bug, where a buffer over-read allows an attacker to read the server’s private key. This example shows that non-control data attacks can be just as damaging as control-flow hijack attacks. Data Confidentiality and Integrity (DCI) augments the C programming language with a small set of annotations that allow the programmer to select protected data types. The compiler and runtime system prevent illegal reads and writes to variables of these types. The programmer selects types that contain information such as password lists, cryptographic keys, or identification tokens. Allowing the programmer to choose the protected data reduces overhead. Total memory protection mechanisms have been proposed, but have not been widely adopted due to prohibitively high overhead. With DCI, the programmer can specify the subset of security-critical data and pay the protection overhead cost of only that subset, rather than all the data in the program. Our prototype shows the practicality of our approach: it effectively protects benchmarks and large programs.

PD3: Policy–based Distributed Data Dissemination

Rohit Ranchal, Denis Ulybyshev, Pelin Angin, Bharat Bhargava


Modern distributed systems (such as composite web services and cloud solutions) comprise a number of hosts that collaborate, interact, and share data. One of the main requirements of these systems is policy-based distributed data dissemination (PD3). In the PD3 problem, the data owner wants to share data with a set of hosts, each of which is authorized to access only a subset of the data. The data owner can directly interact with only a subset of the hosts and relies on them to disseminate data to the others. To ensure correct delivery of the appropriate data to each host, each host must relay the entire data set even though it is authorized for only a subset. We provide a formal description of the problem and propose a data-centric approach to address PD3. The approach enables policy-based secure data dissemination and protects data throughout their lifecycle. It is independent of trusted third parties, does not require source availability, and can operate in unknown environments. The approach is demonstrated through its application to composite web services.

SNIPE: Signature Generation for Phishing Emails

Jeff Avery, Christopher Gutierrez, Paul Wood, Raffaele Della Corte, Jon Fulkerson, Gaspar Modelo-Howard, Brian Berndt, Keith McDermott, Saurabh Bagchi, Dan Goldwasser, Marcello Cinque


Phishing attacks continue to pose a major headache for defenders of computing systems, often forming the first step in a multi-stage attack. There have been great strides in phishing detection, and email servers have become good at flagging potential phishing messages. However, some insidious phishing messages pass through filters by making seemingly simple structural and semantic changes to the message. We tackle this problem through machine learning algorithms operating on a large corpus of phishing and legitimate messages. By understanding common phishing features, we design a system to extract features and extrapolate their values. The algorithms are specialized for phishing detection, handling, for example, the use of synonyms or changes in sentence structure. The insights and algorithms are instantiated in a system called SNIPE (Signature geNeratIon for Phishing Emails). To evaluate SNIPE, we collected the largest corpus of phishing messages used in any publicly known study, from the central IT organization at a tier-1 research university. Running SNIPE on this dataset exposed some hitherto unknown insights about phishing campaigns directed at university users. SNIPE was able to detect 100% of the phishing messages that had eluded our production deployment of Sophos, a state-of-the-art email filtering tool.
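The feature-extraction step can be illustrated with a minimal sketch. The features and keyword list below are assumptions chosen for the example; they are not SNIPE's actual feature set.

```python
import re

# Illustrative features only; a real system would use a far richer set.
URGENCY_WORDS = {"urgent", "immediately", "suspended", "verify", "expire"}

def extract_features(message):
    """Map an email body to a small dictionary of phishing-relevant features."""
    words = re.findall(r"[a-z']+", message.lower())
    urls = re.findall(r"https?://\S+", message)
    return {
        "urgency_terms": sum(w in URGENCY_WORDS for w in words),
        "url_count": len(urls),
        "asks_credentials": int(bool(re.search(r"password|login|account", message, re.I))),
    }

msg = ("URGENT: your account will be suspended. "
       "Verify your password at http://example.test/login immediately.")
print(extract_features(msg))
# {'urgency_terms': 4, 'url_count': 1, 'asks_credentials': 1}
```

A classifier trained on vectors like these, across a large labeled corpus, is the standard way such features feed into detection.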

Human Centric Security

Improving the Biometric Data Collection Process through Six Sigma

Rylan C. Chong, T. Grant Goe, Dr. Chad Laux


Since Six Sigma’s applications have been maturing and expanding into other industries, can Six Sigma be applied to the biometric industry? One area where Six Sigma could be applied is improving the quality of data collection. Six Sigma application is discussed here through a case study of Brockly’s (2013) investigation of the effect that biometric multimodal data collection procedures and test administrators had on the quality of the data collected.

Information Alignment and Visualization for Security Operations Center Teams

Omar Eldardiry, Mallorie Bradlau, Barrett Caldwell


The development of cyber network operations centers (NOCs) has created new needs to support human sensemaking via improved information alignment and visualization. This poster focuses on information needs and gaps involving network operations center (NOC) and security operations center (SOC) analyst personnel. Our goal is to enhance analyst sensemaking and the usability of tools that assist security analysts in monitoring, managing, and protecting their networks from suspicious activities. This project has proceeded in several stages. Based on previous interview findings, an in-depth investigation and job shadowing were conducted with different SOC teams. The findings highlighted three promising areas of improvement for NOC and SOC tools to improve network operations sensemaking, team performance, and organizational information alignment.

Meaning-Based Machine Learning

Courtney Falk, Lauren Stuart


Meaning-Based Machine Learning (MBML) is a research program intended to show that training machine learning (ML) algorithms on meaningful data produces more accurate results than training on unstructured data.

Natural Language IAS: Style Metrics from Semantic Analysis

Lauren M. Stuart, Julia M. Taylor, Victor Raskin


Stylometry is the quantification of author style such that authorship of a text can be posited, verified, or obfuscated. Style features currently in use capture the surface features of texts (such as punctuation use, misspellings, words or parts of words, and morphology), but some qualities of author style may be better captured by, or in conjunction with, meaning-based features. This poster outlines ongoing work in positing and evaluating author style quantification using meaning representation structures.
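For contrast with the meaning-based features this work proposes, the surface features mentioned above (punctuation use, word-level statistics) can be computed with a short sketch; the specific features and example sentence are illustrative choices, not the poster's method.

```python
from collections import Counter
import re

def surface_style_features(text):
    """Compute simple surface-level style markers, normalized per 100 words."""
    words = re.findall(r"\w+", text)
    n = max(len(words), 1)
    punct = Counter(c for c in text if c in ".,;:!?")
    return {
        "avg_word_len": round(sum(map(len, words)) / n, 2),
        "semicolons_per_100w": 100 * punct[";"] / n,
        "commas_per_100w": 100 * punct[","] / n,
    }

print(surface_style_features("Style, as Buffon said, is the man himself; it betrays us."))
```

Features like these are cheap to compute but capture only form; a meaning-based representation would instead quantify what the author tends to say and how concepts are related.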

Password Coping Mechanisms

Austin Klasa, Dr. Melissa Dark


Passwords are the most common means of authenticating users, and the number of passwords a user must remember is increasing. This leads to the need to classify and study password coping mechanisms. This research project is a literature review and analysis of past research to classify password coping mechanisms and create a password coping mechanism taxonomy.

Network Security

A Visual Analytics-Based Approach to Identifying Server Redirections and Data Exfiltration

Weijie Wang, Baijian Yang, Yingjie Chen


How to better find potential cyber attacks is the billion-dollar question facing security researchers and practitioners. In recent years, visualization has been applied in the field of information technology, but most work has not been able to outperform non-visualization-based techniques. In this work, we designed a graphical system overview that makes suspicious activities related to server redirection attacks and data exfiltration easier to identify. Due to the nature of the problem, the overview design must be scalable, accurate, and fast. This demands that the system visualize data that can reveal security events rather than simply plotting the raw data. The approach adopted in this work is to visualize aggregated traffic characteristics. The system was evaluated with the test data sets from VAST 2013 mini-challenge 3. The results are very encouraging and shed positive light on applying visual analytics in information security.

Evaluating Public Cloud Providers

Courtney Falk


Security for public cloud providers is an ongoing concern. Programs like FedRAMP look to certify a minimum level of compliance. This project aims to build a tool to help decision makers compare different cloud solutions and weigh the risks against their own organizational needs.

Fast and Scalable Authentication for Vehicular Internet of Things

Ankush Singla, Anand Mudgeri, Ioannis Papapanagiotou, Atilla Yavuz


Modern vehicles are being equipped with advanced sensing and communication technologies, which enable them to support innovative services in the Internet of Vehicles (IoV) era such as autonomous driving. These services can be effective through the spatial and temporal synchronization of the vehicle with the other entities in the environment. Hence, the communication in IoVs must be delay-aware, reliable, scalable and secure to (a) prevent an attacker from injecting/manipulating messages; (b) minimize the impact (e.g., delay, communication overhead) introduced by crypto operations. For instance, consider a group of vehicles driving on a highway at high speed. Once a vehicle brakes suddenly, this is broadcast to other vehicles to avoid collision. If the delay introduced by the crypto operations negatively affects the braking distance, then a car may not be able to stop in time. The current vehicular communication standards mandate the use of Public Key Infrastructures (PKI) to protect critical messages. However, existing crypto mechanisms introduce significant computation and bandwidth overhead, which creates critical safety problems. It is a vital research problem to develop security mechanisms that can meet the requirements of emerging IoVs. The overall goal of this research is to develop a new suite of cryptographic mechanisms, supported by a time-valid framework and hardware acceleration, to ensure secure and reliable operation of IoVs. This project develops, analyzes, and implements new authentication methods and then pushes the performance to the edge via cryptographic hardware acceleration.

Hardware to Virtual Firewall Migration Heuristic Rules

Ibrahim Waziri Jr


In this era of cloud computing, many data centers rely on a composite security framework consisting of hardware and virtual firewalls. Hardware firewalls are optimized for greater throughput, while virtualized firewalls can scale to match DoS attempts. To maximize the utility of each form factor, we developed an in-line firewall scheme with a variable filtering point. The primary filtering point changes between hardware and virtual firewalls based on real-time conditions. The architecture incorporates heuristic-based migration logic. To define the heuristics, a performance evaluation was conducted following two test scenarios: spike tests and endurance tests. Packet throughput was also assessed using JMeter. The results indicate that a threshold approach to filter-point migration maximizes network throughput while offering the insurance of on-demand scalability.
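The threshold-based migration logic can be sketched as a simple hysteresis rule, assuming two thresholds so the filtering point does not flap between form factors. The function name and threshold values are placeholders, not figures from the evaluation.

```python
def next_filter_point(current, load_mbps, to_virtual=800, to_hardware=400):
    """Decide where the primary filtering point should live.

    Two thresholds (hysteresis) keep the filtering point from flapping
    when the load hovers near a single cut-over value.
    """
    if current == "hardware" and load_mbps > to_virtual:
        return "virtual"   # scale out when the hardware box saturates
    if current == "virtual" and load_mbps < to_hardware:
        return "hardware"  # fall back for raw throughput once the spike ends
    return current

state = "hardware"
for load in [200, 900, 650, 300]:  # a quiet period, a spike, its tail, recovery
    state = next_filter_point(state, load)
    print(load, state)
```

Note that at 650 Mbps the filtering point stays virtual: the load is below the scale-out threshold but above the fall-back threshold, which is exactly the flap-prevention the two-threshold design buys.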

How Secure and Quick is QUIC? Provable Security and Performance Analyses

Robert Lychev, Samuel Jero, Alexandra Boldyreva, and Cristina Nita-Rotaru


QUIC is a secure transport protocol developed by Google and implemented in Chrome in 2013, currently representing one of the most promising solutions for decreasing latency while providing security properties similar to TLS. In this work we shed some light on QUIC’s strengths and weaknesses in terms of its provable security and performance guarantees in the presence of attackers. We introduce a security model for analyzing performance-driven protocols like QUIC and prove that QUIC satisfies our definition under reasonable assumptions on the protocol’s building blocks. Our analyses also reveal that with simple replay and manipulation attacks on some public parameters exchanged during the handshake, an adversary could easily prevent QUIC from achieving minimal latency by causing connection failure, likely resulting in fallback to TLS.

MIRROR: Automated Race Bug Detection for the Web via Network Events Replay

Sze Yiu Chau, Hyojeong Lee, Byungchan An, Julian Dolby and Cristina Nita-Rotaru


Many web applications are written in an asynchronous style, in which logic is triggered in response to network and user events. While this approach has performance benefits and can provide an improved user experience, it also makes applications more error-prone, since the most commonly used web languages, such as HTML and JavaScript, do not provide explicit support for concurrency control. We present MIRROR, a minimally-invasive race detector for client-side web applications that leverages recording and automated replaying of network events. Our tool uses a static approximation of happens-before ordering to automatically generate different testing scenarios by changing the order of these network events. Our tool is browser-agnostic and can be used for both debugging and race finding, as it does not require repeated interaction with the production server. We evaluate MIRROR using a benchmark of eight applications, each capturing a representative buggy coding pattern. Out of the eight applications, MIRROR was able to manifest and detect the bug in seven.
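The replay-scenario generation idea, producing orderings of recorded network events consistent with a happens-before relation, can be sketched as follows. This brute-force enumeration is illustrative only; MIRROR's static approximation and replay machinery are considerably more involved, and the event names here are hypothetical.

```python
from itertools import permutations

def consistent_orders(events, happens_before):
    """Yield every replay order of network events that respects the
    given happens-before constraints (pairs (a, b) meaning a before b)."""
    for order in permutations(events):
        pos = {e: i for i, e in enumerate(order)}
        if all(pos[a] < pos[b] for a, b in happens_before):
            yield order

events = ["req_user", "resp_user", "resp_prefs"]
hb = [("req_user", "resp_user")]  # a response must follow its own request
for order in consistent_orders(events, hb):
    print(order)
```

Replaying the application under each such ordering and comparing outcomes is one way a race (an outcome that depends on event order) becomes observable.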

Network Forensics of Covert Channels in IPv6

Lourdes Gino D and Prof. Raymond A Hansen


According to Craig H. Rowland, a covert channel is “any communication channel that can be exploited by a process to transfer information in a manner that violates the system’s security policy. Essentially, it is a method of communication that is not part of an actual computer system design, but can be used to transfer information to users or system processes that normally would not be allowed access to the information.” Covert channels in IPv4 have existed for some time, and various detection mechanisms have been developed. The advent of IPv6, however, requires new research to identify covert channels and to enable forensics on such attacks. The current study aims to explore the possibilities of performing forensics on covert channels in IPv6.
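One classic IPv6 covert channel hides data in the 20-bit flow label field. The encoding side can be sketched without sending any packets; this is a conceptual illustration of the channel a forensic analyst would look for, not a working exfiltration tool.

```python
FLOW_LABEL_BITS = 20  # RFC 8200: the IPv6 flow label is a 20-bit field

def encode(message: bytes):
    """Pack a message into successive 20-bit flow label values."""
    bits = "".join(f"{b:08b}" for b in message)
    bits += "0" * (-len(bits) % FLOW_LABEL_BITS)  # pad to a multiple of 20
    return [int(bits[i:i + FLOW_LABEL_BITS], 2)
            for i in range(0, len(bits), FLOW_LABEL_BITS)]

def decode(labels, length):
    """Recover `length` bytes from a sequence of observed flow labels."""
    bits = "".join(f"{v:020b}" for v in labels)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, length * 8, 8))

labels = encode(b"exfil")
print(labels)
print(decode(labels, 5))  # b'exfil'
```

From a forensics standpoint, the tell is structure where none should be: flow labels that vary packet-to-packet in a non-random pattern within one flow are a candidate covert channel.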

Security Business Intelligence (SBI) Curriculum - Blazing the Trail

Kelley Misata, Dr. Marcus Rogers


The vision for this project was to create an undergraduate, multi-disciplinary security business intelligence (SBI) curriculum aimed at preparing students for the future of SBI in enterprises. Students will navigate basic processes, life cycles, and the data gathering and analysis tools critical to SBI in an organizational setting. Learning for this course will be conducted through lectures, lab-based homework assignments, examinations, and a presentation project.

Policy, Law and Management

Cyber Forensics: The Need For An Official Governing Body

Ibrahim Waziri Jr, Rachel Sitarz


In this study we identified and addressed some of the key challenges in digital forensics. An intensive review was conducted of the major challenges that have already been identified. Based on the findings, we propose a solution and discuss how a standardized body that governs the digital forensics community could make a difference.

Digital Forensics in Law Enforcement: A Needs Based Analysis of Indiana Agencies

Teri Flory, Rachel Sitarz


Many national needs assessments were conducted in the late 1990s and early 2000s by the Department of Justice and the National Institute of Justice, all of which indicated that state and local law enforcement did not have the training, tools, or staff to effectively conduct digital investigations (Institute for Security and Technology Studies [ISTS], 2002; National Institute of Justice [NIJ], 2004). Some of these needs assessments have also been conducted at the state level, but Indiana is not one of those states (Gogolin & Jones, 2010). Further, multiple training opportunities and publications are available at no cost to state and local law enforcement, but it is not clear how many agencies use these resources (https://www.fletc.gov/state-local-tribal; https://www.ncfi.usss.gov). This pilot study will provide a more up-to-date and localized assessment of the ability of Indiana law enforcement agencies to effectively investigate when a crime involving digital evidence is alleged to have occurred.

U.S. Bank of Cyber

Danielle Crimmins, Courtney Falk, Susan Fowler, Caitlin Gravel, Michael Kouremetis, Erin Poremski, Rachel Sitarz, Nick Sturgeon, Yulong Zhang and Dr. Sam Liles


This technical report examined past cyber attacks on the United States financial industry, analyzing attack patterns by individuals, groups, and nation states to determine whether the industry really is under attack. The analysis explored attack origins from individuals, groups, and nation states, as well as the types of attacks and any patterns seen. After gathering attacks and constructing a timeline, a taxonomy of attacks was created from the analysis of the attack data. A Strengths, Weaknesses, Opportunities, and Threats (S.W.O.T.) analysis was then applied to the Heartland Payment Systems case study.

Web Based Cyber Forensics Training

Nick Sturgeon and Dr. Marcus Rogers


There is a specific need for high availability, high quality and low cost training for Law Enforcement officers in the Cyber Forensics Domain.

What Lies Beneath? The Forensics of Online Dating

Dheeraj Gurugubelli, Lourdes Gino and Dr. Marcus K Rogers


For an overworked, 25-year-old professional working around the clock, even dating websites can seem uninteresting and too time consuming. Enter the slide-, scroll-, and swipe-based online dating smartphone apps: one can simply scroll through pictures and connect with or pass on profiles with a swipe. Value-added features like geo-location-based user filtering, college-based user matches, megaflirt, and user-to-user messaging are available for a small premium subscription fee. This is exactly the phenomenon behind dating apps like Tinder, CoffeeMeetsBagel, DateMySchool, Zoosk, and many others. Such platforms, which allow information storage and sharing, open doors to cybercriminals who prey on the users. This research aims to discover the digital evidence left by such apps on smartphones.

Prevention, Detection and Response

A Tool For Interactive Visual Threat Analytics and Intelligence, based on OpenSOC Framework

Lourdes Gino D, Dheeraj Gurugubelli and Dr. Marcus Rogers


Cyber threat intelligence is a booming area in the field of information security that deals with the aggregation, processing, evaluation, and real-time reporting of reliable information pertaining to threats posed to the cyber world, which encompasses computers, smartphones, tablets, and any device connected to the Internet. The need for threat intelligence is growing rapidly as the data flowing through the cyber world grows gargantuan and as we move toward the Internet of Things, where almost anything is connected to the Internet. Visual threat intelligence takes threat intelligence a step further by presenting the data in a human-perceivable way, helping analysts make correct and quick decisions to avert cyber threats. The OpenSOC framework provides a unified platform for ingest, storage, and analytics. The purpose of this research is to build an open-source visual threat intelligence tool based on the OpenSOC framework, which is built on Hadoop.

Achieving a Cyber-Secure Smart Grid through Situation Aware Visual Analytics

Dheeraj Gurugubelli, Dr. Chris Foreman and Dr. David Ebert


Utilities face enormous pressure to streamline their operations and provide consumption information to consumers for better energy management. Smart meters have been instrumental in achieving better energy management. But like any new deployment of technology, smart meters are prone to cyber attacks; in this case, however, they are part of the nation's critical infrastructure. The goal of this project is to leverage visual analytics to deliver near-real-time visual insights on smart meter data that help operators make quicker decisions when a cyber response is needed. Cybersecurity of the Advanced Metering Infrastructure (AMI) continues to be one of the top research priorities in the industry. Securing the smart grid is about managing a continuum of risk across all the components in the grid within the right timeline. Performing analytics and making decisions based on large volumes of network data in real time would boost response time significantly. This research aims at visualizing network data obtained by processing end-component profile data and network data from AMI networks through a distributed data processing model.

Assessing Risk and Cyber Resiliency

Corey T. Holzer and James E. Merritt


This project reviews existing risk assessment models and the newly created resiliency frameworks in order to assess how risk is being calculated and incorporated into cyber resiliency, and to examine the underlying assumptions made in forming the current body of knowledge surrounding risk management and analysis in the field of cyber resilience. By comparing current quantitative and qualitative risk solutions, we hope to identify any discrepancies, fallacies, or oversights that may have worked their way into the current orthodoxy of cyber risk management. We intend to use these identified shortcomings to adapt and strengthen the risk management process used to analyze risk in the field of cyber resilience.

Basic Dynamic Processes Analysis of Malware in Hypervisors: Type I & II

Ibrahim Waziri Jr


This study compares and analyzes the behavior of malware processes within both Type 1 and Type 2 virtualized environments. To achieve this, we set up two different virtualized environments and thoroughly analyzed the behavior of each malware process. The goal was to see whether malware behaves differently in the two architectures. We found no significant difference in how malware processes run and behave in either virtualized environment. However, our study was limited to basic analysis using basic tools; a more advanced analysis with more sophisticated tools could prove otherwise.

ErsatzPasswords - Ending Password Cracking

Christopher N. Gutierrez, Mohammed H. Almeshekah, Mikhail J. Atallah, and Eugene H. Spafford


In this work we present a simple, yet effective and practical, scheme to improve the security of stored password hashes, rendering their cracking detectable and insuperable at the same time. We utilize a machine-dependent function, such as a physically unclonable function (PUF) or a hardware security module (HSM), at the authentication server. The scheme can be easily integrated with legacy systems without the need for additional servers, changes to the structure of the hashed password file, or any client modifications. When using the scheme, the structure of the hashed passwords file, /etc/shadow or /etc/master.passwd, will appear no different than under the traditional scheme. However, when an attacker exfiltrates the hashed passwords file and tries to crack it, the only passwords they will get are the ersatzpasswords, the "fake passwords". When an attempt to log in using one of these ersatzpasswords is detected, an alarm is triggered in the system indicating that someone attempted to crack the password file. Even an adversary who knows the scheme cannot launch a cracking attack without physical access to the authentication server. The scheme also includes a secure backup mechanism in the event of a failure of the hardware-dependent function. We discuss our implementation and compare it to the traditional authentication scheme.
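The core idea can be sketched in a few lines. The construction below is a toy illustration under stated assumptions, not the authors' exact implementation: an HMAC keyed with a device-local secret stands in for the PUF/HSM, SHA-256 stands in for the password hash, and the function names are invented for this sketch.

```python
import base64
import hashlib
import hmac
import os

# Assumption: this secret never leaves the authentication server, playing the
# role of the PUF/HSM. An attacker with only the password file cannot compute hdf().
DEVICE_KEY = b"secret-held-in-hardware"

def hdf(data):
    """Machine-dependent function: cannot be evaluated off the server."""
    return hmac.new(DEVICE_KEY, data, hashlib.sha256).digest()

def to_password_string(digest, length=10):
    """Encode a digest as a plausible-looking password: the ersatz password."""
    return base64.b64encode(digest).decode("ascii")[:length]

def enroll(password, salt=None):
    salt = salt or os.urandom(16)
    ersatz = to_password_string(hdf(salt + password.encode()))
    # The stored value is the *traditional* hash of the ersatz password, so an
    # offline cracker who inverts it recovers only the ersatz string.
    stored = hashlib.sha256(salt + ersatz.encode()).hexdigest()
    return salt, stored

def verify(attempt, salt, stored):
    candidate = to_password_string(hdf(salt + attempt.encode()))
    if hashlib.sha256(salt + candidate.encode()).hexdigest() == stored:
        return "ok"          # real password: maps through hdf() to the stored hash
    if hashlib.sha256(salt + attempt.encode()).hexdigest() == stored:
        return "alarm"       # attempt IS the ersatz password: the file was cracked
    return "reject"
```

Because verification routes every attempt through the machine-dependent function, a login with the cracked (ersatz) value fails that check but matches the stored hash directly, which is exactly the detection signal.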

Increasing robustness and resilience: assessing disruptions and dependencies in analysis of System-of-Systems alternatives

Daniel Delaurentis, Karen Marais, Navindran Davendralingam, Zhemei Fang, Cesare Guariniello, Payuna Uday


This poster describes a multi-disciplinary effort, funded by the DoD’s Systems Engineering Research Center (SERC), towards establishing a System of Systems Analytic Workbench of computational tools to facilitate better-informed decision-making on SoS architectures. The work seeks to map relevant questions in system-of-systems architectural decisions to an appropriate set of quantitative methods that can provide analytical outputs to directly support decisions. Such an integrated approach is suitable to address the problem of increasing robustness and resilience in complex systems, with the goal of preventing or mitigating the effect of disruptions on the overall behavior of the system.

JagWaRz Junior: Cyber Security for Young Adolescents

Jasmine Herbert, Rushabh Vyas, Connie Justice, Vicky Smith


Currently there are few methodologies for introducing cyber security to young adolescents. This research will examine the importance of teaching cyber security at an early age, as well as the significance of introducing it through digital game-based learning. Within this study, cyber security will be taught to a sample of young adolescents through a capture-the-flag style game, JagWaRz Junior. The effectiveness of JagWaRz Junior will be quantitatively measured through a pretest and posttest presented to the participants. Overall, this game will encompass ways to handle many of the risks that come with Internet usage at an early age, including but not limited to cyber bullying, pornography, online predators, personal privacy, and password protection. The results of this study will contribute to our understanding of the effectiveness of digital game-based learning.

Malware Defense with Access Control Policy and Integrity Levels

Nicole Hands, Harish Kumaravel


With the persistent threat of cyber attacks in many, ever-changing forms, computer systems need a comprehensive protection schema that can provide security against known, unknown, and polymorphic threats. Working under the premise that compromise is inevitable, a system should be able to detect that it has been compromised and respond in such a way that functionality degrades incrementally. This study synthesizes multiple fields of research, from integrity levels of operation to malware detection methods to access control policy. The FTP system function will be used as a model and broken down into discrete computational units, each assigned attributes from which access control policy can be created. Upon a change in the state of an attribute, under the premise that the change was caused by malware infection, the system would respond by lowering its integrity level, with processes continuing to function under modified rules. Preliminary work from the study will be presented.
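The degradation mechanism can be pictured as a small state machine. The sketch below is purely illustrative: the unit name, attributes, operations, and policy table are invented for this example, not taken from the study.

```python
# Each computational unit carries expected attribute values; a mismatch
# (premise: caused by malware) lowers the system integrity level, and the
# policy table decides which operations may still run at each level.
EXPECTED = {"ftp_listener": {"port": 21, "binary_hash": "abc123"}}

# Operations permitted at each integrity level (names are illustrative).
POLICY = {
    2: {"upload", "download", "list"},  # full function
    1: {"download", "list"},            # degraded: read-only service
    0: set(),                           # quarantined
}

class System:
    def __init__(self):
        self.level = 2  # start at full integrity

    def observe(self, unit, attrs):
        """Compare observed attributes against policy; degrade on mismatch."""
        if attrs != EXPECTED.get(unit):
            self.level = max(0, self.level - 1)  # degrade incrementally, don't halt

    def allowed(self, op):
        return op in POLICY[self.level]
```

The point of the design is graceful degradation: a detected compromise restricts function rather than stopping the service outright.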

Modeling Deception In Information Security As A Hypergame - A Primer

Christopher N. Gutierrez, Mohammed H. Almeshekah, Jeff Avery, Saurabh Bagchi, and Eugene H. Spafford


Hypergames are a branch of game theory used to model and analyze conflicts between multiple players who may have misconceptions of the other players’ actions, preferences, and/or knowledge. They have been used to model military conflicts such as the Allied invasion of Normandy in 1944, the fall of France in WWII, and the Cuban missile crisis. Unlike traditional game theory models, hypergames give us the ability to model misperception that results from the use of deception, mimicry, and misinformation. Little work analyzes the use of deception as a strategic defensive mechanism in computing systems. This poster presents a hypergame model to analyze computer security conflicts. We discuss how hypergames can be used to model the interaction between adversaries and a system defender. We examine a specific example in which we model the interaction between adversaries, who wish to steal confidential data from an enterprise, and security administrators, who protect the system. We show the advantages of incorporating deception as a defense mechanism as part of the hypergame model.
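A minimal first-level hypergame can make the misperception idea concrete. In the toy sketch below, each player picks a maximin strategy in the game *it perceives*; deception means the attacker's perceived payoffs differ from the true ones. All strategy names and payoff numbers are invented for illustration and do not come from the poster's model.

```python
# True payoffs (defender, attacker). Defender rows: "decoy" (deploy deception)
# or "plain"; attacker columns: "attack" or "hold". Decoys neutralize attacks.
TRUE_GAME = {
    ("decoy", "attack"): (2, -3),
    ("decoy", "hold"):   (1,  0),
    ("plain", "attack"): (-5, 4),
    ("plain", "hold"):   (0,  0),
}

# The deceived attacker cannot tell decoys from real assets, so in the game it
# perceives, attacking always looks profitable (payoff 4).
ATTACKER_VIEW = {k: (v[0], 4 if k[1] == "attack" else 0)
                 for k, v in TRUE_GAME.items()}

def maximin(game, player, own, other):
    """Strategy maximizing the worst-case payoff in the player's perceived game."""
    idx = 0 if player == "defender" else 1
    def worst(s):
        if player == "defender":
            return min(game[(s, o)][idx] for o in other)
        return min(game[(o, s)][idx] for o in other)
    return max(own, key=worst)

d = maximin(TRUE_GAME, "defender", ["decoy", "plain"], ["attack", "hold"])
a = maximin(ATTACKER_VIEW, "attacker", ["attack", "hold"], ["decoy", "plain"])
outcome = TRUE_GAME[(d, a)]  # realized in the TRUE game, not the perceived one
```

Here the attacker attacks because its perceived game promises a payoff of 4, but the true outcome pays it -3: the gap between perceived and realized payoff is precisely the defender's deception advantage that a hypergame can quantify.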

Risk Assessment in Layered Solutions

Christopher Martinez, Robert Haverkos


The transmission of classified (or highly sensitive) data requires a high degree of assurance. This project presents a meaningful method of combining risk assessments for individual security mechanisms into a risk assessment for the overall capability package (the layered solution).
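As a baseline for how per-layer assessments might compose, the sketch below combines independent per-layer compromise probabilities; this is a simplified illustration of the general idea, not the project's actual method, and the independence assumption is itself a risk factor the project would need to address.

```python
from math import prod

def layered_risk(layer_probs):
    """Probability that every layer is defeated simultaneously.

    Assumes layers fail independently; this is an idealization, since
    correlated failures (shared code, shared keys, common misconfiguration)
    make the true combined risk higher than this product.
    """
    return prod(layer_probs)

def any_layer_breached(layer_probs):
    """Probability that at least one layer is breached (an exposure metric)."""
    return 1 - prod(1 - p for p in layer_probs)
```

For example, two independent mechanisms that each fail with probability 0.1 and 0.2 yield a combined defeat probability of 0.02, which is why layered solutions can reach assurance levels no single mechanism provides.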

Using Syntactic Features for Phishing Detection

Students: Gilchan Park / Advisor: Julia M. Taylor


The purpose of this research is to explore whether syntactic structures and the subjects and objects of verbs can be distinguishing features for phishing detection. To achieve this objective, we conducted two series of experiments: syntactic similarity between sentences, and comparison of the subjects and objects of verbs. The results indicated that both features can be used for some verbs, but more work remains for others. The phishing corpora comprise old and up-to-date phishing emails, with a gap of over 10 years between them. To observe whether the patterns in phishing emails have changed over time with respect to the subjects and objects of verbs, we additionally compared the two phishing corpora. The results showed that most subjects and objects remained identical, or semantically similar.
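The subject/object comparison can be sketched as set overlap over extracted triples. The snippet below assumes (subject, verb, object) triples have already been produced by a syntactic parser; the example triples are invented for illustration and are not the study's data.

```python
# Invented example triples; a real pipeline would extract these with a parser.
OLD_CORPUS = {("you", "verify", "account"), ("you", "update", "information"),
              ("bank", "suspend", "account")}
NEW_CORPUS = {("you", "verify", "account"), ("you", "confirm", "identity"),
              ("bank", "suspend", "account")}

def args_of(verb, triples):
    """Collect the (subject, object) pairs a verb appears with."""
    return {(s, o) for (s, v, o) in triples if v == verb}

def jaccard(a, b):
    """Set overlap in [0, 1]; 1.0 means identical argument usage."""
    return len(a & b) / len(a | b) if a | b else 0.0

def verb_overlap(corpus1, corpus2):
    """For each verb shared by both corpora, score how similarly it is used."""
    verbs = {v for (_, v, _) in corpus1} & {v for (_, v, _) in corpus2}
    return {v: jaccard(args_of(v, corpus1), args_of(v, corpus2)) for v in verbs}
```

High overlap scores for a verb would mirror the study's finding that subjects and objects stayed largely stable across the decade between the two corpora; a semantic comparison would additionally need a similarity resource rather than exact matching.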
