Assured Identity and Privacy
Consumer Privacy Architecture for Power Grid Advanced Metering Infrastructure
Dheeraj Gurugubelli, Dr. Chris Foreman and Dr. Melissa Dark
Utilities install smart meters in homes to track and manage consumers' energy consumption, enabling utility companies to increase efficiency, lower costs, and reduce pollution. But these advanced meters, which use wireless and digital technologies to send frequent consumption data to utilities, face opposition from customers and others who see them as a threat to health, privacy, and security. From the utility company's perspective, collecting and managing such huge volumes of data at the individual level is not an essential business function. The goal of this research is to create an architecture that preserves consumer privacy in the power grid advanced metering infrastructure while helping the utility company better manage data.
Privacy Preserving Access Control in Service Oriented Architecture
Rohit Ranchal, Ruchith Fernando, Zhongjun Jin, Pelin Angin, Bharat Bhargava
Service Oriented Architecture (SOA) comprises a number of loosely coupled services that collaborate, interact, and share data to accomplish a task. A service invocation can involve multiple services, where each service generates, shares, and interacts with the client's data. These interactions may expose data to unauthorized services and violate the client's policies. The client has no means of identifying whether a violation occurred and has no control over, or visibility into, interactions beyond its trust domain. Such interactions introduce new security challenges that are not present in traditional systems. We propose a data-centric approach for privacy-preserving access control in SOA based on Active Bundles. This approach transforms passive data into an active entity that is able to protect itself. It enables dynamic data dissemination decisions and protects data throughout its lifecycle. The granularity of the data shared with a service is determined by the client's data dissemination policy.
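The dissemination-policy idea can be illustrated with a minimal sketch. All names and the numeric trust levels below are hypothetical, and the real Active Bundle mechanism also involves encryption and self-protecting execution; this only shows per-item, policy-driven disclosure:

```python
class ActiveBundle:
    """Toy model of an Active Bundle: data items paired with a
    dissemination policy that decides, per requesting service,
    which items to disclose."""

    def __init__(self, data, policy):
        self.data = data      # {item_name: value}
        self.policy = policy  # {item_name: minimum trust level required}

    def disseminate(self, service_trust):
        """Release only the items the requester's trust level permits."""
        return {k: v for k, v in self.data.items()
                if service_trust >= self.policy.get(k, float("inf"))}

# A client packages sensitive data with per-item disclosure rules.
bundle = ActiveBundle(
    data={"name": "Alice", "ssn": "123-45-6789"},
    policy={"name": 1, "ssn": 3},
)
low_trust_view = bundle.disseminate(1)   # only low-sensitivity items
high_trust_view = bundle.disseminate(3)  # full view
```

A low-trust service sees only the name while the SSN is withheld, reflecting the idea that the client's policy, not the receiving service, controls the granularity of what is shared.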
pSigene: Generalizing Attack Signatures
Jeff Avery, Gaspar Modelo-Howard, Fahad Arshad, Saurabh Bagchi, Yuan Qi
Intrusion detection systems (IDS) are an important component in effectively protecting computer systems. Misuse detection is the most popular approach to detecting intrusions, using a library of signatures to find attacks. The accuracy of the signatures is paramount for an effective IDS, yet today's practitioners rely on manual techniques to improve and update them. We present a system, called pSigene, for the automatic generation of intrusion signatures by mining the vast amount of public data available on attacks. It follows a four-step process to generate the signatures: first, attack samples are crawled from multiple public cybersecurity web portals; then a feature set is created from existing detection signatures to model the samples, which are grouped using a biclustering algorithm that also yields the distinctive features of each cluster; finally, the system automatically creates a set of signatures using regular expressions, one per cluster. We tested our architecture on the prevalent class of SQL injection attacks and found our signatures to achieve a true positive rate above 86% and a false positive rate of 0.03%, and compared our findings to other SQL injection signature sets from popular IDSs and web application firewalls. Results show our system to be very competitive with existing signature sets.
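The cluster-then-generalize step can be sketched as follows. The feature set, the samples, and the trivial grouping by identical feature vectors are illustrative stand-ins; pSigene itself uses a biclustering algorithm over a much richer feature set:

```python
import re
from collections import defaultdict

# Hypothetical feature set: presence of a token in an attack sample.
FEATURES = ["union", "select", "or 1=1", "sleep("]

def feature_vector(sample):
    """Boolean vector recording which features appear in the sample."""
    s = sample.lower()
    return tuple(f in s for f in FEATURES)

def cluster_samples(samples):
    """Group samples with identical feature vectors (a trivial stand-in
    for the biclustering step)."""
    clusters = defaultdict(list)
    for s in samples:
        clusters[feature_vector(s)].append(s)
    return dict(clusters)

def signature_for(fv):
    """One regular expression per cluster, built from its distinctive features."""
    tokens = [re.escape(f) for f, present in zip(FEATURES, fv) if present]
    return ".*".join(tokens)

samples = [
    "id=1 UNION SELECT user,pass FROM admins",
    "id=2 union select 1,2,3",
    "name=' OR 1=1 --",
]
clusters = cluster_samples(samples)
signatures = {fv: signature_for(fv) for fv in clusters}
```

The two UNION-based samples collapse into one cluster whose signature (`union.*select`) matches both, illustrating how one generalized signature can cover many concrete attack samples.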
Resilient and Active Authentication and User-Centric Identity Ecosystems
Yan Sui, Xukai Zou
Existing proxy-based authentication approaches have problems (e.g., non-binding credentials, susceptibility to theft and dictionary attacks, burden on end users, re-use risk). Biometrics, which authenticates users by intrinsic biological traits, addresses these drawbacks. However, a biometric is irreplaceable once compromised and leaks sensitive information about the human user behind it. In this research, we propose a usable, privacy-preserving, secure biometrics-based identity verification and protection system. Specifically, we propose a novel biometric authentication token called a Bio-Capsule (BC), generated by a secure fusion of the user's biometrics and a (selected) reference subject's biometrics. The fusion process preserves biometric robustness and accuracy in the sense that the BC can be used in place of the original user's biometric template without sacrificing the system's acceptability for the same user or its distinguishability between different users. This research has further potential applications: a user-centric identity ecosystem (a highly resilient, privacy-preserving, revocable, interoperable, and efficient user-centric identity verification and protection ecosystem) and an active authentication system (a provably secure, privacy-preserving, biometric active authentication system supporting continuous and non-intrusive authentication).
Semantic Anonymization of Medical Records
Tatiana Ringenberg, Julia M. Taylor, Victor Raskin
With the availability of large amounts of data in the medical industry, it is becoming necessary, for both regulatory and ethical reasons, to find new ways of protecting patient identities. A name and Social Security number are no longer the only fields in a patient's record that can identify them: HIPAA requires the removal of several Protected Health Information identifiers, and symptoms themselves can distinctly identify an individual in a large group. To prevent this, the Purdue OST Anonymization Project is using semantics to determine the degree to which any patient record is identifiable among others in a system. Our approach combines the conceptual mapping of Ontological Semantic Technology with the anonymity principles of k-anonymity to semantically anonymize patient data for compliance with regulatory and research policies.
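The k-anonymity side of the approach can be illustrated with a small sketch; the records and the choice of quasi-identifier columns below are hypothetical:

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Smallest equivalence-class size over the quasi-identifier columns:
    a dataset is k-anonymous if every record shares its quasi-identifier
    values with at least k-1 other records."""
    groups = Counter(tuple(r[a] for a in quasi_identifiers) for r in records)
    return min(groups.values())

# Toy records whose quasi-identifiers have already been generalized
# (age into ranges, ZIP codes into prefixes).
records = [
    {"age": "30-39", "zip": "479**", "diagnosis": "flu"},
    {"age": "30-39", "zip": "479**", "diagnosis": "asthma"},
    {"age": "40-49", "zip": "479**", "diagnosis": "flu"},
    {"age": "40-49", "zip": "479**", "diagnosis": "diabetes"},
]
k = k_anonymity(records, ["age", "zip"])  # each (age, zip) class holds 2 records
```

Note that treating the diagnosis as identifying collapses every class to a single record (k = 1), which mirrors the abstract's point that symptoms alone can re-identify a patient.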
The Password Wall — A Better Defense against Password Exposure
Mohammed Almeshekah, Mikhail Atallah and Eugene Spafford
We present an authentication scheme that protects users' passwords better than currently deployed password-based schemes, without taxing users' memory or damaging the user-friendliness of the login process. Our scheme maintains compatibility with traditional password-based authentication, without any additional storage requirements, giving service providers the ability to selectively enroll users and fall back to traditional methods if needed. The scheme utilizes the ubiquity of smartphones; however, unlike previous proposals, it does not require registration or connectivity of the phones used. In addition, no long-term secrets are stored on the user's phone, mitigating the consequences of losing it. The scheme significantly increases the difficulty of launching a phishing attack by automating the decision of whether a website should be trusted and by introducing the additional risk, on the adversary's side, of being detected and deceived. The scheme is also resilient against man-in-the-browser (MitB) attacks and compromised client machines. Finally, we incorporate user-friendly covert communication between the user and the service provider, giving the user the ability to have different levels of access (instead of the traditional all-or-nothing) and enabling the use of deception (honeyaccounts), which makes it possible to dismantle a large-scale attack infrastructure before it succeeds (rather than after the painful and slow forensics that follow a successful phishing attack). As an added feature, the scheme gives service providers the ability to offer full-transaction authentication.
Top-K Frequent Itemsets via Differentially Private FP-trees
Jaewoo Lee and Chris Clifton
Frequent itemset mining is a core data mining task and has been studied extensively. Although, by their nature, frequent itemsets are aggregates over many individuals and would not seem to pose a privacy threat, an attacker with strong background information can learn private individual information from frequent itemsets. This has led to differentially private frequent itemset mining, which protects privacy by giving inexact answers. We give an approach that first identifies the top-k frequent itemsets, then uses them to construct a compact, differentially private FP-tree. Once the noisy FP-tree is built, the (privatized) support of all frequent itemsets can be derived from it without access to the original data. Experimental results show that the proposed algorithm gives substantially better results than prior approaches, especially at high levels of privacy.
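The basic differential-privacy ingredient, Laplace noise added to itemset supports, can be sketched as follows. This toy version perturbs each support directly with Laplace(1/ε) noise; the paper's FP-tree construction and privacy-budget allocation are considerably more involved:

```python
import math
import random
from collections import Counter

def laplace(scale, rng=random):
    """Draw one sample from a zero-mean Laplace distribution via inverse CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def noisy_supports(transactions, itemsets, epsilon):
    """True support of each itemset, perturbed with Laplace(1/epsilon) noise
    so that exact per-individual contributions are hidden."""
    true = Counter()
    for t in transactions:
        for s in itemsets:
            if s <= t:  # itemset is contained in the transaction
                true[frozenset(s)] += 1
    return {s: c + laplace(1.0 / epsilon) for s, c in true.items()}

transactions = [{"a", "b"}, {"a"}, {"a", "b", "c"}]
itemsets = [{"a"}, {"a", "b"}]
noisy = noisy_supports(transactions, itemsets, epsilon=1.0)
```

Smaller ε means a larger noise scale and stronger privacy at the cost of accuracy, which is why the paper's gains matter most at high privacy levels.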
VeryBioIDX: Privacy Preserving Biometrics-Based and User Centric Authentication Protocol
Hasini Gunasinghe, Elisa Bertino
We propose a privacy-preserving, biometrics-based authentication protocol by which a user can authenticate to different service providers from a mobile phone without involving the identity provider in transactions, thus enhancing privacy. Authentication is based on a cryptographic identity token that embeds a unique, repeatable, and revocable identifier generated from the user's biometric image and a random secret, supporting two-factor authentication based on zero-knowledge proofs of knowledge. Our approach for generating biometric identifiers from users' biometrics is based on perceptual hashing and SVM classification techniques.
End System Security
A Framework for Service Activity Monitoring
Ruchith Fernando, Rohit Ranchal, Pelin Angin, Bharat Bhargava
In a service-oriented architecture (SOA) environment, a service can dynamically select and invoke any service from a group of services to offload part of its functionality. This is very useful for building large systems from existing services and dynamically adding services to support new features. One of the main problems with such a system is that it is very difficult to trust the service interaction lifecycle and assume that the services behave as expected and respect the system policies. We propose a centralized service monitor that audits and detects malicious activity or compromised services by analyzing information collected via monitoring agents. The service monitor includes two modes of operation, active and passive, with which one can evaluate service topologies under various policies.
A Key Management Scheme in BYOD Environment
Di Xie, Baijian Yang
Bring-Your-Own-Device (BYOD) refers to an IT policy that encourages and allows employees to use their personal devices to access privileged corporate network resources. Current BYOD practices are not sufficient to provide both flexible and secure access to data stored on personal devices, and they are likely to cause privacy infringement issues and incur high management costs. This research presents an Innovative Key Management Scheme (IKMS) that employs a hierarchical and time-bounded key management system to address the security and privacy issues in BYOD deployments.
FPGA Password Cracking
Max DeWees, Michael Kouremetis, Matthew Riedle, Craig West
Field Programmable Gate Arrays (FPGAs) are a unique hardware component that allows for dynamic prototyping, design, and implementation of hardware logic. FPGAs provide the advantages of dedicated hardware functionality and parallelization for specific tasks. In this research, we apply these advantages of FPGAs to breaking cryptographic functions, primarily hash functions and password-based encryption. While this has been done successfully in the past against older functions such as MD5, it has not been thoroughly analyzed for more complex systems such as TrueCrypt, Windows BitLocker, or Mac OS X FileVault. Our focus is to analyze the feasibility, scalability, and success of using one or more FPGAs to crack these systems.
Human Centric Security
A Study of Probabilistic Password Models
Jerry Ma, Weining Yang, Min Luo, Ninghui Li
A probabilistic password model assigns a probability value to each string. Such models are useful for research into understanding what makes users choose more (or less) secure passwords, and for constructing password strength meters and password cracking utilities. Guess-number graphs generated from password models are a widely used method in password research. In this paper, we show that probability-threshold graphs have important advantages over guess-number graphs: they are much faster to compute and at the same time provide information beyond what is feasible with guess-number graphs. We also observe that research in password modeling can benefit from the extensive literature on statistical language modeling. We conduct a systematic evaluation of a large number of probabilistic password models, including Markov models using different normalization and smoothing methods, and find that, among other things, Markov models, when done correctly, perform significantly better than the probabilistic context-free grammar model that has been treated as the state of the art in recent research.
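A minimal instance of such a model is an order-1 character Markov model with add-one (Laplace) smoothing. The training set and alphabet below are toy stand-ins; the paper evaluates many normalization and smoothing variants over real password corpora:

```python
from collections import defaultdict

class MarkovPasswordModel:
    """Order-1 character Markov model with add-one (Laplace) smoothing:
    the probability of a string is the product of per-character transition
    probabilities, including an end-of-string symbol so that the
    probabilities over all strings sum to one."""

    START, END = "\x02", "\x03"

    def __init__(self, training_passwords, alphabet):
        self.alphabet_size = len(alphabet) + 1  # +1 for the END symbol
        self.counts = defaultdict(lambda: defaultdict(int))
        for pw in training_passwords:
            prev = self.START
            for ch in pw + self.END:
                self.counts[prev][ch] += 1
                prev = ch

    def probability(self, pw):
        p = 1.0
        prev = self.START
        for ch in pw + self.END:
            row = self.counts[prev]
            total = sum(row.values())
            p *= (row[ch] + 1) / (total + self.alphabet_size)  # smoothed
            prev = ch
        return p

model = MarkovPasswordModel(["aaa", "aab"], alphabet="abz")
```

Strings resembling the training data receive higher probability, so a strength meter built on the model would rate `zzz` stronger than `aaa` here; sorting candidate strings by such probabilities is also the basis of probability-threshold analyses.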
Analysis of Coping Mechanisms in Password Selection
Brian Curnett, Paul Duselis, Teri Flory
Do more stringent password policies actually create stronger and more secure passwords? Do humans reach a threshold when creating passwords that follow policies but fail to provide an adequate level of protection? Previous work has focused on password strength and the effectiveness of password-cracking tools, but has only briefly touched on user frustration with policies or the coping mechanisms users may employ to satisfy stringent policies. Our work builds on the information available from previous studies and expands it to include user frustration and coping methods. Our examination will include multiple policies currently accepted and in use by organizations and companies from a wide variety of backgrounds, in an attempt to measure the true protection that industry-standard policies provide. We will review data collection processes, determine the most effective procedures for gathering this information, develop a method based on this plan, and propose it to our partners for future review and use. We will also propose an analytic procedure for determining an optimal relationship between a password policy's strength and the coping mechanisms it induces, along with a set of repeatable statistical procedures that can be applied to password data sets to evaluate a policy's strength.
Detecting Tic-Tac-Stego: Anomaly Detection for Steganalysis in Games
Philip C. Ritchey, Vernon J. Rego
Motivated by the potential for the study of human behavior to enhance information security, we investigated methods for detecting information hiding in games. This work builds on previous work that presented Tic-Tac-Stego, a general methodology for hiding information in games. The focus of this work is to understand and experiment with three steganalysis techniques for detecting steganography in games: rules-based, feature-based, and probabilistic model-based detectors. Under the assumption that the adversary is unable to predict the play style of the stego-agent, we find that a feature-based steganalysis method performs best at detecting usage of the covert channel, achieving accuracy greater than 97% against all stego-agents tested. On the other hand, under the assumption that the adversary is able to predict the play style of the stego-agent, the rules-based method is more accurate and requires fewer games per example than the feature-based method. The probabilistic model-based method is found to be less accurate overall than both the feature-based and rules-based methods.
Enhancing Analyst Situation Awareness and Event Response in Cyber Network Operations Centers
Omar Eldardiry, Barrett Caldwell
The development of cyber network operations centers has created new needs to support human sense-making and situation awareness in a cyber network common operating picture (CNCOP). The goal of this research is to identify critical features that support expert analysts in detecting, identifying, and responding to cyber events (emergency scenarios, hardware breakdowns, or other sources of degraded performance), and to improve information visualization supporting recognition of and response to cyber and cyber-physical network events. The results of this research will be used to improve operational capability and analyst situation awareness in NOC environments and to provide design guidance for analyst event monitoring and response in other cyber-physical infrastructure operations centers.
Finland's Cyber Warfare Capabilities
In light of the discussion on cyber intelligence, this paper analyzes open-source data in a methodical assessment of Finland's cybersecurity and cyberwarfare capabilities. Information related to Finland's cyber preparedness and cybersecurity awareness is analyzed together with the relevant statistical factors in order to outline the relative stage of cyber capability development in the military context. Finland's cybersecurity strategy, Finnish security and defense policy, and Finnish academic perspectives on cyber operations are elaborated in parallel with the conceptualization of military doctrine adaptation in the cyber domain, in order to describe Finland's posture relative to potential cyberwarfare conflict engagements. In addition, the key stakeholders in cybersecurity governance are listed, providing insight into the practical aspects of the nation's efforts to maintain and continually improve cybersecurity.
Mutual Restraining Voting Involving Multiple Conflicting Parties
Dr. Xukai Zou, Yan Sui, Huian Li, Wei Peng, and Dr. Feng Li
Scrutinizing current voting systems, including existing e-voting techniques, one can discern a gap between casting secret ballots and tallying and verifying individual votes. This gap is caused either by disconnection between the vote-casting and vote-tallying processes or by an opaque transition (e.g., due to encryption) from vote-casting to vote-tallying, and it damages voter assurance, i.e., the assurance of any voter that the vote he or she cast is verifiably counted in the final tally. We propose a groundbreaking e-voting protocol that fills this gap and provides a fully transparent election. In this fully transparent internet voting system, the transition from vote-casting to vote-tallying is seamless, viewable, verifiable, and privacy-preserving. As a result, individual voters can verify their own votes and are technically and visually assured that their votes are indeed counted in the final tally, the public can verify the accuracy of the count, and political parties can catch fraudulent votes. All this is achieved while still retaining what is perhaps the core value of democratic elections: the secrecy of any voter's vote. The new protocol is the first fully transparent e-voting protocol that technologically enables open and fair elections and delivers full voter assurance, even for voters of minor or weak political parties.
Natural Language IAS: The Problem of Phishing
Lauren M. Stuart, Gilchan Park, Julia M. Taylor, Victor Raskin
Phishing emails solicit personal and sensitive information while masquerading as legitimate messages from financial institutions. Automatic detection of phishing emails will help reduce the financial losses incurred by their victims. Computer understanding of message meaning and other hallmarks of legitimate and illegitimate emails can improve detection, and continue the expansion of natural language understanding techniques and processes into information assurance and security applications.
Using social network data to track information and make decisions during a crisis
Student: David Hersh Advisors: Julia Taylor, Victor Raskin
Social network use has dramatically increased in recent years, causing a surge in the amount of data people publicly share. Many share events of their lives on a daily basis, and get much of their news from social networks. So when a crisis occurs, such as a school shooting, many people in the affected area report what is going on through their social networks, allowing others to get firsthand accounts of the situation as it progresses. This information is often available before official information is, making it a valuable resource for anyone who needs to know the most up-to-date information on the crisis. In this research, we take the first steps toward the development of a system that extracts crisis information from social networking data in real time, allowing the system’s users to have a consistently up-to-date version of the situation.
A Framework to Find Vulnerabilities Using State Characteristics in Transport Protocol Implementations
Sam Jero, Hyojeong Lee and Cristina Nita-Rotaru
We propose a platform for automatically finding attacks in transport protocol implementations. Our platform uses virtual machines connected with a network emulator to run unmodified target implementations, ensuring realism. We focus on attacks involving the manipulation or injection of protocol messages and build a framework to perform these basic malicious actions. To mitigate state-space explosion resulting from numerous combinations of malicious actions and protocol messages, we leverage protocol states. First, we build a state tracker that can infer the current state of the target system from message traces. Using the state tracker and a benign execution, we classify states based on observable characteristics. We then associate basic attack actions with characteristics of states and compose attack strategies based on this information. We monitor the effect of these attack strategies and determine which actions are effective for which states. We use this information to focus or prune our attack strategies for states with similar characteristics.
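The state-tracking idea can be sketched with a toy transition table. The states, messages, and the mapping from states to attack actions below are illustrative stand-ins, not the platform's actual inferred model:

```python
# Toy transition table for a TCP-like protocol (illustrative only).
TRANSITIONS = {
    ("CLOSED", "SYN"): "SYN_RECEIVED",
    ("SYN_RECEIVED", "ACK"): "ESTABLISHED",
    ("ESTABLISHED", "FIN"): "CLOSE_WAIT",
    ("CLOSE_WAIT", "ACK"): "CLOSED",
}

def track_state(trace, start="CLOSED"):
    """Infer the states visited by replaying an observed message trace;
    messages with no matching transition leave the state unchanged."""
    states = [start]
    for msg in trace:
        states.append(TRANSITIONS.get((states[-1], msg), states[-1]))
    return states

# Hypothetical association of basic malicious actions with states.
ACTIONS_BY_STATE = {
    "SYN_RECEIVED": ["inject RST", "replay SYN"],
    "ESTABLISHED": ["inject data", "drop ACKs"],
}

def candidate_attacks(trace):
    """Compose candidate attack strategies from the states a trace visits,
    pruning actions that never apply to any visited state."""
    return {s: ACTIONS_BY_STATE.get(s, []) for s in set(track_state(trace))}
```

Associating actions with inferred states rather than with raw message sequences is what keeps the strategy space tractable: strategies are composed per state characteristic instead of per combination of messages.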
Divide & Recombine for Big Data Analysis for Cybersecurity: Application to a DNS Blacklist Query Study
Ashrith Barthur, Dr. William S. Cleveland, John Gerth
D&R (Divide & Recombine) is a statistical approach to big data that provides comprehensive, detailed analysis. This is achieved because almost any analytic method from machine learning, statistics, and visualization can be applied to the data at their finest level of granularity. D&R also enables feasible, practical computation because the computations are largely embarrassingly parallel. Our work has two core threads: (1) tailor the D&R environment to analyze big data in cybersecurity, and (2) apply this tailored environment to the Spamhaus traffic at the Stanford University mirror.
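The D&R computation pattern can be sketched in a few lines: divide the data into subsets, apply an analytic method to each subset independently (the embarrassingly parallel part), then recombine the per-subset results. The per-subset mean and thread pool below are toy stand-ins; real D&R analyses substitute arbitrary analytic methods and a distributed back end:

```python
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

def analyze(subset):
    """Analytic method applied independently to one subset (here: a mean)."""
    return mean(subset)

def divide_and_recombine(data, n_subsets, workers=2):
    subsets = [data[i::n_subsets] for i in range(n_subsets)]  # divide
    with ThreadPoolExecutor(workers) as pool:
        per_subset = list(pool.map(analyze, subsets))         # parallel apply
    return mean(per_subset)                                   # recombine
```

Because the subsets here are equal-sized, the recombined mean equals the global mean exactly; for other statistics the recombination step generally yields an approximation whose quality is part of the D&R analysis.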
Policy, Law and Management
Confidentiality Guidelines for Cloud Storage
Joseph Beckman, Matthew Riedle, Hans Vargas
As cloud computing becomes more popular among average users, and even governments, the question arises of how secure the data stored in the cloud is. FedRAMP has established guidelines that evaluate certain security protocols for cloud providers such as Google Drive and Amazon Web Services. This project examines the confidentiality and access control guidelines for Amazon's S3 data storage to determine whether they are sufficient for current and future markets.
Cyber 9/12 Student Challenge: Team Purdue Cyber Forensics
Rachel Sitarz, Eric Katz, Nick Sturgeon, & Jake Kambic
Four Purdue Cyber Forensics graduate students competed in the Cyber 9/12 Student Challenge, taking on the role of the Cyber Security Directorate of the National Security Staff. They were tasked with creating four policy response alternatives to a fictional major cyber incident affecting US national security, and with presenting those policies to cybersecurity policy experts in Washington, DC.
DC3 Digital Forensics Challenge
Will Ellis, Jake Kambic, Eric Katz, Sydney Liles
This poster presents the accomplishments of team or11--, winners of the 2013 Defense Cyber Crime Center's Cyber Forensics Challenge, the largest and most prestigious cyber forensics competition in the world. Competing against over 1,200 teams, Purdue's team took first place in both the US and global graduate divisions.
Implementing Bayesian Statistics from an Analysis of Competing Hypothesis Framework
Brian Curnett and Samuel Liles
The Analysis of Competing Hypotheses system is a decision analysis tool developed by the intelligence community to aid analysts in decision making. It was first developed by Richards J. Heuer to help analysts keep their biases in check when making important decisions. Its effectiveness can be extended to counter forms of deception and cultural bias by implementing a Bayesian belief network and by quantifying cultural trends.
Netherland's Cyber Capabilities
The purpose of this study was to perform an OSINT analysis of the Netherlands' capability to protect itself from cyber-attacks. A list of all possible and typical actors was identified, as they represent different levels of threat to the nation; the table at the left explains in detail who those actors are, what their intentions might be, the level of expertise they are expected to have, and the targets they are most likely to attack. The Netherlands has a population of close to 18 million people, with an estimated GDP of 696 billion USD and a per capita GDP of 41,000 USD, ranking 23rd and 12th in the world, respectively. It comes as no surprise that its ICT ranking is also high, occupying 7th place in the world as of 2012.
Saudi Arabian Policy on Cyber Capabilities
Brian Curnett and Samuel Liles
Saudi Arabia is a major player in the arena of world politics. However, it is only a fledgling nation in the cyber arena and is still trying to bring itself into the modern era. The Saudi Arabian policy of replacing cyber security with cyber censorship led to the vulnerabilities that exposed the nation's oil industry to attack. As a compensatory mechanism, Saudi Arabia relies on foreign contractors to solve technical problems rather than developing a domestic knowledge base. This has made the nation more vulnerable over the long term.
Technological Impact of Criminal Enterprises: The Impact of Cloud Computing
Rachel Sitarz, Sam Liles
Cloud computing is an abstract term that is often difficult for people to understand, yet most are moving to the cloud to store data. Criminal organizations are also utilizing the cloud for data storage, transmission, and communications, which led to the research question: how are current criminal organizations structuring their criminal enterprises, and how does technology impact that structure? The current project is exploratory, comparing current criminal organizations with historical groups, and maintains that groups utilizing the cloud are no different from historical criminals: they are simply utilizing a new medium to facilitate their criminal activity. Criminal organizations have typically maintained a hierarchical organizational structure. With developments in technology such as the cloud, groups continue to maintain an enterprise structure while allowing for geographically disparate transmission of data. This also raises the potential problem of remote destruction of evidence when law enforcement executes searches on a party or parties within the organization. Criminals have embraced technological advancements for many reasons, such as anonymity, the expertise law enforcement needs to apprehend them, and ease of access. Technological advancements are often taken for granted, but they need to be considered in the apprehension of criminals and the combating of criminal activity.
The Efficacy of Case Studies for Teaching Policy in Engineering and Technology Courses
Rylan Chong, Dr. Melissa Dark, Dr. Ida Ngambeki, and Dr. Dennis Depew
Public policy is an increasingly important topic in the engineering and technology curriculum, as recognized by a community of experts including the National Research Council of the National Academies (NRCNA), the Accreditation Board for Engineering and Technology (ABET), the American Association for the Advancement of Science (AAAS), and the National Academy of Engineering (NAE). The purpose of this study was to extend the work of Chong, Depew, Ngambeki, and Dark, "Teaching social topics in engineering: The case of energy policy and social goals," by exploring a method of introducing public policy, using a case study approach, to undergraduate engineering technology students in the engineering economics course in the College of Technology at Purdue University. The substantive contribution of this study addressed the following questions: 1) did the students understand and identify the policy context, 2) how effective was the use of case studies in introducing the students to policy, and 3) what improvements would enhance the efficacy of case studies for introducing students to policy?
The Impact of University-Provided Nurse Electronic Medical Record (EMR) Training on Hospital Provider Systems: A Computer Simulation Approach
James Anderson, Elizabeth Borycki, Andre Kushniruk, Shannon Malovec, Angela Espejo, Marilyn Anderson
Hospitals lose valuable productivity when nurses are off the unit for electronic medical record (EMR) system training. Universities lose valuable clinical training hours when students are required to learn various EMR systems at clinical sites during clinical rotations. Centralizing EMR training within the university classroom curriculum could provide hospitals with trained new hires while preserving student clinical time for bedside care. In this study we investigated the cumulative influence of integrating EMR training into the nursing classroom curriculum on hospital nurses' time away from caregiving and on the number of EMR-trained nurses. A computer simulation model was specified using the STELLA program. The model simulated once-a-year hiring of nurses over a 4-year period, for a total of 500 new hires. The model predicted the number of new hires needing EMR training, the number of new hires arriving already trained by the university, and the time away from caregiving required to train new hires, as a function of changes to the university curriculum to include EMR training. Findings indicate that the efficiency of clinical training can potentially be improved by centralizing EMR training within the nursing curriculum. Integrating EMR training into the nursing classroom curriculum potentially results in more available time for nurse bedside care and reduced training costs for health organizations. Further investigation is needed to assess the cost impact of curricular integration.
The Irish Economy's Vulnerability to Cyber Conflict
Information technology comprises a quarter of Ireland's GDP. This project aims to answer the question of whether or not the Irish government is adequately prepared to protect this vulnerable sector of their economy.
Threats, Vulnerabilities, and Security Controls in Cloud Computing
Hans Vargas, Temitope Toriola
In cloud computing, information is not stored on a personal computer; it is stored in the cloud, a metaphor for the Internet. The cloud can be accessed from any computer anywhere in the world, including devices such as cell phones and Kindles. Personal computers have limited space and often run out of resources; when the equipment cannot keep up with demand, the service slows down. The cloud moves the work off a single computer and puts the software into one database that many people can access at once from different computers. However, there is risk in using cloud computing: unauthorized people such as hackers may be able to reach your data as well. Cloud providers are companies that host cloud services and are in charge of protecting your data; they use many methods to protect data in the cloud and keep it from hackers. This research investigates cloud providers to see whether they are protecting cloud data as they claim to be.
Prevention, Detection and Response
A Critical Look at Steganographic Investigations
Steganography, the practice of concealing information in plain sight, has been a threat for hundreds of years across different media. Today, files and information hidden digitally inside images, audio, programs, and most other file types can pose a very real danger: two individuals can communicate without anyone knowing they are doing so. Researcher Michael Burgess designed a process and built a tool that injects (and extracts) any file into any mono wave file, as long as the wave file is approximately double the size of the target hidden file. The resulting file has the same size and properties as the original wave file, and no difference can be heard by the human ear. Moreover, current anti-steganography tools have difficulty detecting that anything is hidden at all. When a tool this simple can evade detection, steganographic investigations need to be taken much more seriously and should focus more on discovering these tools rather than only the hidden files themselves.
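The general idea of embedding data in audio can be sketched with a generic least-significant-bit (LSB) scheme, shown below. This is an illustrative stand-in, not Burgess's tool; note that plain LSB over 8-bit samples needs a carrier roughly eight times the payload size, unlike the ~2x ratio the abstract reports, and it likewise leaves the file size and audible content unchanged.

```python
import wave

def embed(carrier_path, payload, out_path):
    """Hide payload bytes in the least-significant bits of 8-bit mono WAV
    frames. Illustrative LSB scheme only, not the tool described above."""
    with wave.open(carrier_path, "rb") as w:
        params = w.getparams()
        frames = bytearray(w.readframes(w.getnframes()))
    # Flatten the payload into individual bits, LSB-first within each byte.
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(frames):
        raise ValueError("carrier too small (this scheme needs ~8x payload)")
    for i, bit in enumerate(bits):
        frames[i] = (frames[i] & 0xFE) | bit   # overwrite one sample's LSB
    with wave.open(out_path, "wb") as w:
        w.setparams(params)
        w.writeframes(bytes(frames))

def extract(stego_path, n_bytes):
    """Recover n_bytes of payload by reading the LSBs back out."""
    with wave.open(stego_path, "rb") as w:
        frames = w.readframes(w.getnframes())
    out = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (frames[b * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)
```

Because each sample changes by at most one quantization level, the perturbation sits below the audible noise floor, which is why naive statistical detectors struggle.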
Analysis of Cyberattacks on UASs in Simulation
Scott Yantek, James Goppert, Nandagopal Sathyamoorthy, Inseok Hwang
Unmanned aerial systems (UASs) have attained widespread use in military and research applications, and with recent court rulings their commercial use is rapidly expanding. Because of their dependence on computer systems, their high degree of autonomy, and the danger posed by a loss of vehicle control, it is critical that the proliferation of UASs be accompanied by a thorough analysis of their vulnerabilities to cyberattack. We approach the issue from a controls perspective, assuming the attacker has already gained some amount of control over the system. We then investigate vulnerabilities to certain types of attacks.
Communications, Information, and Cybersecurity in Systems-of-Systems
Cesare Guariniello, Dr. Daniel DeLaurentis
Analyzing the risks associated with communications and information security in a system-of-systems is a challenging endeavor. The difficulty stems from the interdependencies in the communication and operational dimensions of the system-of-systems network, where disruptions to nodes and links can give rise to cascading failure modes. In this research, we propose the application of a functional dependency analysis tool as a means of analyzing system-of-systems operational and communication architectures. The goal of this research is to quantify the impact of attacks on communications and information flows on the operability of the component systems, and to evaluate and compare different architectures with respect to their robustness and resilience following an attack. The model accounts for partial capabilities and partial degradation. By comparing architectures based on their sensitivity to attacks, the method can guide decisions both in architecting the system-of-systems and in planning updates and modifications, accounting for the criticality of nodes and links to the robustness of the system-of-systems. Synthetic examples show conceptual application of the method.
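The flavor of such an analysis can be sketched as propagating partial degradation through a dependency graph. The weighting rule below (a node loses capability in proportion to its most strongly degraded upstream dependency) is a simplifying assumption for illustration, not the actual functional dependency model used in the research.

```python
# Illustrative operability propagation over an acyclic dependency graph.
# The penalty rule is an assumption, not the poster's actual formulation.

def propagate(self_op, deps, weights):
    """Compute operability of each node in [0, 1].

    self_op: node -> intrinsic operability (1.0 = fully capable)
    deps:    node -> list of upstream nodes it depends on
    weights: (node, upstream) -> dependency strength in [0, 1]
    """
    op = {}
    def visit(n):
        if n in op:
            return op[n]
        # The worst (most degrading) upstream dependency caps this node.
        penalty = min((1 - weights[(n, u)] * (1 - visit(u)) for u in deps[n]),
                      default=1.0)
        op[n] = self_op[n] * penalty
        return op[n]
    for n in self_op:
        visit(n)
    return op
```

Attacking a node means lowering its `self_op`; rerunning `propagate` then shows how the disruption cascades, which supports comparing architectures by how far the degradation spreads.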
Distributed Fault Detection and Isolation for Kalman Consensus Filter
Kartavya Neema, Daniel DeLaurentis
This research addresses the problem of developing a distributed fault detection methodology for the recently developed distributed estimation algorithm known as the Kalman Consensus Filter (KCF). We extend the residual covariance matching techniques developed for detecting faults in centralized Kalman filters and apply them to distributed fault detection in the KCF. Faults arising from faulty sensor measurements are diagnosed and isolated from the system; specifically, we consider faults due to changes in sensor noise statistics and outliers in the sensor measurements. We further develop a Robust Kalman Consensus Filter algorithm and demonstrate its effectiveness using simulation results.
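The core of residual covariance matching can be illustrated for a single scalar measurement: compare the normalized innovation squared against its expected chi-square distribution and flag the sensor when the residual is statistically implausible. This is a textbook single-node sketch; the poster's contribution is extending such tests to the distributed KCF setting, which is not shown here.

```python
def residual_test(z, x_pred, P_pred, H=1.0, R=0.1, threshold=6.63):
    """Scalar sketch of residual covariance matching.

    z: measurement; x_pred, P_pred: predicted state and variance;
    H: measurement model; R: nominal measurement noise variance.
    Flags the sensor when the normalized innovation squared exceeds a
    chi-square threshold (6.63 ~ 99th percentile, 1 degree of freedom).
    """
    nu = z - H * x_pred          # innovation (residual)
    S = H * P_pred * H + R       # expected residual variance
    nis = nu * nu / S            # normalized innovation squared
    return nis > threshold
```

An outlier or an increase in actual sensor noise inflates `nis` relative to the nominal `S`, which is precisely the mismatch the covariance matching test detects.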
End-to-End Security in Service Oriented Architecture
Mehdi Azarmi, Bharat Bhargava
With the explosion of web-based services and the increasing popularity of cloud computing, Service-Oriented Architecture (SOA) is becoming a key architectural style for the development of distributed applications. However, numerous security challenges in SOA need to be addressed. In this poster, we discuss the key security challenges in SOA and propose two solutions: a framework for end-to-end policy monitoring and enforcement, and secure and adaptive service composition.
INSuRE -- Information Security Research and Education
PI: Dr. Melissa Dark, CoPI: Brandeis Marshall, Project Team: Courtney Falk, L. Allison Roberts, Filipo Sharevski
The INSuRE project is an effort to iteratively pilot and scale a sustainable research network that 1) connects institution-level resources, university enterprise systems, and national research networks; 2) enables more rapid discovery and recommendation of researchers, expertise, and resources; 3) supports the development of new collaborative science teams to address new or existing research challenges; 4) exposes and engages graduate students in research activity of national priority at participating institutions; 5) provides for the development and sharing of tools that support research; and 6) facilitates evaluation of research, scholarly activity, and resources, especially over time.
Log-Centric Analytics for Advanced Persistent Threat Detection
Shiqing Ma, Xiangyu Zhang, Dongyan Xu
Today’s enterprises face increasingly significant threats such as advanced persistent threats (APTs). Unfortunately, current cyber attack defense technologies are not keeping pace with attack trends. Meanwhile, enterprises continue to generate large volumes of logs and traces at the system, application, and network levels, yet these remain under-utilized in cyber attack detection. We present an integrated framework for advanced targeted attack detection. Our framework consists of two major components: LogIC (Log-based Investigation of Causality), a fine-grained system logging and causal analysis tool that enables high-accuracy causal analysis of the system log generated by an individual machine, and LogAn (Log Analytics), a “Big Data” analyzer and correlator of end-system and network logs that enables advanced targeted attack detection by querying and correlating logs across machines in an enterprise. The key idea behind LogIC is to partition the execution of a long-running application process into multiple finer-grained “execution units” for high causal analysis accuracy, without requiring application source code. The key idea behind LogAn is to leverage the single-host causal analysis results to detect an enterprise-wide APT via causal graph recognition and context correlation.
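The backward causal analysis underlying such systems can be sketched as a graph traversal over provenance records. The record format and example entities below are illustrative assumptions, not LogIC's actual log schema.

```python
from collections import defaultdict

# Toy backward causal trace over (source, operation, target) provenance
# records, meaning data or control flowed from source to target.

def backward_trace(log, sink):
    """Return every logged entity that causally precedes `sink`."""
    parents = defaultdict(set)
    for src, _op, dst in log:
        parents[dst].add(src)
    seen, stack = set(), [sink]
    while stack:
        node = stack.pop()
        for p in parents[node]:
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen
```

Starting from a detected artifact (say, a malicious process) and tracing backward yields the causal subgraph an investigator would inspect; correlating such subgraphs across hosts is the enterprise-wide step LogAn performs.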
Making the Case for Digital Forensic Field Training in Parole Services
The purpose of my research is to provide insight into the need for digital forensic field training in parole services. The current system used by most parole agencies is inefficient, costly, and disadvantageous to public safety. Basic forensic field training and digital equipment for parole agents could reduce arrest times and taxpayer costs, and increase public safety.
Periodic Mobile Forensics
Android devices are becoming more pervasive, yet there are currently few enterprise methods to identify and measure malicious user and application behavior in order to detect when a compromise has occurred. Research being conducted at MITRE in conjunction with Purdue is examining over-the-air (OTA) methods to determine when a phone has been compromised and how compromise can best be detected.
Robust Hybrid Controller Design: Cyber Attack Mitigation Strategy for Cyber-Physical Systems
Cheolhyeon Kwon and Inseok Hwang
This paper considers controller design for Cyber-Physical Systems (CPSs) that is robust to various types of cyber attacks. While previous studies have investigated secure control by assuming a specific type of attack strategy, in this paper we propose a hybrid robust control scheme that contains multiple sub-controllers, each matched to a different type of cyber attack. The system can then adapt to various cyber attacks (including those not assumed in sub-controller design) by switching among its sub-controllers to achieve the best performance. We propose a method for designing the secure switching logic to counter all possible cyber attacks, and we mathematically verify the system's performance and stability as well. The performance of the proposed control scheme is demonstrated by an example of a hybrid H2-H-infinity controller applied to a CPS subject to cyber attacks.
Text-based Approaches to Detect Phishing Attacks
Gilchan Park, Lauren Stuart, Julia M. Taylor, Victor Raskin
The first study reports on an experiment in text-based phishing detection. The developed algorithm builds on previously published work on PhishNet-NLP, a content-based phishing detection system. In particular, this research analyzes the keywords used to prompt actions in email texts. The algorithm achieved considerable success in filtering out malicious emails (a high true positive rate, TPR); however, the rate of legitimate text falsely identified as phishing (the false positive rate, FPR) needed to be addressed. To address this, a tradeoff between TPR and FPR was performed to reduce the FPR while minimizing the decrease in phishing detection accuracy. The second study compares the ability of computers and humans to detect phishing attempts. Two series of experiments were conducted, one for machines and one for humans, using the same dataset; both were asked to categorize the emails as phishing or legitimate. The results show that machines and human subjects differ in their classification of phishing emails. This comparison suggests that the human intelligence used to detect some types of phishing emails that machines could not recognize needs to be semantically computerized in order to improve machine phishing detection ability.
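The TPR/FPR tradeoff described above can be sketched with a toy keyword-scoring classifier: score each email by its count of action-directing keywords, then sweep the decision threshold. The keyword list and scoring rule are assumptions for illustration, not the actual PhishNet-NLP-based algorithm.

```python
# Hypothetical action-keyword scorer; the word list is illustrative.
ACTION_WORDS = {"verify", "click", "update", "confirm", "login", "urgent"}

def score(email_text):
    """Count action-directing keywords, the signal the study analyzes."""
    return sum(1 for w in email_text.lower().split()
               if w.strip(".,!:") in ACTION_WORDS)

def rates(emails, labels, threshold):
    """TPR and FPR for the rule 'phishing if score >= threshold'.
    Raising the threshold lowers FPR at the cost of detection (TPR)."""
    flagged = [score(e) >= threshold for e in emails]
    tp = sum(f and l for f, l in zip(flagged, labels))
    fp = sum(f and not l for f, l in zip(flagged, labels))
    pos = sum(labels)
    neg = len(labels) - pos
    return tp / pos, fp / neg
```

Sweeping `threshold` over the scored corpus traces out the TPR/FPR curve on which the study's operating point was chosen.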
The Case of Using Negative (Deceiving) Information in Data Protection
Mohammed Almeshekah, Mikhail Atallah and Eugene Spafford
In this paper we develop a novel taxonomy of methods and techniques that can be used to protect digital information. We explore the complex relationships among these protection techniques, grouped into four categories, present an analysis of these relationships, and discuss how they can be applied at different scales within organizations. We map these protection techniques against the cyber kill-chain model and discuss our findings. Moreover, we identify the use of deceit as a protection technique that can significantly enhance the security of computer systems. We argue that the well-known Kerckhoffs's principle has been misinterpreted in a way that drives the security community away from deception-based mechanisms, and we examine the advantages these techniques can offer when protecting our information alongside traditional methods of denial and hardening. We show that by intelligently introducing deceit into information systems, we not only lead attackers astray, but also give organizations the ability to detect leakage; create doubt and uncertainty about leaked data; add risk on the adversary's side to using the leaked information; and significantly enhance our ability to attribute adversaries. Finally, we discuss how to overcome some of the challenges that hinder the adoption of deception-based techniques.