CERIAS - Center for Education and Research in Information Assurance and Security

Purdue University

Reports and Papers Archive



Firewalls & Internet Security Conference

National Computer Security Association
Added 2016-11-29

Report to the President June 2005: Computational Science: Ensuring America's Competitiveness

President's Information Technology Advisory Committee
Added 2016-11-29

Analyzing Computer Intrusions

Andrew H. Gross
Added 2016-11-29

Computer Viruses

David J. Stang
Added 2016-11-11

E-lection 2004: Is e-voting ready for prime time?

John Marshall Law School
Added 2016-11-11

Spyder

Trident Data Systems
Added 2016-11-11

Computational Environment for Modeling and Analysing Network Traffic Behaviour using the Divide and Recombine Framework

CERIAS TR 2016-6
Ashrith Barthur
Download: PDF

There are two essential goals of this research. The first is to design and construct a computational environment for studying large and complex datasets in the cybersecurity domain. The second is to analyse the Spamhaus blacklist query dataset, which includes uncovering the properties of blacklisted hosts and understanding how blacklisted hosts behave over time. The analytical environment enables deep analysis of very large and complex datasets by exploiting the divide and recombine framework. The capability to analyse data in depth enables one to go beyond summary statistics in research. This deep analysis operates at the highest level of granularity without any compromise on the size of the data. The environment is also fully capable of processing the raw data into a data structure suited for analysis.

Spamhaus is an organisation that identifies malicious hosts on the Internet. Information about malicious hosts is stored in a distributed database by Spamhaus and served through DNS query-responses. Spamhaus and other malicious-host-blacklisting organisations have replaced the smaller malicious-host databases that multiple organisations once curated independently for their internal needs. Spamhaus services are popular due to their free access, exhaustive information, historical information, simple DNS-based implementation, and reliability. The malicious-host information obtained from these databases is used as a first step in weeding out potentially harmful hosts on the Internet.

During the course of this research a detailed packet-level analysis was carried out on the Spamhaus blacklist data. It was observed that the query-responses displayed some peculiar behaviours. These anomalies were studied and modeled, and were found to show definite patterns. These patterns are empirical evidence of a systemic or statistical phenomenon.
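The DNS query-response mechanism the abstract refers to follows the standard DNSBL convention: the client reverses the octets of an IPv4 address, prepends them to the blacklist zone, and issues an ordinary DNS query for the resulting name. As a rough illustration (the zone name and return-code semantics below are the publicly documented Spamhaus conventions, not details taken from the thesis):

```python
# Sketch of how a Spamhaus-style DNSBL lookup name is formed.
# zen.spamhaus.org is Spamhaus's public combined blocklist zone.

def dnsbl_query_name(ip: str, zone: str = "zen.spamhaus.org") -> str:
    """Build the DNS name queried for a blacklist lookup of `ip`."""
    octets = ip.split(".")
    if len(octets) != 4:
        raise ValueError("expected a dotted-quad IPv4 address")
    # Octets are reversed, as in reverse-DNS, then joined to the zone.
    return ".".join(reversed(octets)) + "." + zone

name = dnsbl_query_name("203.0.113.7")
# name == "7.113.0.203.zen.spamhaus.org"
```

A listed host answers with an address in 127.0.0.0/8 encoding the listing reason; NXDOMAIN means the host is not listed. It is query-response traffic of this shape that the packet-level analysis in the thesis examines.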

Added 2016-10-17

Building a Digital Forensic Investigation Technique for Forensically Sound Analysis of Covert Channels in IPv6 and ICMPv6, Using Custom IDS Signatures and Firewall System Logs

CERIAS TR 2016-7
Lourdes Gino Dominic Savio
Download: PDF

Covert channels are communication channels used for information transfer, created by violating the security policies of a system (Latham, 1986, p. 80). Research in the field has shown that, like many communication channels, IPv4 and the TCP/IP protocol suite have features, functionality, and options that could be exploited by cyber criminals to leak data or to communicate anonymously through covert channels. With the advent of IPv6, researchers have been on the lookout for covert channels in IPv6, and one of them demonstrated a proof of concept in 2006. In the nine years since, IPv6 and its related protocols have undergone major changes, which introduced a need to reevaluate the current situation of IPv6. The current research is a continuation of our (author of this thesis - Lourdes, and committee member - Prof. Hansen) previous studies (Lourdes & Hansen, 2015, 2016), which demonstrated the existence of covert channels in IPv6 and ICMPv6 by building software for them and testing it against a simulated enterprise network. Our study also explained how some enterprise firewalls and Intrusion Detection Systems (IDS) do not currently detect such covert channels, and how they could be tuned to detect them. The current research aimed to understand whether these detection mechanisms (IDS signatures) for IPv6 and ICMPv6 covert channels are forensically sound, and to explore whether the system logs left by such covert channels in the firewall could provide forensically sound evidence. The current research showed that the IDS signatures that detected certain covert channels in IPv6 and ICMPv6 conformed to the forensic soundness criteria of ‘validity of the scientific method’ and ‘known/potential error rates’. It also showed that the firewall system logs potentially detected certain covert channels in IPv6 and ICMPv6 and likewise conformed to the forensic soundness criterion of ‘validity of the scientific method’. Thus the current study showed that these could be used as digital forensic investigation techniques for network forensics of certain types of covert channels in IPv6 and ICMPv6.

Added 2016-09-06

Cross-Domain Data Dissemination And Policy Enforcement

CERIAS TR 2015-19
Rohit Ranchal
Download: PDF

Modern information systems are distributed and highly dynamic. They comprise a number of hosts from heterogeneous domains, which collaborate, interact, and share data to handle client requests. Examples include cloud-hosted solutions, service-oriented architectures, electronic healthcare systems, product lifecycle management systems, and so on. A client request translates into multiple internal interactions involving different parties; each party can access and further share the client’s data. However, such interactions may share data with unauthorized parties and violate the client’s disclosure policies. In this case, the client has no knowledge of or control over interactions beyond its trust domain; therefore, the client has no means of detecting violations.

Opaque data sharing in such distributed systems introduces new security challenges not present in traditional systems. Existing solutions provide point-to-point secure data transmission and ensure security within a single domain, but are insufficient for distributed data dissemination because of the involvement of multiple cross-domain parties.

This dissertation addresses the problem of policy-based distributed data dissemination (PD3) and proposes a data-centric solution for end-to-end secure data disclosure in distributed interactions. The solution ensures that the data are distributed along with the policies that dictate data access and an execution monitor (a policy evaluation and enforcement mechanism) that controls data disclosure and protects data dissemination throughout the interaction lifecycle. It empowers data owners with control of data disclosure decisions outside their trust domains and reduces the risk of unauthorized access.

This dissertation makes the following contributions. First, it presents a formal description of the PD3 problem and identifies the main requirements for a new solution. Second, it introduces EPICS, an extensible framework for enforcing policies in composite web services, and describes its design, implementation, and evaluation. Third, it demonstrates a novel application of the proposed solution to address privacy and identity management in cloud computing.
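The core "data travels with its policy and an execution monitor" idea can be sketched as follows. This is a toy illustration under assumed semantics, not the EPICS API: the payload is bundled with the owner's disclosure policy, and a check is enforced (and logged) at every hop before the payload is released.

```python
# Minimal sticky-policy sketch: data, policy, and enforcement travel together.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ProtectedData:
    payload: str
    allowed_domains: set = field(default_factory=set)  # owner's disclosure policy
    audit_log: list = field(default_factory=list)      # record of every request

    def disclose(self, requester_domain: str) -> Optional[str]:
        """Execution-monitor stand-in: evaluate the policy at each hop."""
        permitted = requester_domain in self.allowed_domains
        self.audit_log.append((requester_domain, permitted))
        return self.payload if permitted else None

record = ProtectedData("patient record", {"hospital.example", "lab.example"})
assert record.disclose("lab.example") == "patient record"   # authorized hop
assert record.disclose("ads.example") is None               # violation blocked, logged
```

Because the policy and monitor accompany the data, the owner retains a say in disclosure decisions even beyond its own trust domain, which is the property the dissertation targets.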

Added 2016-09-05

Packet Filter Performance Monitor (Anti-DDoS Algorithm for Hybrid Topologies)

CERIAS TR 2016-4
Ibrahim M. Waziri, Jr.
Download: PDF

DDoS attacks are increasingly becoming a major problem. According to Arbor Networks, the largest DDoS attack reported by a respondent in 2015 was 500 Gbps. Hacker News stated that the largest DDoS attack as of March 2016 was over 600 Gbps, and the attack targeted the entire BBC website.

With this increasing frequency and threat, and with the average DDoS attack lasting about 16 hours, we know for certain that DDoS attacks will not be going away anytime soon. Commercial companies are not effectively providing mitigation techniques against these attacks, considering that even major corporations face the same challenges. Current security appliances are not strong enough to handle the overwhelming traffic that accompanies current DDoS attacks. There is also limited research on solutions to mitigate DDoS attacks. Therefore, there is a need for a means of mitigating DDoS attacks in order to minimize downtime. One possible solution is for organizations to implement their own architectures designed to mitigate DDoS attacks.

In this dissertation, we presented and implemented an architecture that utilizes an activity monitor to change the states of firewalls based on their performance in a hybrid network. Both firewalls are connected inline. The monitor is mirrored to observe the firewall states and reroutes traffic when one of the firewalls becomes overwhelmed by an HTTP DDoS flooding attack. The monitor connects to the API of both firewalls, and the communication between the firewalls and the monitor is encrypted with AES, using the PyCrypto Python implementation.

This dissertation is structured in three parts. The first part found the weaknesses of the hardware firewall and determined its threshold based on spike and endurance tests; this was achieved by flooding the hardware firewall with HTTP packets until it became overwhelmed and unresponsive. The second part implemented the same test, but targeted at the virtual firewall; the same parameters, test factors, and determinants were used, although a different load tester was utilized. The final part was the design and implementation of the firewall performance monitor.

The main goal of the dissertation is to minimize downtime when network firewalls are overwhelmed as a result of a DDoS attack.
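The failover logic the abstract describes can be sketched in a few lines. The health metric, threshold value, and firewall names below are hypothetical stand-ins for the firewall API measurements used in the dissertation; the point is only the state change triggered when the active firewall crosses its performance threshold.

```python
# Hedged sketch of the activity monitor: poll both inline firewalls and
# swap the active one out when its measured response time shows overload.

class FirewallMonitor:
    def __init__(self, threshold_ms: float = 500.0):
        self.threshold_ms = threshold_ms
        self.active = "hardware"    # hardware firewall starts inline
        self.standby = "virtual"

    def check(self, response_times_ms: dict) -> str:
        """Reroute traffic if the active firewall exceeds its threshold."""
        if response_times_ms[self.active] > self.threshold_ms:
            self.active, self.standby = self.standby, self.active
        return self.active

mon = FirewallMonitor(threshold_ms=500.0)
assert mon.check({"hardware": 80.0, "virtual": 70.0}) == "hardware"   # healthy
assert mon.check({"hardware": 2400.0, "virtual": 90.0}) == "virtual"  # failover
```

In the dissertation's architecture this monitor additionally authenticates to both firewall APIs and encrypts its control traffic with AES, which the sketch omits for brevity.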

Added 2016-08-02

Modeling Deception In Information Security As A Hypergame – A Primer

CERIAS TR 2016-5
Christopher Gutierrez, Mohammed Almeshekah, Eugene Spafford, Saurabh Bagchi, Jeff Avery, Paul Wood
Download: PDF

Hypergames are a branch of game theory used to model and analyze game theoretic conflicts between multiple players who may have misconceptions of the other players’ actions or preferences. They have been used to model military conflicts such as the Allied invasion of Normandy in 1944 [1], the fall of France in WWII [2], and the Cuban missile crisis [3]. Unlike traditional game theory models, hypergames give us the ability to model the misperceptions that result from the use of deception, mimicry, and misinformation. In the security world, there is little work that shows how to use deception in a principled manner as a strategic defensive mechanism in computing systems. In this paper, we present how hypergames model deception in computer security conflicts. We discuss how hypergames can be used to model the interaction between adversaries and system defenders, and we work through a specific example: a system in which an insider adversary wishes to steal confidential data from an enterprise while a security administrator protects it. We show the advantages of incorporating deception as a defense mechanism.

[1] M. A. Takahashi, N. M. Fraser, and K. W. Hipel. A Procedure for Analyzing Hypergames. European Journal of Operational Research, 18:111–122, 1984.

[2] P. G. Bennett and M. R. Dando. Complex Strategic Analysis: A Hypergame Study of the Fall of France. Journal of the Operational Research Society, 30(1):23–32, 1979.

[3] N. Fraser and K. Hipel. Conflict analysis: Models and Resolutions. North-Holland Series in System Science and Engineering. North-Holland, 1984.
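The essence of a hypergame, as opposed to a standard game, is that each player best-responds in the game they *perceive*, while payoffs are realized in the true game. The toy example below is invented for illustration (the payoff numbers and action names are hypothetical, not from the paper): a defender seeds the system with decoy data, so the insider's perceived payoffs diverge from the true ones.

```python
# Toy hypergame: the insider optimizes a misperceived payoff table.

# True payoffs to the insider (the defender monitors the decoys, so any
# theft is detected and punished).
true_insider_payoff = {"steal_real": -10, "steal_decoy": -10, "do_nothing": 0}

# What the insider *believes* the payoffs are: decoys look like valuable
# real data and monitoring is invisible. This perception gap is the hypergame.
perceived_insider_payoff = {"steal_real": 5, "steal_decoy": 8, "do_nothing": 0}

def best_response(payoffs: dict) -> str:
    """Action maximizing payoff in the (possibly misperceived) game."""
    return max(payoffs, key=payoffs.get)

choice = best_response(perceived_insider_payoff)  # insider acts on perception
outcome = true_insider_payoff[choice]             # ...but reality pays out

assert choice == "steal_decoy"  # deception steers the insider to the decoy
assert outcome == -10           # the theft is detected: deception paid off
```

A standard game-theoretic model, where both players share one payoff matrix, cannot express this gap between perception and reality, which is why the paper argues for hypergames as the right formalism for deceptive defenses.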

Added 2016-07-25

Knowledge Modeling of Phishing Emails

CERIAS TR 2016-3
Courtney Falk
Download: PDF

This dissertation investigates whether malicious phishing emails are detected better when a meaningful representation of the email bodies is available. The natural language processing theory of Ontological Semantics Technology is used for its ability to model the knowledge representation present in the email messages. Known-good and phishing emails were analyzed and their meaning representations fed into machine learning binary classifiers. Unigram language models of the same emails were used as a baseline for comparing the performance of the meaningful data. The end results show that a binary classifier trained on meaningful data is better at detecting phishing emails than a unigram language model binary classifier, at least for some of the selected machine learning algorithms.
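The unigram baseline mentioned in the abstract amounts to classifying emails from smoothed word counts alone, with no access to meaning. A minimal sketch of such a baseline, here as a multinomial Naive Bayes classifier with add-one smoothing (the tiny training corpus is invented for illustration; the study used real ham and phishing email bodies, and the actual choice of classifier is not specified here):

```python
# Unigram bag-of-words baseline: multinomial Naive Bayes with Laplace smoothing.
import math
from collections import Counter

def train(docs_by_label):
    """Count unigrams per label and build the shared vocabulary."""
    counts = {lbl: Counter(w for d in docs for w in d.split())
              for lbl, docs in docs_by_label.items()}
    vocab = set().union(*counts.values())
    return counts, vocab

def classify(text, counts, vocab):
    """Pick the label maximizing the smoothed unigram log-likelihood
    (uniform class prior assumed)."""
    scores = {}
    for lbl, cnt in counts.items():
        total = sum(cnt.values())
        scores[lbl] = sum(math.log((cnt[w] + 1) / (total + len(vocab)))
                          for w in text.split())
    return max(scores, key=scores.get)

counts, vocab = train({
    "phish": ["verify your account password now",
              "urgent account suspended click link"],
    "ham":   ["meeting notes attached see agenda",
              "lunch tomorrow with the team"],
})
assert classify("click to verify your password", counts, vocab) == "phish"
```

The dissertation's contribution is replacing the word-count features above with meaning representations produced by Ontological Semantics Technology, and showing the latter detect phishing better for some classifiers.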

Added 2016-07-13