Purdue University
Center for Education and Research in Information Assurance and Security

Reports and Papers Archive



Digital Forensics in Law Enforcement: A Needs Based Analysis of Indiana Agencies

CERIAS TR 2015-18
Teri A. Cummins Flory
Download: PDF

Cyber crime is a growing problem, with its impact on both businesses and individuals increasing exponentially, but the ability of law enforcement agencies to investigate and successfully prosecute criminals for these crimes is unclear. Many national needs assessments were conducted in the late 1990s and early 2000s by the Department of Justice (DOJ) and the National Institute of Justice (NIJ), all of which indicated that state and local law enforcement did not have the training, tools, or staff to effectively conduct digital investigations (Institute for Security and Technology Studies [ISTS], 2002; NIJ, 2004). Additionally, some studies have been conducted at the state level; however, to date, none have been conducted in Indiana (Gogolin & Jones, 2010). A quick search of the Internet located multiple training opportunities and publications that are available at no cost to state and local law enforcement, but it is not clear how many agencies use these resources (“State, Local, & Tribal” for FLETC, n.d.; https://www.ncfi.usss.gov). This study provided a current and localized assessment of the ability of Indiana law enforcement agencies to investigate effectively when a crime involving digital evidence is alleged to have occurred, the availability of training for both law enforcement officers and prosecuting attorneys, and the ability of prosecuting attorneys to pursue and obtain convictions in cases involving digital evidence. Through an analysis of the survey responses from Indiana law enforcement agencies and prosecutors’ offices, it is evident that Indiana agencies have improved their ability to investigate crimes with digital evidence, with more than half having employees on staff who have attended a digital forensic training course within the past five years. However, a large majority of the agencies still rate their ability to investigate crimes with digital evidence as mid-range or lower. The results support the recommendation that a comprehensive resource guide be made available that agencies can use to locate experts, obtain assistance with standard operating procedures, learn about free training courses, and find funding opportunities to increase their capabilities in investigating crimes involving digital evidence.

Added 2016-01-20

The Impact of Mobile Network Forensics Evidence on the Criminal Case Processing Performance in Macedonia: An Institutional Analysis Study

CERIAS TR 2015-17
Filipo Sharevski
Download: PDF

The purpose of this study was to explore the contribution of localization data, network-management data, and content-of-communication data to case processing performance in Macedonia. The mobile network forensics evidence was analyzed with respect to the impact of mobile network data variety, mobile network data volume, and forensic processing on case disposition time. The results from this study indicate that case disposition time is negatively correlated with network-management data volume and positively correlated with content-of-communication data volume. The relevance of the network-management data was recognized in the highly granular service behavior profile developed from a larger number of records, while the relevance of the content-of-communication data was recognized in the substantial number of excerpts of intercepted communication. The results also reveal a difference in case processing time between cases with only localization or network-management data and cases where these are combined with content-of-communication data.

Added 2016-01-15

An Approach to Near Field Data Selection in Radio Frequency Identification

CERIAS TR 2015-16
Robert D. Winkworth
Download: PDF

Personal identification is needed in many civil activities, and common identification cards, such as a driver’s license, have become the de facto standard document. Radio frequency identification has complicated this matter. Unlike their printed predecessors, contemporary RFID cards lack a practical way for users to control access to their individual fields of data. This leaves them more available to unauthorized parties, and more prone to abuse. This work therefore undertook to test a novel RFID card technology that allows overlays to be used for reliable, reversible data access settings. Similar to other proposed switching mechanisms, it offers advantages that may greatly improve outcomes. RFID use is increasing in identity documents such as drivers’ licenses and passports, and with it concern over the theft of personal information, which can enable unauthorized tracking or fraud. Effort put into designing a strong foundation technology now may allow for widespread development on it later. In this dissertation, such a technology was designed and constructed to drive the central thesis that selective detuning could serve as a feasible, reliable mechanism. The concept had previously been shown effective in limiting access to all fields simultaneously, and was here shown effective in limiting access to specific fields selectively. A novel card was produced in familiar dimensions, with an intuitive interface by which users may cover the visible print of the card to suppress the corresponding wireless emissions. A discussion was included of similar technologies, involving capacitive switching, that could further improve the outcomes if such a product were put to large-scale commercial fabrication. The card prototype was put through a battery of laboratory tests to measure the degree of independence between data fields and the reliability of the switching mechanism when used under realistically variable coverage, demonstrating statistically consistent performance in both. The success rate of RFID card read operations, already greater than 99.9%, was exceeded by the success rate of selection using the featured technology. With controls in place for the most influential factors related to card readability (namely the distance from the reader antennas and the orientation of the card antenna with respect to them), the card was shown to completely resist data acquisition from unauthorized fields while allowing unimpeded access to authorized fields, even after thousands of varied attempts. The effect was proven to be temporary and reversible. User intervention allowed the switching to occur in a matter of seconds by sliding a conductive sleeve or applying tape to regions of the card. Strategies for widespread implementation were discussed, emphasizing factors that included cost, durability, size, simplicity, and familiarity, all of which arise in card management decisions for common state and national identification such as a driver’s license. The relationship between the card and external database systems was detailed, as no such identification document could function in isolation. A practical solution will include details of how multiple fields are written to the card and separated sufficiently in external databases so as to allow user-directed selection of data field disclosure. Opportunities for implementation in corporate and academic environments were discussed, along with the ways in which this technology could invite further investigation.

Added 2015-11-20

ErsatzPasswords: Ending Password Cracking and Detecting Password Leakage

Mohammed H. Almeshekah, Christopher N. Gutierrez, Mikhail J. Atallah and Eugene H. Spafford

In this work we present a simple, yet effective and practical, scheme to improve the security of stored password hashes, rendering their cracking detectable and insuperable at the same time. We utilize a machine-dependent function, such as a physically unclonable function (PUF) or a hardware security module (HSM), at the authentication server to prevent off-site password discovery, and a deception mechanism to alert us if such an action is attempted. Our scheme can be easily integrated with legacy systems without the need for additional servers, changes to the structure of the hashed password file, or any client modifications. When using the scheme, the structure of the hashed password file, e.g., /etc/shadow or /etc/master.passwd, will appear no different than in the traditional scheme. However, when an attacker exfiltrates the hashed password file and tries to crack it, the only passwords he will get are the ersatzpasswords, the “fake passwords”. When an attempt to log in using these ersatzpasswords is detected, an alarm will be triggered in the system. Even with an adversary who knows about the scheme, cracking cannot be launched without physical access to the authentication server. The scheme also includes a secure backup mechanism in the event of a failure of the hardware-dependent function. We discuss our implementation and compare it to the traditional authentication scheme.
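The abstract describes two cooperating ideas: a machine-dependent function at the authentication server that makes off-site cracking of the hash file infeasible, and a deception mechanism that raises an alarm when a "cracked" (ersatz) password is replayed. The Python sketch below is only a loose illustration of that pattern, not the authors' construction: an HMAC key stands in for the PUF/HSM, and the decoy password is stored directly per account rather than being what offline cracking itself would yield.

    # Illustrative sketch only; MACHINE_SECRET stands in for a PUF/HSM-backed
    # function that never leaves the authentication server.
    import hmac, hashlib, os, secrets

    MACHINE_SECRET = os.urandom(32)

    def hdf(data: bytes) -> bytes:
        """Stand-in for the hardware-dependent function of the abstract."""
        return hmac.new(MACHINE_SECRET, data, hashlib.sha256).digest()

    shadow = {}   # username -> (salt, stored hash); looks like an ordinary shadow file
    ersatz = {}   # username -> decoy password that must never authenticate

    def enroll(user: str, password: str, decoy: str) -> None:
        salt = secrets.token_bytes(16)
        shadow[user] = (salt, hashlib.sha256(salt + hdf(password.encode())).digest())
        ersatz[user] = decoy   # without MACHINE_SECRET, an attacker can at best learn a decoy

    def login(user: str, password: str) -> bool:
        if password == ersatz.get(user):
            print(f"ALERT: ersatz password used for {user}; the hash file was likely exfiltrated")
            return False
        salt, stored = shadow[user]
        candidate = hashlib.sha256(salt + hdf(password.encode())).digest()
        return hmac.compare_digest(stored, candidate)

The point of the sketch is the division of labor: the keyed hdf() ties password verification to the server hardware, while any use of a decoy password is a high-confidence signal that the hash file has leaked.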

Added 2015-09-29

NFS Version 3 Design and Implementation

Brian Pawlowski, Chet Juszczak, Peter Staubach, Carl Smith, Diane Lebel, David Hitz

This paper describes a new version of the Network File System (NFS) that supports access to files larger than 4GB and increases sequential write throughput sevenfold when compared to unaccelerated NFS Version 2. NFS Version 3 maintains the stateless server design and simple crash recovery of NFS Version 2, as well as the philosophy of building a distributed file service from cooperating protocols. We describe the protocol and its implementation, and provide initial performance measurements. We then describe the implementation effort. Finally, we contrast this work with other distributed file systems and discuss future revisions of NFS.

Added 2015-09-11

The RC5 Encryption Algorithm

Ronald L. Rivest

Added 2015-09-11

Origin Tracking

Arie van Deursen, Paul Klint, Frank Tip

The notion of an ‘origin’ is introduced in the framework of conditional, not necessarily orthogonal, term rewriting systems. Origins are relations between subterms of intermediate terms which occur during rewriting and subterms of the initial term. Origin tracking is a method for incrementally computing origins during rewriting. Origins are a generalization of the well-known concept of residuals (also called descendants). A formal definition of origins is given and a method for implementing them is presented. Origin tracking is a highly versatile technique when applied to the prototyping of algebraic specifications of programming languages. For example, origin tracking allows program execution to be visualized in a semi-automatic way, given an algebraic specification of the dynamic semantics of the programming language. Furthermore, various notions of breakpoints for generic debuggers can be defined without difficulty. Given a specification of the static semantics of a programming language, origin tracking enables, once an error (such as a type incompatibility) has been detected, the position of the error in the source program to be inferred automatically.
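As a rough illustration of the idea (a toy of my own, not the paper's formal definition), the Python sketch below annotates every subterm of an initial term with its path, then applies one rewrite rule, add(zero, X) -> X, so that the surviving subterm in the result still carries the path of the subterm of the initial term it originated from.

    # Toy origin tracking: every subterm remembers the path (child indices) of the
    # subterm of the initial term it comes from; a rewrite step preserves that path.
    from dataclasses import dataclass

    @dataclass
    class Term:
        op: str
        args: tuple
        origin: tuple   # path into the initial term

    def annotate(t, path=()):
        op, *args = t
        return Term(op, tuple(annotate(a, path + (i,)) for i, a in enumerate(args)), path)

    def rewrite_add_zero(t: Term) -> Term:
        # Rule: add(zero, X) -> X; the contracted result keeps X's origin.
        if t.op == "add" and t.args and t.args[0].op == "zero":
            return t.args[1]
        return Term(t.op, tuple(rewrite_add_zero(a) for a in t.args), t.origin)

    expr = annotate(("add", ("zero",), ("s", ("zero",))))   # add(zero, s(zero))
    result = rewrite_add_zero(expr)
    print(result.op, result.origin)   # prints: s (1,) -- the result descends from position 1 of the input

A debugger built on such a relation can, for instance, map a subterm of a normal form (an error value, say) back to the position in the original program text from which it descends.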

Added 2015-09-11

DisARM: Mitigating Buffer Overflow Attacks on Embedded Devices

CERIAS TR 2015-15
Javid Habibi, Ajay Panicker, Aditi Gupta, and Elisa Bertino
Download: PDF

Security of embedded devices today is a critical requirement for the Internet of Things (IoT), as these devices will access sensitive information such as social security numbers and health records. This makes these devices a lucrative target for attacks that exploit vulnerabilities to inject malicious code or reuse existing code to alter the execution of their software. Existing defense techniques have major drawbacks, such as requiring source code or symbolic debugging information and incurring high overhead, which limit their applicability. In this paper we propose a novel defense technique, DisARM, that protects against both code-injection and code-reuse based buffer overflow attacks by breaking attackers’ ability to manipulate the return address of a function. Our approach operates on arbitrary executable binaries and thus does not require compiler support. In addition, it does not require user interaction and can thus be applied automatically. Our experimental results show that our approach incurs low overhead and significantly increases the level of security against both code-injection and code-reuse based attacks.

Added 2015-09-09

A Secure Communication Protocol for Drones and Smart Objects

CERIAS TR 2015-14
Jongho Won, Seung-Hyun Seo, Elisa Bertino
Download: PDF

In many envisioned drone-based applications, drones will communicate with many different smart objects, such as sensors and embedded devices. Securing such communications requires an effective and efficient encryption key establishment protocol. However, the design of such a protocol must take into account the constrained resources of smart objects and the mobility of drones. In this paper, a secure communication protocol between drones and smart objects is presented. To support the required security functions, such as authenticated key agreement, non-repudiation, and user revocation, we propose an efficient Certificateless Signcryption Tag Key Encapsulation Mechanism (eCLSC-TKEM). eCLSC-TKEM reduces the time required to establish a shared key between a drone and a smart object by minimizing the computational overhead at the smart object. Also, our protocol improves the drone’s efficiency by utilizing dual channels, which allows many smart objects to execute eCLSC-TKEM concurrently. We evaluate our protocol on commercially available devices, namely AR.Drone2.0 and TelosB, by using a parking management testbed. Our experimental results show that our protocol is much more efficient than other protocols.
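The abstract centers on establishing a shared key between a drone and a resource-constrained smart object. The sketch below shows only the generic elliptic-curve key-agreement-plus-KDF pattern that such protocols optimize, using the third-party Python 'cryptography' package; it is not the eCLSC-TKEM construction, which additionally provides certificateless signcryption, non-repudiation, and revocation.

    # Generic ECDH key establishment between two parties (illustration only,
    # not eCLSC-TKEM); requires the 'cryptography' package.
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes

    drone_priv = ec.generate_private_key(ec.SECP256R1())     # drone's ephemeral key pair
    object_priv = ec.generate_private_key(ec.SECP256R1())    # smart object's key pair

    # Each side combines its private key with the peer's public key ...
    drone_secret = drone_priv.exchange(ec.ECDH(), object_priv.public_key())
    object_secret = object_priv.exchange(ec.ECDH(), drone_priv.public_key())
    assert drone_secret == object_secret

    # ... and derives the session key that protects subsequent traffic.
    session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                       info=b"drone/smart-object session").derive(drone_secret)

The efficiency argument in the abstract is about shifting as much of this computation as possible away from the constrained smart object and letting the drone handle many such exchanges in parallel over dual channels.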

Added 2015-09-09

Distance-based Trustworthiness Assessment for Sensors in Wireless Sensor Networks

CERIAS TR 2015-13
Jongho Won, Elisa Bertino
Download: PDF

Wireless Sensor Networks (WSNs) have been substituting for human senses to make human lives better by monitoring the environment and providing intelligence. Collected sensor data are used to make decisions as a human does. Therefore, providing trustworthy sensor data is crucial to making correct decisions. However, faulty sensors can give incorrect information. In addition, since sensors are usually deployed in unattended areas and can be compromised, cryptographic approaches are insufficient. To address this problem, we propose a distance-based trustworthiness assessment scheme. In our scheme, a centralized trust assessment module outputs an absolute trust score for each sensed value and a trust score for each sensor. The trust scores of sensed values are calculated based on the differences between the values sensed by a sensor and by its neighbors, and on the physical distances from those neighbors. Our simulation results show that our scheme outputs practical and accurate trust scores in a realistic environment where the sensed values of interest gradually change over the monitored areas.
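The abstract does not give the scoring formula, so the Python sketch below assumes a simple inverse-distance weighting: neighbors that are physically close and disagree with a sensor pull the trust in its sensed value down more than distant ones do. It is an illustration of the general approach, not the authors' scheme.

    # Hedged sketch of a distance-weighted trust score for one sensed value.
    import math

    def value_trust(value, neighbors, scale=1.0):
        """neighbors: list of (neighbor_value, physical_distance); returns a score in (0, 1]."""
        if not neighbors:
            return 1.0
        weights = [1.0 / (1.0 + d) for _, d in neighbors]    # nearer neighbors count more
        diffs = [abs(value - v) for v, _ in neighbors]
        weighted_diff = sum(w * e for w, e in zip(weights, diffs)) / sum(weights)
        return math.exp(-weighted_diff / scale)               # larger weighted disagreement -> lower trust

    # A reading of 30.0: two nearby neighbors agree, one far-away neighbor reads 45.0.
    print(value_trust(30.0, [(29.5, 2.0), (30.2, 5.0), (45.0, 50.0)]))

A per-sensor trust score could then be maintained, for example, as a running average of the value-level scores that the sensor's reports receive over time.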

Added 2015-09-09

Specifying and checking UNIX security constraints

Allan Heydon & J.D. Tygar

We describe a system called Miro for specifying and checking security constraints. Our system is general because it is not tied to any particular operating system. It is flexible because users express security policies in a formal specification language, so it is easy to extend or modify a policy simply by augmenting or changing the specification for the current policy. Finally, our system is expressive enough to describe many relations on file system configurations; however, it is not expressive enough to describe more subtle security holes like Trojan horses or weaknesses in the passwords chosen by the system’s users. This article is a case study of the Miro languages and tools. We show how to represent various UNIX security constraints, including those described in a well-known paper on UNIX security, using our graphical specification language. We then describe the results we obtained from running our tools to check an actual UNIX file system against these constraints.

Added 2015-09-09

Dynamic detection and classification of computer viruses using general behavior patterns

Baudouin Le Charlier, Abdelaziz Mounji, Morton Swimmer

The number of files that need processing by the virus labs is growing nearly exponentially. Even though only a small proportion of these files contain new viruses, each file requires examination. The normal method for dealing with these files in the virus labs is still brute-force manual analysis. A virus expert runs several tests on a given file and delivers a verdict on whether it is virulent or not. If it is a new virus, it will be necessary to detect it. Some tools have been developed to speed up this process. These range from programs that identify previously classified files to programs that generate detection data. Some antiviruses have built-in mechanisms based on heuristics that enable the antivirus to detect unknown viruses. Unfortunately, all these tools have limitations. In this paper, we demonstrate how an emulator is used to monitor the system activity of a virtual PC, and how the expert system ASAX is used to analyze the stream of data the emulator produces. We use general rules to generically detect real viruses reliably, and specific rules to extract details of their behavior. The resulting system, called VIDES, is a prototype for an automatic analysis system for computer viruses and possibly a prototype antivirus for the emerging 32-bit PC operating systems.
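The architecture described in the abstract is an emulator that emits a stream of system-activity events and a rule engine (ASAX) that analyzes it. The toy Python below is only a stand-in for that pipeline: a hypothetical generic rule flags a process that opens an executable file for writing and then modifies it, a classic file-infector pattern; ASAX's actual rule language is not reproduced here.

    # Toy stand-in for an emulator event stream plus one generic behavior rule;
    # the event names and the rule are illustrative, not VIDES/ASAX specifics.
    from collections import defaultdict

    events = [                    # what an emulator trace might boil down to
        {"pid": 7, "op": "open",  "path": "GAME.EXE", "mode": "rw"},
        {"pid": 7, "op": "write", "path": "GAME.EXE"},
        {"pid": 9, "op": "open",  "path": "README.TXT", "mode": "r"},
    ]

    def infector_rule(stream):
        """Generic rule: the same process opens an executable for writing, then writes to it."""
        opened_rw = defaultdict(set)
        for e in stream:
            if e["op"] == "open" and e["path"].upper().endswith(".EXE") and "w" in e["mode"]:
                opened_rw[e["pid"]].add(e["path"])
            elif e["op"] == "write" and e["path"] in opened_rw[e["pid"]]:
                yield (e["pid"], e["path"])

    for pid, path in infector_rule(events):
        print(f"suspicious: process {pid} modified executable {path}")

Generic rules of this kind flag likely viral behavior, while more specific rules, as the abstract notes, can then extract the details of that behavior for classification.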

Added 2015-09-09

Parametric Program Slicing

John Field & G. Ramalingam

Program slicing is a technique for isolating computational threads in programs. In this paper, we show how to mechanically extract a family of practical algorithms for computing slices directly from semantic specifications. These algorithms are based on combining the notion of dynamic dependence tracking in term rewriting systems [13] with a program representation whose behavior is defined via an equational logic [12]. Our approach is distinguished by the fact that changes to the behavior of the slicing algorithm can be accomplished through simple changes in rewriting rules that define the semantics of the program representation. Thus, e.g., different notions of dependence may be specified, properties of language-specific datatypes can be exploited, and various time, space, and precision tradeoffs may be made. This flexibility enables us to generalize the traditional notions of static and dynamic slices to that of a constrained slice, where any subset of the inputs of a program may be supplied.

Added 2015-09-04

The GOST Encryption Algorithm

Bruce Schneier

Added 2015-09-04