Reports and Papers Archive
Wireless Sensor Networks (WSNs) are used in a wide variety of applications including environmental monitoring, electrical grids, and manufacturing plants. WSNs are plagued by the possibility of bugs manifesting only at deployment. However, debugging deployed WSNs is challenging for several reasons—the remote location of deployed nodes, the non-determinism of execution, and the limited hardware resources available. A primary debugging mechanism, record and replay, logs a trace of events while a node is deployed, such that the events can be replayed later for debugging. Existing recording methods for WSNs cannot capture the complete code execution, thus negating the possibility of a faithful replay and causing some bugs to go unnoticed. Existing approaches are not resource-efficient enough to capture all sources of non-determinism. We have designed, developed, and verified two novel approaches to solve the problem of practical record and replay for WSNs. Our first approach, Aveksha, uses additional hardware to trace tasks and other generic events at the function and task level. Aveksha does not need to stop the target processor, making it non-intrusive. Using Aveksha we have discovered a previously unknown bug in a common operating system. Our second approach, Tardis, uses only software to deterministically record and replay WSN nodes. Tardis is able to record all sources of non-determinism, based on the observation that such information is compressible using a combination of techniques specialized for respective sources. We demonstrate Tardis by diagnosing a newly discovered routing protocol bug.
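As a rough illustration of why per-source specialization makes non-determinism logs compressible, consider a slowly varying sensor trace. The sketch below is our own illustration (function names and the encoding are hypothetical, not Tardis internals): delta-encoding turns the trace into values clustered near zero, which then run-length-encode well.

```python
def delta_encode(readings):
    """Delta-encode a slowly varying sensor trace; deltas cluster near 0."""
    out, prev = [], 0
    for r in readings:
        out.append(r - prev)
        prev = r
    return out

def run_length_encode(values):
    """Collapse runs of identical values into [value, count] pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

# A hypothetical ADC trace: 10 raw readings shrink to 7 short run pairs.
trace = [500, 500, 500, 501, 501, 502, 502, 502, 502, 503]
compressed = run_length_encode(delta_encode(trace))
```

A generic compressor applied uniformly to all log streams would miss such structure; choosing the encoding per source (sensor readings, interrupt timings, message arrivals) is what makes whole-trace recording affordable.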
Control-Flow Integrity (CFI) is a defense which prevents control-flow hijacking attacks. While recent research has shown that coarse-grained CFI does not stop attacks, fine-grained CFI is believed to be secure.
We argue that assessing the effectiveness of practical CFI implementations is non-trivial and that common evaluation metrics fail to do so. We then evaluate fully-precise static CFI—the most restrictive CFI policy that does not break functionality—and reveal limitations in its security. Using a generalization of non-control-data attacks which we call Control-Flow Bending (CFB), we show how an attacker can leverage a memory corruption vulnerability to achieve Turing-complete computation on memory using just calls to the standard library. We use this attack technique to evaluate fully-precise static CFI on six real binaries and show that in five out of six cases, powerful attacks are still possible. Our results suggest that CFI may not be a reliable defense against memory corruption vulnerabilities.
We further evaluate shadow stacks in combination with CFI and find that their presence is necessary for security: deploying shadow stacks removes arbitrary code execution capabilities of attackers in three of six cases.
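The protection a shadow stack adds can be modeled in a few lines. This is a minimal sketch of the general mechanism (a simulation, not any particular hardened runtime): every call records the legitimate return address on a separate protected stack, and every return is checked against it, so overwriting the on-stack return address no longer redirects control flow.

```python
class ShadowStack:
    """Minimal model of a shadow stack: each return must match the
    return address recorded at the corresponding call."""

    def __init__(self):
        self._stack = []

    def on_call(self, return_address):
        # Record the legitimate return address in protected memory.
        self._stack.append(return_address)

    def on_return(self, return_address):
        # A mismatch means the on-stack copy was corrupted (e.g. by a
        # buffer overflow) and the return would hijack control flow.
        if not self._stack or self._stack.pop() != return_address:
            raise RuntimeError("return-address mismatch: possible hijack")

# Normal call/return pairing passes; a corrupted return address is caught.
ss = ShadowStack()
ss.on_call(0x4004D2)
ss.on_return(0x4004D2)
```

CFI alone constrains returns to the static set of valid return sites; the shadow stack pins each return to the one dynamically correct site, which is the precision gap the evaluation above measures.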
Modern systems rely on Address-Space Layout Randomization (ASLR) and Data Execution Prevention (DEP) to protect software against memory corruption vulnerabilities. The security of ASLR depends on keeping the randomized memory regions secret, and it can be broken by leaked addresses. While information leaks are common for client applications, server software has been hardened to reduce such information leaks.
Memory deduplication is a common feature of Virtual Machine Monitors (VMMs) that reduces the memory footprint and increases the cost-effectiveness of virtual machines (VMs) running on the same host. Memory pages with the same content are merged into one read-only memory page. Writing to these pages is expensive due to page faults caused by the memory protection, and this cost can be used by an attacker as a side-channel to detect whether a page has been shared. Leveraging this memory side-channel, we craft an attack that leaks the address-space layouts of the neighboring VMs, and hence, defeats ASLR. Our proof-of-concept exploit, CAIN (Cross-VM ASL INtrospection), defeats ASLR of a 64-bit Windows Server 2012 victim VM in less than 5 hours (for 64-bit Linux victims the attack takes several days). Further, we show that CAIN reliably defeats ASLR, regardless of the number of victim VMs or the system load.
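The core inference step of such a side-channel is simple to sketch. In this hypothetical illustration (the page names and the latency threshold are ours, not CAIN's actual parameters), the attacker writes once to each candidate page it planted and times the write: a slow write indicates a copy-on-write fault, i.e. the VMM had merged that page with an identical one in a victim VM.

```python
def sharing_candidates(write_latencies_ns, threshold_ns=2000):
    """Return the pages whose first write was slow. A slow write suggests
    the page had been deduplicated: writing triggers a copy-on-write page
    fault before the write can complete, which a fast (unmerged) page
    does not incur. Threshold and timings here are illustrative."""
    return [page for page, t in write_latencies_ns.items() if t > threshold_ns]

# Illustrative timings: page_b's write stalled on a CoW fault.
latencies = {"page_a": 350, "page_b": 5200, "page_c": 410}
```

By planting pages whose contents encode guesses about a victim's memory layout, the attacker learns which guesses were correct from which pages get merged, which is how the leak defeats ASLR.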
Applications written in low-level languages without type or memory safety are prone to memory corruption. Attackers gain code execution capabilities through memory corruption despite all currently deployed defenses. Control-Flow Integrity (CFI) is a promising security property that restricts indirect control-flow transfers to a static set of well-known locations.
We present Lockdown, a modular, fine-grained CFI policy that protects binary-only applications and libraries without requiring source code. Lockdown adaptively discovers the control-flow graph of a running process based on the executed code. The sandbox component of Lockdown restricts interactions between different shared objects to imported and exported functions by enforcing fine-grained CFI checks using information from a trusted dynamic loader. A shadow stack enforces precise integrity for function returns. Our prototype implementation shows that Lockdown results in low performance overhead, and a security analysis discusses the remaining gadgets.
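The import/export restriction between shared objects can be sketched as a table lookup. This is a simplified model of the idea, not Lockdown's implementation (the module names, symbols, and `link_map` structure are hypothetical): a cross-module call is permitted only if the target symbol is exported by the callee module and imported by the caller, as recorded by the dynamic loader.

```python
def check_cross_module_call(caller, target_module, target_symbol, link_map):
    """Allow a call from `caller` into `target_module` only if the target
    symbol is both imported by the caller from that module and exported
    by that module, per the (trusted) dynamic loader's records."""
    allowed = link_map[caller]["imports"].get(target_module, set())
    exported = link_map[target_module]["exports"]
    return target_symbol in allowed and target_symbol in exported

# Hypothetical loader records: the app imports one symbol from libcrypto.
link_map = {
    "app": {"imports": {"libcrypto": {"EVP_EncryptInit"}}, "exports": set()},
    "libcrypto": {"imports": {},
                  "exports": {"EVP_EncryptInit", "EVP_DecryptInit"}},
}
```

Restricting inter-module edges this way shrinks the attacker's usable target set far below "any address-taken function", which is what makes the policy fine-grained without source code.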
Website forgery is a type of web-based attack in which the phisher builds a website that is either entirely fabricated or a replica of a legitimate website, with the goal of deceiving users into revealing information that could be used to defraud the victim or launch other attacks. In this paper we identify the different types of website forgery phishing attacks and the non-technical countermeasures available to users (mostly non-IT users) who lack an understanding of how phishing attacks work and how they can protect themselves from these criminals.
In this paper I review the literature concerning investigative digital forensics models and how they apply to field investigators. A brief history of community supervision and how offenders are supervised is established. I also cover the differences between community supervision standards and police standards concerning searches, evidence, and standards of proof, and the difference between parole boards and courts. Currently, the burden of digital forensics for community supervision officers is placed on local or state law enforcement offices, which have personnel trained in forensics but may not place a high priority on outside cases. Forensic field training for community supervision officers could ease the caseloads of outside forensic specialists and increase fiscal responsibility by improving efficiency and public safety in the field of community supervision.
In this paper, we compare, analyze, and study the behavior of malware processes within both Type 1 and Type 2 virtualized environments. To achieve this, we set up two different virtualized environments and thoroughly analyzed each malware process's behavior. The goal was to determine whether malware behaves differently within the two architectures. We found no significant difference in how malware processes run and behave in either virtualized environment. However, our study is limited to basic analysis using basic tools; a more advanced analysis with more sophisticated tools could prove otherwise.
We have seen an evolution of increasing scale and complexity of enterprise-class distributed applications, such as web services providing anything from critical infrastructure services to electronic commerce. With this evolution, it has become increasingly difficult to understand how these applications perform, when they fail, and what can be done to make them more resilient to failures, whether due to hardware or to software. Application developers tend to focus on bringing their applications to market quickly without testing the complex failure scenarios that can disrupt or degrade a given web service. Operators configure these web services without complete knowledge of how the configurations interact with the various layers. Matters are not helped by ad hoc and often poor-quality failure logs generated by even mature and widely used software systems. Worse still, both end users and servers sometimes suffer from “silent problems” where something goes wrong without any immediately obvious end-user manifestation. To address these reliability issues, characterizing and detecting software problems with some post-detection diagnostic context is crucial.

This dissertation first presents a fault-injection and bug-repository-based evaluation to characterize silent and non-silent software failures and configuration problems in three-tier web applications and Java EE application servers. Second, for detection of software failures, we develop simple, low-cost application-generic and application-specific consistency checks, while for duplicate web requests (a class of performance problems), we develop a generic autocorrelation-based algorithm at the server end. Third, to provide diagnostic context as a post-detection step for performance problems, we develop an algorithm based on pair-wise correlation of system metrics to diagnose the root cause of the detected problem.
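The autocorrelation idea for duplicate web requests can be sketched briefly. This is our own illustrative version (the threshold and helper names are hypothetical, not the dissertation's algorithm): if requests are being silently duplicated at some fixed interval, the per-interval request-count series correlates strongly with itself at that lag.

```python
def autocorrelation(series, lag):
    """Normalized autocorrelation of a request-count time series at `lag`."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    cov = sum((series[i] - mean) * (series[i + lag] - mean)
              for i in range(n - lag))
    return cov / var if var else 0.0

def suspected_duplicate_lag(series, max_lag, threshold=0.8):
    """Flag the first lag whose autocorrelation exceeds the threshold;
    a strong peak suggests requests repeat at that interval."""
    for lag in range(1, max_lag + 1):
        if autocorrelation(series, lag) > threshold:
            return lag
    return None
```

Because the check only needs aggregate counts observed at the server, it stays application-generic: no request contents or application semantics are required.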
The need to ensure the primary functionality of any system means that considerations of security are often secondary. Computer security considerations are made in relation to considerations of usability, functionality, productivity, and other goals. Decision-making related to security is about finding an appropriate tradeoff. Most existing security mechanisms take a binary approach where an action is either malicious or benign, and therefore allowed or denied. However, security and privacy outcomes are often fuzzy and cannot be represented by a binary decision. It is useful for end users, who may ultimately need to allow or deny an action, to understand the potential differences among objects, and the way that these differences are communicated matters.

In this work, we use machine learning and feature-extraction techniques to model normal behavior in various contexts and then use those models to detect the degree to which new behavior is anomalous. This measurement can then be used, not as a binary signal but as a more nuanced indicator, to help guide a user's decision-making.

We examine the application of this idea in two domains. The first is the installation of applications on a mobile device. The focus in this domain is on permissions that represent capabilities and access to data, and we generate a model for expected permission requests. Various user studies were conducted to explore effective ways to communicate this measurement to influence decision-making by end users. Next, we examined the domain of insider-threat detection in the setting of a source-code repository. The goal was to build models of expected user access and more accurately predict the degree to which new behavior deviates from previous behavior. This information can be utilized and understood by security personnel to focus on unexpected patterns.
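A non-binary indicator of the kind described above can be illustrated with a toy scoring rule. This sketch is ours, not the dissertation's model (the permission names, counts, and the rarity formula are hypothetical): each requested permission is scored by how rarely comparable apps request it, and the mean rarity becomes a graded signal rather than an allow/deny verdict.

```python
def permission_anomaly_score(requested, category_counts, total_apps):
    """Score in [0, 1): mean rarity of the requested permissions, where a
    permission's rarity is 1 minus the fraction of comparable apps that
    request it. 0 means entirely expected; values near 1 mean unusual."""
    rarities = [1.0 - category_counts.get(p, 0) / total_apps
                for p in requested]
    return sum(rarities) / len(rarities) if rarities else 0.0

# Illustrative counts over 100 apps in the same category: INTERNET is
# near-universal, READ_SMS is rare, so requesting it scores much higher.
counts = {"INTERNET": 90, "READ_SMS": 5}
```

The same shape of score works in the insider-threat setting by replacing permissions with repository paths and counts with a user's historical access frequencies.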
One major impediment to large-scale use of cloud services is concern for confidentiality of the data and the computations carried out on it. This dissertation advances the state of the art for secure and private outsourcing to untrusted cloud servers by solving three problems in the computational outsourcing setting and extending the semantics of oblivious storage in the storage outsourcing setting.

In computational outsourcing, this dissertation provides protocols for two parties to collaboratively design engineering systems and check certain properties of the co-designed system with the help of a cloud server, without leaking the design parameters to each other or to the server. It also provides approaches to outsource two computationally intensive tasks, image feature extraction and generalized matrix multiplication, preserving the confidentiality of both the input data and the output result. Experiments are included to demonstrate the viability of the protocols.

In storage outsourcing, this dissertation extends the semantics of the oblivious storage scheme by providing algorithms to support nearest neighbor search. It enables clients to perform nearest neighbor queries on the outsourced storage without leaking the access pattern.
Meaning-Based Machine Learning (MBML) is a research program intended to show how training machine learning (ML) algorithms on meaningful data produces more accurate results than training on unstructured data.
Security for public cloud providers is an ongoing concern. Programs like FedRAMP look to certify a minimum level of compliance. This project aims to build a tool to help decision makers compare different cloud solutions and weigh the risks against their own organizational needs.
Our goal is to improve the detection of phishing attack emails by using natural language processing (NLP) technology that models the semantic meaning behind the email text.
In this paper we identify and address some of the key challenges in digital forensics. An intensive review was conducted of the major challenges that have already been identified. Based on the findings, we propose a solution and discuss how a standardized body governing the digital forensics community could make a difference.