Sensor networks enable a wide range of applications in both military and civilian domains. However, the deployment scenarios, the functionality requirements, and the limited capabilities of these networks expose them to a wide range of attacks against control traffic, such as wormhole, Sybil, and rushing attacks. In this paper we propose a lightweight protocol called DICAS that mitigates these attacks by detecting, diagnosing, and isolating the malicious nodes. DICAS uses as a fundamental building block the ability of a node to oversee its neighboring nodes.
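The abstract above does not spell out how a node "oversees" its neighbors, so the following is only a minimal, hypothetical sketch of local overhearing: a guard node buffers packets a watched neighbor is expected to forward and raises an alarm when dropped or altered forwards cross a threshold. All names (Guard, MAL_THRESHOLD, the method signatures) are assumptions for illustration, not DICAS itself.

# Hypothetical sketch of local monitoring ("overseeing" a neighbor), not the
# actual DICAS protocol: a guard records packets it overhears that a watched
# neighbor should forward, and isolates the neighbor once observed
# misbehaviors (drops or alterations) exceed a threshold.

MAL_THRESHOLD = 5          # misbehaviors tolerated before isolating a node (assumed)

class Guard:
    def __init__(self):
        self.pending = {}      # (neighbor_id, packet_id) -> payload expected to be forwarded
        self.malcount = {}     # neighbor_id -> count of observed misbehaviors
        self.isolated = set()  # neighbors diagnosed as malicious

    def overhear_incoming(self, neighbor_id, packet_id, payload):
        """A packet addressed to `neighbor_id` for forwarding was overheard."""
        self.pending[(neighbor_id, packet_id)] = payload

    def overhear_forwarded(self, neighbor_id, packet_id, payload):
        """`neighbor_id` was overheard forwarding a packet."""
        expected = self.pending.pop((neighbor_id, packet_id), None)
        if expected is not None and expected != payload:
            self._record_misbehavior(neighbor_id)   # packet was altered

    def timeout(self, neighbor_id, packet_id):
        """The forwarding deadline expired without the packet being overheard."""
        if self.pending.pop((neighbor_id, packet_id), None) is not None:
            self._record_misbehavior(neighbor_id)   # packet was dropped

    def _record_misbehavior(self, neighbor_id):
        self.malcount[neighbor_id] = self.malcount.get(neighbor_id, 0) + 1
        if self.malcount[neighbor_id] >= MAL_THRESHOLD:
            self.isolated.add(neighbor_id)          # stop routing through it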
Although the k-anonymity and l-diversity models have led to a number of valuable privacy-protecting techniques and algorithms, the existing solutions are currently limited to static data release. That is, it is assumed that a complete dataset is available at the time of data release. This assumption implies a significant shortcoming, as in many applications data collection is rather a continual process. Moreover, the assumption entails "one-time" data dissemination; thus, it does not adequately address today's strong demand for immediate and up-to-date information. In this paper, we consider incremental data dissemination, where a dataset is continuously incremented with new data. The key issue here is that the same data may be anonymized and published multiple times, each time in a different form. Thus, static anonymization (i.e., anonymization which does not consider previously released data) may enable various types of inference. We identify such inference issues and discuss some prevention methods.
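To make the inference risk concrete, here is a toy illustration (the records, diseases, and generalizations are invented for this sketch, not taken from the paper): an observer who can link one individual across two independently anonymized releases can intersect the candidate sensitive values of that individual's equivalence classes.

# Toy example of cross-release inference under static anonymization.
# Each release is 2-diverse on its own, yet combining them discloses
# Alice's sensitive value.

# Release 1: candidate diseases in Alice's equivalence class
# (quasi-identifiers generalized to, say, "age 20-30, ZIP 479**").
release1_class = {"flu", "hepatitis"}

# Release 2, published later after new records arrived and with a
# different generalization; Alice's new equivalence class admits:
release2_class = {"hepatitis", "cancer"}

# Intersecting the two candidate sets identifies the sensitive value.
inferred = release1_class & release2_class
print(inferred)   # {'hepatitis'}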
With the widespread growth in the potential applications of Wireless Sensor Networks (WSN), the need for reliable security mechanisms for them has increased manifold. Security protocols in WSNs, unlike traditional mechanisms, require special design effort and raise issues of their own. This is attributed to the inherent computational and communication constraints of these tiny embedded devices. Another factor that distinguishes them from traditional network security mechanisms is their use in extremely hostile and unattended environments. The sensitivity of the data sensed by these devices also poses ever-increasing challenges. We present a layer-based classification of WSN security threats and defenses proposed in the literature, with special focus on physical, link, and network layer issues.
Li, Zhihong. Ph.D., Purdue University, December, 2006. Elliptic Curve Factoring Method via FFTs with Division Polynomials. Major Professor: Samuel S. Wagstaff, Jr. In 1985, H. W. Lenstra, Jr. discovered a new factoring method, the Elliptic Curve Method (ECM), which efficiently finds 20- to 30-digit prime factors. On January 15, 2001, C. P. Schnorr proposed an idea to apply division polynomials to ECM. In this thesis, we modify and implement the idea in detail, and analyze the complexity of the algorithm and the probability of success. We first extend the concept of division polynomial to a univariate case with the parameter a in the Weierstrass form y^2 = x^3 + ax + b as the variable. We generalize a result about the degree of this implied univariate division polynomial. Then we devise an algorithm to efficiently generate the m-th univariate division polynomial and determine its complexity. We then present the main algorithm of this thesis, which is also its main result. We demonstrate in both algebraic and geometric ways the sufficient conditions for the algorithm to be successful. We analyze the probability of success and prove the related results. To explain the structure of the main algorithm, we divide it into several parts and introduce every part in detail. Then we propose and prove a theorem about the complexity of the main algorithm. We also present an optimization of the main algorithm for a family of numbers of a specific form; this family is of great interest as well. We show that for this family, the optimized algorithm can factor numbers even faster and also removes the memory restriction on the multiple m of the point on the elliptic curve, which is also the index of the implied division polynomial being used. At the end, we address some issues related to the implementation of the algorithm. With the algorithm, we can find some 40-digit primes on a 1.86 GHz PC (1066 MHz FSB) with 4 GB of DDR2 SDRAM at 533 MHz. It also finds some 50-digit primes. We are now trying the algorithm on 60-digit primes and hope that the algorithm will find some 70-digit primes.
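For reference, the classical (bivariate) division polynomials of a curve y^2 = x^3 + ax + b, which the thesis specializes to a univariate form in the parameter a, satisfy the standard textbook recurrences below; for a finite point P, mP is the point at infinity exactly when \psi_m vanishes at P, which is what ties these polynomials to computing multiples of a point in the ECM setting. This block is background material, not a result of the thesis.

\begin{aligned}
\psi_0 &= 0, \qquad \psi_1 = 1, \qquad \psi_2 = 2y,\\
\psi_3 &= 3x^4 + 6ax^2 + 12bx - a^2,\\
\psi_4 &= 4y\,(x^6 + 5ax^4 + 20bx^3 - 5a^2x^2 - 4abx - 8b^2 - a^3),\\
\psi_{2m+1} &= \psi_{m+2}\,\psi_m^3 - \psi_{m-1}\,\psi_{m+1}^3 \quad (m \ge 2),\\
\psi_{2m} &= \frac{\psi_m}{2y}\left(\psi_{m+2}\,\psi_{m-1}^2 - \psi_{m-2}\,\psi_{m+1}^2\right) \quad (m \ge 3).
\end{aligned}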
Koglin, Yunhua. Ph.D., Purdue University, December, 2006. Security Mechanisms for Content Distribution Networks. Major Professor: Elisa Bertino. Securing data is becoming a crucial need for most Internet-based applications. In this research, we investigate security mechanisms for content distribution networks. We address the problem of how to ensure that data, when moving among different parties, are modified only according to the stated policies. We cast our solution in supporting parallel and distributed secure updates to XML documents. The approach, based on the use of a security region-object parallel flow (S-RPF) graph protocol, allows different users to simultaneously update different portions of the same document, according to the specified access control policies. It ensures data confidentiality and integrity. Additionally, it supports decentralized management of update operations, in that a subject can exercise its privileges and verify the correctness of the operations performed so far on the document without interacting, in most cases, with the document server. We then extend our document update application to Byzantine and failure-prone systems by removing the trusted party responsible for recovery of the document. We have developed an approach which uses a group of delegates for recovering documents, and we provide many optimizations. We improve on previous solutions by proposing a scalable distributed protocol which uses cryptographic techniques to provide dynamic group communication, participant anonymity and completeness, and privacy of access privileges. Other security problems, such as confidentiality and availability, are also investigated in the context of content-based publish/subscribe (pub/sub) systems. We propose a hierarchical event forwarding scheme which increases system availability by tolerating some broker failures. Our approach can efficiently determine the subscription groups to which an event must be delivered by exploiting locality. Moreover, we propose an efficient encryption scheme, under which a broker encrypts an event only once. The encryption key can be efficiently derived.
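The abstract does not specify how the encryption key is derived. One common construction with the stated property (encrypt an event once, let authorized parties recompute the key on demand) is keyed-hash derivation of per-group keys from a master secret; the sketch below illustrates that generic idea only, and the names (MASTER_SECRET, group_key, the topic string) are assumptions rather than the dissertation's scheme.

# Generic sketch of per-group key derivation with HMAC, not the paper's construction.
import hmac, hashlib

MASTER_SECRET = b"broker master secret"   # assumed to be established out of band

def group_key(group_id: str) -> bytes:
    """Derive the symmetric key for a subscription group from the master secret."""
    return hmac.new(MASTER_SECRET, group_id.encode(), hashlib.sha256).digest()

# The broker encrypts an event once under group_key("stock-quotes/NASDAQ").
# In a full scheme the derivation would be structured (e.g., hierarchically)
# so that each subscriber can re-derive only the keys of the groups it is
# authorized for, instead of receiving a fresh key per event.
print(group_key("stock-quotes/NASDAQ").hex())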
Code injection attacks, despite being well researched, continue to be a problem today. Modern architectural solutions such as the NX bit and PaX have been useful in limiting the attacks; however, they enforce program layout restrictions and can often still be circumvented by a determined attacker. We propose a change to the memory architecture of modern processors that addresses the code injection problem at its very root by virtually splitting memory into code memory and data memory, such that a processor will never be able to fetch injected code for execution. This virtual split-memory system can be implemented as a software-only patch to an operating system, and can be used to supplement existing schemes for improved protection. Our experimental results show the system is effective in preventing a wide range of code injection attacks while incurring acceptable overhead.
Proving ownership rights on outsourced relational databases is a crucial issue in today's Internet-based application environments and in many content distribution applications. In this paper, we present a mechanism for proof of ownership based on the secure embedding of a robust, imperceptible watermark in relational data. We formulate the watermarking of relational databases as a constrained optimization problem, and discuss efficient techniques to solve the optimization problem and to handle the constraints. Our watermarking technique is resilient to watermark synchronization errors because it uses a partitioning approach that does not require marker tuples; this overcomes a major weakness of previously proposed watermarking techniques. Watermark decoding is based on a threshold-based technique characterized by an optimal threshold that minimizes the probability of decoding errors. We developed a proof-of-concept implementation of our watermarking technique and showed through experiments that it is resilient to tuple deletion, alteration, and insertion attacks.
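One standard way to partition tuples without marker tuples, shown here purely as an illustrative sketch (the constants and helper name are assumptions, not the paper's exact algorithm), is to assign each tuple to a partition by a keyed hash of its primary key; membership then depends only on the secret key and the key attribute, so it survives insertions and deletions.

# Illustrative sketch: keyed-hash partitioning of tuples for watermark embedding.
import hmac, hashlib

SECRET_KEY = b"owner secret"   # known only to the data owner (assumed)
NUM_PARTITIONS = 64            # number of partitions / embedded bits (assumed)

def partition_of(primary_key: str) -> int:
    """Map a tuple to a watermark partition using a keyed hash of its primary key."""
    digest = hmac.new(SECRET_KEY, primary_key.encode(), hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big") % NUM_PARTITIONS

# Each partition's tuples are then perturbed within the usability constraints of
# the optimization problem to encode one watermark bit; decoding recomputes the
# partitions and applies the threshold test to each partition's bit statistic.
print(partition_of("customer-10042"))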
In a computing environment where access to system resources is controlled by an access control policy and execution of database transactions is dictated by a database locking policy, interaction between the two policies can result in constraints restricting the execution of transactions. We present a methodology for the verification of database transaction requirements in a Role-Based Access Control (RBAC) environment. Specifically, we propose a step-by-step approach for the extraction of implicit requirements of a database transaction, and present a mechanism whereby these requirements can be verified against an RBAC policy representation. Based on the requirements of a database transaction, we define feasible states of the access control policy which allow the transaction to be executed. We also illustrate the interaction of multiple database transactions executing in a single security environment. Finally, we define conditions in an access control policy which allow the execution of a database transaction without relying on the underlying database locking policy for serializability and deadlock avoidance.
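As a rough illustration of the verification step only (the role and permission names below are invented, and the paper's requirement-extraction and state model are richer than this), the core check is whether some combination of roles available to a user grants every permission a transaction implicitly requires.

# Minimal sketch: do the user's activatable roles cover a transaction's requirements?
ROLE_PERMS = {
    "teller":  {("accounts", "read"), ("accounts", "write")},
    "auditor": {("accounts", "read"), ("audit_log", "write")},
}
USER_ROLES = {"alice": {"teller", "auditor"}}

def transaction_feasible(user, required_perms):
    """True if the combined permissions of the user's roles cover the requirements."""
    granted = set().union(*(ROLE_PERMS[r] for r in USER_ROLES.get(user, set())))
    return required_perms <= granted

# Requirements extracted from, e.g., "UPDATE accounts ...; INSERT INTO audit_log ..."
required = {("accounts", "read"), ("accounts", "write"), ("audit_log", "write")}
print(transaction_feasible("alice", required))   # True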
A high-level security policy states an overall safety requirement for a sensitive task. One example of a high-level security policy is a separation of duty policy, which requires a sensitive task to be performed by a team of at least k users. Recently, Li and Wang proposed an algebra for specifying a wide range of high-level security policies with both qualification and quantity requirements on users who perform a task. In this paper, we study the problem of direct static enforcement of high-level security policies expressed in this algebra. We formally define the notion of a static safety policy, which requires that every set of users together having all permissions needed to complete a sensitive task must contain a subset that satisfies the corresponding security requirement expressed as a term in the algebra. The static safety checking problem asks whether an access control state satisfies a given high-level policy. We study several computational problems related to the static safety checking problem, and design and evaluate an algorithm for solving the problem.
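For the separation-of-duty example mentioned above ("at least k users"), the static safety condition admits a simple brute-force check, sketched below; the algebra in the paper is far more general, and all names and data here are assumptions for illustration.

# Brute-force sketch of static safety for the "at least K distinct users" term.
# The state is statically safe if no user set smaller than K jointly holds all
# permissions of the sensitive task (any covering set of size >= K trivially
# contains a satisfying subset).
from itertools import combinations

TASK_PERMS = {"create_po", "approve_po", "issue_payment"}
USER_PERMS = {
    "alice": {"create_po", "approve_po"},
    "bob":   {"issue_payment"},
    "carol": {"create_po"},
}
K = 2   # the task must involve at least K users

def statically_safe():
    users = list(USER_PERMS)
    for r in range(1, K):                       # only groups smaller than K can violate
        for group in combinations(users, r):
            if TASK_PERMS <= set().union(*(USER_PERMS[u] for u in group)):
                return False                    # a too-small team holds every permission
    return True

print(statically_safe())   # True: no single user holds all three permissions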
Existing non-discretionary access control systems (such as Security-Enhanced Linux) are difficult for ordinary users to use. We identify several principles for designing usable access control systems and introduce the Host Integrity Protection Policy (HIPP) model, which adds usable non-discretionary access control to operating systems. The HIPP model is designed to defend against attacks targeting network server and client programs and to protect the system from careless mistakes users might make. It aims at not breaking existing applications or existing ways of using and administering systems. HIPP has several novel features to achieve these goals. For example, it supports several types of partially trusted programs to accommodate common system administration practices. Furthermore, rather than requiring file labeling, it uses information in the existing discretionary access control mechanism for non-discretionary access control. We also discuss our implementation of the HIPP model for Linux using the Linux Security Modules framework, as well as our evaluation results.
Wireless sensor networks are becoming a critical computational infrastructure in which the communication between nodes needs to be protected from eavesdropping and tampering. Symmetric key cryptography is the fundamental technique used. Existing protocols in this domain suffer from one or more of the following problems: weak security guarantees if some nodes are compromised, lack of scalability, high energy overhead for key management, and increased end-to-end data latency. In this paper, we propose a protocol called SECOS that mitigates these problems. SECOS divides the sensor field into control groups, each with a control head. Data exchange between nodes within a control group happens through the mediation of the control head, which provides the common key. The keys and the control heads are changed periodically to enhance security. SECOS enhances the survivability of the network by handling failures of control nodes. Experiments based on a simulation model show a sevenfold reduction in energy overhead and a 50% reduction in latency compared to the state-of-the-art protocol, SPINS. We also provide an analytical derivation of the optimal control group size that operates within the resource constraints and minimizes energy consumption.
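The abstract says only that the control head "provides the common key"; the sketch below shows one generic, Kerberos-style way such mediation could work, with every name and message assumed for illustration rather than taken from SECOS (and the XOR-based wrap is a stand-in for real symmetric encryption).

# Hypothetical sketch of control-head-mediated key establishment.
import os, hashlib

def wrap(key: bytes, session_key: bytes) -> bytes:
    """Stand-in for encrypting a 32-byte session key under a node<->head shared key."""
    pad = hashlib.sha256(key).digest()
    return bytes(a ^ b for a, b in zip(session_key, pad))

class ControlHead:
    def __init__(self):
        self.node_keys = {}                     # node_id -> key shared with the head

    def register(self, node_id):
        self.node_keys[node_id] = os.urandom(32)
        return self.node_keys[node_id]

    def mediate(self, a, b):
        """Generate a session key for (a, b) and wrap it for each endpoint."""
        sk = os.urandom(32)
        return wrap(self.node_keys[a], sk), wrap(self.node_keys[b], sk)

head = ControlHead()
ka, kb = head.register("node_a"), head.register("node_b")
wrapped_a, wrapped_b = head.mediate("node_a", "node_b")
# Each node unwraps with its own shared key and obtains the same session key:
assert wrap(ka, wrapped_a) == wrap(kb, wrapped_b)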