This paper addresses the resolution of normative inconsistencies in privacy regulation that arise from merging documents of various kinds. The solution we propose is similar to past approaches in that we also resort to a priority ordering to resolve actual contradictions. At the core of conflict processing lies the text-meaning-representation (TMR) module. Conflict detection examines the modalities as well as the OPPOSITE/NOT relations between the principal heads of the corresponding TMRs. Additionally, we argue that, unlike the purely axiomatic frameworks used previously, ontological semantics accounts for semantic heterogeneity and does not restrict the type of regulation that can be processed.
Policy integration and inter-operation are often a crucial requirement when parties with different access control policies need to participate in collaborative applications and coalitions. Such a requirement is even more difficult to address for dynamic large-scale collaborations, in which the number of access control policies to analyze and compare can be quite large. An important step in policy integration and inter-operation is to analyze the similarity of policies. Policy similarity can sometimes also be a pre-condition for establishing a collaboration, in that a party may enter a collaboration with another party only if the policies enforced by the other party match or are very close to its own policies. Existing approaches to the problem of analyzing and comparing access control policies are very limited, in that they only deal with some special cases. Recognizing that a suitable approach to policy analysis and comparison requires combining different techniques, we propose in this paper a comprehensive environment, EXAM. The environment supports various types of analysis queries, which we categorize in the paper. A key component of this environment, on which we focus in the paper, is the policy analyzer, which is able to perform several types of analysis. Specifically, our policy analyzer combines the advantages of existing MTBDD-based and SAT-solver-based techniques. Our experimental results, also reported in the paper, demonstrate the efficiency of our analyzer.
Simulation, emulation, and wide-area testbeds exhibit different tradeoffs with respect to fidelity, scalability, and manageability.
Network security and network planning/dimensioning experiments introduce additional requirements compared to traditional networking and distributed system experiments. For example, high-capacity attack or multimedia flows can push packet-forwarding devices to the limit and expose unexpected behaviors. Many popular simulation and emulation tools use high-level models of forwarding behavior in switches and routers, and give little guidance on setting model parameters such as buffer sizes. Thus, a myriad of papers report results that are highly sensitive to the forwarding model or buffer size used.
In this work, we first motivate the need for better models by performing an extensive comparison between simulation and emulation environments for the same Denial of Service (DoS) attack experiment. Our results reveal that there are drastic differences between emulated and simulated results and between various emulation testbeds. We then argue that measurement-based models for routers and other forwarding devices are crucial. We devise such a model and validate it with measurements from three types of Cisco routers and one Juniper router, under varying traffic conditions. The structure of our model is device-independent, but requires device-specific parameters. The compactness of the parameter tables and simplicity of the model make it versatile for high-fidelity simulations that preserve simulation scalability. We construct a black box profiler to infer parameter tables within a few hours. Our results indicate that our model can approximate different types of routers.
Additionally, the results indicate that queue characteristics vary dramatically among the devices we measure, and that backplane contention must be modeled.
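To make the modeling approach concrete, the sketch below shows a minimal device-independent queue model driven by a per-device parameter table, in the spirit of the measurement-based profiling described above. The parameter names and values are hypothetical illustrations, not figures taken from the measured Cisco and Juniper devices, and backplane contention is not modeled here.

```python
from dataclasses import dataclass

@dataclass
class DeviceParams:
    service_rate_pps: float   # measured packet service rate (packets/s) - hypothetical
    buffer_pkts: int          # inferred output buffer size (packets) - hypothetical
    min_latency_us: float     # measured minimum forwarding latency (us) - hypothetical

# Hypothetical parameter table of the kind a black-box profiler would produce.
PROFILE = {
    "router_A": DeviceParams(service_rate_pps=1.2e6, buffer_pkts=4096, min_latency_us=12.0),
    "router_B": DeviceParams(service_rate_pps=0.6e6, buffer_pkts=1024, min_latency_us=25.0),
}

class ForwardingModel:
    """Single-queue forwarding model: same structure for every device, only parameters differ."""

    def __init__(self, params: DeviceParams):
        self.p = params
        self.queue = 0  # packets currently buffered

    def on_arrival(self, now_s: float, prev_arrival_s: float):
        """Return (accepted, latency_us) for a packet arriving at time now_s."""
        # Drain the packets the device has served since the previous arrival.
        served = int((now_s - prev_arrival_s) * self.p.service_rate_pps)
        self.queue = max(0, self.queue - served)
        if self.queue >= self.p.buffer_pkts:
            return False, 0.0  # tail drop: buffer is full
        self.queue += 1
        queueing_us = self.queue / self.p.service_rate_pps * 1e6
        return True, self.p.min_latency_us + queueing_us

model = ForwardingModel(PROFILE["router_A"])
print(model.on_arrival(now_s=0.001, prev_arrival_s=0.0))
```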
In the modern multi-user computer environment, Internet-capable network servers provide connectivity that allows a large portion of the user population to access information at the desktop from sources around the world. Because of the ease with which information can be accessed, computer security breaches may occur unless systems and restricted information stored therein are kept secure. Breaches of security can have serious consequences, including theft of confidential corporate documents, compromise of intellectual property, unauthorized modification of systems and data, denial of service, and others. Considerable research has been conducted on threats to security.
Numerous sophisticated security methods have been developed, many of which rely on individuals to implement and use them. However, these methods may not accomplish their intended objectives if they are not used properly. Despite the apparent influence of usability, surprisingly little research has been conducted on the trade-off between usability and the degree of security provided by various information security methods. In the present paper, we review the various information security methods that are used, appraise the usability issues, and develop a taxonomy to organize these issues. The intent is to make a strong case for the need for systematic usability analyses and for the development of usability metrics for information security.
Achieving high performance in cryptographic processing is important due to the increasing connectivity among today's computers. Despite steady improvements in microprocessor and system performance, private-key cipher implementations continue to be slow. Irrespective of the cipher used, the main reason for the low performance is a lack of parallelism, which fundamentally stems from encryption modes such as the Cipher Block Chaining (CBC) mode. In CBC, each plaintext block is XORed with the previous ciphertext block and then encrypted, essentially inducing a tight recurrence through the ciphertext blocks. To deliver high performance while maintaining a high level of security assurance in real systems, the cryptography community has proposed the Interleaved Cipher Block Chaining (ICBC) mode. In four-way interleaved chaining, the first, fifth, and every fourth block thereafter are encrypted in CBC mode; the second, sixth, and every fourth block thereafter are encrypted as another stream; and so on. Thus, interleaved chaining loosens the recurrence imposed by CBC, enabling the multiple encryption streams to be overlapped. The number of interleaved chains can be chosen to balance performance against adequate chaining for good data diffusion. While ICBC was originally proposed to improve hardware encryption rates by employing multiple encryption chips in parallel, this is the first paper to evaluate ICBC by multithreading commonly used ciphers on a symmetric multiprocessor (SMP). ICBC makes it possible to exploit the full processing power of SMPs, which spend many cycles on cryptographic processing as medium-scale servers today and will do so as chip-multiprocessor clients in the future. Using the Wisconsin Wind Tunnel II, we show that our multithreaded ciphers achieve encryption rates of 92 Mbytes/s on a 16-processor SMP at 1 GHz, a factor of almost 10 improvement over a uniprocessor, which achieves 9 Mbytes/s.
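The sketch below illustrates the k-way interleaving structure described above: chain i consists of blocks i, i+k, i+2k, and so on, and each chain is CBC-encrypted independently. The block cipher here is a keyed-hash stand-in rather than a real cipher such as AES, and the thread pool only indicates how the independent chains could be overlapped.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

BLOCK = 16  # block size in bytes

def toy_encrypt(key: bytes, block: bytes) -> bytes:
    # Stand-in for a real block cipher: keyed SHA-256 truncated to one block.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_chain(key: bytes, iv: bytes, blocks: list) -> list:
    # Classic CBC recurrence: XOR each plaintext block with the previous
    # ciphertext block, then encrypt.
    out, prev = [], iv
    for p in blocks:
        c = toy_encrypt(key, xor(p, prev))
        out.append(c)
        prev = c
    return out

def icbc_encrypt(key: bytes, ivs: list, blocks: list, k: int = 4) -> list:
    # Chain i consists of blocks i, i+k, i+2k, ...; the k chains are independent,
    # so they can be processed concurrently (one worker per chain here; in CPython
    # this only illustrates the structure, real speedup needs parallel processors).
    chains = [blocks[i::k] for i in range(k)]
    with ThreadPoolExecutor(max_workers=k) as pool:
        enc = list(pool.map(lambda arg: cbc_chain(key, *arg), zip(ivs, chains)))
    # Re-interleave the per-chain ciphertexts into the original block order.
    out = [b""] * len(blocks)
    for i in range(k):
        out[i::k] = enc[i]
    return out

key = b"0" * BLOCK
ivs = [bytes([i]) * BLOCK for i in range(4)]
plaintext = [bytes([j]) * BLOCK for j in range(8)]
print(len(icbc_encrypt(key, ivs, plaintext, k=4)))  # 8 ciphertext blocks
```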
We show that malicious participants in a peer-to-peer system can subvert its membership management mechanisms to create large-scale DDoS attacks on nodes that are not even part of the overlay system. The attacks exploit many fundamental design choices made by peer-to-peer system designers, such as (i) the use of push-based mechanisms; (ii) the use of distinct logical identifiers (e.g., IDs in a DHT) corresponding to the same physical identifier (e.g., IP address), typically to handle hosts behind NATs; and (iii) inadequate or poorly designed mechanisms to validate membership information. We demonstrate the significance of the attacks in the context of mature and extensively deployed peer-to-peer systems with representative and contrasting membership management algorithms: the DHT-based Kad and the gossip-based ESM.
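As a rough illustration of point (iii), the sketch below shows the kind of membership validation whose absence such attacks exploit: bounding the number of logical identifiers per IP address and probing a pushed entry before accepting and re-gossiping it. The bound and the probe are hypothetical examples, not mechanisms of Kad or ESM.

```python
from collections import defaultdict

MAX_IDS_PER_IP = 2  # hypothetical bound on logical identifiers per physical address

class MembershipTable:
    def __init__(self, prober):
        self.by_ip = defaultdict(set)   # IP address -> set of logical identifiers
        self.entries = {}               # logical identifier -> (ip, port)
        self.prober = prober            # callable (ip, port, node_id) -> bool

    def accept(self, node_id: str, ip: str, port: int) -> bool:
        # Reject entries that would map too many logical identifiers onto one IP,
        # the pattern the attacks rely on to concentrate traffic on a victim.
        if node_id not in self.by_ip[ip] and len(self.by_ip[ip]) >= MAX_IDS_PER_IP:
            return False
        # Keep (and later re-gossip) only entries that answer a direct probe.
        if not self.prober(ip, port, node_id):
            return False
        self.by_ip[ip].add(node_id)
        self.entries[node_id] = (ip, port)
        return True

# A victim that never answers probes is never accepted, hence never re-gossiped.
table = MembershipTable(prober=lambda ip, port, node_id: False)
print(table.accept("id-1", "198.51.100.9", 4000))  # False
```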
Machines that provide TCP services are often susceptible to various types of Denial of Service attacks from external hosts on the network. One particular type of attack is known as a SYN flood, where external hosts attempt to overwhelm the server machine by sending a constant stream of TCP connection requests, forcing the server to allocate resources for each new connection until all resources are exhausted. This paper discusses several approaches for dealing with the exhaustion problem, including SYN caches and SYN cookies. The advantages and drawbacks of each approach are presented, and the implementation of the specific solution used in FreeBSD is analyzed.
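The following sketch illustrates the SYN-cookie idea in a much simplified form: the connection parameters are encoded into the server's initial sequence number with a keyed hash, so no per-connection state is allocated until a valid ACK returns. It omits the MSS encoding, secret rotation, and other details of the actual FreeBSD implementation.

```python
import hashlib
import hmac
import time

SECRET = b"server-secret"  # hypothetical; real implementations rotate secrets

def syn_cookie(src_ip: str, src_port: int, dst_port: int, t: int) -> int:
    # Pack a coarse timestamp and 24 bits of a keyed MAC into a 32-bit sequence number.
    msg = f"{src_ip}:{src_port}:{dst_port}:{t}".encode()
    mac = hmac.new(SECRET, msg, hashlib.sha256).digest()
    return ((t & 0xFF) << 24) | int.from_bytes(mac[:3], "big")

def on_syn(src_ip: str, src_port: int, dst_port: int) -> int:
    # Reply to the SYN with the cookie as our initial sequence number; store nothing.
    return syn_cookie(src_ip, src_port, dst_port, int(time.time()) // 64)

def on_ack(src_ip: str, src_port: int, dst_port: int, ack_seq: int) -> bool:
    # Recompute the cookie (allowing the previous time slot); only a matching ACK
    # causes the full connection state to be allocated.
    now = int(time.time()) // 64
    valid = {syn_cookie(src_ip, src_port, dst_port, t) + 1 for t in (now, now - 1)}
    return ack_seq in valid

seq = on_syn("203.0.113.5", 40000, 80)
print(on_ack("203.0.113.5", 40000, 80, seq + 1))  # True for a legitimate handshake
```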
In multihop wireless systems, such as ad hoc and sensor networks, the need for cooperation among nodes to relay each other's packets exposes them to a wide range of security attacks. A particularly devastating attack is the wormhole attack, where a malicious node records control and data traffic at one location and tunnels it to a colluding node far away, which replays it locally. This can either disrupt route establishment or make routes pass through the malicious nodes. In this paper, we present a lightweight countermeasure for the wormhole attack, called LiteWorp, which relies on overhearing neighbor communication. LiteWorp is particularly suitable for resource-constrained multihop wireless networks, such as sensor networks. Our solution allows detection of the wormhole, followed by isolation of the malicious nodes. Simulation results show that every wormhole is detected and isolated within a very short period of time over a large range of scenarios. The results also show that the fraction of packets lost due to the wormhole when LiteWorp is applied is negligible compared to the loss in an unprotected network. The simulations further identify a configuration in which no framing is possible while a high detection rate is maintained. We also analyze the low resource consumption of LiteWorp, its low detection latency, and the likelihood of framing by malicious nodes.
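The sketch below illustrates the overhearing ("guard") idea that LiteWorp builds on, reduced to its simplest form: a guard remembers packets handed off to a watched neighbor and charges that neighbor when it forwards packets that were never sent to it. The threshold and bookkeeping are hypothetical simplifications, not the actual LiteWorp protocol.

```python
from collections import defaultdict

THRESHOLD = 3  # hypothetical tolerance before a node is isolated

class Guard:
    """A guard node overhearing traffic to and from one watched neighbor."""

    def __init__(self, watched: str):
        self.watched = watched
        self.pending = set()             # packet ids legitimately handed to the watched node
        self.malice = defaultdict(int)   # accumulated suspicious events per node
        self.isolated = set()

    def overhear_send(self, dst: str, pkt_id: str):
        if dst == self.watched:
            self.pending.add(pkt_id)     # a neighbor handed this packet to the watched node

    def overhear_forward(self, src: str, pkt_id: str):
        if src != self.watched or src in self.isolated:
            return
        if pkt_id in self.pending:
            self.pending.discard(pkt_id)  # expected relaying behavior
        else:
            # The watched node forwarded a packet it was never given locally,
            # consistent with a wormhole replaying tunneled traffic.
            self.malice[src] += 1
            if self.malice[src] >= THRESHOLD:
                self.isolated.add(src)

g = Guard("nodeB")
g.overhear_forward("nodeB", "pkt-1")   # replayed packet, never handed off locally
print(g.malice["nodeB"])               # 1
```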
There is an inherent trade-off between epidemic and deterministic tree-based broadcast primitives. Tree-based approaches have a small message complexity in steady-state but are very fragile in the presence of faults. Gossip, or epidemic, protocols have a higher message complexity but also offer much higher resilience.
This paper proposes an integrated broadcast scheme that combines both approaches. We use a low-cost scheme to build and maintain broadcast trees embedded in a gossip-based overlay. The protocol sends the message payload preferably via tree branches but uses the remaining links of the gossip overlay for fast recovery and expedited tree healing. The experimental evaluation presented in the paper shows that our new strategy has low overhead and is able to sustain a large number of faults while maintaining high reliability.
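A minimal sketch of this payload-on-tree, identifiers-on-gossip strategy follows, assuming a toy synchronous delivery model: a duplicate payload demotes the sending branch to a lazy link, and a missed payload promotes the lazy link that announced it, healing the tree. All names and the healing policy shown here are illustrative, not the paper's protocol.

```python
class Network:
    """Toy synchronous dispatcher connecting the nodes by name."""
    def __init__(self):
        self.nodes = {}

    def add(self, node):
        self.nodes[node.name] = node

    def send_payload(self, src, dst, msg_id, payload):
        self.nodes[dst].on_payload(msg_id, payload, sender=src, net=self)

    def send_id(self, src, dst, msg_id):
        self.nodes[dst].on_id_only(msg_id, sender=src, net=self)

    def request_payload(self, requester, peer, msg_id):
        payload = self.nodes[peer].delivered.get(msg_id)
        if payload is not None:
            self.send_payload(peer, requester, msg_id, payload)

class Node:
    def __init__(self, name, eager, lazy):
        self.name = name
        self.eager = set(eager)   # tree branches: carry the full payload
        self.lazy = set(lazy)     # remaining gossip links: carry message ids only
        self.delivered = {}

    def broadcast(self, msg_id, payload, net):
        self._deliver(msg_id, payload, sender=None, net=net)

    def on_payload(self, msg_id, payload, sender, net):
        if msg_id in self.delivered:
            # Duplicate payload: the sending branch is redundant, demote it.
            self.eager.discard(sender)
            self.lazy.add(sender)
            return
        self._deliver(msg_id, payload, sender, net)

    def on_id_only(self, msg_id, sender, net):
        if msg_id not in self.delivered:
            # The tree failed to reach us: recover over the lazy link and heal
            # the tree by promoting that link to an eager branch.
            self.lazy.discard(sender)
            self.eager.add(sender)
            net.request_payload(self.name, sender, msg_id)

    def _deliver(self, msg_id, payload, sender, net):
        self.delivered[msg_id] = payload
        for peer in self.eager - {sender}:
            net.send_payload(self.name, peer, msg_id, payload)   # payload on tree branches
        for peer in self.lazy - {sender}:
            net.send_id(self.name, peer, msg_id)                 # ids on gossip links

net = Network()
a, b, c = Node("a", {"b"}, {"c"}), Node("b", {"a", "c"}, set()), Node("c", {"b"}, {"a"})
for n in (a, b, c):
    net.add(n)
a.broadcast("m1", b"payload", net)
print(all("m1" in n.delivered for n in (a, b, c)))  # True
```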
Sensor networks enable a wide range of applications in both military and civilian domains. However, the deployment scenarios, the functionality requirements, and the limited capabilities of these networks expose them to a wide range of attacks against control traffic, such as wormhole, Sybil, and rushing attacks. In this paper we propose a lightweight protocol called DICAS that mitigates these attacks by detecting, diagnosing, and isolating the malicious nodes. DICAS uses, as a fundamental building block, the ability of a node to oversee its neighboring nodes' communication. On top of DICAS, we build a secure routing protocol, LSR, which in addition supports multiple node-disjoint paths. We analyze the security guarantees of DICAS and use ns-2 simulations to show its effectiveness against three representative attacks. An overhead analysis is conducted to confirm the lightweight nature of DICAS.