Early systems for networked intrusion detection (or, more generally, intrusion or misuse management) required either a centralized architecture or a centralized decision-making point, even when data gathering was distributed. More recently, researchers have developed far more decentralized intrusion detection systems using a variety of techniques. Such systems often rely on data sharing between sites that have no common administrator, so cooperation is required to detect and respond to security incidents. It has therefore become important to address cooperation and data sharing in a formal manner. In this paper, we discuss the detection of distributed attacks across cooperating enterprises. We begin by defining relationships between cooperative hosts, then use the take-grant model to identify both when a host could identify a widespread attack and when that host is at increased risk due to data sharing. We further refine our definition of potential identification using access, integrity, and cooperation policies that limit sharing. Finally, we include a brief description of both a simple Prolog model incorporating data sharing policies and a prototype cooperative intrusion detection system.
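The take-grant style of analysis the abstract mentions can be illustrated with a small sketch. The hosts, rights, and graph below are hypothetical examples rather than the paper's model; the sketch shows only how "take" and "grant" rules propagate rights between cooperating parties, which is the mechanism behind reasoning about both detection reach and sharing risk.

```python
# Hypothetical take-grant sketch: rights between cooperating hosts are
# edges labeled with right sets; the two rewrite rules propagate rights.
# Hosts A-D and the initial rights are illustrative assumptions.

rights = {
    ("A", "B"): {"take"},
    ("B", "C"): {"read"},
    ("C", "A"): {"grant"},
    ("C", "D"): {"read"},
}

def take(rights, s, x, r, y):
    """Take rule: if s can take from x and x holds r over y, s acquires r over y."""
    if "take" in rights.get((s, x), set()) and r in rights.get((x, y), set()):
        rights.setdefault((s, y), set()).add(r)
        return True
    return False

def grant(rights, g, s, r, y):
    """Grant rule: if g can grant to s and g holds r over y, s acquires r over y."""
    if "grant" in rights.get((g, s), set()) and r in rights.get((g, y), set()):
        rights.setdefault((s, y), set()).add(r)
        return True
    return False

take(rights, "A", "B", "read", "C")   # A takes B's read right over C
print(rights[("A", "C")])             # → {'read'}
```

Iterating these rules to a fixed point answers reachability questions of the kind the paper poses: which hosts can eventually read shared attack data, and which hosts thereby become exposed.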
Attacks and misuses of computer systems are major concerns in today’s network-based world. We present information visualization techniques based on a glyph metaphor for visually representing textual log information.
In this paper, we present a comprehensive approach for privacy preserving access control based on the notion of purpose. Purpose information associated with a given data element specifies the intended use of the data element, and our model allows multiple purposes to be associated with each data element. A key feature of our model is that it also supports explicit prohibitions, thus allowing privacy officers to specify that some data should not be used for certain purposes. Another important issue addressed in this paper is the granularity of data labeling, that is, the units of data with which purposes can be associated. We address this issue in the context of relational databases and propose four different labeling schemes, each providing a different granularity. In the paper we also propose an approach to representing purpose information, which results in very low storage overhead, and we exploit query modification techniques to support data access control based on purpose information.
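As a rough illustration of purpose-based access control with explicit prohibitions, the sketch below filters rows by an allowed-purpose set and a prohibited-purpose set. The field names (`aip`, `pip`), the flat purpose sets, and the predicate form are assumptions for illustration, not the paper's labeling schemes or query-modification algorithm.

```python
# Illustrative purpose-based filtering with explicit prohibitions.
# aip = allowed intended purposes; pip = prohibited purposes. Both the
# schema and the sample data are assumptions, not the paper's model.

rows = [
    {"email": "a@x.org", "aip": {"marketing", "billing"}, "pip": set()},
    {"email": "b@x.org", "aip": {"billing"}, "pip": {"marketing"}},
]

def allowed(row, access_purpose, ancestors=()):
    """A row is visible iff the access purpose (or an ancestor in a
    purpose hierarchy) appears among its allowed purposes AND the
    access purpose is not explicitly prohibited."""
    purposes = {access_purpose, *ancestors}
    return bool(purposes & row["aip"]) and access_purpose not in row["pip"]

def query(rows, access_purpose):
    # Query modification in miniature: the purpose check is appended
    # to the query as one extra predicate.
    return [r["email"] for r in rows if allowed(r, access_purpose)]

print(query(rows, "marketing"))  # → ['a@x.org']
```

In a relational setting the same effect is achieved by rewriting the SQL query to conjoin the purpose predicate, which is what keeps the storage and enforcement overhead low.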
Denial of Service (DoS) attacks are a serious threat to the Internet. DoS attacks can consume memory, CPU, and network resources and damage or shut down the operation of the resource under attack (the victim). Quality of service (QoS) enabled networks, which offer different levels of service, are vulnerable to QoS attacks as well as DoS attacks. The aim of a QoS attack is to steal network resources, e.g., bandwidth, or to degrade the service perceived by users. We present a classification and a brief explanation of the approaches used to deal with DoS and QoS attacks. Furthermore, we propose network monitoring techniques to detect service violations and to infer DoS attacks. Finally, a quantitative comparison among all schemes is conducted, in which we highlight the merits of each scheme and estimate the overhead (both processing and communication) it introduces. The comparison provides guidelines for selecting the appropriate scheme, or a combination of schemes, based on the requirements and how much overhead can be tolerated.
This dissertation investigates two research problems: (a) designing ad hoc routing protocols that monitor network conditions, select routes to satisfy routing requirements, and adapt to network topology, traffic load, and congestion; (b) building an integrated infrastructure for heterogeneous wireless networks with movable base stations and developing techniques for network management, routing, and security.
The experimental study of ad hoc routing protocols shows that the on-demand approach outperforms the proactive approach in less stressful situations, while the latter is more scalable with respect to network size. Mobility and congestion are the primary causes of packet loss for the on-demand and proactive approaches, respectively. The self-adjusting congestion avoidance (SAGA) routing protocol integrates channel spatial reuse with multi-hop routing to reduce congestion. Using the intermediate delay as the routing metric enables SAGA to bypass hot spots where contention is intense. An estimate of the transmission delay is derived from local information available at a host. Comparison of SAGA with AODV, DSR, and DSDV shows that SAGA introduces the lowest end-to-end delay. It outperforms DSDV in the measured metrics. SAGA can sustain heavier traffic loads and offers higher peak throughput than AODV and DSR. These results show that accounting for congestion and the intermediate delay can significantly enhance routing performance.
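A delay-driven next-hop choice in this spirit might look like the following sketch. The delay estimator (queue length times per-packet service time, inflated by a local contention factor) is an assumed stand-in for illustration, not SAGA's actual derivation of the transmission delay.

```python
# Hedged sketch of delay-metric next-hop selection in the spirit of SAGA.
# The estimator below is an illustrative assumption, not the protocol's formula.

def estimated_delay(queue_len, service_time, contention_factor):
    # Local estimate: queued packets times per-packet service time,
    # inflated by the MAC contention observed around this host.
    return queue_len * service_time * (1.0 + contention_factor)

def pick_next_hop(neighbors):
    """neighbors: {hop: (queue_len, service_time, contention_factor)};
    pick the hop with the smallest estimated intermediate delay."""
    return min(neighbors, key=lambda h: estimated_delay(*neighbors[h]))

neighbors = {
    "B": (10, 2.0, 0.5),   # hot spot: long queue, heavy contention
    "C": (3, 2.5, 0.1),
}
print(pick_next_hop(neighbors))  # → 'C', routing around the hot spot
```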
A hierarchical mobile wireless network is proposed to support wireless networks with movable base stations. Mobile hosts are organized into hierarchical groups. An efficient group membership management protocol is designed to support mobile hosts roaming among different groups. A segmented membership-based group routing protocol takes advantage of the hierarchical structure and membership information to reduce overhead. A secure packet forwarding algorithm is designed to protect the network infrastructure. The roaming support algorithm cooperates with the proposed mutual authentication protocol to secure both the foreign group and the mobile host. The evaluation shows that the computation overhead of secure packet forwarding is less than 2% of CPU time, and that of secure roaming support ranges from 0.2% to 5% of CPU time, depending on the number of hosts and their motion. This demonstrates the feasibility of the security mechanisms.
The key findings for 2004 cover: electronic attack; computer crime; computer access misuse and abuse trends; and readiness to protect and manage the security of IT systems.
An integrated checkpointing and recovery scheme that exploits the low latency and high coverage characteristics of a concurrent error detection scheme is presented. Message dependency, which is the main source of multistep rollback in distributed systems, is minimized by using a new message validation technique derived from the notion of concurrent error detection. The concept of a new global state matrix is introduced to track error checking and message dependency in a distributed system and to assist in recovery. The analytical model, algorithms, and data structures to support an easy implementation of the new scheme are presented. The completeness and correctness of the algorithms are proved. A number of scenarios that illustrate the details of the analytical model are presented. The benefits of the integrated checkpointing scheme are quantified by means of simulation using an object-oriented test framework.
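One plausible reading of the global state matrix idea is a boolean dependency matrix in which messages validated by concurrent error detection create no rollback dependency, so only unvalidated messages can drag other processes into a rollback. The layout and the rollback computation below are illustrative assumptions, not the scheme's exact data structures.

```python
# Hypothetical sketch of dependency tracking via a global state matrix.
# dep[i][j] is True if process i has consumed a not-yet-validated
# message from process j since its last checkpoint. The structure is an
# illustrative reading of the scheme, not its exact data layout.

N = 4  # number of processes (assumed for the example)
dep = [[False] * N for _ in range(N)]

def record_receive(dep, receiver, sender, validated):
    # A message validated by concurrent error detection creates no
    # rollback dependency; only unvalidated messages do.
    if not validated:
        dep[receiver][sender] = True

def rollback_set(dep, faulty):
    """Processes that must roll back: the faulty process plus everyone
    transitively dependent on it through unvalidated messages."""
    out, frontier = {faulty}, [faulty]
    while frontier:
        p = frontier.pop()
        for i in range(N):
            if dep[i][p] and i not in out:
                out.add(i)
                frontier.append(i)
    return out

record_receive(dep, 1, 0, validated=False)
record_receive(dep, 2, 1, validated=True)   # validated: no dependency
print(sorted(rollback_set(dep, 0)))         # → [0, 1]
```

Validating messages early is exactly what shrinks this transitive closure and so avoids multistep rollback.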
Launching a denial of service (DoS) attack is trivial, but detection and response is a painfully slow and often manual process. Automatic classification of attacks as single- or multi-source can help focus a response, but current packet-header-based approaches are susceptible to spoofing. This paper introduces a framework for classifying DoS attacks based on header content, transient ramp-up behavior, and novel techniques such as spectral analysis. Although headers are easily forged, we show that characteristics of attack ramp-up and attack spectrum are more difficult to spoof. To evaluate our framework, we monitored access links of a regional ISP, detecting 80 live attacks. Header analysis identified the number of attackers in 67 attacks, while the remaining 13 attacks were classified based on ramp-up and spectral analysis. We validate our results through monitoring at a second site, controlled experiments, and simulation. We use experiments and simulation to understand the underlying reasons for the characteristics observed. In addition to helping understand attack dynamics, classification mechanisms such as ours are important for the development of realistic models of DoS traffic, can be packaged as an automated tool to aid in rapid response to attacks, and can also be used to estimate the level of DoS activity on the Internet.
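To give a flavor of the spectral approach, the sketch below computes a naive power spectrum of a packet-count time series and measures how much power sits at low frequencies; aggregated multi-source streams tend to concentrate power there, while a single high-rate source keeps more energy at high frequencies. The binning and cutoff heuristic are assumptions for illustration, not the paper's method.

```python
import math

# Illustrative spectral sketch for attack-stream classification. The
# cutoff-ratio heuristic is an assumption inspired by the idea above,
# not the paper's actual classifier.

def power_spectrum(counts):
    """Naive DFT power spectrum of a packet-count time series."""
    n = len(counts)
    spectrum = []
    for k in range(n // 2 + 1):
        re = sum(c * math.cos(2 * math.pi * k * t / n) for t, c in enumerate(counts))
        im = -sum(c * math.sin(2 * math.pi * k * t / n) for t, c in enumerate(counts))
        spectrum.append(re * re + im * im)
    return spectrum

def low_frequency_ratio(counts, cutoff_bin):
    """Share of non-DC spectral power below the cutoff bin."""
    s = power_spectrum(counts)[1:]  # drop the DC component
    return sum(s[:cutoff_bin]) / sum(s)

# A single source with period-2 jitter puts its power at high frequency.
counts = [10 + 5 * (-1) ** t for t in range(64)]
print(low_frequency_ratio(counts, 8) < 0.1)  # True
```

In practice one would bin packet timestamps from a capture into `counts` and use an FFT rather than this quadratic DFT; the ratio then serves as one feature alongside header and ramp-up analysis.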
Trust in ad hoc networks is an open area of research. The ad hoc environment has characteristics that are fundamentally different from fixed networks in a way that makes establishing, recalling, and maintaining trust relationships difficult. The dynamic nature of the network and the heterogeneity of the hosts are two issues that complicate establishing trust.
Secure electronic communication relies on cryptography. Even with perfect encryption, communication may be compromised without effective security protocols for key exchange, authentication, etc. We are now seeing a proliferation of large secure environments characterized by high-volume, encrypted traffic between principals, facilitated by Public Key Infrastructures (PKIs). PKIs are dependent on security protocols. Unfortunately, security protocols are susceptible to subtle errors. To date, we have relied on formal methods to tell us whether security protocols are effective. These methods do not provide complete or measurable protocol security. Security protocols are also subject to the same implementation and administrative vulnerabilities as communication protocols. As a result, we will continue to operate security protocols that have flaws. In this paper, we describe a method and architecture to detect intrusions in security protocol environments such as Public Key Infrastructures. Our method is based on classic techniques of knowledge-based and behavior-based intrusion detection systems.
The application of science and education to computer-related crime forensics is still largely limited to law enforcement organizations. Building a suitable workforce development program could support the rapidly growing field of computer and network forensics.
Analyzing security protocols is notoriously difficult. In this paper, we show how a novel tool for analyzing classical cryptographic protocols can be used to model and analyze complex Internet security protocol families. CPAL-ES allows the representation of the interaction between two sub-protocols. Within a protocol such as Transport Layer Security (TLS), these are selected from a collection of sub-protocols utilized by a principal. Modeling subversion related to sub-protocol interactions is an important part of formally understanding attacks upon protocol suites. The CPAL environment contains sufficient functionality to verify the feasibility of these attacks. We also define and classify the characteristics that add complexity to modern security protocols and some of the impacts this complexity has on security protocol analysis. Finally, we discuss the modifications that were necessary in our formal method tool to address this complexity and show how the tool illuminated flaws in the TLS protocol.
Tools to evaluate Cryptographic Protocols (CPs) exploded into the literature after the development of BAN Logic. Many of these were created to repair weaknesses in BAN Logic. Unfortunately, these tools are all complex and difficult to implement individually, and little or no effort has been made to integrate multiple tools in a workbench environment. We propose a framework that allows a protocol analyst to exercise multiple CP evaluation tools in a single environment. Moreover, this environment exhibits characteristics that will enhance the effectiveness of the CP evaluation methods themselves.