The Center for Education and Research in Information Assurance and Security (CERIAS)

Reports and Papers Archive



Desert Island Books

E Spafford
Download: PDF

Eugene Spafford discusses the books that have been most influential in shaping his attitudes about security and privacy.

Added 2008-03-31

A failure to learn from the past

E Spafford
Download: PDF

On the evening of 2 November 1988, someone “infected” the Internet with a worm program. That program exploited flaws in utility programs in systems based on BSD-derived versions of UNIX. The flaws allowed the program to break into those machines and copy itself, thus infecting those systems. This program eventually spread to thousands of machines, and disrupted normal activities and Internet connectivity for many days. It was the first major network-wide attack on computer systems, and thus was a matter of considerable interest. We provide a brief chronology of both the spread and eradication of the program, a presentation about how the program worked, and details of the aftermath. That is followed by discussion of some observations of what has happened in the years since that incident. The discussion supports the title: the community has failed to learn from the past.

Added 2008-03-31

Efficient intrusion detection using automaton inlining

R Gopalakrishna, E Spafford, J Vitek

Host-based intrusion detection systems attempt to identify attacks by discovering program behaviors that deviate from expected patterns. While the idea of performing behavior validation on-the-fly and terminating errant tasks as soon as a violation is detected is appealing, existing systems exhibit serious shortcomings in terms of accuracy and/or efficiency. To gain acceptance, a number of technical advances are needed. In this paper we focus on automated, conservative, intrusion detection techniques, i.e., techniques which do not require human intervention and do not suffer from false positives. We present a static analysis algorithm for constructing a flow- and context-sensitive model of a program that allows for efficient online validation. Context-sensitivity is essential to reduce the number of impossible control-flow paths accepted by the intrusion detection system, because such paths provide opportunities for attackers to evade detection. An important consideration for on-the-fly intrusion detection is to reduce the performance overhead caused by monitoring. Compared to existing approaches, our inlined automaton model (IAM) presents a good tradeoff between accuracy and performance. On a 32K-line program, the monitoring overhead is negligible. While the space requirements of a naive IAM implementation can be quite high, compaction techniques can be employed to substantially reduce that footprint.
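The core idea of automaton-based behavior validation can be illustrated with a toy sketch. This is not the paper's IAM construction (which is built by static analysis and is flow- and context-sensitive); it is a plain finite automaton over illustrative system-call names, checking a trace and flagging the first deviation:

```python
# Toy finite-automaton monitor for system-call sequences.
# Illustrative only: a real IAM is derived from static analysis of the
# program and is context-sensitive; the states and calls here are made up.

TRANSITIONS = {  # state -> {observed syscall -> next state}
    0: {"open": 1},
    1: {"read": 1, "close": 2},
    2: {"open": 1, "exit": 3},
}

def validate(trace, start=0):
    """Return True if the syscall trace follows the automaton."""
    state = start
    for call in trace:
        nxt = TRANSITIONS.get(state, {}).get(call)
        if nxt is None:
            return False  # deviation detected: terminate the errant task
        state = nxt
    return True

print(validate(["open", "read", "read", "close", "exit"]))  # True
print(validate(["open", "close", "write"]))                 # False
```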

Added 2008-03-31

James P. Anderson: An Information Security Pioneer

E Spafford
Download: PDF

In memory of James P. Anderson

Added 2008-03-31

Computer Science: Happy Birthday, Dear Viruses

R Ford, E Spafford
Added 2008-03-31

Efficient availability mechanisms in distributed database systems

Bharat Bhargava, Abdelsalam Helal
Download: PDF
Added 2008-03-31

A low-cost, low-delay location update/paging scheme in hierarchical cellular networks

Xiaoxin Wu, Biswanath Mukherjee, Bharat Bhargava
Download: PDF

A low-cost, two-step location update/paging scheme in a macrocell/microcell network is proposed and investigated. To reduce operating cost, the location update is performed only in the macrocell tier. A callee will be paged in the macrocell tier first. If the paging delay in the macrocell tier is too high due to large queuing delay, the callee will then be paged in the microcell tier. An original searching method is used for paging in the microcell tier. The operation of the scheme is simple, since the macrocell/microcell cellular network has the advantage that a mobile user in such a network can receive a signal from both a macrocell and a microcell. The analytical results show that, along with the low location update/paging cost, the two-step paging scheme also achieves low paging delay.
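The cost advantage of the two-step scheme can be seen in a back-of-the-envelope model: the cheap macrocell page is always tried, and the expensive microcell page is paid only when the macrocell tier is too slow. All parameters below are illustrative, not taken from the paper:

```python
# Expected-cost model for two-step paging: macrocell tier first,
# microcell tier only on macrocell delay failure. Costs and the
# probability of a timely macrocell page are illustrative values.

def expected_paging_cost(macro_cost, micro_cost, p_macro_timely):
    """Expected paging cost when the microcell tier is paged only
    if the macrocell-tier delay is too high."""
    return macro_cost + (1 - p_macro_timely) * micro_cost

# If macrocell paging succeeds in time 90% of the time, the expected
# cost is far below always paging the (larger) microcell tier:
print(expected_paging_cost(1.0, 10.0, 0.9))  # 2.0, vs. 10.0 microcell-only
```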

Added 2008-03-31

Key distribution and update for secure inter-group multicast communication

Weichao Wang, Bharat Bhargava
Download: PDF

Group communication has become an important component in wireless networks. In this paper, we focus on environments in which multiple groups coexist in the system, and both intra- and inter-group multicast traffic must be protected by secret keys. We propose a mechanism that integrates polynomials with flat tables to achieve personal key share distribution and efficient key refreshment during group changes. The proposed mechanism distributes keys via true broadcast. The contributions of the research include: (1) By switching from asymmetric algorithms to symmetric encryption methods, the proposed mechanism avoids heavy computation, and improves the processing efficiency of multicast traffic and the power usage at the wireless nodes. The group managers do not have to generate public-private key pairs when the group membership changes. (2) It becomes more difficult for an attacker to impersonate another node since personal key shares are adopted. The additional storage overhead at the wireless nodes and the increased broadcast traffic during key refreshment are justified. In addition, we describe techniques to improve the robustness of the proposed mechanism under complicated scenarios such as collusive attacks and batch group member changes.
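The "personal key share" idea can be sketched with a Shamir-style polynomial evaluated over a prime field: each member's share is the polynomial evaluated at its id, so no two members hold the same material. This is only an illustration of the share concept; the paper's actual mechanism combines polynomials with flat tables, which this toy omits:

```python
# Illustrative polynomial key shares (Shamir-style evaluation over a
# prime field). The modulus, coefficients, and member ids are made up;
# the paper's flat-table integration is not shown.

P = 2_147_483_647  # Mersenne prime used as the field modulus (illustrative)

def share(coeffs, member_id):
    """Evaluate the secret polynomial at a member's id (Horner's rule)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * member_id + c) % P
    return acc

secret_poly = [12345, 678, 9]  # coeffs[0] is the group secret
print(share(secret_poly, 0))   # 12345: evaluating at 0 yields the secret
print(share(secret_poly, 7) != share(secret_poly, 8))  # shares differ
```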

Added 2008-03-31

A round trip time and time-out aware traffic conditioner for differentiated services networks

A Habib, B Bhargava, S Fahmy
Download: PDF

TCP connection throughput is inversely proportional to the connection round trip time (RTT). To mitigate TCP bias toward short-RTT connections, a differentiated services traffic conditioner can ensure connections with long RTTs do not starve when connections with short RTTs get all extra resources after achieving the target rates. Current proposals for RTT-aware conditioners work well for a small number of connections when most TCP connections are in the congestion avoidance phase. If there is a large number of TCP connections, however, connections time out and go to slow start. We show that current RTT-aware conditioners over-protect long-RTT flows and starve short-RTT flows in this case. We design and evaluate a conditioner based on RTT as well as the retransmission time-out (RTO). The proposed RTT-RTO aware traffic conditioner works well for realistic situations with a large number of connections. Simulation results in a variety of situations confirm that the conditioner mitigates RTT bias.
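The RTT bias the conditioner addresses follows directly from the simplified Mathis throughput model, in which rate scales as 1/RTT for a fixed loss rate. A quick sketch with illustrative constants (this is the standard model, not the paper's conditioner):

```python
import math

# Simplified Mathis model: throughput ~ MSS * C / (RTT * sqrt(p)).
# Demonstrates the short-RTT bias that an RTT-aware conditioner offsets.
# MSS, RTTs, and the loss rate below are illustrative values.

def tcp_throughput(mss_bytes, rtt_s, loss_rate, c=math.sqrt(1.5)):
    """Approximate steady-state TCP throughput in bytes/second."""
    return mss_bytes * c / (rtt_s * math.sqrt(loss_rate))

short = tcp_throughput(1460, 0.02, 0.01)  # 20 ms RTT connection
long_ = tcp_throughput(1460, 0.20, 0.01)  # 200 ms RTT connection
print(short / long_)  # ~10: the short-RTT flow gets about 10x the rate
```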

Added 2008-03-31

Data Organization Issues for Location-Dependent Queries in Mobile Computing

S Madria, B Bhargava, E Pitoura, V Kumar

We consider queries which originate from a mobile unit and whose result depends on the location of the user who initiates the query. An example of such a query is “How many people are living in the region I am currently in?” We execute such queries based on location-dependent data involved in their processing. We build concept hierarchies based on the location data. These hierarchies define mappings among different granularities of locations. One such hierarchy captures domain knowledge about the cities that belong to a state. The hierarchies are used as distributed directories to assist in finding the database or relation that contains the values of the location-dependent attribute in a particular location. We extend concept hierarchies to include spatial indexes on the location-dependent attributes. Finally, we discuss how to partition and replicate relations based on the location to process the queries efficiently. We briefly discuss the implementation issues.
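A concept hierarchy of the kind described, mapping between location granularities, can be sketched as a nested dictionary used as a directory. The place names and structure below are illustrative, not from the paper:

```python
# Toy concept hierarchy: state -> county -> cities, used as a directory
# to resolve a fine-grained location to a coarser granularity.
# All place names are illustrative examples.

HIERARCHY = {
    "Indiana": {"Tippecanoe": ["West Lafayette", "Lafayette"]},
    "Ohio": {"Franklin": ["Columbus"]},
}

def state_of(city):
    """Walk the hierarchy to map a city up to its state granularity."""
    for state, counties in HIERARCHY.items():
        for county, cities in counties.items():
            if city in cities:
                return state
    return None  # city not covered by this directory

print(state_of("Columbus"))        # Ohio
print(state_of("West Lafayette"))  # Indiana
```

In the paper's setting, the same lookup would instead name the database or relation holding the location-dependent attribute for that region.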

Added 2008-03-31

Measurements and Quality of Service Issues in Electronic Commerce Software

A Bhargava, B Bhargava

The performance of network and communication software is a major concern in making electronic commerce applications in a distributed environment a success. The quality of service in electronic commerce can generically be measured by convenience, privacy/security, response time, throughput, reliability, timeliness, accuracy, and precision. We present the quality of service parameters, the software architecture used in e-commerce, experimental data about transaction processing over the Internet, characteristics of digital library databases used in e-commerce, and communication measurements for such data. We present a summary of e-commerce companies and their status, and give an example of electronic trading as an application.

Added 2008-03-31

PartJoin: An Efficient Storage and Query Execution for Data Warehouses

Ladjel Bellatreche, Michel Schneider, Mukesh Mohania, Bharat Bhargava
Download: PDF

The performance of OLAP queries can be improved drastically if the warehouse data is properly selected and indexed. The problems of selecting and materializing views and indexing data have been studied extensively in the data warehousing environment. On the other hand, data partitioning can also greatly increase the performance of queries. Data partitioning has an advantage over data selection and indexing, since the former does not require additional storage. In this paper, we show that it is beneficial to integrate the data partitioning and indexing (join indexes) techniques for improving the performance of data warehousing queries. We present a data warehouse tuning strategy, called PartJoin, that decomposes the fact and dimension tables of a star schema and then selects join indexes. This solution takes advantage of these two techniques, i.e., data partitioning and indexing. Finally, we present the results of an experimental evaluation that demonstrates the effectiveness of our strategy in reducing the query processing cost and providing an economical utilisation of the storage space.

Added 2008-03-31

Virtual routers: a tool for emulating IP routers

F Baumgartner, T Braun, B Bhargava
Download: PDF

Setting up experimental networks of a sufficient size is a crucial element for the development of communication services. Unfortunately, the required equipment, such as routers and hosts, is expensive and its availability is limited. On the other hand, simulations often lack interoperability with real systems and scalability, which limits the scope and the validity of their results. Therefore, an intermediate approach between these two alternatives that allows for setting up testbeds on a cluster of computers is needed. This paper presents an intermediate approach based on the emulation of IP routers and evaluates the concept. In a first set of experiments the impact of various parameters on the packet delay was investigated, while further experiments compared the performance of differentiated services run on the network emulator with the results obtained by the well-known network simulator ns.

Added 2008-03-31

Network tomography-based unresponsive flow detection and control

A Habib, B Bhargava
Download: PDF

To avoid a congestion collapse, network flows should adjust their sending rates. Adaptive flows adjust the rate, while unresponsive flows do not respond to congestion and keep sending packets. Unresponsive flows waste resources by taking their share of the upstream links of a domain and dropping packets later when the downstream links are congested. We use network tomography, an edge-to-edge mechanism to infer per-link internal characteristics of a domain, to identify unresponsive flows that cause packet drops in other flows. We have designed an algorithm to dynamically regulate unresponsive flows. The congestion control algorithm is evaluated using both adaptive and unresponsive flows, with sending rates as high as four times the bottleneck bandwidth, and in the presence of short- and long-lived background traffic.
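The distinction between adaptive and unresponsive flows can be sketched as a simple check: a flow that keeps sending at or above its pre-congestion rate, above the per-flow fair share, is a candidate for regulation. This toy omits the paper's actual tomography step (inferring per-link loss edge-to-edge); the rates and threshold are illustrative:

```python
# Toy unresponsive-flow check: flag flows that stay above fair share
# and do not reduce their rate after congestion. The paper identifies
# such flows via edge-to-edge tomography; here rates are given directly.

def unresponsive(rates_before, rates_after, fair_share):
    """Flag flows whose rate stays above fair share despite congestion."""
    flagged = []
    for flow in rates_before:
        if (rates_after[flow] > fair_share
                and rates_after[flow] >= rates_before[flow]):
            flagged.append(flow)
    return flagged

before = {"f1": 4.0, "f2": 2.0, "f3": 1.0}   # Mbps before congestion
after  = {"f1": 4.0, "f2": 1.0, "f3": 1.0}   # f2 backed off, f1 did not
print(unresponsive(before, after, fair_share=1.5))  # ['f1']
```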

Added 2008-03-31

Security in Data Warehousing

B Bhargava
Download: PDF

A data warehouse [2, 4, 5, 6] is an integrated repository derived from multiple source (operational and legacy) databases. The data warehouse is created by either replicating the different source data or transforming them into a new representation. This process involves reading, cleaning, aggregating, and storing the data in the warehouse model. Software tools are used to access the warehouse for strategic analysis, decision-making, and marketing applications. It can be used, for example, for inventory control of shelf stock in department stores.

Added 2008-03-31