The Center for Education and Research in Information Assurance and Security (CERIAS)

Reports and Papers Archive


An information security ethics education model

Melissa Dark, Nathan Harter, Linda Morales, Mario A. Garcia

This paper proposes a model for teaching information assurance ethics. The model is composed of four dimensions: the moral development dimension; the ethical dimension; the security dimension; and the solutions dimension. The ethical dimension explores the ethical ramifications of a topic from a variety of perspectives. The security dimension includes ways in which an information assurance topic manifests to information assurance professionals. The solutions dimension focuses on remedies that individuals, groups of individuals and society have created to address security problems and associated ethical dilemmas. The moral development dimension describes the stages and transitions that humans experience as they develop morally, as they develop their own personal beliefs and behaviors about right and wrong.

Added 2008-06-16

An Adaptive Access Control Model for Web Services

E. Bertino, A.C. Squicciarini, L. Martino, F. Paci

This paper presents an innovative access control model, referred to as Web service Access Control Version 1 (Ws-AC1), specifically tailored to Web services. The most distinguishing features of this model are the flexible granularity in protection objects and negotiation capabilities. Under Ws-AC1, an authorization can be associated with a single service and can specify for which parameter values the service can be authorized for use, thus providing a fine access control granularity. Ws-AC1 also supports coarse granularities in protection objects in that it provides the notion of service class under which several services can be grouped. Authorizations can then be associated with a service class and automatically propagated to each element in the class. The negotiation capabilities of Ws-AC1 are related to the negotiation of identity attributes and the service parameters. Identity attributes refer to information that a party requesting a service may need to submit in order to obtain the service. The access control policy model of Ws-AC1 supports the specification of policies in which conditions are stated, specifying the identity attributes to be provided and constraints on their values. In addition, conditions may also be specified against context parameters, such as time. To enhance privacy and security, the actual submission of these identity attributes is executed through a negotiation process. Parameters may also be negotiated when a subject requires use of a service with certain parameters values that, however, are not authorized under the policies in place. In this paper, we provide the formal definitions underlying our model and the relevant algorithms, such as the access control algorithm. We also present an encoding of our model in the Web Services Description Language (WSDL) standard for which we develop an extension, required to support Ws-AC1.
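The abstract's combination of per-parameter authorizations and service classes with propagation can be sketched in a few lines. This is an illustrative sketch only; the class names, the `geo_services` example, and the lambda constraints are assumptions for the example, not the paper's formal model.

```python
# Sketch of Ws-AC1-style fine-grained authorization: an authorization may
# target a single service or a service class, and may constrain parameter
# values. All names below are hypothetical, not from the paper.

class Authorization:
    def __init__(self, target, param_constraints):
        self.target = target                        # service or service-class name
        self.param_constraints = param_constraints  # param name -> predicate

# A service class groups several services; authorizations on the class
# propagate automatically to each member.
SERVICE_CLASSES = {"geo_services": {"map_lookup", "route_plan"}}

def applies_to(auth, service):
    return auth.target == service or service in SERVICE_CLASSES.get(auth.target, set())

def authorized(auths, service, params):
    # The service is granted only for parameter values satisfying the constraints.
    for auth in auths:
        if applies_to(auth, service) and all(
            pred(params.get(p)) for p, pred in auth.param_constraints.items()
        ):
            return True
    return False

auths = [Authorization("geo_services",
                       {"resolution": lambda r: r is not None and r <= 10})]
print(authorized(auths, "route_plan", {"resolution": 5}))   # fine-grained: allowed
print(authorized(auths, "route_plan", {"resolution": 50}))  # same service, denied
```

In the actual model, a denied parameter combination could still trigger the negotiation process described above rather than an outright refusal.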

Added 2008-06-16


Teaching Students to Design Secure Systems

Jim Davis, Melissa Dark

The authors look more closely at determining an appropriate scope and sequence for information assurance (IA) and briefly describe a project whose goal is the articulation of an IA curriculum.

Added 2008-06-13

How to Perform a Security Audit

M. Dark, A. Poftak
Added 2008-06-13

Evaluation Theory and Practice as Applied to Security Education

Melissa J. Dark

This paper provides an overview of general evaluation purposes, elements, and steps for designing an evaluation, in order to provide foundational information that can be used to evaluate any security awareness, training, or education program.

Added 2008-06-13

Hiding program slices for software security

Xiangyu Zhang, R. Gupta

Given the high cost of producing software, development of technology for prevention of software piracy is important for the software industry. In this paper we present a novel approach for preventing the creation of unauthorized copies of software. Our approach splits software modules into open and hidden components. The open components are installed (executed) on an insecure machine while the hidden components are installed (executed) on a secure machine. We assume that while open components can be stolen, to obtain a fully functioning copy of the software, the hidden components must be recovered. We describe an algorithm that constructs hidden components by slicing the original software components. We argue that recovery of hidden components constructed through slicing, in order to obtain a fully functioning copy of the software, is a complex task. We further develop security analysis to capture the complexity of recovering hidden components. Finally we apply our technique to several large Java programs to study the complexity of recovering constructed hidden components and to measure the runtime overhead introduced by splitting of software into open and hidden components.
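The open/hidden split the abstract describes can be illustrated with a toy function. This is only a hand-made sketch under assumed names; the paper's algorithm derives the hidden component by slicing real programs, and the hidden part would run on the secure machine (e.g. behind a remote call), not in the same process.

```python
# Toy illustration of splitting software into an open and a hidden component.
# Names and the arithmetic are hypothetical, for illustration only.

def _hidden_key_schedule(seed):
    # Hidden component: deployed only on the secure machine. Without
    # recovering this slice, the stolen open component is not fully functional.
    return [(seed * 31 + i * 17) % 251 for i in range(4)]

def encrypt(data, seed):
    # Open component: installed on the insecure machine. In deployment the
    # call below would be a remote invocation of the secure host.
    keys = _hidden_key_schedule(seed)
    return bytes((b + keys[i % len(keys)]) % 256 for i, b in enumerate(data))
```

The security argument in the paper rests on the difficulty of reconstructing the hidden slice from observing the open component's behavior, which this toy example does not attempt to quantify.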

Added 2008-06-12

You Can Run, But You Can't Hide: An Effective Statistical Methodology to Trace Back DDoS Attackers

Terence K.T. Law, John C.S. Lui, David K.Y. Yau

There is currently an urgent need for effective solutions against distributed denial-of-service (DDoS) attacks directed at many well-known Web sites. Because of increased sophistication and severity of these attacks, the system administrator of a victim site needs to quickly and accurately identify the probable attackers and eliminate the attack traffic. Our work is based on a probabilistic marking algorithm in which an attack graph can be constructed by a victim site. We extend the basic concept such that one can quickly and efficiently deduce the intensity of the “local traffic” generated at each router in the attack graph based on the volume of received marked packets at the victim site. Given the intensities of these local traffic rates, we can rank the local traffic and identify the network domains generating most of the attack traffic. We present our traceback and attacker identification algorithms. We also provide a theoretical framework to determine the minimum stable time t_{min}, which is the minimum time needed to accurately determine the locations of attackers and local traffic rates of participating routers in the attack graph. Extensive experiments are carried out to illustrate that one can accurately determine the minimum stable time t_{min} and, at the same time, determine the location of attackers under various threshold parameters, network diameters, attack traffic distributions, on/off patterns, and network traffic conditions.
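The core idea of estimating local traffic intensity from marked-packet counts can be sketched briefly. The marking probability, the leaf-router approximation, and all names below are assumptions for illustration; the paper's actual estimators account for marks being overwritten by downstream routers.

```python
import random

# Sketch of probabilistic packet marking and rate estimation.
# MARK_PROB is an assumed parameter, not the paper's value.
MARK_PROB = 0.05

def forward(packet, router_id):
    # Each router overwrites the mark field with its own id with probability p.
    if random.random() < MARK_PROB:
        packet["mark"] = router_id
    return packet

def estimate_local_rates(marked_counts, p=MARK_PROB):
    # Simplification: if a router's mark reached the victim m times, it saw
    # roughly m / p local packets. (Exact for leaf routers; interior routers
    # need a correction for downstream re-marking, handled in the paper.)
    return {r: m / p for r, m in marked_counts.items()}

est = estimate_local_rates({"R1": 5, "R2": 50})
heaviest = max(est, key=est.get)  # rank routers by inferred local traffic
```

Ranking the estimated rates, as in the last line, is what lets the victim single out the domains generating most of the attack traffic.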

Added 2008-06-12

Protection of Application Service Hosting Platforms: an Operating System Perspective

Xuxian Jiang, Dongyan Xu

The Application Service Hosting Platform (ASHoP), as a realization of the utility computing vision, has recently received tremendous attention from both industry and academia. An ASHoP provides a shared and high performance platform to host multiple Application Services (ASes). The ASes are outsourced by Application Service Providers (ASPs) to save their own IT resources. Furthermore, ASHoP resources are allocated to the ASes in an on-demand fashion, so that resource supply always follows the…

Added 2008-06-11

Principals, policies and keys in a secure distributed programming language

T. Chothia, D. Duggan, J. Vitek
Added 2008-06-04

A Bayesian approach toward active learning for collaborative filtering

Rong Jin, Luo Si

Collaborative filtering is a useful technique for exploiting the preference patterns of a group of users to predict the utility of items for the active user. In general, the performance of collaborative filtering depends on the number of rated examples given by the active user: the more rated examples the active user provides, the more accurate the predicted ratings will be. Active learning provides an effective way to acquire the most informative rated examples from active users. Previous work on active learning for collaborative filtering only considers the expected loss function based on the estimated model, which can be misleading when the estimated model is inaccurate. This paper takes one step further by taking into account the posterior distribution of the estimated model, which results in a more robust active learning algorithm. Empirical studies with datasets of movie ratings show that when the number of ratings from the active user is restricted to be small, active learning methods based only on the estimated model do not perform well, while the active learning method using the model distribution achieves substantially better performance.
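The contrast the abstract draws can be made concrete with a toy model space. Everything below (the candidate models, posterior weights, and variance-based selection rule) is an assumed illustration of the general idea, not the paper's algorithm.

```python
# Toy sketch: query selection using a posterior over candidate models rather
# than a single estimated model. Models and weights are invented for the example.

# Each candidate model predicts a rating per item; weights are posterior
# probabilities given the ratings observed so far.
models = [
    {"weight": 0.6, "pred": {"A": 4.0, "B": 2.0}},
    {"weight": 0.4, "pred": {"A": 1.0, "B": 2.5}},
]

def posterior_mean(item):
    return sum(m["weight"] * m["pred"][item] for m in models)

def posterior_variance(item):
    mu = posterior_mean(item)
    return sum(m["weight"] * (m["pred"][item] - mu) ** 2 for m in models)

def next_query(items):
    # Ask for the item the posterior disagrees on most. A method trusting only
    # the single top-weighted model would see no disagreement at all.
    return max(items, key=posterior_variance)

print(next_query(["A", "B"]))
```

Here the models disagree sharply on item A, so a posterior-aware learner queries A, whereas a point-estimate learner has no basis for preferring it.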

Added 2008-06-03

On detecting service violations and bandwidth theft in QoS network domains

Ahsan Habib, Sonia Fahmy, Srinivas R. Avasarala, Venkatesh Prabhakar, Bharat Bhargava

We design and evaluate a simple and scalable system to verify quality of service (QoS) in a differentiated services domain. The system uses a distributed edge-to-edge monitoring approach with measurement agents collecting information about delays, losses and throughput, and reporting to a service level agreement monitor (SLAM). The SLAM detects potential service violations, bandwidth theft, denial of service attacks, and flags the need to re-dimension the network domain or limit its users. Measurements may be performed entirely edge-to-edge, or the core routers may participate in logging packet drop information. We compare the core-assisted and edge-to-edge schemes, and we extend network tomography-based loss inference mechanisms to cope with different drop precedences in a QoS network. We also develop a load-based service monitoring scheme which probes the appropriate edge routers for loss and throughput on demand. Simulation results indicate that the system detects attacks with reasonable accuracy, and is useful for damage control in both QoS-enabled and best effort network domains.
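The edge-to-edge violation check at the heart of the system can be sketched simply: compare ingress and egress packet counts per edge pair against an SLA loss bound. The threshold, data layout, and names are assumptions for illustration, not the SLAM design itself.

```python
# Illustrative edge-to-edge SLA monitoring check (parameters are assumed).

def loss_ratio(ingress_pkts, egress_pkts):
    # Fraction of packets that entered the domain but never left it.
    return 1.0 - egress_pkts / ingress_pkts if ingress_pkts else 0.0

def check_sla(measurements, max_loss=0.02):
    """measurements: list of (edge_pair, ingress_count, egress_count)."""
    violations = []
    for pair, ingress, egress in measurements:
        if loss_ratio(ingress, egress) > max_loss:
            # Candidate for service violation, bandwidth theft, or DoS;
            # in the paper this triggers further probing or core-assisted logging.
            violations.append(pair)
    return violations
```

The paper's load-based scheme refines this by probing only the edge routers whose aggregates look suspicious, rather than monitoring every pair continuously.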

Added 2008-06-03


Efficient join processing over uncertain data

R. Cheng, S. Singh, S. Prabhakar, R. Shah, J.S. Vitter, Y. Xia

In many applications data values are inherently uncertain. This includes moving-object, sensor, and biological databases. There has been recent interest in the development of database management systems that can handle uncertain data. Some proposals for such systems include attribute values that are uncertain. In particular, an attribute value can be modeled as a range of possible values, associated with a probability density function. Previous efforts for this type of data have only addressed simple queries such as range and nearest-neighbor queries. Queries that join multiple relations have not been addressed in earlier work despite the significance of joins in databases. In this paper we address join queries over uncertain data. We propose a semantics for the join operation, define probabilistic operators over uncertain data, and propose join algorithms that provide efficient execution of probabilistic joins. The paper focuses on an important class of joins termed probabilistic threshold joins that avoid some of the semantic complexities of dealing with uncertain data. For this class of joins we develop three sets of optimization techniques: item-level, page-level, and index-level pruning. These techniques facilitate pruning with little space and time overhead, and are easily adapted to most join algorithms. We verify the performance of these techniques experimentally.
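A probabilistic threshold join with item-level pruning can be sketched as follows. The uniform pdfs, the distance-based join predicate, and the numeric integration are assumptions chosen for a minimal example; the paper's operators handle general pdfs and add page- and index-level pruning on top.

```python
# Sketch of a probabilistic threshold join on uncertain attributes modeled
# as uniform ranges. Illustrative only; names and pdfs are assumptions.

def join_probability(r1, r2, eps=1.0, steps=1000):
    """P(|X - Y| <= eps) for X ~ Uniform(r1), Y ~ Uniform(r2)."""
    (a1, b1), (a2, b2) = r1, r2
    # Item-level pruning: disjoint uncertainty intervals (beyond eps) can
    # never join, so skip the integration entirely.
    if a1 > b2 + eps or a2 > b1 + eps:
        return 0.0
    dx = (b1 - a1) / steps
    total = 0.0
    for i in range(steps):          # midpoint-rule numeric integration
        x = a1 + (i + 0.5) * dx
        lo, hi = max(a2, x - eps), min(b2, x + eps)
        if hi > lo:
            total += (hi - lo) / (b2 - a2) * dx / (b1 - a1)
    return total

def threshold_join(tuples1, tuples2, p_threshold=0.5):
    # Keep only pairs whose join probability clears the threshold.
    return [(t1, t2) for t1 in tuples1 for t2 in tuples2
            if join_probability(t1, t2) >= p_threshold]
```

The threshold is what makes pruning effective: pairs whose probability bound already falls below it can be discarded without computing the integral at all.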

Added 2008-06-03

Can Identical Twins be Discriminated Based on Fingerprints?

A.K. Jain, S. Prabhakar
Added 2008-06-03