Most organizations today must verify personal information about their users in order to provide them with service. The privacy of such information is of growing concern, and because organizations often ask for similar information, the process is also redundant and inefficient. Recent proposals for federated identity management have the potential to alleviate these problems. A federation is a set of organizations that establish mutual trust with one another, allowing them to share client information whenever possible, subject to their service disclosure policies and user privacy preferences. This paper addresses these problems by integrating federated identity management with trust negotiation techniques, focusing on a trust negotiation approach suitable for federated environments. Our federated trust negotiation approach relies on special-purpose tickets, that is, signed assertions released by federation members to users upon successful negotiations. The main advantage of this integration is that if a user has already negotiated successfully with one federation member, subsequent negotiations with other federation members may require fewer interactions between the client and the service provider.
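As a concrete illustration of the ticket idea, the sketch below shows one federation member issuing a signed assertion after a full negotiation and another member verifying it to skip requirements the user has already proven. It is a minimal sketch, not the paper's protocol: the HMAC over a federation-wide shared key stands in for whatever signature scheme the members actually use, and the names (issue_ticket, verify_ticket, FEDERATION_KEY) are invented for the example.

```python
# Illustrative sketch (not the paper's protocol): a federation member issues a
# signed "ticket" after a successful negotiation; another member later verifies
# it and skips the credential exchanges the ticket already attests to.
import hmac, hashlib, json, time

FEDERATION_KEY = b"shared-federation-secret"   # assumed shared among members

def issue_ticket(user_id, satisfied_policies):
    """Called by the member that completed a full negotiation with the user."""
    body = {"user": user_id,
            "satisfied": sorted(satisfied_policies),   # credential types already proven
            "issued_at": time.time()}
    payload = json.dumps(body, sort_keys=True).encode()
    tag = hmac.new(FEDERATION_KEY, payload, hashlib.sha256).hexdigest()
    return {"body": body, "sig": tag}

def verify_ticket(ticket, required_policies, max_age=3600):
    """Called by another federation member; returns the policies it can skip."""
    payload = json.dumps(ticket["body"], sort_keys=True).encode()
    expected = hmac.new(FEDERATION_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, ticket["sig"]):
        return set()                                   # forged or altered ticket
    if time.time() - ticket["body"]["issued_at"] > max_age:
        return set()                                   # stale ticket
    return set(required_policies) & set(ticket["body"]["satisfied"])

ticket = issue_ticket("alice", {"age>=18", "valid_credit_card"})
# A second provider needs three proofs but can reuse two of them:
print(verify_ticket(ticket, {"age>=18", "valid_credit_card", "address"}))
```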
The CuPIDS project is an exploration of increasing information system security by dedicating computational resources to security tasks in a shared-resource, multi-processor (MP) architecture. Our research explores ways in which this architecture offers improvements over the traditional uni-processor (UP) model of security. One area of interest has a protected application running on one processor of a symmetric multiprocessing (SMP) system while a shadow process specific to that application runs on a different processor, monitoring its activity and ready to respond immediately if the application veers off course. This paper describes initial work in defining such an architecture and the prototype work done to validate our ideas.
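A minimal sketch of the shadow-process idea, assuming a Linux SMP host: the protected application is pinned to one CPU and periodically updates a heartbeat, while a monitor pinned to a different CPU watches that heartbeat and reacts when it stalls. This only illustrates the division of labor, not the CuPIDS prototype; the names protected_app and shadow_monitor are invented for the example.

```python
# Minimal shadow-process sketch on Linux (not the CuPIDS prototype): the
# protected task runs pinned to one CPU and updates a heartbeat; a monitor
# pinned to a different CPU watches the heartbeat and reacts if it stalls.
import os, time
from multiprocessing import Process, Value

def protected_app(heartbeat):
    os.sched_setaffinity(0, {0})            # pin the application to CPU 0
    while True:
        heartbeat.value = time.monotonic()  # signal "still on course"
        time.sleep(0.1)                     # application work would go here

def shadow_monitor(heartbeat, timeout=0.5):
    os.sched_setaffinity(0, {1})            # pin the monitor to CPU 1
    while True:
        if time.monotonic() - heartbeat.value > timeout:
            print("monitor: protected application stopped responding")
            break                           # real response: kill, quarantine, alert
        time.sleep(0.1)

if __name__ == "__main__":
    hb = Value("d", time.monotonic())
    app = Process(target=protected_app, args=(hb,), daemon=True)
    mon = Process(target=shadow_monitor, args=(hb,))
    app.start(); mon.start()
    time.sleep(1); app.terminate()          # simulate the application dying
    mon.join()
```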
Location-based services, such as finding the nearest gas station, require users to supply their location information. However, a user
The current paper extends and matures the author's earlier taxonomy framework (Rogers, 1999) and provides a preliminary two-dimensional classification model. While there is no single generic profile of a hacker, studies indicate that there are at least eight primary classification variables: Novices (NV), Cyber-Punks (CP), Internals (IN), Petty Thieves (PT), Virus Writers (VW), Old Guard hackers (OG), Professional Criminals (PC), and Information Warriors (IW), and two principal components, skill and motivation. Due to the complex nature and interactions of these variables and principal components, the traditional one-dimensional continuum is inadequate for model testing. A modified circular-order circumplex is presented as a better method for developing a preliminary hacker taxonomy. The paper discusses the importance of developing a meaningful understanding of the hacker community and its various sub-groups. Directions for future research are also discussed.
Current mechanisms for distributed access management are limited in their ability to provide federated information sharing while ensuring adequate levels of resource protection. This work presents a policy-based framework designed to address these limitations for access management in federated systems. In particular, it supports: (i) decentralized administration while preserving local autonomy, (ii) fine-grained access control while avoiding rule explosion in the policy, (iii) credential federation through the use of interoperable protocols, (iv) specification and enforcement of semantic and contextual constraints, and (v) usage control in resource provisioning through effective session management. The paper highlights the significance of our policy-based approach in comparison with related mechanisms. It also presents the system architecture of our implementation prototype.
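To make the feature list more concrete, the toy sketch below combines three of the checks such a policy might encode: an attribute-based (credential) condition, a contextual time-of-day constraint, and a per-session usage cap. It is not the paper's policy language; the Rule class and its fields are invented purely for illustration.

```python
# Toy illustration (not the paper's policy language) of the kinds of checks the
# framework combines: attribute-based rules, a contextual constraint, and a
# simple per-session usage cap.
from dataclasses import dataclass

@dataclass
class Rule:
    resource: str
    required_attrs: dict            # e.g. {"role": "researcher", "org": "fed-member"}
    allowed_hours: range            # contextual constraint: local time of day
    max_requests: int               # usage control within one session
    used: int = 0

    def permits(self, subject_attrs, hour):
        if any(subject_attrs.get(k) != v for k, v in self.required_attrs.items()):
            return False            # attribute (credential) check failed
        if hour not in self.allowed_hours:
            return False            # contextual constraint failed
        if self.used >= self.max_requests:
            return False            # session quota exhausted
        self.used += 1
        return True

rule = Rule("genome-db", {"role": "researcher", "org": "fed-member"},
            allowed_hours=range(8, 18), max_requests=2)
alice = {"role": "researcher", "org": "fed-member"}
print(rule.permits(alice, hour=10))   # True
print(rule.permits(alice, hour=10))   # True (second request)
print(rule.permits(alice, hour=10))   # False: session cap reached
```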
This paper investigates the possibilities of steganographically embedding information in the “noise” created by automatic translation of natural language documents. Because the inherent redundancy of natural language creates plenty of room for variation in translation, machine translation is ideal for steganographic applications. Also, because there are frequent errors in legitimate automatic text translations, additional errors inserted by an information hiding mechanism are plausibly undetectable and would appear to be part of the normal noise associated with translation. Significantly, it should be extremely difficult for an adversary to determine if inaccuracies in the translation are caused by the use of steganography or by deficiencies of the translation software.
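A small sketch of the embedding idea: if each source sentence has several plausible machine translations, the choice among them can carry hidden bits, and the receiver recovers them by looking up which candidate was chosen. The candidate lists below are hard-coded stand-ins for the outputs of different translation engines; a real system would generate them with MT software.

```python
# Sketch of the embedding idea: each source sentence has several plausible
# machine translations; the choice among them encodes hidden bits. The candidate
# lists below stand in for the outputs of different MT engines.
CANDIDATES = [
    ["The weather is nice today.", "Today the weather is nice."],          # carries 1 bit
    ["He went to the store.", "He walked to the store.",
     "He headed to the store.", "He went over to the store."],             # carries 2 bits
]

def embed(bits):
    out, pos = [], 0
    for options in CANDIDATES:
        width = len(options).bit_length() - 1          # bits this sentence can carry
        chunk = bits[pos:pos + width].ljust(width, "0")
        out.append(options[int(chunk, 2)])             # pick the candidate encoding the bits
        pos += width
    return out

def extract(cover):
    bits = ""
    for sentence, options in zip(cover, CANDIDATES):
        width = len(options).bit_length() - 1
        bits += format(options.index(sentence), "0{}b".format(width))
    return bits

cover_text = embed("101")
print(cover_text)          # the chosen translations carry the bits
print(extract(cover_text)) # -> "101"
```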
This thesis explores cryptography and applies a normative ethical theory to determine what uses of cryptography, if any, are ethically permissible. Cryptography is divided into confidentiality, integrity, and authentication before being considered under the deontological moral theory of Immanuel Kant and other modern philosophers such as Alan Donagan, John Rawls, and Robert Nozick. Brief discussions of the fields of ethics and cryptography are included to aid readers not familiar with them.
In this work we address the problem of portable and flexible privacy-preserving access rights for large online data repositories. Privacy-preserving access control means that the service provider can neither learn what access rights a customer has nor link a request for an item to a particular customer, thus maintaining the privacy of both customer activity and customer access rights. Flexible access rights allow a customer to choose any subset of items from the repository and be charged only for the items selected. Portability means that the rights themselves can be stored on small devices with limited storage space and computational capabilities, and must therefore be enforced using the limited resources available.
Our main results are solutions to this problem that use minimal perfect hash functions and order-preserving minimal perfect hash functions. None of them uses expensive cryptography, all require very little space, and they are therefore suitable for computationally weak and space-limited devices such as smartcards and sensors. Performance of the schemes is measured as the probability of false positives (i.e., the probability that access to an unpurchased item is permitted) for a given storage space bound. Using our techniques, for a data repository of size $n$ and a subscription order of $m \ll n$ items, we achieve a false-positive probability of $m^{-c}$ using only $O(cm)$ bits of storage, where $c$ is an adjustable parameter (a constant or otherwise) that can be set to provide the desired performance. This is the first time such provable bounds have been established for this problem, and we believe the techniques, in particular our unusual use of perfect hashing, are of more general interest.
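The toy sketch below illustrates the general flavor of such a scheme: a perfect hash over the $m$ purchased items indexes a small table of fingerprints, so the device never stores the item identifiers themselves, and an unpurchased item is wrongly accepted only when its fingerprint happens to collide. It is not the paper's construction (its false-positive rate is roughly $2^{-b}$ for $b$-bit fingerprints rather than the $m^{-c}$ bound above), and a Python dict stands in for the minimal perfect hash function.

```python
# Toy sketch of the general idea (not the paper's exact construction): a perfect
# hash over the m purchased items indexes a table of short fingerprints, so the
# device stores only m small fingerprints instead of the item identifiers.
# A Python dict stands in for the minimal perfect hash function; a real
# smartcard implementation would use an MPHF to avoid storing the keys at all.
import hashlib

def fingerprint(item, bits):
    digest = hashlib.sha256(item.encode()).digest()
    return int.from_bytes(digest, "big") % (1 << bits)

class AccessRights:
    def __init__(self, purchased, bits=8):
        self.bits = bits
        self.index = {item: i for i, item in enumerate(purchased)}  # stand-in for an MPHF
        self.table = [fingerprint(item, bits) for item in purchased]

    def permits(self, item):
        slot = self.index.get(item)
        if slot is None:
            # A real MPHF still maps unknown items to *some* slot; model that by
            # hashing into the table, which is where false positives come from.
            slot = fingerprint(item, 32) % len(self.table)
        return self.table[slot] == fingerprint(item, self.bits)

rights = AccessRights(["item17", "item42", "item99"], bits=8)
print(rights.permits("item42"))   # True: purchased
print(rights.permits("item07"))   # almost always False; wrong with prob. ~2^-8
```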
In this paper, we present a watermarking-based approach, and its implementation, for mitigating phishing attacks, a form of web-based identity theft. ViWiD is an integrity checking mechanism based on visible watermarking of logo images. ViWiD performs all of the computation on the company
The rapid self-configuration, ease of deployment, and small cost of components, coupled with the tremendous potential in areas such as environmental and structural monitoring, supply-chain automation, identification of products at check-out points, access control, and security, motivate the popularity of wireless sensor networks, the recent interest in wireless Radio Frequency Identification (RFID) systems, and their envisioned integration. While the autonomous operation and random deployment of components are the principal causes of the low setup cost of these systems, they are also the source of fundamental problems. This thesis studies the problem of extending network lifetime in sensor and RFID systems by defining and detecting redundant components whose simultaneous deactivation maintains the initial network coverage. For wireless sensor networks, we reduce the problem to the computation of Voronoi diagrams. Moreover, we examine the impact of redundancy elimination on the related problem of coverage-boundary detection. We present efficient distributed algorithms for computing and maintaining solutions to the redundant-sensor-elimination and coverage-boundary problems under sensor failures or insertion of new sensors. We prove the safety and liveness properties of our algorithms and characterize their time complexity and the traffic they generate. Using detailed simulations, we also quantify the impact of system parameters such as sensor density, transmission range, and failure rates on network traffic. In the context of wireless RFID systems, we provide an efficient solution to a fundamental problem caused by reader collisions occurring at tags. We prove that finding an optimal solution to the redundant-reader problem is NP-hard and propose a randomized approximation algorithm. We conduct extensive experiments on realistic topologies to evaluate the accuracy, message overhead, and efficacy of the protocols. Our simulations show that by repeating each query $\log m$ times and using $2\log m$ time units per query, where $m$ is the total number of RFID readers, each reader can discover more than 99\% of the RFID tags it covers. Moreover, even without a centralized entity, we consistently discover more than half of the redundant readers found by a greedy algorithm that uses centralized information.
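The following simplified simulation illustrates the randomized query scheme for RFID readers described above: each reader repeats its query roughly $\log m$ times, each time in a slot chosen uniformly from $2\log m$ slots, and a tag hears a reader only when that reader transmits alone among the readers covering it. The collision model and parameters are deliberately simplified; this is an illustration, not the thesis's protocol.

```python
# Simplified simulation of the randomized slotted query idea: each reader
# repeats its query ceil(log2 m) times, each time in a slot chosen among
# 2*ceil(log2 m) slots; a tag hears a reader only in a slot where that reader
# transmits alone among the readers covering the tag.
import math, random

def discovered_pairs(readers, tags, radius, trials=None, slots=None):
    m = len(readers)
    trials = trials or math.ceil(math.log2(m))
    slots = slots or 2 * math.ceil(math.log2(m))
    covers = {t: [r for r, (rx, ry) in enumerate(readers)
                  if (rx - tx) ** 2 + (ry - ty) ** 2 <= radius ** 2]
              for t, (tx, ty) in enumerate(tags)}
    heard = set()
    for _ in range(trials):
        slot_of = {r: random.randrange(slots) for r in range(m)}  # one slot per round
        for t, nearby in covers.items():
            for r in nearby:
                # collision-free only if no other covering reader shares the slot
                if all(slot_of[o] != slot_of[r] for o in nearby if o != r):
                    heard.add((r, t))
    return heard

random.seed(1)
readers = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(16)]
tags = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(200)]
pairs = discovered_pairs(readers, tags, radius=30)
print(len(pairs), "collision-free reader-tag discoveries")
```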
We present efficient algorithms for the problems of matching red and blue disjoint geometric obstacles in the plane and connecting the matched obstacle pairs with mutually nonintersecting paths that have useful geometric properties. We first consider matching n red and n blue disjoint rectilinear rectangles and connecting the n matched rectangle pairs with nonintersecting monotone rectilinear paths; each such path consists of O(n) segments and is not allowed to touch any rectangle other than the matched pair it links. Based on a numbering scheme for certain geometric objects and on several useful geometric observations, we develop an O(n log n) time, O(n) space algorithm that produces a desired matching for rectilinear rectangles. If explicit printing of all n paths is required, then our algorithm takes O(n log n + L) time and O(n) space, where L is the total size of the desired output. We then extend these matching algorithms to other classes of red/blue polygonal obstacles. The numbering scheme also finds applications to other problems.
A model developed for the investigation of security in computer systems is refined in three major ways, incorporating an object structure, a notion of current security level, and an altered *-property. In addition, the various ramifications of classifying a control structure are explored. It is shown that security requirements can be fulfilled in a system using these refinements.
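For readers unfamiliar with the underlying model, the sketch below renders the two classical rules in simplified form, including the notion of a current security level: a subject operating at a current level no higher than its clearance may not read objects above that level and, per the *-property, may not write objects below it. This is a standard simplified reading for illustration, not the report's exact formulation.

```python
# Simplified illustration of the two access rules, with a current security
# level: "no read up" relative to the subject's current level, and "no write
# down" (the *-property). Not the report's exact formulation.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top-secret": 3}

class Subject:
    def __init__(self, clearance, current):
        assert LEVELS[current] <= LEVELS[clearance], "current level exceeds clearance"
        self.clearance, self.current = clearance, current

    def can_read(self, obj_level):
        return LEVELS[self.current] >= LEVELS[obj_level]   # no read up

    def can_write(self, obj_level):
        return LEVELS[obj_level] >= LEVELS[self.current]   # *-property: no write down

s = Subject(clearance="secret", current="confidential")
print(s.can_read("unclassified"))   # True : reading down is allowed
print(s.can_read("secret"))         # False: no read up at the current level
print(s.can_write("secret"))        # True : writing up is allowed
print(s.can_write("unclassified"))  # False: no write down
```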