A number of attacks have been proposed against the idea of a genuinity test. The rationale for these attacks is based on misinterpretations of published details about the system. We correct these misunderstandings by providing, for each claim, a detailed analysis and evidence that contradicts it.
We present and analyze portable access control mechanisms for large data repositories, in which the customized access policies are stored on a portable device (e.g., a smart card). While there are significant privacy-preservation advantages to the use of smart cards anonymously created and bought in public places (stores, libraries, etc.), a major difficulty is that, for huge data repositories and limited-capacity portable storage devices, it is not possible to represent every possible access configuration on the card. If we let n denote the number of documents on a server, then we need to design succinct descriptions of portable access rights to arbitrary subsets of these n documents, such that they “fit” in only k units of available space, where k is much smaller than n. We describe and analyze schemes for both unstructured and structured collections of documents. For these schemes, we give fast algorithms for efficiently using the limited space available on the card. For a customer whose card is supposed to contain a subset S of documents, access to all of S must be allowed. In some situations a small enough number of “false positives” (accesses to non-S documents) is acceptable to the server, and the challenge then is to minimize the number of false positives implicit in any given card. In our model the customer does not know which documents correspond to those false positives, the probability of a randomly chosen document being a false positive is small, and too many unsuccessful access attempts are viewed by the server as an exhaustive-search attack, which can result in the card being zeroed out.
Recent related work by Bykova and Atallah was geared towards the situation where the document repository and/or access policies change rapidly, and such a scheme is therefore not vulnerable to on-line sharing of false-positive experiences by different users. In this paper we seek to prevent such collusive attacks by different card holders: it is a design requirement that the information in one card be useless to the holder of another card; that is, even if two customers have the same S, they would not have the same set of false positives.
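To make the encoding and the anti-collusion requirement concrete, the following is a minimal sketch, not the paper's actual construction: the capacity M_BITS, the per-document fan-out BITS_PER_DOC, and the keyed SHA-256 mapping are illustrative assumptions. Because the mapping is keyed with a per-card secret, two cards that encode the same subset S still end up with different sets of false positives.

```python
import hashlib

M_BITS = 1024        # card capacity k, far smaller than the number of documents n
BITS_PER_DOC = 3     # illustrative: how many bit positions each document maps to

def bits_for_doc(doc_id: int, card_key: bytes) -> list[int]:
    """Map a document to BITS_PER_DOC positions in [0, M_BITS), keyed per card."""
    positions = []
    for i in range(BITS_PER_DOC):
        digest = hashlib.sha256(card_key + doc_id.to_bytes(8, "big") + bytes([i])).digest()
        positions.append(int.from_bytes(digest[:4], "big") % M_BITS)
    return positions

def make_card(subset_s: list[int], card_key: bytes) -> list[bool]:
    """Server side: set the bits for every document the customer paid for."""
    card = [False] * M_BITS
    for doc in subset_s:
        for p in bits_for_doc(doc, card_key):
            card[p] = True
    return card

def allows(card: list[bool], doc_id: int, card_key: bytes) -> bool:
    """Access check: every position of the document must be set.
    Documents in S always pass; a small fraction of other documents also
    pass (the false positives), and the holder cannot tell which ones."""
    return all(card[p] for p in bits_for_doc(doc_id, card_key))
```

Since the false positives of two cards are, with high probability, different random-looking subsets, comparing notes with another card holder reveals nothing useful about one's own card.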
Access control is an important component of any database management system. Several access control models have been proposed for conventional databases. However, these models do not seem adequate for geographical databases, due to the peculiarities of geographical data. Previous work on access control models for geographical data mainly concerns raster maps (images). In this paper, we present a discretionary access control model for geographical maps. We assume that each map is composed of a set of features. Each feature is represented in one or more maps by spatial objects, described by means of different spatial properties: geometrical properties, describing the shape, extension, and location of the objects composing the map, and topological properties, describing the topological relationships among spatial objects. The proposed access control model allows the security administrator to define authorizations against map objects at a very fine level of granularity, taking into account the various spatial representations and the object dimension. The model also supports both positive and negative authorizations, as well as different propagation rules, making access control very flexible.
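As a toy illustration of how such fine-grained authorizations might be evaluated, the sketch below uses an invented tuple layout and the rule that a negative authorization overrides a positive one; both are illustrative assumptions rather than the model's exact semantics.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Authorization:
    subject: str          # user or role
    feature: str          # map feature the rule applies to
    representation: str   # e.g. "geometrical" or "topological"
    privilege: str        # e.g. "read"
    positive: bool        # True = grant, False = deny

def is_allowed(auths, subject, feature, representation, privilege) -> bool:
    """Illustrative conflict resolution: any applicable negative
    authorization overrides the applicable positive ones."""
    applicable = [a for a in auths
                  if (a.subject, a.feature, a.representation, a.privilege)
                  == (subject, feature, representation, privilege)]
    if any(not a.positive for a in applicable):
        return False
    return any(a.positive for a in applicable)
```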
Suppose a number of hospitals in a geographic area want to learn how their own heart-surgery unit is doing compared with the others in terms of mortality rates, subsequent complications, or any other quality metric. Similarly, a number of small businesses might want to use their recent point-of-sales data to cooperatively forecast future demand and thus make more informed decisions about inventory, capacity, employment, etc. These are simple examples of cooperative benchmarking and (respectively) forecasting that would benefit all participants as well as the public at large, as they would make it possible for participants to avail themselves of more precise and reliable data collected from many sources, to assess their own local performance in comparison to global trends, and to avoid many of the inefficiencies that currently arise because of having less information available for their decision-making. And yet, in spite of all these advantages, cooperative benchmarking and forecasting typically do not take place, because of the participants’ unwillingness to share their information with others. Their reluctance to share is quite rational, and is due to fears of embarrassment, lawsuits, weakening their negotiating position (e.g., in case of over-capacity), revealing corporate performance and strategies, etc. The development and deployment of private benchmarking and forecasting technologies would allow such collaborations to take place without revealing any participant’s data to the others, reaping the benefits of collaboration while avoiding the drawbacks. Moreover, this kind of technology would empower smaller organizations, which could then cooperatively base their decisions on a much broader information base, in a way that is today restricted to only the largest corporations. This paper is a step towards this goal, as it gives protocols for forecasting and benchmarking that reveal to the participants the desired answers yet do not reveal to any participant any other participant’s private data. We consider several forecasting methods, including linear regression and time-series techniques such as moving average and exponential smoothing. One of the novel aspects of this work, which further distinguishes it from previous work in secure multi-party computation, is that it involves floating-point arithmetic; in particular, it provides protocols to perform division securely and efficiently.
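One standard building block behind such protocols is secure summation via additive secret sharing. The toy sketch below shows only that building block under simplifying assumptions (honest-but-curious parties, integer counts, an illustrative modulus); it is not the paper's protocols for regression, smoothing, or secure floating-point division.

```python
import secrets

MODULUS = 2 ** 61 - 1   # illustrative large prime modulus

def share(value: int, n_parties: int) -> list[int]:
    """Split a private value into n additive shares that sum to it mod MODULUS."""
    shares = [secrets.randbelow(MODULUS) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MODULUS)
    return shares

def secure_sum(private_values: list[int]) -> int:
    """Each party distributes one share to every party; each party adds the
    shares it received and publishes only that partial sum."""
    n = len(private_values)
    received = [[] for _ in range(n)]
    for v in private_values:
        for j, s in enumerate(share(v, n)):
            received[j].append(s)
    partial_sums = [sum(col) % MODULUS for col in received]
    return sum(partial_sums) % MODULUS

# Example: three hospitals' private complication counts; only the total
# (against which each hospital can gauge its own standing) becomes public.
counts = [12, 7, 30]
assert secure_sum(counts) == sum(counts)
```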
This paper sets out to explore the idea of gray hat hacking.
Separation of Duty (SoD) is widely considered to be a fundamental principle in computer security. A Static SoD (SSoD) policy states that in order to have all permissions necessary to complete a sensitive task, the cooperation of at least a certain number of users is required. In Role-Based Access Control (RBAC), Statically Mutually Exclusive Roles (SMER) constraints are used to enforce SSoD policies. In this paper, we pose and answer fundamental questions related to the use of SMER constraints to enforce SSoD policies. We show that directly enforcing SSoD policies is intractable (coNP-complete), while checking whether an RBAC state satisfies a set of SMER constraints is efficient. Also, we show that verifying whether a given set of SMER constraints enforces an SSoD policy is intractable (coNP-complete) and discuss why this intractability result should not lead us to conclude that SMER constraints are not an appropriate mechanism for enforcing SSoD policies.
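The tractable direction, checking a concrete RBAC state against a SMER constraint, is a simple scan. The sketch below assumes an illustrative representation of the state as a user-to-roles dictionary and a t-out-of-m constraint meaning that no user may hold t or more of the listed roles.

```python
def satisfies_smer(user_roles: dict[str, set[str]],
                   exclusive_roles: set[str], t: int) -> bool:
    """A t-out-of-m SMER constraint: no single user is assigned t or more
    of the mutually exclusive roles.  Checking a given RBAC state against
    it takes one linear pass over the user-role assignment."""
    return all(len(roles & exclusive_roles) < t
               for roles in user_roles.values())

# Example: issuing and approving a check must require two different users.
state = {"alice": {"clerk", "issue_check"},
         "bob":   {"approve_check"},
         "carol": {"auditor"}}
print(satisfies_smer(state, {"issue_check", "approve_check"}, t=2))  # True
```

By contrast, deciding whether a given set of such constraints actually enforces an SSoD policy requires reasoning over all ways of combining roles, which is where the coNP-completeness result in the paper arises.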
When customers need to each be given portable access rights to a subset of documents from a large universe of n available documents, it is often the case that the space available for representing each customer’s access rights is limited to much less than n, say to no more than m bits. This is the case when, e.g., limited-capacity inexpensive cards are used to store the access rights to huge multimedia document databases. How does one represent subsets of a huge set of n elements when only m bits are available and m is much smaller than n? We use an approach reminiscent of Bloom filters, by assigning to each document a subset of the m bits: if that document is in a customer’s subset, then we set the corresponding bits to 1 on the customer’s card. This guarantees that each customer gets the documents he paid for, but it also gives him access to documents he did not pay for (“false positives”). We want to do so in a manner that minimizes the expected total number of false positives under various deterministic and probabilistic models: in the former model we assume k customers whose respective subsets are known a priori, whereas in the latter we assume (more realistically) that each document has a probability of being included in a customer’s subset. We cannot use randomly assigned bits for each document (in the way Bloom filters do); rather, we need to consider the a priori knowledge (deterministic or probabilistic) we are given in each model in order to better assign a subset of the m available bits to each of the n documents. We analyze and give efficient schemes for this problem.
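For intuition, the following toy sketch (not a scheme from the paper) shows how, once each document has been assigned a subset of the m bits, one can enumerate the false positives that a particular customer's card implies; minimizing the total of such counts over the k known customers is the deterministic version of the problem.

```python
def false_positives(assignment: dict[int, frozenset[int]],
                    subset_s: set[int]) -> set[int]:
    """Documents outside S whose assigned bit positions are all covered by
    the bits set for S, i.e., documents the card accidentally unlocks."""
    card_bits = set().union(*(assignment[d] for d in subset_s)) if subset_s else set()
    return {d for d, bits in assignment.items()
            if d not in subset_s and bits <= card_bits}

# Toy universe: n = 4 documents squeezed into m = 3 bit positions.
assignment = {0: frozenset({0, 1}), 1: frozenset({1, 2}),
              2: frozenset({0, 2}), 3: frozenset({0, 1, 2})}
print(false_positives(assignment, {0, 1}))   # {2, 3}: the card's bits {0, 1, 2} cover both
```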
The current study was exploratory and represents a first attempt at a standardized method for digital forensics event reconstruction based on statistical significance at a given error rate (α = .01). The study used four scenarios to test the ability to determine whether contraband images located on a system running Windows XP were intentionally downloaded or downloaded without the user’s knowledge.
The honeypot has emerged as an effective tool for providing insights into new attacks and current exploitation trends. Though effective, a single honeypot or multiple independently operated honeypots only provide a limited, local view of network attacks. Deploying and managing a large number of coordinating honeypots in different network domains will not only provide a broader and more diverse view, but also create the potential for global network status inference, early network anomaly detection, and large-scale attack correlation. However, coordinated honeypot deployment and operation require close and consistent collaboration across participating network domains, in order to mitigate the potential security risks associated with each honeypot and the non-uniform level of security expertise in different network domains. It is challenging, yet desirable, to provide the two conflicting features of decentralized presence and uniform management in honeypot deployment and operation.
To address these challenges, this paper presents Collapsar, a virtual-machine-based architecture for network attack detention. A Collapsar center hosts and manages a large number of high-interaction virtual honeypots in a local dedicated network. These honeypots appear, to potential intruders, as typical systems in their respective production networks. The decentralized logical presence of the honeypots provides a wide and diverse view of network attacks, while the centralized operation enables dedicated administration and convenient event correlation, eliminating the need for honeypot experts in each production network domain. We present the design, implementation, and evaluation of a Collapsar testbed. Our experiments with several real-world attack incidents demonstrate the effectiveness and practicality of Collapsar.
The Application Service Hosting Platform (ASHP) has recently received tremendous attention from both industry and academia. An ASHP provides a shared high-performance infrastructure to host different Application Services (AS) outsourced by Application Service Providers (ASP). In this paper, we focus on the protection of ASHPs, which have inherent requirements of sharing, openness, and mutual isolation. Unlike a dedicated server platform, which is analogous to a private house, an ASHP is like an apartment building, involving the ‘host’ (the ASHP infrastructure) and the ‘tenants’ (the AS). Strong protection and isolation must be provided between the host and the tenants, as well as between different tenants.
Unfortunately, traditional OS architectures and mechanisms are not adequate to provide strong ASHP protection. In this paper, we first make the case for a new OS architecture based on virtual OS technology. We then present three protection mechanisms we have developed in SODA, our ASHP architecture: (1) resource isolation between AS, (2) virtual switching and firewalling between AS, and (3) kernelized intrusion detection and logging for each AS. For (3), we have developed a system called Kernort inside the virtual OS kernel. Kernort detects network intrusions in real time and logs AS activities even when the AS has been compromised. Moreover, for the privacy of the AS, logs are encrypted by Kernort so that the ‘landlord’ (namely, the ASHP owner) cannot view them without authorization. We are applying SODA to iShare, an Internet-based distributed resource sharing platform.
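As a user-level illustration of mechanism (3)'s encrypted logging, here is a minimal sketch; Fernet from the third-party `cryptography` package merely stands in for whatever cipher Kernort uses inside the kernel, and the class and method names are invented for the example.

```python
from cryptography.fernet import Fernet  # illustrative stand-in for Kernort's cipher

class TenantLog:
    """Activity log whose entries are readable only by the tenant (AS)
    holding the key, not by the hosting platform owner."""

    def __init__(self, tenant_key: bytes):
        self._cipher = Fernet(tenant_key)
        self._records: list[bytes] = []

    def append(self, event: str) -> None:
        # The platform stores only ciphertext.
        self._records.append(self._cipher.encrypt(event.encode()))

    def read_all(self, tenant_key: bytes) -> list[str]:
        cipher = Fernet(tenant_key)
        return [cipher.decrypt(r).decode() for r in self._records]

key = Fernet.generate_key()        # held by the tenant, not the 'landlord'
log = TenantLog(key)
log.append("inbound connection from 10.0.0.7 flagged by the kernelized IDS")
print(log.read_all(key))
```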
This paper presents the design of a new middleware that provides trust and accountability to distributed data sharing communities. The main application for the project is within the context of scientific collaborations where many researchers share directly collected data, allowing them to create new data sets by performing transformations on existing shared data sets. In data sharing communities one cannot always trust the data obtained from others in the community, yet the field of data provenance does not consider malicious or untrustworthy users. By adding accountability to the provenance of each data set, this middleware ensures data integrity insofar as any errors can be identified and corrected. The user is further protected from faulty data by a trust view created from past experiences and second-hand recommendations. A trust view is based on real-world social interactions and reflects each user’s own experiences within the community. By identifying providers of faulty data and removing them from a trust view, the integrity of all data is increased.
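A toy illustration of how a trust view might weigh first-hand experience against second-hand recommendations follows; the 0.7/0.3 weighting, the threshold, and the function name are illustrative assumptions, not the middleware's actual trust metric.

```python
def trust_score(own_good: int, own_bad: int,
                recommendations: list[float], own_weight: float = 0.7) -> float:
    """Blend a user's own good/bad experiences with recommendations in [0, 1].
    Providers with no history default to a neutral 0.5."""
    own = own_good / (own_good + own_bad) if (own_good + own_bad) else 0.5
    rec = sum(recommendations) / len(recommendations) if recommendations else 0.5
    return own_weight * own + (1 - own_weight) * rec

# A provider repeatedly caught supplying faulty data falls below a threshold
# and is dropped from the local trust view.
print(trust_score(own_good=2, own_bad=8, recommendations=[0.3, 0.2]) < 0.5)  # True
```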
Steganography is an ancient art. With the advent of computers, we have vast accessible bodies of data in which to hide information, and increasingly sophisticated techniques with which to analyze and recover that information. While much of the recent research in steganography has been centered on hiding data in images, many of the solutions that work for images become more complicated when natural language text is used as a cover medium. Many approaches to steganalysis attempt to detect statistical anomalies in cover data that predict the presence of hidden information. Natural language cover texts must not only pass the statistical muster of automatic analysis, but also the scrutiny of human readers. Linguistically naïve approaches are therefore unlikely to go undetected.
Protection and secure exchange of Web documents are becoming a crucial need for many Internet-based applications. Securing Web documents entails addressing two main issues: confidentiality and integrity. Ensuring document confidentiality means that document contents can only be disclosed to subjects authorized according to the specified security policies, whereas by document integrity we mean that the document contents are correct with respect to a given application domain and are modified only by authorized subjects. Whereas the problem of document confidentiality has been widely investigated in the literature, the problem of how to ensure that a document, when moving among different parties, is modified only according to the stated policies still lacks comprehensive solutions. In this paper we present a solution to this problem by proposing a model for specifying update policies, and an infrastructure supporting the specification and enforcement of these policies in a distributed and cooperative environment, in which subjects in different organizational roles can modify possibly different portions of the same document. The key aspect of our proposal is that, by using a combination of hash functions and digital signature techniques, we create a distributed environment that enables subjects, in most cases, to verify, upon receiving a document, whether the update operations performed on the document up to that point are correct with respect to the update policies, without interacting with the document server. Our approach is particularly suited for environments such as mobile systems, pervasive systems, decentralized workflows, and peer-to-peer systems.
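The following is a minimal sketch of the flavor of such offline verification, not the paper's protocol: each updater signs a hash that chains its new content to the previous state, and a recipient replays the chain against the allowed roles. Ed25519 from the `cryptography` package, the record layout, and the binding of keys to roles are all illustrative assumptions; in practice that binding would come from credentials or certificates.

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

ALLOWED_ROLES = {"author", "editor"}      # roles the update policy permits

def sign_update(prev_hash: bytes, content: bytes, role: str, key: Ed25519PrivateKey):
    """An updater chains its new content to the previous state and signs it."""
    digest = hashlib.sha256(prev_hash + content).digest()
    return {"role": role, "content": content, "hash": digest,
            "sig": key.sign(digest), "pub": key.public_key()}

def verify_chain(records, initial_hash: bytes) -> bool:
    """A recipient checks, without contacting the server, that every update
    was made by a permitted role and that no intermediate state was altered."""
    prev = initial_hash
    for r in records:
        if r["role"] not in ALLOWED_ROLES:
            return False
        if r["hash"] != hashlib.sha256(prev + r["content"]).digest():
            return False
        try:
            r["pub"].verify(r["sig"], r["hash"])
        except InvalidSignature:
            return False
        prev = r["hash"]
    return True

# Two subjects in different roles update the document in turn.
k1, k2 = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()
h0 = hashlib.sha256(b"original document").digest()
chain = [sign_update(h0, b"revision by author", "author", k1)]
chain.append(sign_update(chain[0]["hash"], b"revision by editor", "editor", k2))
print(verify_chain(chain, h0))   # True
```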