Access control policies in the healthcare domain define permissions for users to access different medical records. Role-Based Access Control (RBAC) restricts medical records to users in a given role, but sensitive information in medical records can still be compromised by authorized insiders. The threat comes from users who are not treating the patient yet have access to the records. We propose a selective combination of policies in which sensitive records are available only to the primary doctor under Discretionary Access Control (DAC). This not only improves compliance with the principle of least privilege but also mitigates the threat of authorized insiders disclosing sensitive patient information. We use the Policy Machine (PM) proposed by NIST to combine policies and develop a flexible healthcare access control policy that offers both context awareness and discretionary access. Temporal constraints have been added to RBAC in the PM, and after combining Generalized Temporal RBAC and DAC, an example healthcare scenario has been set up.
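As a minimal illustration (not the Policy Machine's actual API; all names and data are hypothetical), the combined check can be sketched in Python: a request succeeds only if the RBAC policy grants the role access to the record's category and, for sensitive records, the discretionary list maintained for the record also names the user.

```python
# Sketch of the combined RBAC + DAC decision, under assumed toy data.
ROLE_PERMISSIONS = {              # RBAC: role -> readable record categories
    "doctor": {"general", "sensitive"},
    "nurse": {"general"},
}

DAC_GRANTS = {                    # DAC: sensitive record id -> allowed users
    "rec42": {"dr_smith"},        # only the primary doctor
}

def can_read(user, role, record_id, category):
    if category not in ROLE_PERMISSIONS.get(role, set()):
        return False              # RBAC denies outright
    if category == "sensitive":
        # DAC must ALSO allow: an authorized insider in the "doctor"
        # role who is not the primary doctor is still blocked.
        return user in DAC_GRANTS.get(record_id, set())
    return True

print(can_read("dr_jones", "doctor", "rec42", "sensitive"))  # False
print(can_read("dr_smith", "doctor", "rec42", "sensitive"))  # True
```

The design choice mirrors the abstract: RBAC alone would grant every doctor access to `rec42`, so the discretionary layer is what narrows sensitive records to the treating physician.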
Modern computer systems permit mobile users to access protected information from remote locations. In certain secure environments, it is desirable to restrict this access to a particular computer or set of computers. Existing machine-level authentication solutions are undesirable for two reasons. First, they do not allow fine-grained application-layer access decisions. Second, they are vulnerable to insider attacks in which a trusted administrator acts maliciously. In this work, we describe a novel approach using secure hardware that solves these problems. In our design, multiple administrators are required to install a system. After installation, the authentication privileges are physically linked to that machine, and no administrator can bypass these controls. We define an administrative model and detail the requirements for an authentication protocol to be compatible with our methodology. Our design presents some challenges for large-scale systems, in addition to the benefit of reduced maintenance.
Content-Based Publish-Subscribe (CBPS) is an asynchronous messaging paradigm that supports a highly dynamic, many-to-many communication pattern based on the content of the messages themselves. In general, a CBPS system has three distinct parties - \textit{Content Publishers}, \textit{Content Brokers}, and \textit{Subscribers} - working in a highly decoupled fashion. The ability to seamlessly scale on demand has made CBPS systems the choice for distributing \textit{messages/documents} produced by \textit{Content Publishers} to many \textit{Subscribers} through \textit{Content Brokers}. Most current systems assume that \textit{Content Brokers} are trusted for the confidentiality of the data published by \textit{Content Publishers} and for the privacy of the subscriptions made by \textit{Subscribers}, which specify their interests. However, with the increased use of technologies such as service-oriented architectures and cloud computing, which essentially outsource the broker functionality to third-party providers, one can no longer assume this trust relationship to hold. The problem of providing privacy/confidentiality in CBPS systems is challenging, since a solution should allow \textit{Content Brokers} to make routing decisions based on the content without revealing the content to them. The problem may appear unsolvable since it involves conflicting goals, but in this paper, we propose a novel approach that uses cryptographic techniques to preserve the privacy of the subscriptions made by \textit{Subscribers} and the confidentiality of the data published by \textit{Content Publishers} when third-party \textit{Content Brokers} make routing decisions based on the content. We analyze the security of our approach to show that it is indeed sound, and provide experimental results to show that it is practical.
In data publishing, anonymization techniques such as generalization and bucketization have been designed to provide privacy protection. At the same time, they reduce the utility of the data, so it is important to consider the tradeoff between privacy and utility. In a paper that appeared in KDD 2008, Brickell and Shmatikov proposed an evaluation methodology that compares the privacy gain with the utility loss resulting from anonymizing the data, and concluded that “even modest privacy gains require almost complete destruction of the data-mining utility”. This conclusion seems to undermine existing work on data anonymization. In this paper, we analyze the fundamental characteristics of privacy and utility, and show that it is inappropriate to directly compare privacy with utility. We then observe that the privacy-utility tradeoff in data publishing is similar to the risk-return tradeoff in financial investment, and propose an integrated framework for considering the privacy-utility tradeoff, borrowing concepts from Modern Portfolio Theory. Finally, we evaluate our methodology on the Adult dataset from the UCI machine learning repository. Our results clarify several common misconceptions about data utility and provide data publishers with useful guidelines on choosing the right tradeoff between privacy and utility.
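To make the portfolio analogy concrete, here is a hypothetical sketch (all numbers invented; this is not the paper's actual framework): each anonymization scheme plays the role of an asset, its utility across several data-mining workloads gives an expected "return", and the variance across workloads gives a "risk"; a mean-variance score then ranks schemes, as Modern Portfolio Theory ranks investments.

```python
# Portfolio-style comparison of anonymization schemes (toy numbers).
from statistics import mean, pvariance

candidates = {
    # scheme name -> utility achieved on each workload (hypothetical)
    "k=5 generalization": [0.80, 0.75, 0.82],
    "bucketization":      [0.90, 0.40, 0.95],
}

def score(utilities, risk_aversion=1.0):
    # Mean utility penalized by variance, like a mean-variance objective.
    return mean(utilities) - risk_aversion * pvariance(utilities)

best = max(candidates, key=lambda name: score(candidates[name]))
print(best)  # the steadier scheme wins despite a lower peak utility
```

With these made-up numbers, bucketization has a higher peak utility but much higher variance, so the risk-penalized score prefers the generalization scheme; raising `risk_aversion` strengthens that preference.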
Recent work has shown the importance of considering the adversary’s background knowledge when reasoning about privacy in data publishing. However, it is very difficult for the data publisher to know exactly the adversary’s background knowledge. Existing work cannot satisfactorily model background knowledge and reason about privacy in the presence of such knowledge. This paper presents a general framework for modeling the adversary’s background knowledge using kernel estimation methods. This framework subsumes different types of knowledge (e.g., negative association rules) that can be mined from the data. Under this framework, we reason about privacy using Bayesian inference techniques and propose the skyline (B, t)-privacy model, which allows the data publisher to enforce privacy requirements to protect the data against adversaries with different levels of background knowledge. Through an extensive set of experiments, we show the effects of probabilistic background knowledge in data anonymization and the effectiveness of our approach in both privacy protection and utility preservation.
Hierarchical data models (e.g., XML, Oslo) are an ideal data exchange format to facilitate the ever increasing data sharing needs among enterprises, organizations, and general users. However, building efficient and scalable Event Driven Systems (EDS) for selectively disseminating such data remains largely an unsolved problem to date. In general, an EDS has three distinct parties - Content Publishers (pubs), Content Brokers (bs), and Subscribers (subs) - working in a highly decoupled Publish-Subscribe (PS) model. With a large subscriber base having diverse interests and many documents (docs), the deficiency of existing systems lies in the techniques used to distribute (match/filter and forward) content from pubs to subs through bs. Thus, we propose an efficient and scalable approach to selectively distribute different subtrees of possibly large documents, which carry access control restrictions, to different users $U_i \in$ subs by exploiting the hierarchical structure of those documents. A novelty of our approach is that we map subscription routing tables in bs to efficient tree data structures in order to perform matching and other commonly used operations efficiently. The bs form a DAG consisting of multiple trees from pubs to subs. Along with our simple but adequate subscription language, our approach combines policy-driven covering- and merging-based routing to dramatically reduce the load towards the root of the distribution trees, leading to a scalable system. The experimental results clearly reinforce our claims.
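The covering idea can be sketched with a deliberately simplified subscription language (equality predicates only; the paper's actual language and routing structures are richer): subscription $s_1$ covers $s_2$ if every document matching $s_2$ also matches $s_1$, so a broker need only forward the more general, covering subscriptions upstream.

```python
# Covering-based pruning of a broker's routing table (assumed
# equality-predicate subscription language, not the paper's own).

def covers(s1, s2):
    """s1, s2: dicts of attribute -> required value.
    s1 covers s2 iff s1's constraints are a subset of s2's."""
    return all(s2.get(attr) == val for attr, val in s1.items())

def prune(subs):
    """Keep only subscriptions not strictly covered by another one;
    the dropped ones never need to travel further up the tree."""
    return [s for s in subs
            if not any(covers(t, s) and not covers(s, t) for t in subs)]

table = [{"topic": "health"},
         {"topic": "health", "region": "EU"}]   # covered by the first
print(prune(table))  # [{'topic': 'health'}]
```

Pruning the covered, more specific subscription reduces the routing state and traffic toward the root of the distribution tree, which is exactly the load reduction the abstract claims.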
We propose a novel scheme for selective distribution of content, encoded as documents, that preserves the privacy of the users to whom the documents are delivered and is based on an efficient and novel group key management scheme.
Our document broadcasting approach is based on access control policies specifying which users can access which documents or subdocuments. Based on such policies, a broadcast document is segmented into multiple subdocuments, each encrypted with a different key. In line with modern attribute-based access control, policies are specified against identity attributes of users. However, our broadcasting approach is privacy-preserving in that users are granted access to a specific document, or subdocument, according to the policies without having to provide their identity attributes in clear to the document publisher. Under our approach, not only does the document publisher not learn the values of the identity attributes of users, but it also does not learn which policy conditions are satisfied by which users; inferences about the values of identity attributes are thus prevented. Moreover, the key management scheme on which the proposed broadcasting approach is based is efficient in that it does not require sending the decryption keys to the users along with the encrypted document. Users are able to reconstruct the keys needed to decrypt the authorized portions of a document based on subscription information they have received from the document publisher. The scheme also efficiently handles the subscription of new users and the revocation of existing subscriptions.
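A toy sketch of the delivery shape (this is not the paper's key management scheme, and the hash-based XOR keystream below is for illustration only, not secure encryption): each subdocument is encrypted under its own key, and an authorized user re-derives that key locally from a per-user subscription secret plus public per-subdocument information, so no key ever travels with the broadcast.

```python
# Illustration: per-subdocument keys reconstructed from subscription
# info instead of being shipped with the document. All derivations here
# are stand-ins (plain SHA-256), not the scheme in the paper.
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR with a SHA-256 counter keystream."""
    stream, counter = bytearray(), 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

def subdoc_key(user_secret: bytes, public_token: bytes) -> bytes:
    # Assumed derivation: the real scheme builds this from subscription
    # information; a hash stands in for it here.
    return hashlib.sha256(user_secret + public_token).digest()

secret = b"user-subscription-secret"   # held by the authorized user
token = b"subdoc-1"                    # public, broadcast with the document
cipher = keystream_xor(subdoc_key(secret, token), b"sensitive section")
plain = keystream_xor(subdoc_key(secret, token), cipher)  # user decrypts
```

The point of the sketch is the data flow: the publisher broadcasts only ciphertext plus public tokens, and possession of the subscription secret is what makes the authorized portions decryptable.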
Client honeypots are typically implemented using some form of virtualization to contain malware encountered by the client machine. However, current virtual environments can be detected in multiple ways by malware, whether the malware executes from within the browser or first escapes from it. In many cases, detection is accomplished by a simple test, and the malware can then modify its behavior based on the result. Thus, an implementation of client honeypots that does not depend on virtualization is needed to fully study malware.
In recent years, the field of uncertainty management in databases has received considerable interest due to the presence of numerous applications that handle probabilistic data. In this dissertation, we identify and solve important issues for managing uncertain data natively at the database level. We propose the semantics of the join operation in the presence of attribute uncertainty and present various pruning techniques to significantly improve join performance. Two index structures for indexing categorical uncertain data are also presented. For the optimization of probabilistic queries, we discuss novel selectivity estimation techniques. We also introduce a new model for handling arbitrary pdf (both discrete and continuous) attributes natively at the database level. This model is consistent with Possible Worlds Semantics and is closed under the fundamental relational operations of selection, projection, and join. We also present and discuss the implementation of Orion, a relational database with native support for uncertain data. Orion is developed as an extension of the open-source relational database PostgreSQL. The experiments performed in Orion show the effectiveness and efficiency of our approach.
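The flavor of a probabilistic join with pruning can be sketched as follows (a simplified tuple-level model with an assumed independence between relations; the dissertation's attribute-uncertainty semantics is more general): a joined tuple exists exactly when both inputs do, so its probability is the product, and low-probability pairs can be discarded before the expensive work.

```python
# Sketch of a probabilistic equi-join with threshold pruning
# (tuple-level probabilities, independence assumed for illustration).

def prob_join(R, S, key_r, key_s, threshold=0.0):
    """R, S: lists of (tuple_dict, probability). Returns joined tuples
    whose existence probability (product of inputs) meets the threshold."""
    out = []
    for r, pr in R:
        if pr < threshold:
            continue            # pruning: pr * ps can never reach threshold
        for s, ps in S:
            if r[key_r] == s[key_s] and pr * ps >= threshold:
                out.append(({**r, **s}, pr * ps))
    return out

R = [({"id": 1, "x": "a"}, 0.9)]
S = [({"id": 1, "y": "b"}, 0.5), ({"id": 2, "y": "c"}, 0.8)]
result = prob_join(R, S, "id", "id")   # one match, probability 0.45
```

The early `continue` is the essence of join pruning: since probabilities only shrink under multiplication, an input tuple below the threshold can be skipped without inspecting the other relation at all.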
Elliptic and hyperelliptic curve cryptography preserve a strong connection between mathematics and information security, and have played an increasingly important role over the past decade. We present some problems that relate low-genus curves to cryptography.
We first discuss a new application of elliptic curve cryptography (ECC) to a real-world problem of access control in secure broadcasting of data. The asymmetry, introduced by the elliptic curve discrete logarithm problem, is the key to achieving the required security feature that existing methods fail to obtain.
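A toy sketch of the asymmetry on a tiny curve (parameters chosen only to make the arithmetic visible; utterly insecure): computing $Q = kP$ by double-and-add takes logarithmically many group operations, while recovering $k$ from $(P, Q)$ has no comparable shortcut and here must fall back to search.

```python
# Toy elliptic curve y^2 = x^3 + 2x + 3 over F_97 (illustration only).
p, a = 97, 2

def add(P, Q):
    """Group law on the curve; None is the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (m * m - x1 - x2) % p
    return (x3, (m * (x1 - x3) - y1) % p)

def mul(k, P):
    """Fast scalar multiplication: O(log k) group operations."""
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P, k = add(P, P), k >> 1
    return R

P = (3, 6)                 # on the curve: 6^2 = 36 = 3^3 + 2*3 + 3 mod 97
Q = mul(20, P)             # easy direction
# Hard direction: brute-force the discrete log. Only feasible because
# the group is tiny; at cryptographic sizes this search is infeasible.
k = next(i for i in range(1, p + 1) if mul(i, P) == Q)
```

At real parameter sizes the forward direction stays cheap while the search space explodes, and that gap is the asymmetry the access control construction relies on.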
We then discuss the use of genus 2 curves in the ``real model'' in cryptography, and present explicit divisor doubling formulas for such curves. These formulas are particularly important for implementation purposes.
Finally, we present a new method for finding cryptographically strong parameters for the CM construction of genus 2 curves. This method uses the idea of polynomial parameterization, which allows suitable parameters to be generated in batches; we give a brief analysis of the algorithm. We also provide algorithms for generating parameters for genus 2 curves to be used in pairing-based cryptography. Our method is an adaptation of the Cocks-Pinch construction for pairing-friendly elliptic curves: starting from a prescribed embedding degree $k$ and a primitive quartic CM field $K$, it outputs a prime subgroup order $r$ of the Jacobian over a prime field $\mathbb{F}_p$, with $\rho = 2\log(p)/\log(r) \approx 8$.
Users increasingly use their mobile devices for electronic transactions and to store related information, such as digital receipts. However, such information can be the target of several attacks. Several security issues arise in m-commerce: the loss or theft of a mobile device results in the exposure of transaction information; transaction receipts sent over Wi-Fi or 3G networks can be easily intercepted; transaction receipts can also be captured via Bluetooth connections without the user’s consent; and mobile viruses, worms, and Trojan horses can access the transaction information stored on mobile devices if it is not protected by passwords or PINs. Therefore, assuring the privacy and security of transaction information, as well as of any sensitive information stored on mobile devices, is crucial. In this paper, we propose a privacy-preserving approach to manage electronic transaction receipts on mobile devices. The approach is based on the notion of transaction receipts issued by service providers upon a successful transaction, and combines Pedersen commitments and Zero-Knowledge Proof of Knowledge (ZKPK) techniques with Oblivious Commitment-Based Envelope (OCBE) protocols. We have developed a version of the protocol for Near Field Communication (NFC) enabled cellular phones.
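For readers unfamiliar with the building block, here is a toy Pedersen commitment over a small prime-order subgroup (parameters far too small for real use, chosen only so the arithmetic is visible; this is the textbook scheme, not the paper's full protocol): $C = g^m h^r \bmod p$ hides the committed receipt value $m$ behind the blinding factor $r$, yet binds the user to $m$.

```python
# Toy Pedersen commitment (insecure parameter sizes, illustration only).
p = 1019                        # prime with q = 509 dividing p - 1
q = 509                         # prime order of the subgroup
g = pow(2, (p - 1) // q, p)     # generator of the order-q subgroup
h = pow(3, (p - 1) // q, p)     # second generator; log_g(h) must be unknown

def commit(m, r):
    """C = g^m * h^r mod p: hiding (r blinds m) and binding (can't
    open C to a different m without knowing log_g(h))."""
    return (pow(g, m, p) * pow(h, r, p)) % p

def open_check(C, m, r):
    """Verifier recomputes the commitment from the revealed (m, r)."""
    return C == commit(m, r)

C = commit(42, 7)               # commit to receipt value 42, blinding 7
```

The scheme is also additively homomorphic, `commit(a, r1) * commit(b, r2) ≡ commit(a + b, r1 + r2)`, which is what lets ZKPK protocols prove statements about committed receipt values without revealing them.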