Ideas used in the design, development, and performance evaluation of concurrency control mechanisms are summarized, including locking, timestamp-based, and optimistic mechanisms. The validation ideas underlying the optimistic approach are presented in some detail. The degree of concurrency and the classes of serializability achieved by various algorithms are presented, and questions relating the arrival rate of transactions to the degree of concurrency and performance are briefly discussed. Finally, several useful ideas for increasing concurrency are summarized, including flexible transactions, adaptability, prewrites, multidimensional timestamps, and relaxation of two-phase locking.
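As a concrete illustration of the validation idea in the optimistic approach, the sketch below implements backward validation: a transaction passes validation only if no transaction that committed during its execution wrote an item it read. The class names and bookkeeping are illustrative, not drawn from any particular surveyed algorithm.

```python
# A minimal sketch of backward validation for optimistic concurrency
# control; all names are illustrative.

class Transaction:
    def __init__(self, tid):
        self.tid = tid
        self.read_set = set()
        self.write_set = set()
        self.start_tn = None          # validation counter when txn began

class Validator:
    def __init__(self):
        self.committed = []           # (tn, write_set) of committed txns
        self.next_tn = 0

    def begin(self, txn):
        txn.start_tn = self.next_tn

    def validate_and_commit(self, txn):
        # Fail if any transaction that committed after txn started
        # wrote an item that txn read (read-write conflict).
        for tn, wset in self.committed:
            if tn >= txn.start_tn and wset & txn.read_set:
                return False          # validation fails: restart txn
        self.committed.append((self.next_tn, set(txn.write_set)))
        self.next_tn += 1
        return True

v = Validator()
t = Transaction(1)
v.begin(t)
t.read_set.add("x"); t.write_set.add("y")
print(v.validate_and_commit(t))       # True: no conflicting committer
```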
We discuss various secure Web access schemes using dynamic and static approaches. In the static approach, the access environment, that is, the set of authorized users, the mode of access, their access rights, and so on, is predefined. This approach suits only a static setup in which user requirements do not change frequently. In the dynamic approach, on the other hand, the authorized user set is defined when Web page access requests arrive. A user requesting access is authenticated using information that the user provides. Once this information is verified, the user is given conditional access, timed access, or full access, in each case only to the information relevant to that user.
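A minimal sketch of the dynamic approach, assuming a simple user directory and role scheme (both hypothetical): the user is authenticated when the access request arrives, and a verified user receives conditional, timed, or full access.

```python
# An illustrative sketch of dynamic Web access control: authenticate
# on request arrival, then grant conditional, timed, or full access.
# The directory structure, roles, and session length are assumptions.

import time

def authenticate(credentials, directory):
    # Verify user-supplied information against a directory (assumed).
    user = directory.get(credentials.get("user"))
    return user is not None and user["secret"] == credentials.get("secret")

def grant_access(credentials, directory, page):
    if not authenticate(credentials, directory):
        return None                                          # access denied
    role = directory[credentials["user"]]["role"]
    if role == "guest":
        return {"mode": "conditional", "pages": [page]}      # restricted view
    if role == "member":
        return {"mode": "timed", "pages": [page],
                "expires": time.time() + 3600}               # 1-hour session
    return {"mode": "full", "pages": "all relevant"}         # full access

directory = {"alice": {"secret": "s3cret", "role": "member"}}
print(grant_access({"user": "alice", "secret": "s3cret"}, directory, "/report"))
```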
In this paper, we present a version of the linear hash structure algorithm that increases concurrency using a multi-level transaction model. We exploit the semantics of the linear hash operations at each level of transaction nesting to allow more concurrency. Each linear hash operation is implemented by a sequence of operations at a lower level of abstraction, and each leaf-level operation is a combination of search and read/write operations. We consider locks at both the vertex (page) and key (tuple) levels to further increase concurrency. Since undo-based recovery is not possible with multi-level transactions, we use compensation-based undo to achieve atomicity. We have implemented our model using object-oriented technology and the multithreading paradigm; linear hash operations such as find, insert, delete, split, and merge are implemented as methods and correspond to multi-level transactions.
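To make the leaf-level operations concrete, here is a minimal single-threaded linear hash sketch with find, insert, and split; locking, merge, and the multi-level transaction and compensation machinery described above are omitted, and the initial bucket count and split threshold are illustrative.

```python
# A minimal linear hash table: find/insert/split only, no locking or
# transaction nesting. Parameters n0 and threshold are illustrative.

class LinearHash:
    def __init__(self, n0=4, threshold=0.75):
        self.n0, self.level, self.next = n0, 0, 0
        self.buckets = [[] for _ in range(n0)]
        self.count, self.threshold = 0, threshold

    def _bucket(self, key):
        i = hash(key) % (self.n0 * (2 ** self.level))
        if i < self.next:                      # bucket already split this round
            i = hash(key) % (self.n0 * (2 ** (self.level + 1)))
        return i

    def find(self, key):
        return [v for k, v in self.buckets[self._bucket(key)] if k == key]

    def insert(self, key, value):
        self.buckets[self._bucket(key)].append((key, value))
        self.count += 1
        if self.count / len(self.buckets) > self.threshold:
            self._split()

    def _split(self):
        # Split the 'next' bucket: rehash its entries over the old and
        # new bucket images using the next-level hash function.
        old = self.buckets[self.next]
        self.buckets[self.next] = []
        self.buckets.append([])
        mod = self.n0 * (2 ** (self.level + 1))
        for k, v in old:
            self.buckets[hash(k) % mod].append((k, v))
        self.next += 1
        if self.next == self.n0 * (2 ** self.level):
            self.level, self.next = self.level + 1, 0

h = LinearHash()
for i in range(20):
    h.insert(i, str(i))
print(h.find(7))    # [('7')] -> ['7'] value found after any splits
```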
The Indiana Center for Database Systems (ICDS) at Purdue University has embarked on an ambitious endeavor to become a premier world-class database research center. This goal is substantiated by the diversity of its research topics, its large and diverse funding base, and its steady record of publication in top conferences and journals. ICDS was founded with an initial grant from the State of Indiana Corporation of Science and Technology in 1990 and has since grown to nine faculty members and about 30 researchers in total. This report describes the major research projects underway at ICDS as well as efforts to move research toward practice.
Data mining technology has given us new capabilities to identify correlations in large data sets. This introduces risks when the data is to be made public but the correlations are private. We introduce a method for selectively removing individual values from a database to prevent the discovery of a set of rules, while preserving the data for other applications. The efficacy and complexity of this method are discussed. We also present an experiment illustrating this methodology.
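The following toy sketch conveys the flavor of such sanitization, assuming a transaction database of item sets: values of the rule's consequent are blanked from supporting transactions until the sensitive itemset's support drops below the mining threshold. It is an illustration of the idea, not the paper's exact algorithm.

```python
# A toy sketch of hiding a sensitive rule by blanking values, assuming
# each transaction is a Python set of items. Illustrative only.

def support(db, itemset):
    return sum(1 for t in db if itemset <= t) / len(db)

def hide_rule(db, antecedent, consequent, min_support):
    # Blank the consequent's values from supporting transactions until
    # the sensitive itemset falls below the mining threshold.
    sensitive = antecedent | consequent
    for t in db:
        if support(db, sensitive) < min_support:
            break
        if sensitive <= t:
            t.difference_update(consequent)

db = [{"a", "b", "c"}, {"a", "b"}, {"a", "b", "c"}, {"c"}]
hide_rule(db, {"a"}, {"b"}, min_support=0.5)
print(support(db, {"a", "b"}))        # 0.25: rule a -> b no longer minable
```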
Secure multi-party computation enables parties with private data to collaboratively compute a global function of that data without revealing it. The increase in sensitive data on networked computers, along with the improved ability to integrate and utilize that data, makes the time ripe for practical secure multi-party computation. This paper surveys approaches to secure multi-party computation and gives a method whereby an efficient two-party protocol that uses an untrusted third party can be used to construct an efficient peer-to-peer secure multi-party protocol.
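The following toy example illustrates the untrusted-third-party idea for the simplest possible global function, an addition: the two parties mask their inputs with shared randomness, so the third party combines uniformly random values and learns nothing beyond the (public) result. The names and the modulus are assumptions for illustration, not the paper's construction.

```python
# A didactic sketch: two parties mask private inputs with a shared
# random value so an untrusted third party can add them without
# learning either input.

import secrets

M = 2 ** 64                       # public arithmetic modulus (assumed)

def third_party_add(masked_x, masked_y):
    # Sees only uniformly random values; learns nothing but the sum.
    return (masked_x + masked_y) % M

x, y = 1200, 345                  # private inputs of parties A and B
r = secrets.randbelow(M)          # randomness shared by A and B only
result = third_party_add((x + r) % M, (y - r) % M)
assert result == (x + y) % M
print(result)                     # 1545: the global function, nothing else
```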
Privacy-preserving mining of distributed data has numerous applications, and each application poses different constraints: what privacy means, what the desired results are, how the data is distributed, what the constraints on collaboration and cooperative computing are, and so on. We suggest that the solution is a toolkit of components that can be combined for specific privacy-preserving data mining applications. This paper presents some components of such a toolkit and shows how they can be used to solve several privacy-preserving data mining problems.
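One component often found in such toolkits is secure sum computed around a ring of sites; the sketch below shows the idea under the usual assumptions (semi-honest sites, no collusion), with illustrative names.

```python
# Secure sum around a ring of sites: the initiator adds a random
# offset, each site adds its local value to the running total, and the
# initiator removes the offset at the end. Each site sees only a
# masked partial sum.

import secrets

def secure_sum(local_values, modulus=2 ** 64):
    """Each element of local_values is private to one site."""
    offset = secrets.randbelow(modulus)       # known only to the initiator
    running = offset
    for v in local_values:                    # pass the total around the ring
        running = (running + v) % modulus     # site adds its private value
    return (running - offset) % modulus       # initiator unmasks the result

print(secure_sum([10, 20, 12]))               # 42, no site's value exposed
```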
The problem of sharing manufacturing, inventory, or capacity to improve performance arises in many decentralized operational contexts. However, solving such problems commonly requires an intermediary or broker to manage the information security concerns of individual participants. Our goal is to examine the use of cryptographic techniques to attain the same result without a broker. To illustrate this approach, we focus on a problem faced by independent trucking companies that have separate pick-up and delivery tasks and wish to identify potential efficiency-enhancing task swaps while limiting the information they must reveal to identify these swaps. We present an algorithm that finds opportunities to swap loads without revealing any information except the loads swapped, along with proofs of the security of the protocol. We also show that it is incentive compatible for each company to follow the protocol correctly and to provide its true data. We apply this algorithm to an empirical data set from a large transportation company and present results that suggest significant opportunities to improve efficiency through Pareto-improving swaps. This paper thus uses cryptographic arguments in an operations management context to show how an algorithm can be proven incentive compatible, and demonstrates the potential value of its use on an empirical data set.
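A greatly simplified sketch of the matching step, assuming the two companies share a session key and that a swap candidate is a pair of opposite lanes: keyed digests let each side detect matches without seeing unmatched lanes in the clear. The actual protocol and its security and incentive-compatibility proofs are more involved; note also that a shared key permits dictionary attacks on guessable lanes, a limitation of this simplification.

```python
# A simplified private-matching sketch for swap finding: companies
# exchange keyed digests of (pickup, delivery) lanes, so only matched
# lanes are revealed. Key, lanes, and city codes are illustrative.

import hmac, hashlib

def digests(lanes, key):
    return {hmac.new(key, f"{a}->{b}".encode(), hashlib.sha256).hexdigest(): (a, b)
            for a, b in lanes}

key = b"shared-session-key"        # agreed by the two companies (assumption)
mine = digests([("IND", "CHI"), ("CHI", "STL")], key)
theirs = digests([("CHI", "IND"), ("STL", "MEM")], key).keys()  # digests only

# A swap candidate: I haul X->Y while they haul Y->X.
for d, (a, b) in mine.items():
    reverse = hmac.new(key, f"{b}->{a}".encode(), hashlib.sha256).hexdigest()
    if reverse in theirs:
        print("swap opportunity on lane", a, "<->", b)
```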
We investigate the performance of the destination-sequenced distance vector (DSDV) and ad hoc on-demand distance vector (AODV) routing protocols for mobile ad hoc networks. Four performance metrics are measured by varying the maximum speed of mobile hosts, the number of connections, and the network size. The correlation between network topology change and mobility is investigated using linear regression analysis. The simulation results indicate that AODV outperforms DSDV in less stressful situations, while DSDV is more scalable with respect to network size. We observe that network congestion is the dominant cause of packet drops for both protocols. We therefore propose a new routing protocol, congestion-aware distance vector (CADV), to address the congestion issue. CADV outperforms AODV in delivery ratio by about 5% while introducing less protocol load. These results demonstrate that integrating congestion avoidance mechanisms with proactive routing protocols is a promising way to improve performance.
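In the spirit of CADV, the sketch below picks a next hop by a metric that mixes hop count with the advertised queue length at the next hop; the weighting parameter and advertisement format are hypothetical, not taken from the protocol specification.

```python
# An illustrative congestion-aware route selection rule: each route
# advertisement carries the advertiser's queue length, and the chosen
# route minimizes a metric mixing hop count and congestion. The
# weight alpha is a hypothetical tuning parameter.

def best_route(advertisements, alpha=0.5):
    """advertisements: list of dicts with 'next_hop', 'hops', 'queue_len'."""
    return min(advertisements,
               key=lambda ad: ad["hops"] + alpha * ad["queue_len"])

ads = [{"next_hop": "B", "hops": 2, "queue_len": 12},
       {"next_hop": "C", "hops": 3, "queue_len": 1}]
print(best_route(ads)["next_hop"])   # "C": longer path, far less congested
```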
Auction fraud constitutes the largest share of all Internet fraud. Cheating is a kind of fraud that leaves no direct evidence of its occurrence. We conduct theoretical studies as well as simulation experiments to determine the effect of cheating in three important auction types: the English auction, the first-price sealed-bid auction, and the second-price sealed-bid auction. Our cheating environment consists of shill bidding, bid shading, and false bidding in the English, first-price, and second-price auctions, respectively. In the experiments, ordinary bidders, bidders using the equilibrium bidding strategy, and cheaters compete with each other. Both theoretical and experimental results confirm that equilibrium bidding strategies indeed increase bidders' expected utility; we therefore conclude that adopting rational bidding strategies can combat cheating. Most auction sites appear to prefer the English auction to other mechanisms, yet there is little theoretical or experimental evidence to support this preference. We use the honest bidder's expected gain and the honest seller's revenue loss as a basis for comparing these three auction types. The analysis reveals the English auction to be the most preferred mechanism from both the honest buyer's and the honest seller's points of view. This result can serve as experimental evidence explaining the popularity of the English auction on the Internet.
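The claim that equilibrium strategies raise expected utility can be checked with a small Monte Carlo experiment. The sketch below, with bidder values uniform on [0, 1], compares the symmetric first-price equilibrium bid b(v) = (n - 1)/n * v against naive truthful bidding; the setup is illustrative and far simpler than the paper's experiments.

```python
# Monte Carlo comparison of bidding strategies in a first-price
# sealed-bid auction with values uniform on [0, 1]. Bidder 0 uses the
# given strategy against N-1 equilibrium bidders.

import random

N = 5                                     # number of bidders (illustrative)

def equilibrium_bid(v):                   # symmetric equilibrium, uniform values
    return (N - 1) / N * v

def truthful_bid(v):                      # naive strategy: bid one's value
    return v

def expected_utility(strategy, trials=100_000):
    total = 0.0
    for _ in range(trials):
        values = [random.random() for _ in range(N)]
        my_bid = strategy(values[0])
        rival_best = max(equilibrium_bid(v) for v in values[1:])
        if my_bid > rival_best:           # bidder 0 wins, pays own bid
            total += values[0] - my_bid
    return total / trials

print("equilibrium:", expected_utility(equilibrium_bid))  # positive surplus
print("truthful   :", expected_utility(truthful_bid))     # 0: no surplus
```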
Most proposed security protocols for wireless sensor networks (WSNs) are designed to provide a uniform level of security across the network. However, various multi-sensing applications exist, such as sensors in an airport runway control system that may also monitor environmental conditions like wind speed and direction. When these nodes communicate, they may require different levels of security. For example, during a hijack event, secure communication must be provided among nodes in a target region of the airport runway control system as they exchange highly critical data. In this paper, we propose a scheme called role-based access in sensor networks (RBASH) that provides role-based multilevel security in sensor networks. Each group is organized so that its nodes can take on different roles based on context and thus provide or receive different levels of access. RBASH provides the desired security level based on application need. The multilevel security rests on keys assigned to different nodes at different levels: we organize the network using a Hasse diagram, compute the key for each individual node, and extend this construction to group keys. Based on experimental observations, we conclude that RBASH is more energy and communication efficient in providing security than protocols that provide uniform security for all nodes.
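A common way to realize such multilevel keys is top-down key derivation along the partial order: a node's key is a one-way hash of its parent's key and its identifier, so higher roles can derive descendants' keys but not the reverse. The sketch below illustrates this idea with hypothetical identifiers; it is a simplification of the scheme described above.

```python
# Top-down key derivation over a hierarchy (Hasse diagram): a node's
# key is SHA-256(parent_key || node_id). Higher roles recompute keys
# of descendants; descendants cannot invert the hash to move upward.
# Identifiers and the root key are illustrative.

import hashlib

def derive_key(parent_key, node_id):
    return hashlib.sha256(parent_key + node_id.encode()).digest()

root_key = b"\x00" * 32                 # key of the top role (assumption)
region_key = derive_key(root_key, "runway-region-7")
node_key = derive_key(region_key, "sensor-42")

# The root or regional controller can recompute node_key on demand;
# sensor-42 cannot recover region_key or root_key from its own key.
print(node_key.hex())
```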
Recent work has shown that conventional operating system audit trails are insufficient to detect low-level network attacks. Because audit trails are typically based upon system calls or application sources, operations in the network protocol stack go unaudited. Earlier work has determined the audit data needed to detect low-level network attacks. We describe an implementation of an audit system which collects this data and analyze the issues that guided the implementation. Finally, we report the performance impact on the system and the rate of audit data accumulation in a test network.
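To give a feel for the kind of data such an audit system records, the sketch below parses the source and destination addresses and ports of a raw IPv4/TCP packet into an audit record; the field offsets follow RFC 791/793, and the sample packet bytes are fabricated for illustration.

```python
# A tiny sketch of a protocol-stack audit record: extract addresses
# and ports from raw IPv4/TCP header bytes. Sample bytes fabricated.

import struct, socket

def audit_record(packet):
    ihl = (packet[0] & 0x0F) * 4                       # IPv4 header length
    src, dst = packet[12:16], packet[16:20]
    sport, dport = struct.unpack("!HH", packet[ihl:ihl + 4])
    return {"src": socket.inet_ntoa(src), "dst": socket.inet_ntoa(dst),
            "sport": sport, "dport": dport}

# 20-byte IPv4 header (protocol 6) followed by the first 4 TCP bytes.
sample = (b"\x45\x00\x00\x28\x00\x01\x00\x00\x40\x06\x00\x00"
          b"\xc0\xa8\x01\x02\xc0\xa8\x01\x03" + struct.pack("!HH", 1234, 80))
print(audit_record(sample))
# {'src': '192.168.1.2', 'dst': '192.168.1.3', 'sport': 1234, 'dport': 80}
```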
When collecting requirements for software, designers may learn of needs for specific forms of protection. These needs may be translated into requirements for encryption or authentication, but what about the non-obvious aspects of security, including privacy, auditability, and assurance, that are usually overlooked in the requirements capture process? When we overlook these issues, we get software that doesn't deserve our trust. In this paper, I discuss some aspects of security that designers regularly overlook and suggest some standard questions that should be addressed in every design.