In this paper, we investigate the use of multiple directional antennas on sensor motes for location determination and mobile node monitoring. One key aspect that distinguishes wireless sensor networks is inexpensive transmitters and receivers that still maintain acceptable connectivity. Therefore, complex RF solutions are often not applicable. We propose and demonstrate a location estimation algorithm on a single sensor node equipped with inexpensive directional antennas by measuring the received signal strength of the transmission peers. This algorithm is further applied to the dynamic tracking of a wandering mote. The location tracking error can be reduced from 30% to 16% by using moving average schemes and merging estimates from different sets of antennas. The mean error of the tracking estimates can be computed to quantify the certainty of location tracking. Therefore, only a single mote with multiple angularly diverse antennas is needed to determine the location of another mote, without triangulation.
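To make the smoothing and merging step concrete, here is a minimal Python sketch, assuming per-antenna-subset (x, y) fixes as input; the class name TrackSmoother and the simple averaging used to merge subsets are illustrative assumptions, not the paper's algorithm.

```python
# Hypothetical sketch: smoothing location estimates with a moving average and
# merging fixes produced by different antenna subsets. Not the paper's method.
from collections import deque
from statistics import mean

class TrackSmoother:
    def __init__(self, window=5):
        self.xs = deque(maxlen=window)
        self.ys = deque(maxlen=window)

    def update(self, estimates):
        """estimates: list of (x, y) fixes, one per antenna subset."""
        # Merge per-subset estimates by simple averaging (assumption).
        x = mean(e[0] for e in estimates)
        y = mean(e[1] for e in estimates)
        self.xs.append(x)
        self.ys.append(y)
        # Moving average over the last `window` merged fixes.
        return mean(self.xs), mean(self.ys)
```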
As organizations increase their reliance on, possibly distributed, information systems for daily business, they become more vulnerable to security breaches even as they gain productivity and efficiency advantages. Though a number of techniques, such as encryption and electronic signatures, are currently available to protect data when transmitted across sites, a truly comprehensive approach for data protection must also include mechanisms for enforcing access control policies based on data contents, subject qualifications and characteristics, and other relevant contextual information, such as time. It is well understood today that the semantics of data must be taken into account in order to specify effective access control policies. Also, techniques for data integrity and availability specifically tailored to database systems must be adopted. In this respect, over the years the database security community has developed a number of different techniques and approaches to assure data confidentiality, integrity, and availability. However, despite such advances, the database security area faces several new challenges, driven by factors such as the evolution of security concerns.
The emerging Web service technology has enabled the development of Internet-based applications that integrate distributed and heterogeneous systems and processes owned by different organizations. However, while Web services are rapidly becoming a fundamental paradigm for the development of complex Web applications, several security issues still need to be addressed. Among the various open issues concerning security, an important one is the development of suitable access control models, able to restrict access to Web services to authorized users. In this paper we present an innovative access control model for Web services. The model is characterized by a number of key features, including identity attributes and service negotiation capabilities. We formally define the protocol for carrying out negotiations, by specifying the types of messages to be exchanged and their contents, based on which requestor and provider can reach an agreement about security requirements and services. We also discuss the architecture of the prototype we are currently implementing. As part of the architecture we propose a mechanism for mapping our policies onto the WS-Policy standard, which provides a standardized grammar for expressing Web services policies.
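As a purely illustrative aid, the sketch below shows one way the negotiation messages could be shaped as data types; the class and field names (NegotiationOffer, required_credentials, and so on) are assumptions and do not reproduce the paper's protocol or the WS-Policy grammar.

```python
# Hypothetical message shapes for a requestor/provider security negotiation.
from dataclasses import dataclass, field

@dataclass
class NegotiationOffer:
    requestor_id: str
    requested_service: str
    identity_attributes: dict      # e.g. {"role": "buyer", "age_over": 18}
    acceptable_assurances: list    # security requirements the requestor accepts

@dataclass
class NegotiationCounterOffer:
    provider_id: str
    required_credentials: list     # credentials the provider still needs
    offered_guarantees: list       # e.g. ["confidentiality", "audit-logging"]

@dataclass
class NegotiationAgreement:
    service_endpoint: str
    agreed_requirements: list = field(default_factory=list)
```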
The Generalized Temporal Role-Based Access Control (GTRBAC) model provides a comprehensive set of temporal constraint expressions which can facilitate the specification of fine-grained time-based access control policies. However, the expressiveness and usability of this model have not been previously investigated. In this paper, we present an analysis of the expressiveness of the constructs provided by this model and illustrate that its constraint set is not minimal. We show that there is a subset of GTRBAC constraints that is sufficient to express all the access constraints that can be expressed using the full set. We also illustrate that a nonminimal GTRBAC constraint set can provide better flexibility and lower complexity of constraint representation. Based on our analysis, a set of design guidelines for the development of GTRBAC-based security administration is presented.
Recently, a new class of data mining methods, known as privacy preserving data mining (PPDM) algorithms, has been developed by the research community working on security and knowledge discovery. The aim of these algorithms is the extraction of relevant knowledge from large amounts of data, while protecting at the same time sensitive information. Several data mining techniques, incorporating privacy protection mechanisms, have been developed that allow one to hide sensitive itemsets or patterns before the data mining process is executed. Privacy preserving classification methods, instead, prevent a miner from building a classifier able to predict sensitive data. Additionally, privacy preserving clustering techniques have been recently proposed, which distort sensitive numerical attributes while preserving general features for clustering analysis. A crucial issue is to determine which of these privacy-preserving techniques better protect sensitive information. However, this is not the only criterion with respect to which these algorithms can be evaluated. It is also important to assess the quality of the data resulting from the modifications applied by each algorithm, as well as the performance of the algorithms. There is thus a need to identify a comprehensive set of criteria with respect to which to assess the existing PPDM algorithms and determine which algorithm meets specific requirements. In this paper, we present a first evaluation framework for estimating and comparing different kinds of PPDM algorithms. Then, we apply our criteria to a specific set of algorithms and discuss the evaluation results we obtain. Finally, some considerations about future work and promising directions in the context of privacy preservation in data mining are discussed.
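For illustration, the following sketch computes two widely used evaluation measures for itemset-hiding algorithms, hiding failure and the fraction of lost legitimate itemsets; these are generic metrics and are not claimed to match the framework's exact criteria.

```python
# Hedged sketch of two common PPDM evaluation measures for itemset hiding.
def hiding_failure(sensitive, mined_after):
    """Fraction of sensitive itemsets still minable from the sanitized data."""
    sensitive = set(sensitive)
    return len(sensitive & set(mined_after)) / len(sensitive) if sensitive else 0.0

def lost_itemsets(mined_before, mined_after, sensitive):
    """Fraction of legitimate (non-sensitive) itemsets lost by sanitization,
    a simple proxy for data quality degradation."""
    legit = set(mined_before) - set(sensitive)
    return len(legit - set(mined_after)) / len(legit) if legit else 0.0
```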
In this paper we discuss issues concerning the development of interactive virtual reality (VR) environments. We argue that integrating such environments with database technology has the potential of providing, on the one hand, much flexibility and, on the other hand, enhanced interfaces for accessing contents from digital archives. The paper also describes a project dealing with the dissemination of cultural heritage contents. Within the project an integrated framework has been developed that enhances conventional VR environments with database interactions.
Trust negotiation between two subjects requires each one to prove its properties to the other. Each subject specifies disclosure policies stating the types of credentials and attributes the counterpart has to provide to obtain a given resource. The counterpart, in response, provides a disclosure set containing the necessary credentials and attributes. If the counterpart wants to remain anonymous, its disclosure sets should not contain identity-revealing information. In this paper, we propose anonymization techniques with which a subject can transform its disclosure set into an anonymous one. Anonymization transforms a disclosure set into an alternative anonymous one whose information content is different from the original one. This alternative disclosure set may no longer satisfy the original disclosure policy, causing the trust negotiation to fail. To address this problem, we propose that trust negotiation requirements be expressed at a more abstract level using property-based policies. Property-based policies state the high-level properties that a counterpart has to provide to obtain a resource. A property-based policy can be implemented by a number of disclosure policies. Although these disclosure policies implement the same high-level property-based policy, they require different sets of credentials. Allowing the subject to satisfy any policy from the set of disclosure policies increases not only the chances of a trust negotiation succeeding but also the probability of ensuring anonymity.
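The following minimal sketch, under assumed data structures (credential-name sets), illustrates the key idea: since several disclosure policies implement the same property-based policy, the subject can pick one it satisfies without identity-revealing credentials.

```python
# Illustrative only: choosing an anonymity-preserving disclosure set among the
# alternative disclosure policies that implement one property-based policy.
def choose_anonymous_disclosure(disclosure_policies, my_credentials,
                                identity_revealing):
    """disclosure_policies: list of credential-name sets, any of which
    satisfies the high-level property-based policy;
    identity_revealing: credential names that would deanonymize the subject."""
    for required in disclosure_policies:
        if required <= my_credentials and not (required & identity_revealing):
            return required          # an anonymous disclosure set
    return None                      # no anonymous option; negotiation fails
```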
Large scale distributed systems typically have interactions among different services that create an avenue for the propagation of a failure from one service to another. The failures being considered may be the result of natural failures or malicious activity, collectively called disruptions. To make these systems tolerant to failures it is necessary to automatically contain the spread of a disruption once it is detected. The objective is to allow certain parts of the system to continue to provide partial functionality in the face of failures. Real world situations impose several constraints on the design of such a disruption tolerant system, of which we consider the following: the alarms may have type I or type II errors; it may not be possible to change the service itself even though the interactions may be changed; attacks may use steps that are not anticipated a priori; and there may be bursts of concurrent alarms. We present the design and implementation of a system named ADEPTS as the realization of such a disruption tolerant system. ADEPTS uses a directed graph representation to model the spread of the failure through the system, provides algorithms for determining appropriate responses and monitoring their effectiveness, and quantifies the effect of disruptions through a high-level survivability metric. ADEPTS is demonstrated on a real e-commerce testbed with actual attack patterns injected into it.
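A conceptual sketch of the graph-driven response selection idea follows; the adjacency and effectiveness structures and the response label are placeholders, not the ADEPTS implementation.

```python
# Hypothetical sketch: when an alarm fires at a service, propose containment
# responses on its outgoing interaction edges, discounting responses that have
# scored poorly in past deployments.
def pick_responses(adjacency, alarmed_service, effectiveness, threshold=0.5):
    """adjacency: dict service -> list of downstream services (directed graph
    of interactions); effectiveness: (src, dst) -> score in [0, 1] learned
    from monitoring earlier responses on that edge."""
    responses = []
    for downstream in adjacency.get(alarmed_service, []):
        if effectiveness.get((alarmed_service, downstream), 1.0) >= threshold:
            responses.append(("isolate-interaction", alarmed_service, downstream))
    return responses
```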
Self-propagating codes, called worms, such as Code Red, Nimda, and Slammer, have drawn significant attention due to their enormous adverse impact on the Internet. There is a great interest in the research community in modeling the spread of worms and in providing adequate defense mechanisms against them. In this paper, we present a (stochastic) branching process model for characterizing the propagation of Internet worms. This model leads to the development of an automatic worm containment strategy that prevents the spread of worms beyond its early stages. Specifically, using the branching process model, we are able to (1) provide a precise condition that determines whether the worm will eventually die out and (2) provdide the probability that the total number of hosts that the worm infects will be below a certain level. We use these insights to develop a simple automatic worm containment scheme, which is demonstrated, through simulations and real trace data, to be both effective and non-intrusive.
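For reference, the textbook extinction criterion for a Galton–Watson branching process, on which results of type (1) typically rest, is stated below; this is a standard result and is not reproduced from the paper.

```latex
% Extinction criterion for a Galton--Watson branching process with offspring
% distribution \xi, mean m = E[\xi], and probability generating function
% \phi(s) = E[s^{\xi}] (assuming P(\xi = 1) < 1):
%   - if m <= 1, the process dies out with probability 1;
%   - if m > 1, the extinction probability q is the smallest root of
%     q = \phi(q) in [0, 1], and q < 1.
\[
  q \;=\; \min\{\, s \in [0,1] : s = \phi(s) \,\}, \qquad
  q = 1 \iff m = \mathbb{E}[\xi] \le 1 .
\]
```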
Since sensor data gathering is the primary functionality of sensor networks, it is important to provide a fault tolerant method for reasoning about sensed events in the face of arbitrary failures of the nodes sending in the event reports. In this paper, we propose a protocol called TIBFIT to diagnose and mask arbitrary node failures in an event-driven wireless sensor network. In our system model, sensor nodes are organized into clusters with rotating cluster heads. The nodes, including the cluster head, can fail in an arbitrary manner, generating missed event reports, false reports, or wrong location reports. Correct nodes are also allowed to make occasional natural errors. Each node is assigned a trust index to indicate its track record in reporting past events correctly. The cluster head analyzes the event reports using the trust indices and makes event decisions. TIBFIT is analyzed and simulated using the network simulator ns-2, and its coverage is evaluated with varying numbers and varying levels of intelligence of the malicious nodes. We show that once TIBFIT gathers enough system state, accurate event detection is possible even if more than 50% of the network nodes are compromised.
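A hedged sketch of a trust-weighted event decision in this spirit is given below; the linear trust update is an illustrative placeholder, not the exact TIBFIT rule.

```python
# Sketch: the cluster head weighs each report by the reporter's trust index,
# decides whether the event occurred, then nudges trust indices toward the
# agreed outcome. Update rule is illustrative only.
def decide_event(reports, trust, alpha=0.1):
    """reports: dict node_id -> True/False (event reported or not);
    trust: dict node_id -> trust index in [0, 1], updated in place."""
    weight_yes = sum(trust[n] for n, saw in reports.items() if saw)
    weight_no = sum(trust[n] for n, saw in reports.items() if not saw)
    event = weight_yes >= weight_no
    for n, saw in reports.items():
        # Reward agreement with the decision, penalize disagreement.
        delta = alpha if saw == event else -alpha
        trust[n] = min(1.0, max(0.0, trust[n] + delta))
    return event
```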
In multihop wireless systems, such as ad-hoc and sensor networks, the need for cooperation among nodes to relay each other
With the development of location-aware sensor applications, location determination has become an increasingly important middleware technology. Numerous current technologies for location determination of sensor nodes use the received signal strength from sensor nodes using omni-directional antennas. However, an increasing number of sensor systems are now deploying directional antennas due to advantages such as energy conservation and better bandwidth utilization. In this paper, we present techniques for location determination in a sensor network with directional antennas under different kinds of deployment of the nodes. We show how the location estimation problem can be solved by measuring the received signal strength from just one or two anchors in a 2D plane with directional antennas. We implement our technique using Berkeley MICA2 sensor motes and show that it is up to three times more accurate than triangulation using omni-directional antennas. We also perform Matlab simulations that show the accuracy of location determination with increasing node density.
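As an illustration of single-anchor estimation, the sketch below inverts a log-distance path-loss model for range and takes the bearing of the strongest directional antenna; the path-loss constants are placeholders to be calibrated, and the procedure is not claimed to be the paper's technique.

```python
# Sketch under common assumptions: RSS -> range via log-distance path loss,
# bearing from the boresight of the strongest directional antenna.
import math

def rss_to_distance(rss_dbm, p0_dbm=-50.0, d0=1.0, n=2.5):
    """Invert the log-distance model: rss = p0 - 10*n*log10(d/d0).
    p0_dbm, d0, n are placeholder calibration constants."""
    return d0 * 10 ** ((p0_dbm - rss_dbm) / (10.0 * n))

def locate_from_anchor(antenna_rss, antenna_bearings_deg, anchor_xy=(0.0, 0.0)):
    """antenna_rss: one RSS reading per directional antenna on the anchor;
    antenna_bearings_deg: boresight bearing of each antenna in degrees."""
    best = max(range(len(antenna_rss)), key=lambda i: antenna_rss[i])
    theta = math.radians(antenna_bearings_deg[best])
    d = rss_to_distance(antenna_rss[best])
    return anchor_xy[0] + d * math.cos(theta), anchor_xy[1] + d * math.sin(theta)
```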
A large class of sensor networks is used for data collection and aggregation of sensory data about the physical environment. Since sensor nodes are often powered by limited energy sources, such as batteries, which may be difficult to replace, energy saving is an important criterion in any activity. Some deployments of sensor networks have passive mobile nodes, that is, nodes that are mobile without their own control. Examples include a node mounted on an animal for bio-habitat monitoring, or a lightweight node dropped into a river for water quality monitoring. Passive mobility makes the activity of data gathering challenging since the positions of the nodes can change arbitrarily. As a result, the nodes may move too far from the data aggregation point, such as a base station, making data transmission extremely energy intensive. In extreme cases, the nodes may become disconnected from the rest of the network, making them unusable. We propose a sensor network architecture with some nodes capable of controlled mobility to solve this problem. Controlled mobility implies that the nodes can be moved in a controlled manner in response to commands, with a determined direction, speed, etc. We present the different categories of nodes in our architecture and mobility algorithms for the two classes, called collector and locator, that have controlled mobility. It is well accepted that efficient data gathering benefits from knowledge of the locations of nodes. Passive mobile nodes make location determination (i.e., localization) a crucial problem. We propose the use of the locators for this through a novel scheme based on triangulation. We provide theoretical and simulation-based analysis of the mobility algorithms with respect to the metrics of energy, latency, and buffer space requirements.
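A minimal triangulation sketch follows, assuming each locator reports its own position and a bearing toward the passive node; it illustrates the geometric idea only, not the paper's scheme.

```python
# Intersect two bearing rays from two locators to obtain a position fix.
import math

def triangulate(p1, theta1, p2, theta2):
    """p1, p2: (x, y) of the two locators; theta1, theta2: bearings in radians.
    Returns the intersection of the two rays, or None if they are parallel."""
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        return None   # parallel bearings: no unique intersection
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / denom
    return p1[0] + t * d1[0], p1[1] + t * d1[1]
```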
We are witnessing an exponential growth in the use of mobile computing devices such as laptops, PDAs, and mobile phones, accessing critical data while on the move. The need to safeguard against unauthorized access to data in a mobile world is a pressing requirement. Access to critical data depends on users' identity as well as environmental parameters such as time and location. While temporal access control models are well suited for enforcing access control decisions on fixed users, they lose their effectiveness when users employing mobile computing devices are not fixed in space and move from a secure locale to an insecure one, or vice versa. Location as a context parameter for access control has been addressed by a number of researchers, but a definition of rich spatial constraints that effectively captures the semantics and relationships of physical and virtual (e.g., membership in an IP group) locales is still missing. The inclusion of multiple constraints (temporal and spatial) in the access control policy exposes the need to be able to compose a policy that is verifiable for consistency and structural integrity. Further, the access control policy is expected to evolve over time, and the inclusion of new constraints, permissions, or user rights may conflict with existing ones. In this regard, we draw upon techniques developed for software engineering and use them for policy specification modeling and conflict resolution. The first contribution of this paper is the development of the Generalized Spatio-Temporal Role Based Access Control (GST-RBAC) model, by proposing a formal framework for the composition of complex spatial constraints exploiting the topological relationships between physical and virtual locales. Spatial constraints are defined for spatial role enabling, spatial user-role assignment, spatial role-permission assignment, and spatial activation of roles. The notion of spatial separation of duty is also developed, whereby a user is not permitted to activate two roles simultaneously if the roles are being activated from specific locales. Another feature of the proposed GST-RBAC is the spatial role hierarchy, which allows inheritance of permissions between roles, contingent upon the roles being activated from predefined locales. The second contribution of this paper is a GST-RBAC policy specification framework using the lightweight formal modeling language Alloy, and analysis of the access control policy model using the accompanying constraint analyzer. In addition, for consistent evolution of the access control policy, the policy administrator can specify additional policy fragments in the policy model and verify the consistency of the overall policy for conflict-free composition of the actual policy.
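To illustrate spatial separation of duty, here is a small sketch under assumed data shapes; the constraint representation is hypothetical, and in the paper's framework such policies would be modeled and checked in Alloy rather than in code like this.

```python
# Hypothetical check: a user may not keep two conflicting roles active at the
# same time when each is activated from the locale named in the constraint.
def violates_spatial_sod(active_roles, sod_constraints):
    """active_roles: dict role -> locale it was activated from;
    sod_constraints: list of ((role_a, locale_a), (role_b, locale_b)) pairs
    that must not be simultaneously active."""
    for (role_a, loc_a), (role_b, loc_b) in sod_constraints:
        if active_roles.get(role_a) == loc_a and active_roles.get(role_b) == loc_b:
            return True
    return False
```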
An efficient recovery protocol for lost messages is crucial for supporting reliable multicasting. The tree-based recovery protocols group nodes into recovery regions and designate a recovery node per region for buffering and retransmitting lost messages. In these protocols, the recovery host may get overloaded during periods of large message losses and costly remote recovery may be initiated even though a peer node has the lost message. To address these drawbacks, the Randomized Reliable Multicast Protocol (RRMP) was proposed which distributes the responsibility of error recovery among all members in a group. The pressure on the buffer and computational resources on the intermediate nodes is increasing due to the wide distribution of multicast participants with widely varying reception rates and periodic disconnections. In this paper, we propose the Lightweight Randomized Reliable Multicast (LRRM) protocol that optimizes the amount of buffer space by providing an efficient mechanism based on best-effort multicast for retrieving a lost message. A theoretical analysis and a simulation-based study of two realistic topologies indicate that LRRM provides comparable recovery latency to RRMP for lower buffer space usage. While presented in the context of RRMP, LRRM can also benefit other tree-based reliable multicast protocols.
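A conceptual sketch of randomized, receiver-driven recovery of this kind is shown below; the function names and timer bounds are assumptions, not LRRM's actual protocol.

```python
# Sketch: a member that detects a loss multicasts a repair request after a
# random backoff (suppressed if a request for the same message was already
# overheard); any peer still buffering the message replies after its own
# random delay, spreading recovery load across the group.
import random

def schedule_repair_request(seqno, already_requested, max_backoff=0.5):
    """Return a backoff in seconds, or None if a request for this sequence
    number has already been overheard (duplicate suppression)."""
    if seqno in already_requested:
        return None
    return random.uniform(0.0, max_backoff)

def schedule_repair_reply(seqno, buffered, max_delay=0.2):
    """Return a reply delay if this member still buffers the message,
    else None (it cannot help with recovery)."""
    if seqno not in buffered:
        return None
    return random.uniform(0.0, max_delay)
```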