Debugging distributed systems is notoriously difficult, in large part because of both software complexity, with its correspondingly large state space, and the inherent asynchrony of networked systems. We have built a model checker to test unmodified distributed systems built with the Mace programming toolkit. The model checker (MaceMC) combines automated state-space exploration with long random executions to find and isolate bugs that force distributed systems to violate liveness properties: the properties with which developers specify that a system not only avoids bad states but eventually accomplishes its goal.
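For concreteness, the following sketch illustrates the gist of this strategy: a bounded systematic search over enabled transitions, followed by long random walks that flag executions along which a liveness predicate never becomes true. This is only an illustration under assumed interfaces (`initial_state`, `State.transitions`, `State.apply`, and the `liveness` predicate are all hypothetical), not MaceMC's actual implementation, which operates directly on Mace services.

```python
import random

def find_liveness_violation(initial_state, liveness, depth=5, walk_len=10_000):
    """Bounded systematic search, then a long random walk from each
    frontier state; an execution along which the liveness predicate
    never holds is reported as a candidate violation."""
    frontier = [(initial_state(), [])]
    for _ in range(depth):                        # systematic phase
        frontier = [(state.apply(t), trace + [t])
                    for state, trace in frontier
                    for t in state.transitions()]
    for state, trace in frontier:                 # random-walk phase
        s = state
        for _ in range(walk_len):
            if liveness(s):
                break                             # goal eventually reached
            choices = s.transitions()
            if not choices:                       # deadlock: also a violation
                return trace
            s = s.apply(random.choice(choices))
        else:
            return trace                          # liveness never held: candidate bug
    return None                                   # no violation found at this depth
```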
We continue to extend the features and capabilities of the Mace model checker. We are now investigating how to use the model-checking infrastructure to find performance bugs in unmodified distributed systems, how to incorporate malicious agents into the testing infrastructure so as to automatically detect the errors and system degradation those agents cause, and how to verify fixes once they are designed.
We are carrying out research on and evaluation of techniques for manufacturing physical objects that support counterfeit recognition, tamper detection, and traitor identification. Counterfeit recognition refers to the ability to recognize an illegitimately manufactured copy of a physical object (even one that meets the manufacturing quality requirements). Traitor identification refers to the capacity to reveal the identity of the user responsible for an illegitimately copied physical object. We also adapt the counterfeit-recognition techniques to the related problem of tamper detection, which refers to the capacity to detect unauthorized modifications of the object. This is the first attempt at a science for marking objects whose copying is inherently imprecise, unlike digital objects, which can be replicated perfectly (with zero error).

The approach we propose for achieving counterfeit recognition uses a secret key (a large integer) to manufacture objects that carry a genuinity mark, such that the mark is automatically readable by a reader that does not possess the full key and that operates from a single view of the object, without requiring the object to have a particular orientation relative to the reader. The challenge is to design a genuinity mark that is not replicable by an adversary who lacks the secret key, even if the adversary has a golden model of the object and manufacturing capability superior (i.e., of higher precision) to that of the legitimate manufacturer. A re-instancing attack by such an adversary uses the golden model to produce counterfeit copies (by making them close to the golden model). A copying attack, on the other hand, digitally acquires the object from an available legitimate copy and then re-manufactures it. We seek solutions that thwart both types of attack.

Tamper evidence will be achieved by using a separate mark that is appropriately fragile, in the sense that damaging a legitimate instance of the object will destroy the mark. This fragile mark is computed after the genuinity mark is determined; hence the genuinity mark must be resilient enough to withstand the changes introduced by the subsequent insertion of the fragile mark (yet not so resilient as to persist in the face of a copying attack). This project holds the promise of helping overcome counterfeiting of physical parts, a problem that a recent manufacturing industry report called “the crime of the century” and whose yearly cost is rapidly escalating (its cost to the automotive industry alone was $12 billion in 1997).
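As a toy illustration of the keyed-mark idea (the actual marking construction is not described here, and every detail below, from the HMAC-derived perturbation pattern to the correlation threshold, is an assumption made for the sketch), one could embed a key-dependent, low-amplitude perturbation into the object's nominal geometry and verify it by correlating measured deviations against the key-derived pattern. Manufacturing noise attenuates the correlation, while a copy re-instanced from the golden model exhibits essentially none:

```python
import hmac, hashlib, struct, random

def key_pattern(key: bytes, n: int):
    """Derive n pseudo-random +/-1 offsets from the secret key via HMAC."""
    return [1.0 if hmac.new(key, struct.pack(">I", i), hashlib.sha256).digest()[0] & 1
            else -1.0 for i in range(n)]

def embed(nominal, key, amplitude=0.01):
    """Perturb the nominal surface heights by a tiny key-dependent amount."""
    return [h + amplitude * p for h, p in zip(nominal, key_pattern(key, len(nominal)))]

def verify(measured, nominal, key, amplitude=0.01, threshold=0.5):
    """Correlate the measured deviation from nominal with the key pattern.
    Manufacturing noise lowers the score; a copy made from the golden
    model carries essentially zero correlation with the key pattern."""
    pattern = key_pattern(key, len(measured))
    residual = [m - n for m, n in zip(measured, nominal)]
    score = sum(r * p for r, p in zip(residual, pattern)) / (amplitude * len(measured))
    return score >= threshold

nominal = [0.0] * 1024                                   # idealized surface model
marked = embed(nominal, b"secret-key")
produced = [h + random.gauss(0, 0.003) for h in marked]  # manufacturing noise
print(verify(produced, nominal, b"secret-key"))          # True: legitimate instance
print(verify(nominal, nominal, b"secret-key"))           # False: golden-model copy
```

Note that the amplitude in this sketch plays the role described above: it must exceed the legitimate manufacturer's noise floor (resilience) yet be small enough that a copying attack's acquire-and-remanufacture cycle destroys it.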
Prof. Caldwell’s research program is known as the Group Performance Environments Research (GROUPER) Laboratory. GROUPER research examines human factors engineering aspects of information flow, task coordination, and team performance as affected by information technology. GROUPER applies human factors engineering principles to understand how people obtain, share, and use information effectively. This interdisciplinary work studies the effects of contextual factors, such as task demands, user expectations of technology performance, and other constraints (such as time pressure or limited expertise), that influence human-system interaction and team coordination in managing system behavior.
3D tele-immersive collaborative environments are becoming a reality. The emerging tele-immersive (TI) technology enables collaborative interactions and a plethora of new applications among geographically distributed sites. TI technology allows the creation of a cyber TI room, where geographically separated users can jointly perform physical activities such as dance or exercise. This project is working to take this vision further and allow users to participate in simultaneous TI sessions and to cyber-walk between TI rooms. To achieve the TI rooms vision, the underlying cyber-physical infrastructure must (a) treat streams of 3D data as first-class objects in its design and deployment, and (b) provide holistic end-to-end management of the multi-stream environment for each TI room. Hence, the project is developing a Holistic Multi-stream Environment for Distributed Immersive Applications (H-MEDIA). The team will investigate (a) system architectures with correlated multi-streaming; (b) real-time virtualization of resources, for resource isolation between individual TI rooms and for switching (cyber-walking) between rooms; (c) end-to-end configurable, robust, and fault-tolerant virtual networks for different rooms; and (d) adaptive configuration and system management that will yield customizable, stable, adaptable, available, and robust individual TI rooms. H-MEDIA research will have impact on communities in computer science as well as in medicine, social science, and other domains. The project will also yield educational benefits, including involving graduate students in research on novel TI technologies, including undergraduate students in the work, and influencing education in other disciplines, for example through new approaches to teaching choreography in TI environments.
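As a purely illustrative sketch of what treating 3D streams and TI rooms as first-class objects might look like (all type names and fields below are hypothetical, and the real H-MEDIA system must additionally handle real-time constraints, fault tolerance, and network virtualization), consider:

```python
from dataclasses import dataclass, field

@dataclass
class Stream3D:
    """A 3D stream as a first-class object: it carries its own correlation
    metadata rather than being treated as an opaque byte pipe."""
    source_site: str
    bundle_id: str            # streams in one bundle must stay mutually synchronized
    bandwidth_mbps: float

@dataclass
class TIRoom:
    name: str
    cpu_share: float          # isolated slice of the virtualized resources
    streams: list = field(default_factory=list)

class SessionManager:
    """Tracks which room each site occupies; a cyber-walk detaches a site's
    stream bundle from one room and re-attaches it to another, leaving the
    other rooms' resource slices untouched."""
    def __init__(self):
        self.location = {}

    def cyber_walk(self, site: str, src: TIRoom, dst: TIRoom):
        moved = [s for s in src.streams if s.source_site == site]
        src.streams = [s for s in src.streams if s.source_site != site]
        dst.streams.extend(moved)
        self.location[site] = dst.name
```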
This research focuses on human aspects of online security and privacy assurance. With respect to online security, we have performed task analyses of the procedures required to use different types of authentication methods (e.g., passwords, biometrics, tokens, smart cards) and determined the costs and benefits of the alternative methods. Although passwords are the weakest of the methods, they are the most pervasive and widely accepted form of authentication for many systems. Thus, we have performed experiments designed to identify techniques for improving both the security and memorability of passwords. With respect to privacy assurance, we have analyzed Web privacy policies to determine organizations’ privacy and security goals. We also conducted usability tests examining users’ comprehension of privacy policies, factors that influence users’ trust in an organization, and users’ ability to configure privacy agents to check machine-readable policies for an organization’s adherence to specific privacy practices. Because the methods for ensuring security and privacy involve human users, our goal is to improve the interaction between humans and the technical devices and interfaces employed in security- and privacy-related tasks.
While peer-to-peer (P2P) systems have surged in popularity in recent years, their large scale and complexity make them difficult to reason about. We argue that systematic analysis of the traffic characteristics of P2P systems can reveal a wealth of information about their behavior and highlight potentially undesirable activities that such systems may exhibit. As a first step toward this end, we present an offline, semi-automated approach to detecting undesirable behavior. Our analysis is applied to real traffic traces collected from a Point-of-Presence (PoP) of a nationwide ISP in which over 70% of the total traffic is due to eMule [19], a popular P2P file-sharing system. Flow-level measurements are aggregated into “samples” describing the activity of each host during a time interval. We then employ a clustering technique to automatically and coarsely identify similar behavior across samples, and we use domain knowledge extensively to interpret and analyze the resulting clusters. Our analysis reveals several examples of undesirable behavior, including evidence of DDoS attacks exploiting live P2P clients, significant amounts of unwanted traffic that may harm network performance, and instances where the performance of participating peers may be subverted by maliciously deployed servers. Identifying such patterns can benefit network operators, P2P system developers, and end users.
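A minimal sketch of this pipeline follows, with every detail assumed for illustration: the interval length, the particular features, and the use of plain k-means stand in for whatever aggregation and clustering technique the actual analysis employs.

```python
import math, random
from collections import defaultdict

INTERVAL = 300  # seconds per sample (an assumed interval length)

def build_samples(flows):
    """Aggregate flow records into per-host, per-interval feature vectors:
    flow count, distinct peers, distinct destination ports, log(bytes)."""
    agg = defaultdict(lambda: [0, set(), set(), 0])
    for f in flows:   # each flow: {"src", "dst", "dport", "bytes", "ts"}
        a = agg[(f["src"], f["ts"] // INTERVAL)]
        a[0] += 1
        a[1].add(f["dst"])
        a[2].add(f["dport"])
        a[3] += f["bytes"]
    return [[v[0], len(v[1]), len(v[2]), math.log1p(v[3])] for v in agg.values()]

def kmeans(points, k, iters=50):
    """Plain k-means; a real deployment would first normalize the features."""
    centers = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[nearest].append(p)
        centers = [[sum(col) / len(g) for col in zip(*g)] if g else centers[i]
                   for i, g in enumerate(groups)]
    return groups
```

Interpreting the resulting clusters is where the domain knowledge enters: for example, a cluster of samples with very high flow counts concentrated on few destinations would merit investigation as possible DDoS activity.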