Ad hoc networks are inherently cooperative systems in the sense that their nodes must relay data for one another. The drawback of this scheme is that it leaves such networks open to intruders. Collaborative attacks, in which multiple attackers coordinate their actions to amplify the damage to the network, are likewise facilitated by the natural cooperation in ad hoc networks. In this paper, we discuss the most important forms of attacks, address possible collaborations among attackers, show how machine learning and signal processing techniques can be used to detect and defend against collaborative attacks in such environments, and discuss implementation issues. We also perform evaluations to determine the best design options for our preliminary proposed scheme for responding collaboratively to attacks.
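As a minimal illustrative sketch (not the paper's actual scheme), one simple statistical stand-in for the machine-learning detectors mentioned above is to flag nodes whose packet-forwarding behavior deviates sharply from their neighbors; the feature name and threshold below are assumptions for illustration only.

```
// Hypothetical sketch: flag nodes whose forwarding ratio (packets forwarded /
// packets received for relaying) falls far below the neighborhood average.
import java.util.HashMap;
import java.util.Map;

public class ForwardingAnomalyDetector {
    static Map<String, Boolean> flagSuspects(Map<String, Double> forwardingRatio,
                                             double zThreshold) {
        double mean = forwardingRatio.values().stream()
                .mapToDouble(Double::doubleValue).average().orElse(0.0);
        double var = forwardingRatio.values().stream()
                .mapToDouble(r -> (r - mean) * (r - mean)).average().orElse(0.0);
        double std = Math.sqrt(var);

        Map<String, Boolean> suspects = new HashMap<>();
        for (Map.Entry<String, Double> e : forwardingRatio.entrySet()) {
            double z = std > 0 ? (mean - e.getValue()) / std : 0.0;
            suspects.put(e.getKey(), z > zThreshold); // far below peers -> suspicious
        }
        return suspects;
    }

    public static void main(String[] args) {
        Map<String, Double> ratios = new HashMap<>();
        ratios.put("nodeA", 0.97);
        ratios.put("nodeB", 0.95);
        ratios.put("nodeC", 0.30); // possible colluding packet dropper
        ratios.put("nodeD", 0.96);
        System.out.println(flagSuspects(ratios, 1.5));
    }
}
```

A real detector would use richer features (routing behavior, signal characteristics) and a trained model rather than a single threshold, but the per-node scoring structure is the same.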
This proposal develops a networked system for safe and rapid analysis of network security and vulnerabilities with respect to worms, viruses, and other malicious activity, creating a reconfigurable facility, named ReASSURE, for efficient, reproducible, controlled, and safely contained experiments in computer science and technology with an emphasis on information assurance and security. The new instrument will integrate functionalities in a manner that enables high levels of safety and efficiency in manipulating, testing, and developing potentially dangerous experimental networking and virtual machine software while providing computational power to remote users. Advancing the study of virtual machine technology, the activity offers settings where potentially dangerous experimentation with networking and VM technologies can be performed safely. Providing a testbed networking facility, the infrastructure supports projects that require "self-contained" computing environments in computer science (including security), computer technology, forensics, and information warfare.
State and local law enforcement agencies cannot afford the small scale digital device forensic tools that exist, lack adequate small scale digital device forensic tools, lack a comprehensive understanding of how these tools work, and have no central repository for sharing their experiences with them. To fill this void, our objective is to build a cost-effective forensic tool that acquires evidence from small scale digital devices; presents and explains the protocols and the specific commands used to acquire and interpret evidence as the evidence is acquired and interpreted; and reports or exports the evidence for further analysis. Additionally, development will include a central repository through which the tool's users can communicate about the use, success, and teaching of the protocols and their application.
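The following is a minimal, hypothetical sketch (not the proposed tool) of the "present and explain as you acquire" idea: a single value is pulled from a handset while the protocol command used is shown to the examiner. The transport is mocked; a real tool would speak over serial, USB, or Bluetooth, and the device response here is a sample value.

```
// Hypothetical sketch: acquire the handset IMEI via the standard GSM command
// AT+CGSN while logging the command and explaining what it requests.
import java.util.LinkedHashMap;
import java.util.Map;

public class ExplainedAcquisition {
    interface DeviceTransport { String sendCommand(String cmd); }

    static Map<String, String> acquire(DeviceTransport device) {
        Map<String, String> evidence = new LinkedHashMap<>();
        String response = device.sendCommand("AT+CGSN");
        System.out.println("Sent AT+CGSN (requests the handset serial number, i.e. IMEI); "
                + "device answered: " + response.trim());
        evidence.put("IMEI", response.replace("OK", "").trim());
        return evidence; // could then be reported or exported for further analysis
    }

    public static void main(String[] args) {
        // Mock transport returning a sample IMEI; a real run would open a device port.
        DeviceTransport mock = cmd -> cmd.equals("AT+CGSN") ? "490154203237518\r\nOK" : "ERROR";
        System.out.println("Acquired evidence: " + acquire(mock));
    }
}
```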
There is an alarming trend of elusive malware being armed with techniques that detect, evade, and subvert the malware detection facilities of the victim. On the defensive side, a fundamental limitation of traditional host-based anti-malware systems is that they run inside the very hosts they are protecting, making them vulnerable to malware's counter-detection and subversion. To address this limitation, solutions using virtual machine (VM) technologies advocate placing the malware detection facility outside of the protected VM. However, a dilemma exists between these two approaches: the "out of the box" approach gains tamper resistance at the cost of losing the native, semantic view of the host enjoyed by the "in the box" approach. To resolve this dilemma, a new approach called OBSERV ("Out of the Box with SEmantically Reconstructed View") is introduced to achieve the advantages of both camps by reconstructing the semantic internal view of a VM from external, low-level observations. OBSERV enables two exciting malware defense opportunities: (1) malware detection by view comparison and (2) real-time detection and stoppage of kernel-level rootkits. The broader impact of this research is two-fold: (1) it will enhance the trustworthiness and effectiveness of widely deployed anti-malware systems; moreover, OBSERV is expected to be viewed favorably by the anti-virus software industry because of its support for existing off-the-shelf anti-virus software. (2) Results from this research will lead to the development of educational materials for undergraduate and graduate courses and for professional training sessions.
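At its core, "malware detection by view comparison" amounts to diffing the state reported from inside the guest against the state reconstructed externally; any object visible only in the external view is a candidate hidden by a rootkit. The sketch below illustrates that comparison on process identifiers only; it is an assumption-laden simplification, not OBSERV's implementation.

```
// Hypothetical sketch of cross-view comparison: PIDs visible to external
// reconstruction but missing from the guest's own task list are suspicious.
import java.util.HashSet;
import java.util.Set;

public class ViewComparison {
    static Set<Integer> hiddenProcesses(Set<Integer> internalView, Set<Integer> externalView) {
        Set<Integer> hidden = new HashSet<>(externalView);
        hidden.removeAll(internalView); // present outside, missing inside -> possibly hidden
        return hidden;
    }

    public static void main(String[] args) {
        Set<Integer> inside  = Set.of(1, 120, 455);       // what the guest reports
        Set<Integer> outside = Set.of(1, 120, 455, 666);  // what external reconstruction finds
        System.out.println("Possibly rootkit-hidden PIDs: " + hiddenProcesses(inside, outside));
    }
}
```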
The EncoreJ project is developing tools and libraries for transparent rewriting of Java code, making distributable Java applications resilient in the face of execution node reconfiguration and failure. Developers control the system, but EncoreJ automatically rewrites compiled Java code, as packages are loaded, adding support for creating, accessing, and computing upon local and remote objects, and for resilience in the face of system failures and reconfigurations. EncoreJ further interfaces with a variety of persistence mechanisms (e.g., databases), both for providing fundamental resilience (saving/restoring information) and for coordinating recovery with the mechanisms of the external database.
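To make the load-time rewriting idea concrete, the sketch below uses the standard java.lang.instrument hook as one possible attachment point; this is an illustrative assumption, not EncoreJ's actual rewriting machinery, and the transformer here deliberately leaves classes unchanged.

```
// Hypothetical sketch of intercepting classes as packages are loaded.
// Package this class in an agent jar whose manifest declares Premain-Class,
// then run: java -javaagent:rewriter.jar <application>
import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

public class LoadTimeRewriterAgent {
    public static void premain(String agentArgs, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            @Override
            public byte[] transform(ClassLoader loader, String className,
                                    Class<?> classBeingRedefined,
                                    ProtectionDomain protectionDomain,
                                    byte[] classfileBuffer) {
                // A real rewriter would edit the bytecode here, e.g. wrapping object
                // access so objects may live remotely, or inserting resilience hooks.
                System.out.println("Loading (and potentially rewriting): " + className);
                return null; // null = leave the class bytes unchanged in this sketch
            }
        });
    }
}
```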
EncoreJ exploits resiliency support to make it easy to reconfigure applications as the host platform evolves, adding and removing resources dynamically; e.g., a virtual node might go down and be replaced by another, forcing work to move to a newly available system. Programmers describe, "on the side" (without modifying source code), how to place, move, and replicate objects and computations; the source code remains the primary mechanism for expressing algorithms clearly, without hard-coded details of distribution or resilience.
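A purely hypothetical example of what such an "on the side" description might look like is a small external policy file mapping classes to placement and replication hints, read at startup so that application source never mentions distribution; the property names and class names below are invented for illustration.

```
// Hypothetical placement policy kept outside the application source.
import java.io.StringReader;
import java.util.Properties;

public class PlacementPolicy {
    public static void main(String[] args) throws Exception {
        String policyText =
                "com.example.Simulation.place=node-fast-cpu\n" +
                "com.example.ResultCache.replicas=3\n";
        Properties policy = new Properties();
        policy.load(new StringReader(policyText)); // in practice, loaded from a file
        System.out.println("Place Simulation on: "
                + policy.getProperty("com.example.Simulation.place"));
        System.out.println("ResultCache replicas: "
                + policy.getProperty("com.example.ResultCache.replicas"));
    }
}
```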
The EncoreJ tools and prototype are a platform for research by the wider community working on policies/algorithms for migration, replication, scheduling, etc., in Grid systems. The focus is a convenient and flexible platform, powerful and extensible, without over-commitment to any particular policies or strategies. EncoreJ builds on readily available and standard systems (Java virtual machines and packages) to ensure wide applicability and easy distribution and adoption.
Building and keeping credibility in the eyes of consumers is a complex task faced by online organizations. Research on the behavioral aspects of online consumer behavior, focusing on trust, satisfaction, and loyalty with the aim of creating a compelling online environment that satisfies consumer needs, is increasing; however, the majority of these efforts do not address the needs of visually impaired people, or their perception of, perceived risk in, and trust of Web-based environments. In this project, the PI will seek to identify the antecedents of trust from the viewpoint of the visually impaired consumer (both online users and non-users). The study will specifically seek to: identify the salient factors important in developing visually impaired consumer trust in online businesses; identify the interpersonal trust factors and the corresponding levels of interpersonal trust critical to the adoption or continued use of the Internet by visually impaired consumers; explore variations in trust antecedents based on the demographic characteristics (age, gender, and ethnic and cultural background) of visually impaired consumers; develop a trust typology model for visually impaired consumers; and develop educational programs to help both users and non-users increase their trust online. An understanding of visually impaired consumer perspectives on trust will enhance knowledge of the underlying principles of trust in e-commerce and move the discipline closer towards developing a centralized control for trust. Detailed knowledge of the hopes and concerns of visually impaired consumers, broken down by age group, gender, and ethnic background, is critical for policy makers in charting effective Internet transaction-related policies. The findings from this study should also be of importance to online organizations in developing proactive strategies for recruiting and retaining visually impaired customers in online transactions.
This project is expected to make three broad contributions towards developing a runtime infrastructure, called PROGNOSIS, for failure data collection and online analysis. The first set of contributions will be on collecting and analyzing system events and failure data from an actual BlueGene/L system over an extended period of time. In addition to presenting the raw system events, we will develop filtering techniques to remove unimportant information and identify stationary intervals, together with defining the attributes for logging and their frequency. The second set of contributions will be models for online analysis and prediction of evolving failure data by exploiting correlations between system events over time, across the nodes, and with respect to external factors such as imposed workload and operating temperature. The third set of contributions will be on demonstrating the uses of PROGNOSIS. Specifically, this work will extend two important runtime techniques, parallel job scheduling and checkpointing, with the information provided by PROGNOSIS; it will investigate how predictability of failures along spatial and/or temporal dimensions can enable schedulers to provide a better trade-off between higher system utilization and job loss upon failures, and it will develop techniques to fine-tune the frequency and location of checkpoints using PROGNOSIS. More importantly, we will evaluate the confidence level behind the predictions that is needed for online decision making, as well as the effect of inaccurate predictions.
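As a minimal sketch (not PROGNOSIS itself) of how a failure prediction can tune checkpoint frequency, one widely used rule is Young's approximation: the checkpoint interval is roughly the square root of twice the checkpoint cost times the mean time between failures, so a shorter predicted MTBF leads to more frequent checkpoints. The cost and MTBF values below are illustrative assumptions.

```
// Hypothetical sketch: adjust checkpoint interval from a predicted MTBF
// using Young's approximation, interval = sqrt(2 * checkpointCost * MTBF).
public class CheckpointTuner {
    static double youngIntervalSec(double checkpointCostSec, double predictedMtbfSec) {
        return Math.sqrt(2.0 * checkpointCostSec * predictedMtbfSec);
    }

    public static void main(String[] args) {
        double cost = 60.0; // seconds to write one checkpoint (assumed)
        System.out.printf("Predicted MTBF 24h -> checkpoint every %.0f s%n",
                youngIntervalSec(cost, 24 * 3600));
        System.out.printf("Predicted MTBF  2h -> checkpoint every %.0f s%n",
                youngIntervalSec(cost, 2 * 3600));
    }
}
```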