Security becomes more complex when participating entities are physically remote; establishing who and what is communicating from a distant location complicates security decisions. Research in this area includes wireless computing, communication protocol design and verification, agent computation, quality-of-service protection, firewall design and testing, SCADA security, dynamic and protective routing, security for grid computing, and sensor net security.
Defending against denial-of-service (DoS) attacks in a mobile ad hoc network (MANET) is challenging because the network topology is dynamic and nodes are selfish. In this paper, we propose a DoS mitigation technique that uses digital signatures to verify legitimate packets and drops packets that fail the verification. Since nodes are selfish, they may skip verification to avoid paying its overhead. A bad packet that escapes verification along the whole network path brings a penalty to all of its forwarders. A network game can be formulated in which nodes along a network path, in optimizing their own benefits, are encouraged to act collectively to filter out bad packets. Analytical results show that a Nash equilibrium can be attained for players in the proposed game, and that significant benefits accrue to forwarders such that many of the bad packets are eliminated by verification.
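To make the incentive structure concrete, the following toy sketch iterates best responses in a simplified version of such a game; the cost values, the grid search, and the sequential update rule are illustrative assumptions, not the paper's actual formulation.

```python
# Toy forwarding game: each forwarder on a path picks a verify probability.
# Parameters below are assumptions for illustration only.

C_VERIFY = 1.0   # assumed cost a forwarder pays to verify one packet
PENALTY  = 10.0  # assumed penalty to every forwarder if a bad packet escapes
BAD_RATE = 0.2   # assumed fraction of packets that are bad

def expected_cost(probs, i):
    """Expected cost to forwarder i; a bad packet escapes only if no
    forwarder on the path verifies it."""
    p_escape = 1.0
    for p in probs:
        p_escape *= (1.0 - p)
    return probs[i] * C_VERIFY + BAD_RATE * p_escape * PENALTY

def best_response(probs, i, grid=101):
    """Verify probability minimizing forwarder i's cost, others held fixed."""
    candidates = [k / (grid - 1) for k in range(grid)]
    return min(candidates,
               key=lambda p: expected_cost(probs[:i] + [p] + probs[i + 1:], i))

# Sequential best-response sweeps on a 3-hop path; the fixed point reached
# is a pure Nash equilibrium of this toy game (one node does the verifying).
probs = [0.5, 0.5, 0.5]
for _ in range(20):
    for i in range(len(probs)):
        probs[i] = best_response(probs, i)
print("equilibrium verify probabilities:", probs)
```

In this toy version the equilibrium has at least one forwarder verifying whenever the expected penalty of an escaped packet exceeds the verification cost, which mirrors the abstract's claim that self-interested nodes can collectively filter out bad packets.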
Mobile sensors can be used to effect complete coverage of a surveillance area for a given threat over time, thereby reducing the number of sensors necessary. The surveillance area may have a given threat profile as determined by the kind of threat, and accompanying meteorological, environmental, and human factors. In planning the movement of sensors, areas that are deemed higher threat should receive proportionately higher coverage. We propose a coverage algorithm for mobile sensors to achieve a coverage that will match – over the long term and as quantified by an RMSE metric – a given threat profile. Moreover, the algorithm has the following desirable properties: (1) stochastic, so that it is robust to contingencies and makes it hard for an adversary to anticipate the sensor’s movement; (2) efficient; and (3) practical, by avoiding movement over inaccessible areas. Further to matching, we argue that a fairness measure of performance over the shorter time scale is also important. We show that the RMSE and fairness are in general antagonistic, and argue for the need of a combined measure of performance, which we call efficacy. We show how a pause time parameter of the coverage algorithm can be used to control the tradeoff between the RMSE and fairness, and present an efficient offline algorithm to determine the optimal pause time maximizing the efficacy. Lastly, we discuss the effects of multiple sensors, under both independent and coordinated operation. Extensive simulation results – under realistic coverage scenarios – are presented for performance evaluation.
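The following sketch illustrates the core matching idea under simplifying assumptions (a small discrete region, threat-proportional random jumps, and one unit of coverage time per visit); the paper's actual algorithm, accessibility handling, and RMSE definition may differ in detail.

```python
import random

threat = [5.0, 1.0, 3.0, 1.0]          # assumed threat levels of 4 subregions
accessible = [True, True, True, True]  # inaccessible areas would be False
total = sum(t for t, a in zip(threat, accessible) if a)

def next_subregion():
    """Pick the next subregion with probability proportional to its threat,
    skipping inaccessible areas."""
    r = random.uniform(0, total)
    for i, (t, a) in enumerate(zip(threat, accessible)):
        if not a:
            continue
        r -= t
        if r <= 0:
            return i
    return len(threat) - 1

def rmse(coverage):
    """Root-mean-square error between the normalized coverage profile and
    the normalized threat profile."""
    covered = sum(coverage) or 1
    return (sum(((c / covered) - (t / total)) ** 2
                for c, t in zip(coverage, threat)) / len(threat)) ** 0.5

coverage = [0] * len(threat)
for _ in range(10000):
    coverage[next_subregion()] += 1    # one unit of coverage time per visit

covered = sum(coverage)
print("coverage fractions:", [c / covered for c in coverage])
print("RMSE vs. threat profile:", rmse(coverage))
```

Over many steps the coverage fractions converge to the threat profile and the RMSE shrinks, while the randomized choice keeps the sensor's next move unpredictable to an adversary.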
We analyze the ability of a stochastic coverage algorithm to achieve both accurate threat-based coverage and effective information capture. When mobile sensors are used to cover the region over time, the goal of threat-based coverage is to allocate the sensors’ coverage time between the subregions in proportion to their threat levels. We show that, in contrast to prior results on mobile coverage for maximizing simple event capture, limiting mobility by strategically pausing the sensor is important for threat-based coverage in physical-world monitoring. Besides being energy efficient, pausing has two desirable effects. First, it can improve the accuracy of the threat-based coverage: the accuracy increases monotonically with a pause time parameter, and a large enough parameter ensures exact matching of the sensor’s coverage profile with the region’s threat profile. Second, diverse natural phenomena require a non-negligible sensing time to overcome statistical uncertainties posed by the random nature of the phenomena. Suitable pausing allows a subregion to be observed long enough for reliable results.
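As a hedged illustration of why pausing helps, the sketch below constrains a sensor to a random walk between adjacent subregions, so its visit frequencies are skewed by connectivity rather than by threat; pausing in each subregion for a duration that grows with a parameter tau then re-weights the time-based coverage toward the threat profile. The line topology, the particular pause rule, and the assumption that the walk's visit frequencies are known are all illustrative choices, not the paper's model.

```python
import random

threat = [5.0, 1.0, 3.0, 1.0]                       # assumed threat profile
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # subregions on a line
pi = [1/6, 2/6, 2/6, 1/6]                           # random-walk visit
                                                    # frequencies (deg/sum)

def time_fractions(tau, steps=100000):
    """Fraction of total time spent in each subregion when the sensor
    pauses for tau * threat[i] / pi[i] time units per visit to i."""
    time_in = [0.0] * len(threat)
    cell = 0
    for _ in range(steps):
        time_in[cell] += 1.0 + tau * threat[cell] / pi[cell]  # travel + pause
        cell = random.choice(neighbors[cell])
    total = sum(time_in)
    return [t / total for t in time_in]

target = [t / sum(threat) for t in threat]
for tau in (0.0, 1.0, 10.0, 100.0):
    deviation = max(abs(f - g) for f, g in zip(time_fractions(tau), target))
    print(f"tau={tau:6.1f}  max deviation from threat profile: {deviation:.4f}")
```

With tau = 0 the time fractions follow the walk's connectivity; as tau grows, pause time dominates travel time and the fractions converge to the threat profile, matching the claim that accuracy improves monotonically with the pause time parameter.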
Secure video content distribution is a key aspect in the deployment of Telepresence Services and Video on Demand, two critical applications for the ecosystem targeted by Cisco products. Efficient mechanisms and systems need to be developed to guarantee confidentiality and controlled access to a broad range of broadcast video streams. At the same time, an effective framework for secure video content distribution should also guarantee subscribers’ privileges to access video streams matching their respective subscription and on-demand requirements.
In this project, we will employ an innovative approach called the Access Control Polynomial (ACP) to build a Secure Video Stream Framework for dynamic and anonymous subscriber groups. The framework will effectively address the underlying challenges of secure video stream broadcasting: guaranteed access, anonymity, dynamicity, granularity, and scalability.
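As a rough illustration of the ACP idea (the project's actual parameters, hash construction, and field size are not specified here; all are assumptions for the sketch): a polynomial A(x) is built with a root at a hash of each authorized subscriber's secret bound to a session nonce, and A(x) + K is broadcast. Any authorized subscriber evaluates the broadcast polynomial at their own hashed secret to recover the group key K, while the broadcast reveals neither the group membership nor the key to outsiders.

```python
import hashlib

P = 2**127 - 1   # assumed prime modulus for a toy field (a Mersenne prime)

def h(secret: bytes, nonce: bytes) -> int:
    """Bind a subscriber's secret to the current session nonce."""
    return int.from_bytes(hashlib.sha256(secret + nonce).digest(), "big") % P

def make_acp(secrets, nonce, key):
    """Return the coefficients of P(x) = A(x) + key over GF(P), where A(x)
    has a root at h(s, nonce) for every authorized secret s."""
    coeffs = [1]                                   # A(x) = 1
    for s in secrets:
        root, new = h(s, nonce), [0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):             # multiply A(x) by (x-root)
            new[i + 1] = (new[i + 1] + c) % P
            new[i] = (new[i] - root * c) % P
        coeffs = new
    coeffs[0] = (coeffs[0] + key) % P              # hide the group key
    return coeffs

def poly_eval(coeffs, x):
    acc = 0
    for c in reversed(coeffs):                     # Horner's rule
        acc = (acc * x + c) % P
    return acc

nonce, group_key = b"session-42", 0xC0FFEE
pub = make_acp([b"alice-secret", b"bob-secret"], nonce, group_key)

# An authorized subscriber recovers the key; an outsider gets (w.h.p.) garbage.
assert poly_eval(pub, h(b"alice-secret", nonce)) == group_key
assert poly_eval(pub, h(b"eve-secret", nonce)) != group_key
```

Because membership changes only alter which roots go into A(x), adding or revoking a subscriber requires rebuilding and rebroadcasting one polynomial, which is the source of the scheme's dynamicity and scalability.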
Secure group communications (SGC) refers to a setting in which a group of participants can send and receive messages such that outsiders are unable to glean information even if they intercept the messages. SGC is important because several prevalent applications require it, including teleconferencing, tele-medicine, real-time information services, distributed interactive simulations, collaborative work, interactive games, and the deployment of virtual private networks (VPNs). The goals of this project are four-fold: 1. study various issues enabling SGC, including but not limited to group key management, burst behavior and efficient burst operations, membership management, group member admission control, authentication, and non-repudiation; 2. study and provide solutions for specific SGC scenarios such as dynamic conferencing and SGC with hierarchical access control; 3. investigate research challenges for SGC over wireless/mobile environments; 4. integrate research results into the curriculum and publicly disseminate findings and software.
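As one concrete example of the group key management issues involved, the sketch below shows the rekeying step of a logical key hierarchy (LKH), a common approach in the SGC literature; the binary tree layout and array indexing are illustrative assumptions, not this project's specific scheme.

```python
# Members are leaves of a complete binary key tree stored in an array:
# node 1 holds the group key, and node k has children 2k and 2k+1.

def keys_to_replace(leaf):
    """Key nodes known to a leaving member, all of which must be refreshed."""
    nodes = []
    node = leaf
    while node >= 1:
        nodes.append(node)
        node //= 2
    return nodes

n_members = 8
first_leaf = n_members                    # leaves occupy nodes 8..15 for n=8
stale = keys_to_replace(first_leaf + 3)   # member 3 leaves the group
print("keys to replace:", stale)          # [11, 5, 2, 1] -- O(log n) keys
```

Each replaced key is distributed encrypted under the child keys the departing member never knew, so a leave event costs O(log n) rekey messages rather than the O(n) cost of rekeying every member pairwise.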
In the battle against Internet malware, we have witnessed increasingly novel features of emerging malware in its infection, propagation, and contamination strategies; examples include polymorphic appearance, multi-vector infection, self-destruction, and intelligent payloads such as self-organized attack networks or mass-mailing. Furthermore, the damage caused by a malware incident can be severe and hard to recover from (e.g., the installation of kernel-level rootkits). Our research goal is to thoroughly understand key malware behavior such as probing, propagation, exploitation, contamination, and “value-added” payloads. These results will be used to design effective malware detection and defense solutions. To reach this goal, we observe that effective malware experimentation tools and environments are lacking in current malware research. By leveraging and extending virtualization technology, we propose to develop a virtualization-based integrated platform for the capture, observation, and analysis of malware. The platform consists of two parts: the front-end is a virtual honey farm system called Collapsar, which captures and contains malware instances from the real Internet; the back-end is a virtual playground environment called vGround, where the captured malware instances are unleashed to run while remaining completely isolated from the real Internet. Using this integrated platform, security researchers will be able to observe and analyze various aspects of malware behavior as well as evaluate corresponding malware defense solutions, with high fidelity and efficiency.
As modern computer technology advances, manufacturers are able to integrate a large number of processors and processor components into smaller and more unified packages. The result is low-cost computer systems with significant multiprocessing capabilities. Can these computing resources be organized to perform dedicated services in a reliable and secure manner? Poly^2 (short for poly-computer, poly-network) is a hardened framework in which critical services can operate. The framework is intended to provide robust protection against attacks on the services running within its domain. Its design and implementation are based on sound, widely acknowledged security design principles. It will form the basis for providing present and future services while, at the same time, being highly robust and resistant to attack. A prototype of the new architecture has been developed that provides traditional network services (e.g., web, FTP, email, DNS) using commodity hardware and an open-source operating system. Our efforts include developing and exploring security metrics that we hope will quantify the level of security provided by this approach.
The design and configuration of enterprise networks is one of the hardest challenges that operators face today. In particular, operators must reconfigure network devices to ensure that high-level goals are correctly realized: the high-level objectives (such as performance and security goals) that operators have for their networks are embedded in hundreds of low-level device configurations, and reconfiguring devices is challenging given the huge semantic gap between these high-level objectives and low-level configurations. Errors in changing configurations have been known to result in outages, business service disruptions, violations of Service Level Agreements (SLAs), and cyber-attacks [mahajan:02, kerravala02, Alloy]. In our research, we are looking at principled approaches to the systematic design and configuration of enterprise networks. We believe our research will minimize errors and enable operators to ensure that their networks continue to meet desired high-level security objectives. An important problem that we are currently tackling is ensuring the correctness of security policies when migrating enterprise data centers to cloud computing models.
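As a small illustration of this semantic gap, the sketch below checks a set of low-level, first-match firewall rules against a high-level zone-reachability policy; the zone names and rule format are invented for illustration and do not correspond to any real configuration language or to our actual verification approach.

```python
# High-level intent: which zone-to-zone flows should be permitted.
policy = {
    ("internet", "dmz"): True,
    ("internet", "internal"): False,
    ("dmz", "internal"): True,
}

# Low-level "configuration": ordered firewall rules, first match wins.
rules = [
    ("internet", "dmz", "allow"),
    ("dmz", "internal", "allow"),
    ("internet", "internal", "allow"),   # misconfiguration: violates intent
]

def evaluate(src, dst):
    """What the device configuration actually permits for src -> dst."""
    for r_src, r_dst, action in rules:
        if (r_src, r_dst) == (src, dst):
            return action == "allow"
    return False                          # default deny

for (src, dst), intended in policy.items():
    actual = evaluate(src, dst)
    status = "OK" if actual == intended else "VIOLATION"
    print(f"{src} -> {dst}: intended={intended}, configured={actual}  {status}")
```

Even in this four-line ruleset, one stray rule silently opens the internal zone to the Internet; real enterprise configurations spread such intent across hundreds of devices, which is what motivates systematic, principled checking.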
While peer-to-peer (P2P) systems have surged in popularity in recent years, their large scale and complexity make them difficult to reason about. We argue that systematic analysis of the traffic characteristics of P2P systems can reveal a wealth of information about their behavior and highlight potential undesirable activities that such systems may exhibit. As a first step to this end, we present an offline and semi-automated approach to detect undesirable behavior. Our analysis is applied to real traffic traces collected from a Point-of-Presence (PoP) of a nationwide ISP in which over 70% of the total traffic is due to eMule [19], a popular P2P file-sharing system. Flow-level measurements are aggregated into “samples” referring to the activity of each host during a time interval. We then employ a clustering technique to automatically and coarsely identify similar behavior across samples, and extensively use domain knowledge to interpret and analyze the resulting clusters. Our analysis shows several examples of undesirable behavior, including evidence of DDoS attacks exploiting live P2P clients, significant amounts of unwanted traffic that may harm network performance, and instances where the performance of participating peers may be subverted by maliciously deployed servers. Identification of such patterns can benefit network operators, P2P system developers, and end-users.
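The sketch below illustrates the clustering step on hypothetical per-host feature vectors; the features, the choice of k, and the use of k-means are assumptions for illustration rather than the study's exact method and parameters.

```python
import numpy as np
from sklearn.cluster import KMeans

# Each row is one "sample": the activity of a host during one interval.
# Features: [flows started, distinct peers contacted,
#            bytes-up / bytes-down ratio, fraction of failed connections]
samples = np.array([
    [120,  80, 0.9, 0.05],   # ordinary file-sharing peer
    [115,  75, 1.1, 0.04],
    [900, 850, 0.1, 0.70],   # scanner-like: many peers, many failures
    [880, 820, 0.1, 0.75],
    [100,   2, 9.0, 0.01],   # server-like: few peers, upload-heavy
])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(samples)
for sample, label in zip(samples, km.labels_):
    print(label, sample)

# Domain knowledge is then applied to interpret each cluster, e.g. flagging
# the high-failure cluster as potential scanning or DDoS-related activity.
```

Clustering only groups similar samples coarsely; as the abstract notes, interpreting what each cluster means (benign, misbehaving, or attacked) still requires domain knowledge about the P2P protocol and the network.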
Group-oriented services are envisioned to be an important class of applications in the environment of wireless mesh networks. This project focuses on developing scalable, robust, and secure group communication protocols for wireless mesh networks. In particular, we will:
· Build a wireless mesh test-bed for experiments and protocol validation
· Develop distributed protocols for efficient group communication (multicast, broadcast) in wireless mesh networks
· Investigate and develop efficient and robust group key management protocols in wireless mesh networks
· Study the viability and limitations of cross-layer design as a new paradigm for building secure network services