This planning grant focuses on an embedded middleware development tool for sensor networks, based on a research prototype recently developed by this team at Purdue University. The project team proposes to expand its existing sensor capability by purchasing a larger sensor network test-bed to validate software development tools for run-time error monitoring and diagnosis; the test-bed will also enable an application case study of carbon dioxide monitoring in indoor circulation systems. Sensor nodes are typically highly vulnerable to hardware breakdowns when deployed in harsh conditions, and because of their ad hoc and dynamic nature, the communication protocols of networked embedded systems tend to be complex and error-prone. These networks also face components and communication links exposed to potential adversaries, and hence security threats such as node capture, denial of service, and malicious code injection; constrained storage, bandwidth, computing power, and energy; and network protocols that, even when correctly designed, may be implemented incorrectly due to programming errors. The goal of this project is to enable the broad Networked Embedded Systems (NES) research community to use the proposed tool for run-time error monitoring and diagnosis. The tool targets the problem that errors can occur in any of a sensor network's many components and must be detected quickly and effectively.
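To make the monitoring problem concrete, the sketch below shows one simple form of run-time error detection: a base station flags nodes that go silent or report implausible values. This is an illustration of the general idea, not the proposed tool; the node names, heartbeat timeout, and CO2 range are all hypothetical.

```python
import time

# Illustrative run-time monitor: a base station flags nodes that miss
# heartbeats or report implausible values (all thresholds hypothetical).

HEARTBEAT_TIMEOUT = 30.0            # seconds of silence before a node is suspect
VALID_CO2_RANGE = (300.0, 5000.0)   # plausible indoor CO2 readings, in ppm

class NodeMonitor:
    def __init__(self):
        self.last_seen = {}         # node_id -> timestamp of last report

    def report(self, node_id, co2_ppm, now=None):
        """Record a reading; return a diagnosis if the value looks faulty."""
        now = time.time() if now is None else now
        self.last_seen[node_id] = now
        lo, hi = VALID_CO2_RANGE
        if not lo <= co2_ppm <= hi:
            return f"node {node_id}: implausible reading {co2_ppm} ppm"
        return None

    def silent_nodes(self, now=None):
        """Nodes that stopped reporting: crash, link failure, or attack."""
        now = time.time() if now is None else now
        return [n for n, t in self.last_seen.items()
                if now - t > HEARTBEAT_TIMEOUT]

monitor = NodeMonitor()
monitor.report("n1", 450.0, now=0.0)
print(monitor.report("n2", 9999.0, now=0.0))  # flags the out-of-range reading
print(monitor.silent_nodes(now=60.0))         # ['n1', 'n2'] after 60 s of silence
```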
This collaborative project develops a testbed that enables research on understanding and analyzing vulnerabilities of Voice over IP (VoIP). It investigates Quality of Service (QoS) in VoIP in the face of possible attacks, along with identity management, spamming, Denial of Service (DoS) attacks, 911 emergency management, and high availability. Research results will be translated into engineering guidelines for preventing security breaches during the development and deployment of VoIP networks. The VoIP infrastructure can, in turn, be reused for other multimedia services such as video and instant messaging. Since VoIP is expected to reach critical mass during the next five years, many federal agencies are already putting migration strategies in place. Given that VoIP will have to interoperate with the conventional Public Switched Telephone Network (PSTN), this work anticipates the discovery of security holes and vulnerabilities during deployment and use. Vulnerabilities therefore need to be investigated proactively, and algorithms and techniques developed to secure VoIP against threats arising from interoperability problems, lack of standards, and attacks by hackers, script kiddies, spammers, corporate espionage, and terrorism. This multi-university project limits its scope to spam prevention, defense against DoS, securing 911 emergency services, and studying the impact of security on QoS.
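As one illustration of the kind of defense such a testbed could evaluate, the sketch below throttles SIP INVITE floods (a common VoIP DoS and call-spam vector) with a per-source token bucket. This is a generic textbook mechanism, not a technique from the project; the rate, burst size, and address are invented.

```python
from collections import defaultdict

# Generic per-source token bucket throttling SIP INVITE floods and bulk
# call spam; rates and the address below are invented for illustration.

RATE, BURST = 1.0, 5.0   # INVITEs allowed per second, and maximum burst

buckets = defaultdict(lambda: {"tokens": BURST, "stamp": 0.0})

def allow_invite(src_ip, now):
    b = buckets[src_ip]
    b["tokens"] = min(BURST, b["tokens"] + (now - b["stamp"]) * RATE)
    b["stamp"] = now
    if b["tokens"] >= 1.0:
        b["tokens"] -= 1.0
        return True          # forward the INVITE to the proxy
    return False             # drop it, or divert to a challenge

# A flood of 20 INVITEs in one second: only the initial burst gets through.
decisions = [allow_invite("198.51.100.7", i * 0.05) for i in range(20)]
print(decisions.count(True))   # 5
```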
Broader Impact: Spanning four universities, this collaborative project studies security threats and solutions proactively and disseminates the results to commercial and government organizations. The research results should advance the research frontier in security for next-generation networks and yield practical techniques for implementation in VoIP networks. Results, translated into engineering guidelines, should benefit developers. The experiments benefit from the geographically distributed sites, while the shared test plan stimulates collaboration between faculty and students. Workshops have been held with participation from the Department of Homeland Security, Department of Defense, FBI, NSA, NIST, FCC, industry consortiums such as the International Packet Communications Consortium (IPCC) and SIP.EDU in Internet2, VoPSF, VoIPSA, telecommunication service providers, vendors, and universities. This multi-university infrastructure provides an excellent opportunity for students to experience a real-life telecommunication network, and the reconfigurable testbed may be integrated into many courses, enabling new research and education in VoIP.
In the past six years, 44 states in the United States have embraced a new form of privacy and identity-theft regulation: mandatory disclosure of data breach information. Information disclosure regulation is a form of legislation considered effective for issues that span consumer protection and risk, where market mechanisms can work effectively to shape consumer and producer behavior and bring about allocative efficiency. Informational regulation is a new approach in the data privacy milieu, but it has precedent in environmental and health policy. While data breach disclosure policies are intended to affect consumer and producer behavior, little is known about their costs and benefits, or whether they in fact enhance social welfare in the area of identity theft and privacy. This project investigates the conditions under which mandatory information disclosure leads to (1) a reduction in identity theft, (2) enhanced privacy, and (3) ultimately, improved social welfare.
Commodity processors are highly programmable, but their need to support general-purpose computation limits both peak and sustained performance. Such observations have motivated the use of "accelerator" boards, which are co-processing elements that interface with the host server through a standard hardware bus such as PCI-Express but have their own computational engine and typically their own memory as well. Unlike the main processor, the accelerator does not support general applications; instead, its hardware and software are tuned for only specific types of computations. Accelerators can offload the most demanding parts of an application from the host processor, speeding up the desired computation using their specialized resources. This improved performance enables various forms of high-performance computing (HPC), but comes at a high cost in programmability. This project targets high-performance computing using PC-based clusters for cost and scalability, combined with accelerators for raw performance. The Purdue Everest project encompasses several related efforts in achieving high performance, low power consumption, and high programmability for highly heterogeneous systems. Acquiring a 30-node Gigabit Ethernet-based cluster of multicore PC-based workstations equipped with various accelerator boards (e.g., GPU, Cell, FPGA, Crypto) will enable research into effective and highly programmable use of accelerator-based clusters. Supporting multiple accelerators per node allows applications to use different accelerator boards in different phases. This cluster also allows fair apples-to-apples comparisons of different accelerators by keeping the other system factors constant. This research also investigates the use of multiple concurrency domains, with parallelism across the cluster, across the cores in a single node, among the host processors and accelerators in a single node, and across the processing elements of a given accelerator.
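The phase-based use of multiple accelerators can be illustrated with a minimal dispatch sketch. The accelerator names, suitability table, and phase labels below are assumptions chosen for illustration; they do not describe the Everest runtime itself.

```python
# Hypothetical phase-based accelerator dispatch on an Everest-style node:
# each application phase is routed to the board best suited to it, with a
# host-CPU fallback. Suitability entries are illustrative only.

ACCELERATORS = {
    "gpu":    {"suits": {"dense_linear_algebra", "stencil"}},
    "fpga":   {"suits": {"bit_manipulation", "streaming_filter"}},
    "crypto": {"suits": {"aes", "rsa"}},
}

def dispatch(phase_kind, available):
    """Pick an accelerator whose strengths match this phase, else the host CPU."""
    for name in available:
        if phase_kind in ACCELERATORS.get(name, {}).get("suits", set()):
            return name
    return "host_cpu"   # general-purpose fallback keeps the program correct

# One application, three phases, each offloaded to a different engine.
for phase in ["dense_linear_algebra", "aes", "parsing"]:
    print(phase, "->", dispatch(phase, available=["gpu", "fpga", "crypto"]))
# dense_linear_algebra -> gpu ; aes -> crypto ; parsing -> host_cpu
```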
This research program is motivated by the observation that today’s security problems are often caused by errors in policy specification and management, rather than failure in, for example, cryptographic primitives. Formal verification techniques have been successfully applied to the design and analysis of hardware, software, distributed algorithms, and cryptographic protocols. This project aims at achieving similar success in access control.
This project studies novel approaches to specifying properties of access control policies and to verifying them. Recent results include security analysis in trust management and role-based access control, an analysis of the relationship between separation-of-duty policies and role mutual-exclusion constraints, a novel algebra for specifying multi-user policies, and the introduction of resiliency policies, among others.
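As a toy illustration of the separation-of-duty versus mutual-exclusion question (not the project's formal machinery), the sketch below checks a role assignment against both kinds of constraint; the users, roles, and task are invented.

```python
# Illustrative check of a static mutual-exclusion (SMER) constraint and a
# simple two-step separation-of-duty (SoD) policy over one role assignment.

user_roles = {
    "alice": {"accounts_payable"},
    "bob":   {"purchasing"},
    "carol": {"purchasing", "accounts_payable"},   # violates both checks below
}

def violates_smer(assignment, r1, r2):
    """SMER: no single user may hold both r1 and r2."""
    return [u for u, roles in assignment.items() if {r1, r2} <= roles]

def violates_sod(assignment, step_roles):
    """SoD: no single user's roles should cover every step of a sensitive task."""
    return [u for u, roles in assignment.items()
            if all(r in roles for r in step_roles)]

print(violates_smer(user_roles, "purchasing", "accounts_payable"))  # ['carol']
print(violates_sod(user_roles, ["purchasing", "accounts_payable"])) # ['carol']
```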
Access control is one of the most fundamental security mechanisms in use today; however, the specification and management of access control policies remain a challenging problem, and today's administrators have no effective tools to assist them. This research addresses these needs by developing new verification techniques for access control policies, along with verification tools that help administrators specify, understand, and manage their policies. In particular, this research studies security analysis and insider threat assessment. Security analysis techniques answer the fundamental question of whether an access control system preserves essential security properties across changes to the authorization state. Insider threat assessment techniques determine what damage insiders can cause if they misuse the trust placed in them. While focusing primarily on the widely deployed Role-Based Access Control model, this project also aims at developing theoretical foundations and general techniques for access control policy verification. Insights from this research will apply to other, richer access control models and will improve the understanding of the power and limitations of access control.
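A minimal sketch of the security-analysis question follows: it enumerates the authorization states reachable through a set of permitted administrative actions and asks whether a safety property can ever be violated. The model, users, roles, and action set are hypothetical and far simpler than the schemes the project analyzes.

```python
from collections import deque

# Tiny security-analysis sketch: explore all authorization states reachable
# via permitted administrative actions; check a safety property in each.

MUTABLE = {("alice", "deployer"), ("mallory", "engineer")}  # grantable/revocable

def violates_safety(state):
    # Safety property: mallory must never hold the 'deployer' role.
    return ("mallory", "deployer") in state

def reachable_states(initial):
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        yield state
        for pair in MUTABLE:                  # each action grants or revokes
            nxt = state ^ frozenset([pair])   # one mutable assignment
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)

initial = frozenset({("alice", "engineer")})
print(any(violates_safety(s) for s in reachable_states(initial)))  # False: safe
```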
Since ad hoc networks rely on node cooperation to establish communication, malicious nodes can compromise the entire network, and the devastation is even worse when they collaborate. Collaborative attacks can cause more devastating impacts on wireless environments than single, uncoordinated attacks, as they combine the efforts of multiple attackers against the target victim. In this paper, we present the most important forms of attack, discuss possible collaborations among attackers, show how machine learning and signal processing techniques can be used to detect and defend against collaborative attacks in such environments, and discuss implementation issues.
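As a toy illustration of the machine-learning angle (not the paper's actual detectors), the sketch below flags nodes whose packet-forwarding behavior deviates from the network norm and treats multiple simultaneous outliers as a possible sign of collusion; the node names and traffic figures are invented.

```python
from statistics import mean, stdev

# Flag nodes whose forwarding ratio is an outlier; a group of simultaneous
# outliers hints at a coordinated (e.g., colluding black-hole) attack.

forwarding_ratio = {            # fraction of received packets each node relays
    "n1": 0.97, "n2": 0.95, "n3": 0.96, "n4": 0.94,
    "n5": 0.31, "n6": 0.28,     # two colluding nodes dropping traffic
}

values = list(forwarding_ratio.values())
mu, sigma = mean(values), stdev(values)

outliers = [n for n, v in forwarding_ratio.items() if (v - mu) / sigma < -1.0]
print(outliers)                                    # ['n5', 'n6']
if len(outliers) > 1:
    print("possible collaborative attack by:", outliers)
```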
Adversarial classification applications range from the annoyance of spam to the damage of computer hackers to the destruction of terrorists. In all of these cases, statistical classification techniques play an important role in distinguishing the legitimate from the destructive. These problems pose a significant new challenge not addressed in previous research: the behavior of a class controlled by the adversary may adapt to avoid detection, so future datasets and the training data are no longer drawn from the same population. We model the problem as a two-player game in which the adversary tries to maximize its return and the data miner tries to minimize the misclassification error. We examine the conditions under which an equilibrium exists, and provide a method to estimate the classifier's performance and the adversary's behavior at such an equilibrium point, that is, the players' equilibrium strategies. Such information is critical for constructing a classifier resilient to the adversary.
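A toy version of such a game can be sketched as follows, assuming one-dimensional Gaussian scores, a threshold classifier, and a linear evasion cost, none of which come from the paper: the sketch scans a grid for a pair of mutually best responses, i.e., a pure-strategy equilibrium.

```python
from math import erf, sqrt

# Toy game: the data miner picks a score threshold t; the adversary picks a
# mean shift d (at cost COST*d) to slip under it. A pure-strategy equilibrium
# is a pair where each choice is a best response to the other; we scan for one.

def cdf(x, mu):                            # Normal(mu, 1) CDF
    return 0.5 * (1 + erf((x - mu) / sqrt(2)))

GOOD_MU, BAD_MU, COST = 0.0, 4.0, 0.3
GRID = [i * 0.05 for i in range(161)]      # candidate values in [0, 8]

def adversary_best(t):                     # maximize evasion prob. minus cost
    return max(GRID, key=lambda d: cdf(t, BAD_MU - d) - COST * d)

def miner_best(d):                         # minimize false pos. + false neg.
    return min(GRID, key=lambda t: (1 - cdf(t, GOOD_MU)) + cdf(t, BAD_MU - d))

equilibria = [(t, adversary_best(t)) for t in GRID
              if miner_best(adversary_best(t)) == t]
print(equilibria)   # [(2.0, 0.0)]: at this cost, not adapting is optimal;
                    # at lower COST the scan can come up empty on the grid
```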
Traditional search engines like Google typically ignore the large amount of information hidden behind the search interfaces of many online text information sources. Federated text search provides one-stop access to this hidden information via a single interface that connects to the search engines of multiple text information sources. Existing federated search solutions focus only on content relevance and ignore a large amount of valuable information about users and information sources. This project includes novel research on: (1) Multiple-Type Resource Representation: model important properties of text information sources, such as search response time and search engine effectiveness; (2) Utility-Centric Resource Selection: satisfy a user's search criteria by considering multiple types of evidence, such as content relevance, search results from past queries, personal information needs, and search response time; (3) Effective and Efficient Results Merging: produce accurately merged, ranked results at little cost in acquiring the content of the returned documents; (4) System Adaptation by Results Analysis: analyze the search results of past queries to build more accurate federated search solutions; (5) System Development and Evaluation: build and test algorithms in research environments as well as in a new FedLemur system for a real-world application. The project advances the state of the art in federated search and will have broad impact on other applications such as peer-to-peer search. The project Web site (http://www.cs.purdue.edu/~lsi/Federated_Search_Career_Award.html) will be used for results dissemination. The education component will expand information retrieval instruction to address multi-disciplinary requirements, improve the education of the information technology workforce, and spark the interest of K-12 students in search technologies.
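As a minimal illustration of utility-centric resource selection (thrust 2), the sketch below scores each source on a weighted combination of content relevance, engine effectiveness, and response time, then selects the best sources to query. The source names, statistics, and weights are invented, not the project's actual model.

```python
# Hypothetical utility-centric source selection: combine content relevance
# with engine effectiveness and response time instead of relevance alone.

sources = {
    "medline":  {"relevance": 0.82, "effectiveness": 0.90, "resp_time_s": 0.4},
    "newswire": {"relevance": 0.78, "effectiveness": 0.60, "resp_time_s": 0.1},
    "forum":    {"relevance": 0.85, "effectiveness": 0.40, "resp_time_s": 2.5},
}

W_REL, W_EFF, W_TIME = 0.6, 0.3, 0.1     # illustrative utility weights

def utility(stats):
    time_score = 1.0 / (1.0 + stats["resp_time_s"])   # faster -> closer to 1
    return (W_REL * stats["relevance"]
            + W_EFF * stats["effectiveness"]
            + W_TIME * time_score)

# Select the top-2 sources to query for this request.
ranked = sorted(sources, key=lambda s: utility(sources[s]), reverse=True)
print(ranked[:2])   # ['medline', 'newswire']: 'forum' loses despite top relevance
```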