Hardware technologies have made steady progress in the miniaturization of sensors and computing/communication devices, driving a trend toward pervasive computing, in which computing devices interact directly with the physical world to monitor the natural environment, provide building safety, and so on. To make pervasive computing a reality, it is critical to secure the underlying networked embedded systems, because these systems may collect important environmental data upon which time-sensitive decisions depend.
Unfortunately, many networked embedded systems, e.g. wireless and wired sensor networks, RFID infrastructure, and wireless mesh networks, have components or links that are openly exposed to potential adversaries and hence face constant security threats such as node capture, denial of service, and intrusion, among others. To make matters worse, many networked embedded systems have far more constrained resources, such as storage, bandwidth, computing power, and energy, than computers used in non-embedded applications, e.g. desktop machines and servers. Sophisticated computer security schemes developed over the last few decades are often infeasible on networked embedded systems, at least in their original forms.
The research team of this project is developing a multi-grade monitoring scheme, supported by a new programming interface, in which low-cost monitoring activities are deployed during the systems' normal mode of operation to detect suspicious symptoms that are possibly, although not necessarily, caused by security threats. This effort will lead to much more effective, yet affordable, security monitoring and defense on networked embedded systems.
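As a hedged illustration only (the class, thresholds, and checks below are hypothetical and are not the project's programming interface), a two-grade monitor might run a cheap statistical check continuously and escalate to a heavier diagnostic only when a suspicious symptom appears:

```python
# Hypothetical two-grade monitor; names, thresholds, and checks are
# illustrative only, not the project's actual interface.
import statistics

class MultiGradeMonitor:
    def __init__(self, window=32, threshold=3.0):
        self.history = []            # recent readings used by the cheap grade
        self.window = window
        self.threshold = threshold

    def cheap_check(self, reading):
        """Low-cost grade, run in normal operation: flag readings that
        deviate strongly from the recent mean."""
        self.history = (self.history + [reading])[-self.window:]
        if len(self.history) < self.window:
            return False
        mean = statistics.mean(self.history)
        spread = statistics.pstdev(self.history) or 1e-9
        return abs(reading - mean) > self.threshold * spread

    def expensive_check(self, reading):
        """High-cost grade, run only on suspicious symptoms: e.g. re-sample
        neighbors, verify message authentication codes, audit the node."""
        print("escalating: deep inspection triggered by reading", reading)

    def observe(self, reading):
        if self.cheap_check(reading):
            self.expensive_check(reading)

monitor = MultiGradeMonitor()
for r in [20.1, 20.3, 19.9] * 20 + [95.0]:
    monitor.observe(r)
```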
Modeling complex real-world problems in the national and homeland security domains requires multi-disciplinary thinking and multiple analytical approaches to represent massive numbers of entities, their behaviors, and the emergent interactions among them. As such, the traditional approach of building comprehensive, requirements-driven simulations does not work for such problems. This project uses a society-based approach to integration built on a "shared but self-managed" paradigm, wherein autonomous members collaborate in a society while sharing only part of their knowledge. Component simulations self-assemble into realistic synthetic environments. The self-assembly of simulations is achieved through a domain-specific ontology, simulation specifications, and semantic matching between diverse members. New members can join an existing society, or an existing member can modify its interaction needs, without requiring the society to reconfigure. Using knowledge discovery, each member determines which aspects of the entities in the society to interact with. In this way, a society is automatically configured into a synthetic environment. Broader impacts of this project include: creating and deploying large-scale synthetic environments by bridging new and existing models and simulations from diverse disciplines; leveraging knowledge generated by the wider DDDAS community to create complex synthetic environments at scales and diversity much greater than the state of the art; facilitating rapid integration across diverse systems and paradigms, such as discrete-event simulations with agent-based simulations, in a semantically consistent manner; and developing open-source technology that will benefit the community at large, with broader application to simulation-based engineering, education, and decision analytics.
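As a rough, hypothetical sketch of the semantic-matching step (the ontology concepts, member names, and data structures are invented here and are not drawn from the project), a joining member's needs can be matched against existing members' capabilities through a shared ontology:

```python
# Hypothetical sketch of self-assembly by ontology-based matching.
# Member names, ontology concepts, and structures are illustrative only.
ontology_synonyms = {
    "vehicle_position": {"vehicle_position", "platform_location"},
    "threat_level": {"threat_level", "alert_state"},
}

members = {
    "traffic_sim":  {"provides": {"vehicle_position"}, "needs": set()},
    "sensor_model": {"provides": {"threat_level"}, "needs": {"platform_location"}},
}

def canonical(term):
    """Map a member-specific term to its canonical concept in the shared ontology."""
    for concept, synonyms in ontology_synonyms.items():
        if term in synonyms:
            return concept
    return term

def join_society(new_name, new_member, society):
    """Wire a joining member to providers of the concepts it needs,
    without reconfiguring the rest of the society."""
    links = []
    for need in sorted(new_member["needs"]):
        concept = canonical(need)
        for name, member in society.items():
            if concept in {canonical(t) for t in member["provides"]}:
                links.append((name, concept, new_name))
    society[new_name] = new_member
    return links

print(join_society("course_of_action_sim",
                   {"provides": set(), "needs": {"vehicle_position", "alert_state"}},
                   members))
```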
This project will develop a comprehensive security framework using content-based and context-aware access control models for XML-based applications in distributed, heterogeneous, multi-enterprise environments. Such applications include electronic commerce, finance and banking, manufacturing, corporate databases, health care, and other on-line services and businesses. For these applications, information access may need to be restricted because of the sensitivity, importance, or relevance of the content, or because of the time, location, and other contextual information obtained when access requests are made. The proposed framework will be built upon role-based access control (RBAC) models. The project will pursue the following tasks: development of a content- and context-based generalized temporal RBAC model (CC-GTRBAC) for XML documents, together with an extension of the XML language for the proposed model; use of the extended language to develop a security model that protects XML document sources at multiple levels, including the conceptual, XML schema, and XML instance levels; extension of CC-GTRBAC to support a secure multi-enterprise environment for distributed XML documents; and development of an experimental prototype of a distributed XML environment to assess the efficacy and viability of this research.
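The following sketch is only an illustration of a content- and context-aware role check in the spirit of a temporal RBAC model; the roles, time windows, locations, and XML paths are hypothetical and are not taken from the CC-GTRBAC specification:

```python
# Hypothetical sketch: a role may only be activated within a time window and
# from an allowed location, and each role grants access to certain XML paths.
from datetime import time

POLICY = {
    "nurse": {
        "hours": (time(7, 0), time(19, 0)),        # temporal constraint
        "locations": {"ward", "clinic"},            # contextual constraint
        "paths": {"/record/vitals", "/record/medications"},
    },
    "billing_clerk": {
        "hours": (time(9, 0), time(17, 0)),
        "locations": {"office"},
        "paths": {"/record/insurance"},
    },
}

def can_access(role, xml_path, now, location):
    """Grant access only if the role is active in this context and the
    requested XML node falls under a path the role is authorized for."""
    rule = POLICY.get(role)
    if rule is None:
        return False
    start, end = rule["hours"]
    if not (start <= now <= end) or location not in rule["locations"]:
        return False
    return any(xml_path.startswith(p) for p in rule["paths"])

print(can_access("nurse", "/record/vitals/bp", time(10, 30), "ward"))           # True
print(can_access("nurse", "/record/insurance", time(10, 30), "ward"))           # False
print(can_access("billing_clerk", "/record/insurance", time(20, 0), "office"))  # False (outside hours)
```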
This project applies recent techniques in transactional computing to the problem of preventing unwanted declassification of secure information. Regulating the nature and amount of information that is declassified by a complex software system is difficult; even when leaks are identified, suitably repairing the computation is usually not possible. The project develops ideas inspired by language-centric transactional computing to support information flow security by encapsulating critical regions that (a) cannot be analyzed effectively statically or (b) declassify some set of confidential data. Isolation and atomicity properties of transactional regions ensure the approach is safe even in a multi-threaded environment. The technical issues associated with controlled declassification are examined from an entirely new perspective: rather than attempting to statically prevent any leaks from occurring, this research explores approaches that dynamically monitor when leaks occur, transparently reverting program state to an earlier safe context when leaks are identified. This security model encapsulates untrusted operations and library functions within monitored regions, allowing only information explicitly marked as declassified to escape the region's scope. Because regions run in isolation, they cannot be influenced by non-monitored code, nor can they influence its outcome. The monitoring infrastructure leverages transactional mechanisms to track memory use and to restore program state when declassification violations are detected. The broader impacts are significant: information flow and declassification are critical problems for cyber-infrastructure, homeland security, and commercial interests, and techniques that provide scalable, transparent, and effective solutions to this problem are of immediate benefit to current government and business initiatives.
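The sketch below illustrates the rollback idea with a simple in-memory snapshot; it is not the project's transactional runtime, and the snapshot mechanism, policy representation, and names are invented for illustration:

```python
# Hypothetical sketch: run an untrusted region against a copy of the state,
# and commit its effects only if everything escaping the region is explicitly
# marked as declassified; otherwise revert to the earlier safe state.
import copy

def run_monitored(state, region, declassified_keys):
    """Execute `region` on an isolated copy of `state` (isolation), then
    atomically commit or discard its writes (atomicity)."""
    snapshot = copy.deepcopy(state)       # safe context to revert to
    working = copy.deepcopy(state)
    output = region(working)              # untrusted code sees only the copy
    leaked = set(output) - set(declassified_keys)
    if leaked:
        # a leak detected at run time: discard the region's effects
        return snapshot, None
    return working, output

def untrusted(db):
    db["audit_log"] = "region ran"
    return {"avg_salary": sum(db["salaries"]) / len(db["salaries"]),
            "raw_salaries": db["salaries"]}    # not declassified -> leak

state = {"salaries": [50, 70, 90], "audit_log": ""}
state, out = run_monitored(state, untrusted, declassified_keys={"avg_salary"})
print(out, state["audit_log"])   # the leak forced a rollback; writes discarded
```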
Vulnerabilities in software, especially those that are remotely exploitable, are the root cause of wave after wave of security attacks, such as botnets, zero-day worms, non-control-data corruptions, and even server break-ins. Thus, analyzing and exposing software vulnerabilities has become one of the most active research areas today. In the past, software vulnerability detection approaches could be divided into two categories: static and dynamic. Static analysis produces many false positives. Dynamic approaches monitor program execution and detect attempts to attack a software system; these techniques incur non-trivial runtime overhead and cannot detect vulnerabilities that are not under attack. Dynamic test generation has the potential to generate exploit inputs that confirm vulnerabilities, but most existing dynamic test generation techniques suffer from scalability problems. In this project, we develop a practical dynamic approach that is intended to be used in combination with existing static tools. We observe that although the suspect pool produced by existing static tools has a high false positive rate, it is nonetheless much smaller than the whole population. Therefore, we use existing static tools as the front end to generate a set of suspects. Our technique then tries to generate exploits for these suspects; a suspect is convicted only when an exploit can be acquired as evidence. Such exploits significantly help regular users and administrators evaluate the robustness of their software and convince vendors to debug and patch. The key idea is to use data lineage tracing to identify the set of input values relevant to the execution of a vulnerable code location. Exploit-specific mutations are applied to the relevant input values in order to trigger an attack, for example, changing an integer value to MAXUINT to induce an integer overflow. Since these inputs are usually a very small subset of the whole input sequence, mutating the whole input, as in random test generation, is avoided. Our technique does not rely on symbolic execution and constraint solving and thus can easily handle long executions. If no execution that covers a vulnerable code location can be found, our technique also allows user interaction to mutate an input so that the execution driven by the mutated input covers the vulnerable code location. Our technique addresses a wide range of vulnerabilities, including buffer overflow, integer overflow, and format string vulnerabilities. Our dynamic analysis works at the binary level, which greatly helps users who do not have access to the source code but are concerned about software vulnerabilities. We have developed a data lineage tracing prototype that traces the set of inputs relevant to a particular execution point; the lineage information is used to guide our evidence generation procedure. The challenge of efficiency is overcome by using Reduced Ordered Binary Decision Diagrams (ROBDDs). Our initial experience with a set of known and unknown real vulnerabilities showed that our technique can very quickly generate exploit inputs.
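A small, hypothetical sketch of the lineage-guided mutation step (the input format, lineage map, and stand-in target program are invented for illustration): only the bytes that lineage tracing links to the suspect integer are mutated, e.g. to MAXUINT, and the program is re-run to check whether the overflow is reached:

```python
# Hypothetical sketch: exploit-specific mutation guided by lineage information.
# `lineage` maps a suspect code location to the input byte range that reaches it.
import struct

MAXUINT = 0xFFFFFFFF

def mutate_for_integer_overflow(input_bytes, byte_range):
    """Overwrite only the lineage-relevant bytes with MAXUINT, leaving the
    rest of the (possibly very long) input untouched."""
    start, end = byte_range
    mutated = bytearray(input_bytes)
    mutated[start:end] = struct.pack("<I", MAXUINT)[: end - start]
    return bytes(mutated)

def vulnerable_parse(data):
    """Stand-in for the program under test: a 4-byte little-endian length
    field followed by a payload."""
    length = struct.unpack_from("<I", data, 0)[0]
    # In C, `length + 16` would wrap around for a huge length and bypass a
    # bounds check; we flag that condition to mark the suspect location.
    if length + 16 > 0xFFFFFFFF:
        raise OverflowError("integer overflow reached at suspect location")
    return data[4:4 + length]

original = struct.pack("<I", 8) + b"payload!" + b"\x00" * 100
lineage = {"parse.c:42": (0, 4)}          # suspect location -> input byte range

exploit = mutate_for_integer_overflow(original, lineage["parse.c:42"])
try:
    vulnerable_parse(exploit)
except OverflowError as e:
    print("suspect convicted by exploit input:", e)
```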
Created by the NW3C and CERT, and hosted by the Purdue University College of Technology, these workshops bring together members of the business, information technology, and law enforcement communities to initiate dialogue on computer security issues. Working together, participants identify the barriers to effective cooperation and investigate the ways to overcome those barriers.
In groups, participants define computer-related incidents, learn appropriate levels of response, and share effective solutions for dealing with computer incidents and crimes. Starting in single-community teams (i.e., business, information technology, law enforcement), they analyze sample incidents. Then the teams re-form into cross-community teams to simulate a task force and make recommendations about how to proceed. Discussion documents guide attendees through the process, with checklists for each professional role.
The major objective of this research is to create a low-cost, portable usability engineering laboratory that can be used to rapidly evaluate the usability, safety, and security of medical information systems such as electronic medical records and e-prescribing. This proposal describes how simulations of clinical activity (involving human subjects carrying out clinical tasks) and mathematical computer-based simulations can be linked to forecast the impact of interface design features on medical errors and security breaches in healthcare information technology (HIT). There are two phases to the research. In Phase 1, a clinical simulation will be conducted in which physicians use a hand-held prescription writing application to enter and record medications administered during a simulated clinical interaction. Data arising from this clinical simulation will be collected and analyzed using qualitative approaches to assess the relationship between aspects of interface design (i.e., usability problems) and subjects' medication errors and security breaches. In Phase 2, the base rates for error associated with specific types of usability problems (from Phase 1) will form the input to a computer-based mathematical simulation. This work is unique in health care in that it directly connects two distinct forms of simulation: (1) clinical simulations of user behavior and (2) mathematical simulation to forecast error rates over time, based on parameters obtained from an empirical study involving the use of clinical simulation. The research will examine the impact of aspects of interface design upon medical error rates over a period of weeks and months.
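As a toy illustration of the Phase 2 idea (all base rates, task volumes, and horizons below are invented, not Phase 1 results), per-usability-problem error base rates can be fed into a simple stochastic simulation to forecast expected error counts over weeks of use:

```python
# Hypothetical sketch: forecasting medication-error counts from per-problem
# base rates observed in a clinical simulation. All numbers are illustrative.
import random

random.seed(1)

# base rate of an error per prescription, for each usability problem observed
BASE_RATES = {"ambiguous_dose_field": 0.010, "confusing_drug_picker": 0.004}
PRESCRIPTIONS_PER_WEEK = 300
WEEKS = 12
TRIALS = 1000

def simulate_once():
    errors = 0
    for _ in range(WEEKS * PRESCRIPTIONS_PER_WEEK):
        for rate in BASE_RATES.values():
            if random.random() < rate:
                errors += 1
    return errors

runs = [simulate_once() for _ in range(TRIALS)]
mean = sum(runs) / TRIALS
print(f"expected errors over {WEEKS} weeks: {mean:.1f}")
print("95th percentile:", sorted(runs)[int(0.95 * TRIALS)])
```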
We study the minimum period, modulo a prime, of the Bell numbers, which arise in combinatorics. It is shown that this period is probably always equal to its maximum possible value. Interesting new divisibility theorems are proved for possible prime divisors of the maximum possible period. The conclusion is that these numbers are not suitable for use as RSA public keys.
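A minimal sketch, not part of the project itself, illustrating what the minimum period means: it computes Bell numbers modulo p with the Bell triangle and finds the smallest period by brute force. The quantity N_p = (p^p - 1)/(p - 1) used below is the known bound that the period divides (the "maximum possible value" referred to above), and the observed periods for p = 2, 3, 5 match it.

```python
# A sketch, not the project's method: Bell numbers modulo p via the Bell
# triangle, and a brute-force search for the minimum period. The period is
# known to divide N_p = (p**p - 1) // (p - 1), so only divisors of N_p
# need to be checked.

def bell_mod(n, p):
    """Return the first n Bell numbers modulo p using the Bell triangle."""
    bells, row = [1], [1]
    for _ in range(n - 1):
        new_row = [row[-1]]
        for x in row:
            new_row.append((new_row[-1] + x) % p)
        bells.append(new_row[0])
        row = new_row
    return bells

def min_period(p):
    """Smallest d with B(n+d) = B(n) (mod p) for all n. Because B(n) mod p
    satisfies an order-p linear recurrence (Touchard's congruence
    B(n+p) = B(n+1) + B(n) mod p), matching p consecutive terms certifies d."""
    n_max = (p ** p - 1) // (p - 1)
    seq = bell_mod(n_max + p, p)
    for d in range(1, n_max + 1):
        if n_max % d == 0 and seq[d:d + p] == seq[:p]:
            return d
    return n_max

for p in (2, 3, 5):
    print(p, min_period(p))   # 3, 13, 781 -- each equal to N_p
```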