We identify the trust management problem as a distinct and important component of security in network services. Aspects of the trust management problem include formulating security policies and security credentials, determining whether particular sets of credentials satisfy the relevant policies, and deferring trust to third parties. Existing systems that support security in networked applications, including X.509 and PGP, address only narrow subsets of the overall trust management problem and often do so in a manner that is appropriate to only one application. This paper presents a comprehensive approach to trust management, based on a simple language for specifying trusted actions and trust relationships. It also describes a prototype implementation of a new trust management system, called PolicyMaker, that will facilitate the development of security features in a wide range of network services.
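The central question such a trust management engine answers is whether a given set of credentials proves that a requested action complies with local policy. The following Python sketch is a minimal, hypothetical illustration of that compliance check; the data structures and predicate names are ours and are not PolicyMaker's actual assertion language.

```python
# Minimal sketch of a trust-management compliance check (hypothetical;
# not PolicyMaker's actual assertion language). A policy is a predicate
# over the requested action and the set of keys that vouch for it.

def signs_purchase_under(limit):
    """Policy: accept purchase actions under `limit` if signed by a trusted key."""
    def policy(action, signer_keys, trusted_keys):
        return (action["type"] == "purchase"
                and action["amount"] < limit
                and any(k in trusted_keys for k in signer_keys))
    return policy

def compliance_check(action, credentials, policy, trusted_keys):
    """Return True if the credentials (key -> signed action) satisfy the policy."""
    signer_keys = {key for key, signed in credentials.items() if signed == action}
    return policy(action, signer_keys, trusted_keys)

action = {"type": "purchase", "amount": 500}
credentials = {"key-alice": action}            # Alice's key vouches for the action
policy = signs_purchase_under(1000)
print(compliance_check(action, credentials, policy, trusted_keys={"key-alice"}))  # True
```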
Lack of widely available Internet security has discouraged some commercial users. The author describes efforts to make cryptographic security more widely available and looks at efforts to secure the Internet infrastructure.
Test coverage criteria define a set of entities of a program flowgraph and require that every entity is covered by some test. In this paper we first identify E(c), the set of entities to be covered according to a criterion c, for a family of widely used test coverage criteria. We then present a method to derive a minimum set of entities, called a spanning set, such that a set of test paths covering the entities in this set covers every entity in E(c). We provide a generalised algorithm, which is parametrized by the coverage criterion. We suggest several useful applications of spanning sets of entities to testing. In particular, they help to reduce and to estimate the number of tests needed to satisfy test coverage criteria.
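For a toy flowgraph, the relationship between a spanning set and the full set E(c) can be checked by brute force, as in the hedged Python sketch below; the paper's own derivation is based on subsumption relations between entities rather than exhaustive search.

```python
# Brute-force sketch of finding a spanning set for a toy coverage criterion
# (illustrative only). Entities here are flowgraph nodes, and each candidate
# test path is represented by the set of entities it covers.
from itertools import combinations

paths = [
    {"n1", "n2", "n4", "n5"},   # path through the true branch
    {"n1", "n3", "n4", "n5"},   # path through the false branch
]
entities = set().union(*paths)   # E(c) for node coverage on this toy flowgraph

def covers_all(path_subset, targets):
    covered = set().union(*path_subset) if path_subset else set()
    return targets <= covered

def is_spanning(candidate):
    # Every set of paths that covers `candidate` must also cover all entities.
    for r in range(1, len(paths) + 1):
        for chosen in combinations(paths, r):
            if covers_all(chosen, candidate) and not covers_all(chosen, entities):
                return False
    return True

spanning = min(
    (set(s) for r in range(1, len(entities) + 1)
     for s in combinations(sorted(entities), r) if is_spanning(set(s))),
    key=len,
)
print(spanning)   # {'n2', 'n3'}: covering both branch nodes forces full node coverage
```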
As a complex software system evolves, its implementation tends to diverge from the intended or documented design models. Such undesirable deviation makes the system hard to understand, modify, and maintain. This paper presents a hybrid computer-assisted approach for confirming that the implementation of a system maintains its expected design models and rules. Our approach closely integrates logic-based static analysis and dynamic visualization, providing multiple code views and perspectives. We show that the hybrid technique helps determine design-implementation congruence at various levels of abstraction: concrete rules like coding guidelines, architectural models like design patterns or connectors, and subjective design principles like low coupling and high cohesion. The utility of our approach has been demonstrated in the development of µChoices, a new multimedia operating system which inherits many design decisions and guidelines learned from experience in the construction and maintenance of its predecessor, Choices.
The study of providing security in computer networks is a rapidly growing area of interest, because the network is the medium over which most attacks or intrusions on computer systems are launched. One approach to this problem is the "intrusion-detection" concept, whose basic premise is that abandoning the existing, huge infrastructure of possibly insecure computer and network systems is impossible, and that replacing them with totally secure systems may not be feasible or cost effective. Previous work on intrusion detection systems was performed on stand-alone hosts and on a broadcast local area network (LAN) environment. The focus of our present research is to extend our network intrusion-detection concept from the LAN environment to arbitrarily wider areas, with the network topology being arbitrary as well. The generalized distributed environment is heterogeneous, i.e., the network nodes can be hosts or servers from different vendors, and some of them can be LAN managers, such as the network security monitor (NSM) of our previous work. The proposed architecture for this distributed intrusion-detection system consists of the following components: a host manager (a monitoring process or collection of processes running in the background) on each host; a LAN manager for monitoring each LAN in the system; and a central manager which receives reports from the various host and LAN managers, processes and correlates these reports, and detects intrusions.
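The manager hierarchy described above can be pictured with a small structural sketch; the Python classes and the toy correlation rule below are hypothetical and stand in for the system's actual protocols and report formats.

```python
# Structural sketch of the three-level manager hierarchy (hypothetical class
# and method names). Host and LAN managers forward reports upward; the
# central manager correlates them.

class CentralManager:
    def __init__(self):
        self.reports = []

    def receive(self, report):
        self.reports.append(report)
        self.correlate()

    def correlate(self):
        # Toy correlation rule: flag a source seen in reports from more
        # than one monitored domain.
        sources = {}
        for r in self.reports:
            sources.setdefault(r["source"], set()).add(r["domain"])
        for src, domains in sources.items():
            if len(domains) > 1:
                print(f"possible intrusion: {src} active in {sorted(domains)}")

class LANManager:
    def __init__(self, name, central):
        self.name, self.central = name, central

    def report(self, source, event):
        self.central.receive({"domain": self.name, "source": source, "event": event})

class HostManager(LANManager):
    """A host-level monitor; here it reports through the same interface."""
    pass

central = CentralManager()
lan_a, host_b = LANManager("lan-A", central), HostManager("host-B", central)
lan_a.report("10.0.0.9", "failed logins")
host_b.report("10.0.0.9", "suspicious ftp activity")   # triggers the correlation rule
```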
This paper presents the design of SAINT, a tool being developed at the National Autonomous University of Mexico that will allow integrated analysis of information gathered from various sources, such as security tools and system logs. By simulating events occurring in the system and collected from the different sources, SAINT will allow detection, or even prevention, of problems that might otherwise go undetected due to a lack of information about them in any single place. SAINT's modular and extensible architecture makes it feasible to add new modules for processing new data types, detecting new kinds of problems, or presenting the results in different formats.
Haystack is a prototype system for the detection of intrusions in multi-user Air Force computer systems. Haystack reduces voluminous system audit trails to short summaries of user behaviors, anomalous events, and security incidents. It is designed to help the System Security Officer (SSO) detect and investigate intrusions, particularly by insiders (authorized users). Haystack's operation is based on behavioral constraints imposed by security policies and on models of typical behavior for user groups and individual users.
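As a rough illustration of profile-based anomaly detection (not Haystack's actual statistical tests), a session's features can be compared against a group's historical profile and flagged when the deviation is large; the feature names and threshold below are hypothetical.

```python
# Minimal illustration of anomaly scoring against a per-group behaviour
# profile (hypothetical features and threshold). A session whose features
# deviate strongly from the group's historical mean is flagged for review.

group_profile = {                      # historical mean and std-dev per feature
    "files_read":    (40.0, 10.0),
    "failed_logins": (0.2, 0.5),
}

def anomaly_score(session):
    score = 0.0
    for feature, value in session.items():
        mean, std = group_profile[feature]
        score += abs(value - mean) / std      # per-feature deviation in std-devs
    return score

session = {"files_read": 85, "failed_logins": 4}
score = anomaly_score(session)
print(f"score={score:.1f}", "flag for SSO review" if score > 6 else "normal")
```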
Flaws due to race conditions, in which the binding of a name to an object changes between repeated references, occur in many programs. We examine one type of this flaw in the UNIX operating system and describe a semantic method for detecting possible instances of this problem. We present the results of one such analysis, in which a previously undiscovered race condition flaw was found.
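A well-known instance of this flaw is the check-then-use sequence on a file name, rendered below in Python for illustration; the example reproduces the vulnerable pattern itself, not the paper's detection method.

```python
# Classic check-then-use race on a file name: the binding of `path` to a
# file object can change between the access check and the open, e.g. if an
# attacker replaces the file with a symlink to a protected file in between.
# (Illustrative Python rendering of the UNIX access(2)/open(2) pattern; this
# is the kind of flaw being analyzed, not recommended code.)
import os

path = "/tmp/report.txt"

if os.access(path, os.R_OK):        # check: name -> object binding #1
    # ... window in which another process may re-bind the name ...
    with open(path) as f:           # use: name -> object binding #2
        data = f.read()
```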
CSM is designed to handle intrusions rather than simply detecting and reporting them, resulting in a comprehensive approach to individual system and network intrusions. Tests of the initial prototype have shown the cooperative methodology to perform favorably.
Software Test Environments (STEs) provide a means of automating the test process and integrating testing tools to support required testing capabilities across the test process. Specifically, STEs may support test planning, test management, test measurement, test failure analysis, test development, and test execution. The software architecture of an STE describes the allocation of the environment's functions to specific implementation structures. An STE's architecture can facilitate or impede modifications such as changes to processing algorithms, data representation, or functionality. Performance and reusability are also subject to architecturally imposed constraints. Evaluation of an STE's architecture can provide insight into the modifiability, extensibility, portability, and reusability of the STE. This paper proposes a reference architecture for STEs. Its analytical value is demonstrated by using SAAM (Software Architectural Analysis Method) to compare three software test environments: PROTest II (Prolog Test Environment, Version II), TAOS (Testing with Analysis and Oracle Support), and CITE (CONVEX Integrated Test Environment).
Scenario diagrams are a well-known notation for visualizing the message flow in object-oriented systems. Traditionally, they are used in the analysis and design phases of software development to prototype the expected behavior of a system. We show how they can also be used in reverse, for understanding and browsing existing software. We have implemented a tool called Scene that automatically produces scenario diagrams for existing object-oriented systems. The tool makes extensive use of an active text framework that provides the basis for various hypertext-like facilities. It allows the user to browse not only scenarios but also various kinds of associated documents, such as source code (method definitions and calls), class interfaces, class diagrams, and call matrices.
The Q system provides interoperability support for multilingual, heterogeneous, component-based software systems. Initial development of Q began in 1988 and was driven by the very pragmatic need for a communication mechanism between a client program written in Ada and a server written in C. The initial design was driven by language features present in C but not in Ada, or vice versa. In time our needs and aspirations grew, and Q evolved to support other languages, such as C++, Lisp, Prolog, Java, and Tcl. As a result of pervasive use by the Arcadia SDE research project, usage levels and modes of the Q system grew, and so more emphasis was placed upon portability, reliability, and performance. In that context we identified specific ways in which programming language support systems can directly impede effective interoperability. This necessitated extensive changes to both our conceptual model and our implementation of the Q system. We also discovered the need to support modes of interoperability far more complex than the usual client-server mode. The continued evolution of Q has allowed the architecture of Arcadia software to become highly distributed and component-based, exploiting components written in a variety of languages. In addition to becoming an Arcadia project mainstay, Q has also been made available to over 100 other sites, and it is currently in use in a variety of other projects. This paper summarizes key points that have been learned from this considerable base of experience.
Directed testing methods, such as functional or structural testing, have been criticized for a lack of quantifiable results. Representative testing permits reliability modeling, which provides the desired quantification. Over time, however, representative testing becomes inherently less effective as a means of improving the actual quality of the software under test. A model is presented which permits representative and directed testing to be used in conjunction. Representative testing can be used early, when the rate of fault revelation is high. Later results from directed testing can be used to update the reliability estimates conventionally associated with representative methods. The key to this combination is shifting the observed random variable from the interfailure time to a post-mortem analysis of the debugged faults, using order statistics to combine the observed failure rates of faults no matter how those faults were detected.
Data flow testing is a well-known technique, and it has been shown to be better than the commercially used branch testing. The problem with data flow testing is that, except for scalar variables, only approximate information is available. This paper presents an algorithm that determines the definition-use pairs for arrays precisely within a large domain. Numerous methods address the array data flow problem; a precise method, however, requires at least one real solution of the problem for which the necessary program path is executed. In contrast to former precise methods, we avoid negation in the formulae, which seems to be the biggest problem in all previous methods.
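A small hypothetical fragment shows the kind of index reasoning such an analysis must perform: a purely name-based match would report a definition-use pair on the array even though the defined and used elements never overlap.

```python
# Hypothetical fragment illustrating why array data flow needs index
# reasoning: a textual def-use match on `a` would pair the assignment with
# the use, yet the definition writes only even indices while the use reads
# only odd ones, so no real definition-use pair exists between them.
n = 8
a = [0] * (2 * n + 2)

for i in range(n):
    a[2 * i] = i           # defines a[0], a[2], ..., a[2n-2]

total = 0
for j in range(n):
    total += a[2 * j + 1]  # uses a[1], a[3], ..., a[2n-1]: never the values defined above
```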
Data flow testing relies on static analysis for computing the def-use pairs that serve as the test case requirements for a program. When testing large programs, the individual procedures are first tested in isolation during unit testing. Integration testing is then performed to specifically test the procedure interfaces. The procedures in a program are integrated and tested in several steps. Since each integration step requires data flow analysis to determine the new test requirements, the accumulated cost of repeatedly analyzing a program can contribute considerably to the overhead of testing. Data flow analysis is typically computed using an exhaustive approach or by using incremental data flow updates. This paper presents a new and more efficient approach to data flow integration testing that is based on demand-driven analysis. We developed and implemented a demand-driven analyzer and experimentally compared its performance with that of (i) a traditional exhaustive analyzer and (ii) an incremental analyzer. Our experiments show that demand-driven analysis is faster than exhaustive analysis by up to a factor of 25. The demand-driven analyzer also outperforms the incremental analyzer by up to a factor of 5.
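The demand-driven idea can be sketched on a toy control-flow graph: instead of computing all def-use pairs exhaustively, the analysis starts from a single use and walks backwards only as far as the nearest definitions. The Python sketch below is an illustration of that strategy, not the paper's analyzer.

```python
# Minimal sketch of demand-driven reaching-definitions for one use:
# walk the CFG backwards from the use, stopping each branch of the search
# as soon as a definition of the variable is found.

# Toy CFG: node -> (predecessors, variable defined at node or None)
cfg = {
    "entry": ([],           None),
    "d1":    (["entry"],    "x"),     # x := ...
    "d2":    (["entry"],    "x"),     # x := ...
    "join":  (["d1", "d2"], None),
    "use":   (["join"],     None),    # ... := x
}

def reaching_defs(use_node, var):
    """Definitions of `var` that reach `use_node`, computed on demand."""
    found, visited = set(), set()
    worklist = list(cfg[use_node][0])          # start from the use's predecessors
    while worklist:
        node = worklist.pop()
        if node in visited:
            continue
        visited.add(node)
        preds, defined = cfg[node]
        if defined == var:
            found.add(node)                    # definition stops the backward search here
        else:
            worklist.extend(preds)             # keep walking backwards
    return found

print(reaching_defs("use", "x"))               # {'d1', 'd2'}: two def-use pairs for x
```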