Assertions are recognized as a powerful tool for automatic run-time detection of software errors. However, existing testing methods do not use assertions to generate test cases. In this paper we present a novel approach to automated test data generation in which assertions are used to generate test cases. The goal of this approach is to identify test cases on which an assertion is violated; if such a test is found, it uncovers an error in the program. The problem of finding program input on which an assertion is violated may be reduced to the problem of finding program input on which a selected statement is executed. As a result, existing methods of automated test data generation for white-box testing may be used to generate tests that violate assertions. Experiments have shown that this approach may significantly improve the chances of finding software errors as compared to existing methods of test generation.
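A minimal sketch of the reduction described above, assuming a toy program and a naive random-search generator (both are illustrative stand-ins, not the paper's tooling): the assertion is rewritten as a conditional whose body is the selected target statement, so any white-box test generator that reaches that statement has produced an assertion-violating, fault-revealing input.

    import random

    def program_under_test(x, y):
        # Toy example: the original code would contain `assert result >= 0`.
        result = x * x - y
        if not (result >= 0):          # negated assertion condition
            return ("TARGET", x, y)    # target statement: executing it means the assertion is violated
        return ("OK", result)

    def search_for_violation(trials=10000, lo=-100, hi=100):
        # Stand-in for a real white-box generator: random search for input that reaches the target.
        for _ in range(trials):
            x, y = random.randint(lo, hi), random.randint(lo, hi)
            if program_under_test(x, y)[0] == "TARGET":
                return (x, y)          # assertion-violating test case found
        return None

    print("violating input:", search_for_violation())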
Size and code coverage are important attributes of a set of tests. When a program P is executed on elements of the test set T, we can observe the fault detecting capability of T for P. We can also observe the degree to which T induces code coverage on P according to some coverage criterion. We would like to know whether it is the size of T or the coverage of T on P which determines the fault detection effectiveness of T for P. To address this issue we ask the following question: While keeping coverage constant, what is the effect on fault detection of reducing the size of a test set? We report results from an empirical study using the block and all-uses criteria as the coverage measures.
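The question of holding coverage constant while shrinking the suite can be made concrete with a coverage-preserving reduction. The following sketch (the data layout and the greedy strategy are our assumptions, not necessarily the study's procedure) selects a subset of tests with the same coverage as the full suite, so the fault detection of the two suites can then be compared.

    def reduce_suite(coverage):
        # coverage: dict mapping test id -> set of covered items (blocks or def-use pairs)
        required = set().union(*coverage.values())     # everything the full suite covers
        remaining = dict(coverage)
        reduced, covered = [], set()
        while covered != required:
            # pick the test that adds the most not-yet-covered items
            best = max(remaining, key=lambda t: len(remaining[t] - covered))
            reduced.append(best)
            covered |= remaining.pop(best)
        return reduced

    suite = {"t1": {1, 2, 3}, "t2": {2, 3}, "t3": {3, 4}, "t4": {4, 5}}
    print(reduce_suite(suite))    # ['t1', 't4'] -- same coverage, half the tests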
Since the inception of the SSE-CMM program in 1993, there have been some misconceptions within the computer security and evaluations communities regarding its intended purpose. Evaluators in particular have expressed strong resistance to this effort because of the perception that the SSE-CMM is intended to replace evaluated assurance with developmental assurance. That has not been and never will be the case. The SSE-CMM efforts can greatly enhance government, corporate, developer, user, and integrator knowledge of security in general. As such, the efforts of the SSE-CMM development team are intended to provide significantly improved input to system developers (internal assessments) and to higher-level assurance activities such as evaluations, certification, and accreditation (third-party assessments). To best address the needs of our customers, the SSE-CMM and other assurance efforts must grow to complement each other. It will take focused effort from the security community and developmental assurance organizations, as well as industry partners, to achieve this goal. Evaluated assurance, provided by programs like the Trusted Product Evaluation Program (TPEP), has become widely accepted throughout the computer security industry. However, as the state of technology has advanced, the current process and methodology used by the evaluation community have been unable to keep pace with the accelerated development cycles of the advanced products that computer-security customers desire. The deficit of security expertise, as well as unclear and at times inadequate guidance and requirements within the industry and from government agencies, has led to the persistent practice among development organizations of treating security as an afterthought or add-on to an existing product. Such practices make correcting security flaws that affect the underlying product expensive, difficult, and time consuming. All of these factors have forced evaluators to carry out duties and activities far beyond the scope of pure evaluations and to take on the roles of trainer, developer, writer, and quality assurance inspector for the various products that they have been evaluating. Given these sometimes conflicting demands on the evaluation process, it has become problematic, if not impossible in some cases, to expect the current evaluation approach to continue providing all of the product security assurance while keeping pace with the increasing demands of computer security customers (i.e., it cannot produce enough evaluated products to meet the demand). That is where the concept of an Assurance Framework comes in. Each activity within the security arena (e.g., CMMs, ISO 9000, evaluations) brings with it a certain level of assurance. The composite view forms the Assurance Framework, within which a customer can pick and choose products to support their mission based on their risk tolerance and product cost. By allowing certain activities, like the CMM efforts, to address specific assurance needs, the strain on the evaluation community may be alleviated somewhat, allowing evaluators to focus on high-assurance products while lower-assurance products undergo a less rigorous assessment and certification process. In the form of the SSE-CMM, developmental assurance can accomplish many needed improvements in the way that INFOSEC products and systems are produced.
These improvements may well have a direct impact on the quality of a product’s security development and can assist vendors by better preparing their teams for an evaluation. At the higher maturity levels, some of the work now required of evaluators for low-assurance products, such as IV&V functions and general security knowledge, can be accomplished during initial product development. This will allow evaluators to concentrate more of their efforts on evaluation activities and less on security education or product development for the vendors. The SSE-CMM is a metric for an organization’s capability to develop a secure system. Wouldn’t it be nice to know that an organization has the capability to build secure systems before accepting its products into a rigorous evaluation activity?
Reverse engineering of large legacy software systems generally cannot meet its objectives because it cannot be cost-effective. There are two main reasons for this. First, it is very costly to “understand” legacy code well enough to permit changes to be made safely, because reverse engineering of legacy code is intractable in the usual computational complexity sense. Second, even if legacy code could be cost-effectively reverse engineered, the ultimate objective of re-engineering the code to create a system that will not need to be reverse engineered again in the future is presently unattainable. Not just crusty old systems, but even ones engineered today, from scratch, cannot escape the clutches of intractability until software engineers learn to design systems that support modular reasoning about their behavior. We hope these observations serve as a wake-up call to those who dream of developing high-quality software systems by transforming them from defective raw materials.
Distributed object systems are increasingly popular, and considerable effort is being expended to develop standards for interaction between objects. Some high-level requirements for secure distributed object interactions have been identified. However, there are no guidelines for developing the secure objects themselves. Some aspects of object-oriented design do not translate directly to traditional methods of developing secure systems. In this paper, we identify features of object-oriented design that affect secure system development. In addition, we explore ways to derive security, and provide techniques for developing secure COTS libraries with easily modifiable security policies.
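As one hedged illustration of what “easily modifiable security policies” might look like in such a library (the class and policy names below are ours, not the paper’s), access decisions can be delegated to a replaceable policy object instead of being hard-coded into each class:

    class Policy:
        def permits(self, principal, operation, target):
            raise NotImplementedError

    class OwnerOnlyPolicy(Policy):
        # Example policy: only the owning principal may operate on the object.
        def permits(self, principal, operation, target):
            return principal == target.owner

    class GuardedAccount:
        policy = OwnerOnlyPolicy()         # swap this attribute to change the policy

        def __init__(self, owner, balance=0):
            self.owner, self.balance = owner, balance

        def withdraw(self, principal, amount):
            if not self.policy.permits(principal, "withdraw", self):
                raise PermissionError(principal + " may not withdraw")
            self.balance -= amount
            return self.balance

    acct = GuardedAccount("alice", 100)
    print(acct.withdraw("alice", 30))      # allowed; acct.withdraw("bob", 30) would raise PermissionError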
The World Wide Web (WWW) introduces exciting possibilities for the use of new technology in the formal evaluation of trusted systems. This is a report of work in progress. It discusses the conceptual foundations of using the WWW in formal evaluations of the security properties of a system and offers some of the initial insights gained from its use. Silicon Graphics is using this structure for the submittal of documentation for the formal evaluation of the Trusted IRIX/CMW 6.2 operating system.
In biological systems, diversity is an important source of robustness. A stable ecosystem, for example, contains many different species which occur in highly conserved frequency distributions. If this diversity is lost and a few species become dominant, the ecosystem becomes susceptible to perturbations such as catastrophic fires, infestations, and disease. Similarly, health problems often emerge when there is low genetic diversity within a species, as in the case of endangered species or animal breeding programs. The vertebrate immune system offers a third example, providing each individual with a unique set of immunological defenses, helping to control the spread of disease within a population.
Intrusion detection is a significant focus of research in the security of computer systems and networks. This paper presents an analysis of the progress being made in the development of effective intrusion detection systems for computer systems and distributed computer networks. The technologies discussed are designed to detect access to computer systems by unauthorized individuals and misuse of system resources by authorized users. The foundations of intrusion detection systems and the methodologies that are the focus of current development efforts are reviewed. The results of an informal survey of security and network professionals are discussed to offer a real-world view of intrusion detection. Finally, a discussion of future technologies and methodologies that promise to enhance the ability of computer systems to detect intrusions is provided.
A new practical attack on a widely used bus encryption microprocessor, which decrypts software on-the-fly when bytes are fetched from RAM, is presented. It allows easy unauthorized access to clear memory.
Embedded sensors for intrusion detection consist of code added to the operating system and the programs of the hosts where monitoring will take place. The sensors check for specific conditions that indicate an attack is taking place, or an intrusion has occurred. Embedded sensors have advantages over other data collection techniques (usually implemented as separate processes) in terms of reduced host impact, resistance to attack, efficiency and effectiveness of detection. We describe the use of embedded sensors in general, and their application to the detection of specific network-based attacks. The sensors were implemented in the OpenBSD operating system, and our tests show a 100% success rate in the detection of the attacks for which sensors were instrumented. We discuss the sensors implemented and the results obtained, as well as current and future work in the area.
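The paper’s sensors are compiled into OpenBSD kernel and application code in C; the following Python fragment is only a hedged illustration of the kind of in-line condition a sensor checks, using a “land”-style packet (source equal to destination) as the example condition:

    def land_attack_sensor(pkt):
        # pkt: dict with source/destination address and port fields (assumed layout)
        if pkt["src_addr"] == pkt["dst_addr"] and pkt["src_port"] == pkt["dst_port"]:
            report_alert("land-attack condition in inbound packet", pkt)

    def report_alert(message, pkt):
        # An embedded sensor reports directly from the instrumented code path,
        # with no separate monitoring process to attack or slow down.
        print("ALERT:", message, pkt)

    land_attack_sensor({"src_addr": "10.0.0.1", "dst_addr": "10.0.0.1",
                        "src_port": 139, "dst_port": 139})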
This paper describes an Internet security attack that could endanger the privacy of World Wide Web users and the integrity of their data. The attack can be carried out on today’s systems, endangering users of the most common Web browsers, including Netscape Navigator and Microsoft Internet Explorer. Web spoofing allows an attacker to create a “shadow copy” of the entire World Wide Web. Accesses to the shadow Web are funneled through the attacker’s machine, allowing the attacker to monitor all of the victim’s activities, including any passwords or account numbers the victim enters. The attacker can also cause false or misleading data to be sent to Web servers in the victim’s name, or to the victim in the name of any Web server. In short, the attacker observes and controls everything the victim does on the Web. We have implemented a demonstration version of this attack.
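One way such funneling can be achieved is URL rewriting: every link the attacker relays is rewritten to point back through the attacker’s machine, so the victim stays inside the shadow Web. A minimal sketch follows (the relay host name is a placeholder, and real pages would need far more thorough rewriting than this single regular expression):

    import re

    ATTACKER = "http://attacker.example"      # hypothetical relay host

    def rewrite_urls(html):
        # Prefix every absolute http URL in href attributes with the attacker's server.
        return re.sub(r'href="(http://[^"]+)"',
                      lambda m: 'href="' + ATTACKER + '/' + m.group(1) + '"',
                      html)

    page = '<a href="http://www.bank.com/login">Log in</a>'
    print(rewrite_urls(page))
    # <a href="http://attacker.example/http://www.bank.com/login">Log in</a>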
A valuable tool is going relatively unnoticed by information security professionals: conducting risk assessment/analysis within their organizations. In Datapro’s “Computer Security Issues: 1995 Survey,” between 21 and 31 percent of the total survey respondents conducted a risk assessment/analysis as one of their security measures. The percentages varied slightly depending on the environment being protected - microcomputer, data network, or midrange/mainframe. Information security is too broad an issue and resources are in too short supply for security professionals to be guessing where to spend the money. Risk management is the practice of defining and analyzing the threats to organizational assets and capabilities, and of assisting management in optimizing the return on investment of information security resources. This report provides a methodology for developing an information security risk management program. The steps needed to develop a plan are presented, and a process for the plan’s maintenance is discussed.
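As a hedged example of the kind of arithmetic such a program supports (standard annualized-loss-expectancy reasoning, with figures invented for illustration rather than taken from the report), the return on a safeguard can be estimated by comparing the loss expectancy it avoids with what it costs:

    def annualized_loss(asset_value, exposure_factor, occurrences_per_year):
        single_loss = asset_value * exposure_factor       # SLE: loss from one incident
        return single_loss * occurrences_per_year         # ALE: expected loss per year

    ale_before = annualized_loss(500000, 0.4, 0.5)        # $100,000 per year without the safeguard
    ale_after  = annualized_loss(500000, 0.4, 0.1)        # $20,000 per year with it
    safeguard_cost = 30000
    print("net annual benefit:", ale_before - ale_after - safeguard_cost)   # 50000.0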
The Internet has become a massive commercial environment. It provides intellectual property owners with unprecedented marketing opportunities. Unfortunately, it also presents them with unprecedented, time-critical licensing and enforcement challenges. This update looks at recent cases and legislation relating to the legal challenges presented by the Internet.