We consider the problem of distributing potentially dangerous information to a number of competing parties. As a prime example, we focus on the issue of distributing security patches to software. These patches implicitly contain vulnerability information that may be abused to jeopardize the security of other systems. When a vendor supplies a binary program patch, different users may receive it at different times. The differential application times of the patch create a window of vulnerability until all users have installed the patch. An abuser might analyze the binary patch before others install it. Armed with this information, he might be able to abuse another user’s machine. A related situation occurs in the deployment of security tools: many such tools necessarily encode vulnerability information or explicit information about security “localisms”. This information may be reverse-engineered and used against systems. We discuss several ways in which security patches and tools may be made safer. Among these are: customizing patches to apply only to one machine, disguising patches to hinder their interpretation, synchronizing patch distribution to shrink the window of vulnerability, applying patches automatically, and using cryptoprocessors with enciphered operating systems. We conclude with some observations on the utility and effectiveness of these methods.
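To make the first of the listed methods concrete, the sketch below illustrates one plausible way a patch could be customized to apply only to one machine: the vendor encrypts the binary diff under a key derived from a per-machine identifier and a pre-shared installation secret, so the patch is unreadable on any other host. The names (machine_id, vendor_secret) and the choice of the Fernet cipher are illustrative assumptions and not taken from the report.

    # Hypothetical sketch: binding a binary patch to a single machine.
    # Assumes a per-installation shared secret; not the report's actual scheme.
    import base64, hashlib
    from cryptography.fernet import Fernet  # third-party: pip install cryptography

    def machine_key(machine_id: bytes, vendor_secret: bytes) -> bytes:
        """Derive a Fernet key bound to one machine's identifier."""
        digest = hashlib.sha256(vendor_secret + machine_id).digest()
        return base64.urlsafe_b64encode(digest)

    def package_patch(diff: bytes, machine_id: bytes, vendor_secret: bytes) -> bytes:
        """Vendor side: encrypt the binary diff for exactly one target machine."""
        return Fernet(machine_key(machine_id, vendor_secret)).encrypt(diff)

    def apply_patch(blob: bytes, machine_id: bytes, vendor_secret: bytes) -> bytes:
        """Client side: only the intended machine can recover the plaintext diff."""
        return Fernet(machine_key(machine_id, vendor_secret)).decrypt(blob)

    if __name__ == "__main__":
        diff = b"\x00\x01binary-diff-bytes"
        blob = package_patch(diff, b"host-1234", b"vendor-master-secret")
        assert apply_patch(blob, b"host-1234", b"vendor-master-secret") == diff

A scheme along these lines narrows who can analyze the patch, but it does not by itself shrink the window of vulnerability for machines that delay installation.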
This report presents a prototype architecture of a defense mechanism for computer systems. The intrusion detection problem is introduced and some of the key aspects of any solution are explained. Standard intrusion detection systems are built as a single monolithic module. A finer-grained approach is proposed, where small, independent agents monitor the system. These agents are taught how to recognise intrusive behaviour. The learning mechanism in the agents is built using Genetic Programming. This is explained, and some sample agents are described. The flexibility, scalability and resilience of the agent approach are discussed. Future issues are also outlined.
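As a rough illustration of the Genetic Programming idea mentioned above, the sketch below evolves tiny boolean expression trees over audit-record features and scores them on labelled events. The feature names, toy data, and mutation-only evolution loop are invented for illustration; the report's actual agent representation and fitness function differ.

    # Minimal mutation-only GP sketch (stdlib only): evolve a boolean rule
    # over audit-record features that flags "intrusive" events.
    import random

    FEATURES = ["failed_logins", "root_shell", "bytes_out"]

    def rand_tree(depth=3):
        """Grow a random boolean expression tree over the features."""
        if depth == 0 or random.random() < 0.3:
            return ("gt", random.choice(FEATURES), random.randint(0, 10))
        return (random.choice(["and", "or"]), rand_tree(depth - 1), rand_tree(depth - 1))

    def evaluate(tree, event):
        op = tree[0]
        if op == "gt":
            return event[tree[1]] > tree[2]
        left, right = evaluate(tree[1], event), evaluate(tree[2], event)
        return (left and right) if op == "and" else (left or right)

    def fitness(tree, data):
        """Fraction of labelled events the candidate agent classifies correctly."""
        return sum(evaluate(tree, e) == label for e, label in data) / len(data)

    def evolve(data, pop_size=50, generations=30):
        pop = [rand_tree() for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda t: fitness(t, data), reverse=True)
            survivors = pop[: pop_size // 2]
            # replace the worse half with mutated copies of survivors
            pop = survivors + [rand_tree() if random.random() < 0.3 else random.choice(survivors)
                               for _ in survivors]
        return max(pop, key=lambda t: fitness(t, data))

    # Toy labelled audit records: (features, is_intrusive)
    data = [({"failed_logins": 9, "root_shell": 1, "bytes_out": 2}, True),
            ({"failed_logins": 0, "root_shell": 0, "bytes_out": 3}, False),
            ({"failed_logins": 7, "root_shell": 0, "bytes_out": 8}, True),
            ({"failed_logins": 1, "root_shell": 0, "bytes_out": 1}, False)]
    best = evolve(data)
    print("best agent fitness:", fitness(best, data))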
This manual gives a detailed technical description of the IDIOT intrusion detection system from the COAST Laboratory at Purdue University. It is intended to help anyone who wishes to use, extend or test the IDIOT system. Familiarity with security issues, and intrusion detection in particular, is assumed.
This paper examines human rights and policy frameworks supporting the right of access to cyberspace.
One of the commonly accepted principles of software design for security is that making the source code openly available leads to better security. The presumption is that the open publication of source code will lead others to review the code for errors; however, this openness is no guarantee of correctness. One of the most widely published and used pieces of security software in recent memory is the MIT implementation of the Kerberos authentication protocol. In the design of the protocol, random session keys are the basis for establishing the authenticity of service requests. Because of the way that the Kerberos Version 4 implementation selected its random keys, the secret keys could easily be guessed in a matter of seconds. This paper discusses the difficulty of generating good random numbers, the mistakes that were made in implementing Kerberos Version 4, and the breakdown of software engineering that allowed this flaw to remain unfixed for ten years. We discuss this as a particularly notable example of the need to examine security-critical code carefully, even when it is made publicly available.
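The sketch below illustrates, in general terms, why low-entropy key generation of the kind described above is fatal: if a session key is derived from a PRNG seeded only with coarse values an attacker can guess (time of day, process id), the effective key space collapses to a search an attacker can finish almost instantly. The seed formula and the toy 8-byte key are assumptions for illustration; they do not reproduce the exact inputs used by Kerberos Version 4.

    # Hypothetical sketch of guessable session keys from a low-entropy seed.
    import random, time

    def weak_session_key(seed: int) -> bytes:
        """Derive an 8-byte key from a small integer seed (deliberately weak)."""
        rng = random.Random(seed)
        return bytes(rng.randrange(256) for _ in range(8))

    # Victim: seeds with the current second and a guessable "pid" -- far too little entropy.
    victim_pid = 4242
    victim_seed = int(time.time()) ^ (victim_pid << 16)
    victim_key = weak_session_key(victim_seed)

    # Attacker: enumerate plausible pids and a small clock-skew window.
    now = int(time.time())
    for pid in range(4200, 4300):
        for t in range(now - 30, now + 1):
            if weak_session_key(t ^ (pid << 16)) == victim_key:
                print("recovered key from seed", t ^ (pid << 16))

Even a much larger window (every pid and every second in a day) stays within a few billion candidates, which is why careful collection of unpredictable seed material matters far more than the choice of generator.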