This document is a report on an Internet architecture workshop, initiated by the IAB and held at the USC Information Sciences Institute on February 8-10, 1994. The workshop focused on security issues in the Internet architecture. This document should be regarded as a set of working notes containing ideas about security that were developed by Internet experts in a broad spectrum of areas, including routing, mobility, realtime service, and provider requirements, as well as security. It contains significant diversity of opinion on some important issues. This memo is offered as one input in the process of developing viable security mechanisms and procedures for the Internet.
Recent anecdotal reports of novel principles of illumination have stressed qualitative aspects. This note presents a quantitative study of an organic illumination system, characterizing the temperature and current-flow properties of the system as functions of time and device parameters. Theoretical and practical implications of these measurements are discussed.
Management is often dissatisfied with the performance of many information security efforts. After investment of considerable resources, and prolonged waiting for results, many efforts can demonstrate little if any significant improvement. This is largely due to a lack of planning: many efforts lack explicitly articulated plans as well as specific performance milestones. Although many are loath to admit it, information security efforts at many organizations lack formal planning and performance monitoring. This article examines why information security efforts are often ineffective and why more formal planning can alleviate this condition. It discusses the tools best used to prepare an action plan for information security and gives some tips on how to sell such a plan to management. Also discussed are organizational design, policies, standards, guidelines, and other elements of the foundation that is required if an effective information security planning process is to be sustained. The article focuses on the establishment of a context for effective information security planning.
Money wants to be anonymous and that’s just the first rule of a brave new electronic society, says David Chaum, the guru of digital cash. If he has his way, checks and coins will be obsolete and you’ll e-mail your kids their allowance.
NetRanger is a real-time security management system that detects, analyzes, responds to, and deters unauthorized network activity. The NetRanger architecture supports large-scale information protection via centralized monitoring and management of remote dynamic packet filtering devices that plug into networks. Communication is maintained via WheelGroup Corporation’s (WGC) proprietary secure communications architecture. Network activity can also be logged for more in-depth analysis.
Electronic commerce presents a number of seemingly contradictory requirements. On the one hand, we must be able to account for funds and comply with laws requiring disclosure of certain sorts of transaction information (e.g., taxable transactions, transactions of more than $10,000). On the other hand, it is often socially desirable to limit exposure of transaction information to protect the privacy of the participants. In this thesis, I address the following issues:
* I develop a new analysis technique for measuring the exposure of transaction information.
* I analyze various privacy and disclosure configurations to determine which are technically feasible and which are logically impossible.
* I apply this analysis to the Information Networking Institute’s proposed “NetBill” billing server protocol.
* I consider the use of intermediary agents to protect anonymity and the implications of various arrangements of intermediaries.
* I develop an encoding technique that can reveal the order of magnitude of a transaction without revealing the exact value of the transaction itself (a sketch of the idea follows this list).
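The abstract does not give the encoding itself, but the underlying idea can be illustrated with a simple hash-commitment construction: disclose only the order of magnitude while committing to the exact amount so it can be verified later. The function names and the use of SHA-256 below are assumptions for illustration, not the thesis's actual technique.

    import hashlib
    import secrets

    def commit_amount(amount_cents: int):
        """Disclose only the order of magnitude of a positive amount,
        while committing to its exact value for later verification."""
        magnitude = len(str(amount_cents)) - 1           # floor(log10(amount))
        nonce = secrets.token_bytes(16)                  # blinding factor
        digest = hashlib.sha256(nonce + amount_cents.to_bytes(8, "big")).digest()
        return magnitude, digest, nonce

    def verify_opening(magnitude, digest, nonce, amount_cents) -> bool:
        """Check a revealed amount against both the commitment and the
        previously disclosed order of magnitude."""
        recomputed = hashlib.sha256(nonce + amount_cents.to_bytes(8, "big")).digest()
        return recomputed == digest and len(str(amount_cents)) - 1 == magnitude

Under this sketch, an auditor can confirm that a transaction of 423,750 cents was declared at magnitude 5 (the $1,000-$9,999 band) without learning the exact figure unless the commitment is opened.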
This paper summarizes the EMERALD (Event Monitoring Enabling Responses to Anomalous Live Disturbances) environment, a distributed, scalable tool suite for tracking malicious activity through and across large networks. EMERALD introduces a highly distributed, building-block approach to network surveillance, attack isolation, and automated response. It combines models from research in distributed high-volume event-correlation methodologies with over a decade’s worth of intrusion-detection research and engineering experience. The approach is novel in its use of highly distributed, independently tunable surveillance and response monitors that are deployable polymorphically at various abstract layers in a large network. These monitors demonstrate a streamlined intrusion-detection design that combines signature analysis with statistical profiling to provide localized real-time protection of the most widely used network services on the Internet. Equally important, EMERALD introduces a recursive framework for coordinating the dissemination of analyses from the distributed monitors to provide a global detection and response capability to counter attacks occurring across an entire network enterprise. Further, EMERALD introduces a versatile application programmers’ interface that enhances its ability to integrate with target hosts and provides a high degree of interoperability with third-party tool suites.
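As a rough illustration of the signature-plus-profile combination the abstract describes (and not EMERALD's actual design or API; the class name, patterns, and thresholds below are invented), a single service monitor might run each event past both engines:

    import re
    import statistics
    from collections import deque

    class ServiceMonitor:
        """Toy monitor combining signature analysis with statistical profiling."""

        SIGNATURES = [re.compile(p) for p in (r"\.\./\.\.", r";\s*rm\s+-rf")]

        def __init__(self, window=500, threshold=3.0):
            self.sizes = deque(maxlen=window)   # recent request sizes
            self.threshold = threshold          # z-score cutoff

        def observe(self, payload: str) -> list:
            alerts = []
            # Signature analysis: match known-bad patterns.
            if any(sig.search(payload) for sig in self.SIGNATURES):
                alerts.append("signature match")
            # Statistical profiling: flag requests far from the learned norm.
            if len(self.sizes) >= 30:
                mean = statistics.fmean(self.sizes)
                spread = statistics.pstdev(self.sizes) or 1.0
                if abs(len(payload) - mean) / spread > self.threshold:
                    alerts.append("statistical anomaly")
            self.sizes.append(len(payload))
            return alerts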
In research conducted over the last year, we have concluded that the class of attacks known as data-driven attacks has become somewhat popular with the interloper population. These attacks are generally transmitted in the guise of an innocent data structure, such as a document, spreadsheet, or image, while in reality the data is an object in the modern sense. That is, the data object consists of potentially passive and potentially active portions, the latter generally acting as a collection of methods that support viewing the passive portions of the object.
This paper presents the preliminary architecture of a network-level intrusion detection system. The proposed system will monitor base-level information in network packets (source, destination, packet size, and time), learning the ‘normal’ patterns and announcing anomalies as they occur. The goal of this research is to determine the applicability of current intrusion detection technology to the detection of network-level intrusions. In particular, we are investigating the possibility of using this technology to detect and react to worm programs.
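A minimal sketch of the approach described above, under assumed simplifications (a fixed training phase, flows keyed only by source and destination, and an invented fan-out threshold for worm-like behavior; none of this is the paper's actual design):

    from collections import defaultdict

    class PacketAnomalyDetector:
        """Learn 'normal' (source, destination) pairs, then flag deviations."""

        def __init__(self, fanout_limit=20):
            self.known = set()                 # pairs seen during training
            self.new_peers = defaultdict(set)  # src -> unseen dsts contacted
            self.training = True
            self.fanout_limit = fanout_limit

        def end_training(self):
            self.training = False

        def observe(self, src, dst, size, ts):
            pair = (src, dst)
            if self.training:
                self.known.add(pair)
                return None
            if pair in self.known:
                return None
            self.new_peers[src].add(dst)
            # A host suddenly contacting many new peers resembles a worm scan.
            if len(self.new_peers[src]) > self.fanout_limit:
                return f"{ts}: possible worm, {src} reached {len(self.new_peers[src])} new hosts"
            return f"{ts}: anomalous flow {src} -> {dst} ({size} bytes)"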
This paper presents a potential solution to the intrusion detection problem in computer security. It uses a combination of work in the fields of Artificial Life and computer security. It shows how an intrusion detection system can be implemented using autonomous agents, and how these agents can be built using Genetic Programming. It also shows how Automatically Defined Functions (ADFs) can be used to evolve genetic programs that contain multiple data types and yet retain type safety. Future work arising from this is also discussed.
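For readers unfamiliar with the technique, the core Genetic Programming loop such work builds on can be sketched in a few lines. This toy version evolves boolean expression trees over invented connection features; it omits crossover, ADFs, and the paper's type-safety machinery entirely:

    import operator
    import random

    FEATURES = ["duration", "bytes", "failed_logins"]  # assume values scaled to [0, 1]
    OPS = {"and": operator.and_, "or": operator.or_}

    def random_tree(depth=3):
        """Grow a random boolean expression tree over the features."""
        if depth == 0 or random.random() < 0.3:
            return ("gt", random.choice(FEATURES), random.uniform(0, 1))
        op = random.choice(list(OPS))
        return (op, random_tree(depth - 1), random_tree(depth - 1))

    def evaluate(tree, record):
        if tree[0] == "gt":                    # leaf: feature > threshold
            return record[tree[1]] > tree[2]
        return OPS[tree[0]](evaluate(tree[1], record), evaluate(tree[2], record))

    def fitness(tree, dataset):
        """Number of (record, is_intrusion) pairs classified correctly."""
        return sum(evaluate(tree, rec) == label for rec, label in dataset)

    def mutate(tree):
        # Crude mutation: occasionally replace the whole tree.
        return random_tree() if random.random() < 0.2 else tree

    def evolve(dataset, pop_size=50, generations=30):
        pop = [random_tree() for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda t: fitness(t, dataset), reverse=True)
            survivors = pop[: pop_size // 2]
            # Refill the population with mutated copies of the survivors.
            pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
        return max(pop, key=lambda t: fitness(t, dataset))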
This paper examines the shortcomings of Genetic Programming in an event-driven environment. The need for event-driven programming is motivated by some examples. We then describe the difficulty of handling these examples using the traditional genetic programming approach. A potential solution that uses colored Petri nets is outlined. We present an experimental setup to test our theory.
Digital signatures provide a mechanism for guaranteeing the integrity and authenticity of Web content, but not more general notions of security or trust. Web-aware applications must permit users to state their own security policies clearly and, of course, must provide the cryptographic tools for manipulating digital signatures. This paper describes the REFEREE trust management system for Web applications; REFEREE provides both a general policy-evaluation mechanism for Web clients and servers and a language for specifying trust policies. REFEREE places all trust decisions under explicit policy control; in the REFEREE model, every action, including evaluation of compliance with policy, happens under the control of some policy. That is, REFEREE is a system for writing policies about policies, as well as policies about cryptographic keys, PICS label bureaus, certification authorities, trust delegation, or anything else. In this paper, we flesh out the need for ‘trust management’ in Web applications, explain the design philosophy of the REFEREE trust management system, and describe a prototype implementation of REFEREE.
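The "policies about policies" idea can be made concrete with a small sketch. Everything below is invented for illustration (REFEREE's actual language and interfaces differ): each policy is a callable that may invoke other policies, and, in the spirit of REFEREE's tri-value answers, every evaluation returns true, false, or unknown.

    TRUE, FALSE, UNKNOWN = "true", "false", "unknown"
    POLICIES = {}

    def policy(name):
        def register(fn):
            POLICIES[name] = fn
            return fn
        return register

    def invoke(name, statements):
        """Every action, including policy evaluation, runs under some policy."""
        return POLICIES.get(name, lambda s: UNKNOWN)(statements)

    @policy("signed-by-trusted-key")
    def signed_by_trusted_key(statements):
        keys = statements.get("signer-keys", [])
        return TRUE if "key-of-alice" in keys else UNKNOWN

    @policy("download-policy")
    def download_policy(statements):
        # A policy about a policy: delegate part of the decision downward.
        if invoke("signed-by-trusted-key", statements) != TRUE:
            return FALSE
        return TRUE if statements.get("virus-checked") else UNKNOWN

    # invoke("download-policy", {"signer-keys": ["key-of-alice"], "virus-checked": True})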
We address the problem of ‘trust management in information labeling’. The Platform for Internet Content Selection (PICS), proposed by Resnick and Miller, establishes a flexible way to label documents according to various aspects of their contents, thus permitting a large and diverse group of potential viewers to make (automated) informed judgements about whether or not to view them. For some viewers, the relevant aspects may be the quantity or quality of material in certain topical areas; for others, they may be the presence or absence of potentially offensive language or images. Thus PICS users need a language in which to specify their PICS profiles, i.e., the aspects according to which they want documents to be labeled, the acceptable values of those labels, and the parties whom they trust to do the labeling. Furthermore, PICS-compliant client software (e.g., a web browser) needs a mechanism for checking whether a document meets the requirements set forth in a viewer’s profile. A trust management solution for the PICS information-labeling system must provide both a language for specifying profiles and a mechanism for checking whether a document meets the requirements given in a profile. This paper describes our design and implementation of a PICS profile language and our experience integrating the PolicyMaker trust management engine with a PICS-compliant browser to provide a checking mechanism. PolicyMaker was originally designed to address trust management problems in network services that process signed requests for action and use public-key cryptography. Because information labeling is not inherently a cryptographically based service, and thus is outside the original scope of the PolicyMaker framework, our work on information labeling is evidence of PolicyMaker’s power and adaptability.
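To illustrate the checking mechanism described above (purely illustrative; the profile structure, category names, and bureau URL here are invented, not the paper's profile language): a profile names the labelers a viewer trusts and the acceptable range for each rating category, and a document passes only if some trusted label satisfies every constraint.

    profile = {
        "trusted-labelers": {"http://labels.example.org/v1"},     # invented bureau URL
        "constraints": {"violence": (0, 2), "language": (0, 1)},  # acceptable ranges
    }

    def acceptable(profile, labels):
        """labels: list of {'labeler': url, 'ratings': {category: value}}."""
        for label in labels:
            if label["labeler"] not in profile["trusted-labelers"]:
                continue                     # ignore labelers the viewer distrusts
            ratings = label["ratings"]
            if all(lo <= ratings.get(cat, hi + 1) <= hi
                   for cat, (lo, hi) in profile["constraints"].items()):
                return True                  # some trusted label meets every constraint
        return False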