In this dissertation, we examine the machine learning issues raised by the domain of anomaly detection for computer security. The anomaly detection task is to recognize the presence of an unusual and potentially hazardous state within the activities of a computer user, system, or network. "Unusual" is defined with respect to some model of "normal" behavior, which may be either hard-coded or learned from observation. We focus here on learning models of normalcy at the level of user behavior, as observed through command line data. An anomaly detection agent faces many learning problems, including learning from streams of temporal data, learning from instances of a single class, and adapting to a dynamically changing concept. We describe two approaches to the construction of such models: one that employs instance-based models of user behaviors and one that uses hidden Markov models. We demonstrate the performance of sensors based on these models under a wide range of parameter settings and show the conditions under which maximal classification performance is achieved. Using provided labels of users' job descriptions, we demonstrate that users can be roughly divided into behavioral classes related to their experience level. Finally, we study methods for adapting user models to changing behavioral patterns and show the methods' performance strengths and weaknesses.
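To make the instance-based approach concrete, the following is a minimal, hypothetical sketch (not the dissertation's actual method): a user's "normal" profile is a set of previously observed command windows, and a new window is scored by its best match against that profile. The `similarity` and `anomaly_score` functions and the example commands are illustrative assumptions.

```python
def similarity(seq_a, seq_b):
    # Fraction of aligned positions where the two command windows agree.
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return matches / max(len(seq_a), len(seq_b))

def anomaly_score(window, profile):
    # 1 minus the best similarity to any stored sequence;
    # higher scores indicate more anomalous behavior.
    if not profile:
        return 1.0
    return 1.0 - max(similarity(window, p) for p in profile)

# Toy profile of "normal" command windows for one user.
profile = [("ls", "cd", "vi", "make"), ("ls", "grep", "vi", "make")]

normal = ("ls", "cd", "vi", "make")   # matches the profile exactly
odd = ("rm", "chmod", "wget", "nc")   # shares no commands with it

print(anomaly_score(normal, profile))  # 0.0
print(anomaly_score(odd, profile))     # 1.0
```

In practice such a sensor would compare the score against a threshold tuned to trade off false alarms against missed detections, which is the kind of parameter-setting question the abstract alludes to.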