It has been argued that, since the 1960s, an emphasis on individualism and personal autonomy has shaped public policy debates, including debates about the right to personal privacy. While many scholars and advocacy groups claim that privacy is under siege, an alternate view of privacy exists, one in which it is weighed against other public interests. In The Limits of Privacy, Amitai Etzioni espouses a communitarian approach to determining the relative value and, as the title suggests, the limits of privacy. Privacy, the author argues, is not an absolute right, but is a right that must be carefully measured against the “common good,” which for Etzioni means public health and safety. At the heart of this book is the question of whether and when we are justified in implementing measures that diminish privacy in the service of the common good.
To answer this question and to identify criteria for evaluating the trade-offs between privacy and the common good, Etzioni examines several examples in which privacy, depicted as an individual right, is in conflict with societal responsibilities. Five public policy issues—namely the HIV testing of newborn babies, Megan’s Laws, encryption and government wiretapping, biometric national ID cards, and the privacy of medical records—are examined in detail. Through his analysis, Etzioni attempts to prove that, in most cases, champions of privacy have actually done more harm than good by stifling innovation and curbing necessary democratic discussions about privacy. A notable exception is the case of personal medical records: The author notes that, while “Big Brother” is normally associated with privacy violation, in the case of medical records, unregulated private industry, which Etzioni aptly dubs “Big Bucks,” is a pertinent and immediate threat.
Etzioni’s analysis, while flawed in several respects (e.g., he largely ignores evidence suggesting that national IDs will do more harm than good from a security perspective), yields four criteria that can be used in examining the tension between liberty and the public interest, or in this case between privacy and public health and safety. Briefly, the four criteria are: (1) take steps to limit privacy only in the face of a well-documented and macroscopic threat to the common good, not a merely hypothetical one; (2) act only if the threat cannot be countered by means that do not restrict privacy; (3) make any privacy-curbing measure as minimally intrusive as possible; and (4) take steps to treat the measure’s undesirable side effects.
The Limits of Privacy is necessary reading for anyone involved in accepting, shaping, debating, or enacting privacy policies, at both the organizational and the public-policy levels. While many readers, including this reviewer, will disagree with many of Etzioni’s proposed solutions to the problems he examines, his four criteria are useful for anyone attempting to understand the intricacies involved. Likewise, while Etzioni’s views are contrary to those of many of his peers, whose arguments he credits in his analysis, his arguments for justifiable invasions of privacy are a useful foil for privacy advocates and a useful reminder that privacy issues will always present real and costly trade-offs.
King and Chen (2005) write about their BackTracker software. The idea is interesting: log everything needed to relate the sequence of events leading up to an intrusion, where “everything” here means processes, files, and filenames. Once an anomalous process or event has been identified, BackTracker can generate dependency graphs; that is, something else must raise the alert, and BackTracker then helps find the cause. It is an interesting representation of an attack.
Taken one step further than the authors do, perhaps these dependency graphs could themselves be used for intrusion detection?
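The backward traversal at the heart of BackTracker can be sketched in a few lines. This is my own illustration, not the authors’ implementation: I assume a simplified event log of (source, sink, time) tuples, where an edge means the source object (a process or file) affected the sink object at that time; the real system logs far richer OS-level events.

```python
from collections import defaultdict

def backtrack(events, start, detect_time):
    """Return the set of objects (processes/files) that could have
    influenced `start` on or before `detect_time`.

    `events` is a list of (source, sink, time) tuples, e.g.
    ("proc:sh", "file:/etc/passwd", 3) meaning sh wrote the file at t=3.
    """
    # incoming[obj] -> list of (source, time) edges that affected obj
    incoming = defaultdict(list)
    for src, sink, t in events:
        incoming[sink].append((src, t))

    frontier = [(start, detect_time)]
    causes = set()
    while frontier:
        obj, horizon = frontier.pop()
        for src, t in incoming[obj]:
            # only events that happened before the influence was
            # observed can be part of the causal chain
            if t <= horizon and src not in causes:
                causes.add(src)
                frontier.append((src, t))
    return causes
```

Starting from the flagged object (say, a modified /etc/passwd) and walking edges backward in time reconstructs the chain back to the intrusion point, while pruning events that happened after the detection.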
Suh et al. (2004) propose a wonderful method for tracking taintedness and denying dangerous operations. It is elegant, easy to understand, cheap in terms of performance hit, and effective. The only problem is… it would require redesigning the hardware (the CPUs themselves) to support it.
I wish it would happen, but I’m not holding my breath. Perhaps virtual machines could help until it happens, and even make it happen?
Kiriansky et al. (2002) wrote an interesting paper on what they call “program shepherding”. The basic idea is to control how the program counter changes and where it points. The PC should not point to data areas (this is somewhat similar in concept to non-executable stacks or memory pages). The PC should enter library code through approved entry points only. In principle, it could also enforce that the return target of a function call be the instruction located right after the call.
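That last return-target rule amounts to maintaining a shadow stack of expected return addresses. The sketch below is my own illustration of the policy, not the authors’ interpreter-based implementation:

```python
class PolicyViolation(Exception):
    """Raised when a control transfer breaks the shepherding policy."""

class ShadowStack:
    """Enforces that every return lands just after its matching call."""
    def __init__(self):
        self._expected = []

    def on_call(self, call_site, insn_len):
        # the only legitimate return target for this call is the
        # address of the instruction immediately following it
        self._expected.append(call_site + insn_len)

    def on_return(self, target):
        if not self._expected or self._expected.pop() != target:
            raise PolicyViolation(f"illegal return target {target:#x}")
        return target
```

A smashed return address (e.g., pointing into attacker-controlled data) fails the check even if the target page happens to be executable, which is the kind of guarantee execute flags alone cannot give.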
Their solution keeps track of “code origins”, which resembles multi-level taint tracking. The authors argue that this is better than execute flags on memory pages, because those could be “inadvertently or maliciously changed” (and code origins have three states instead of only two). I thought those flags were managed by the kernel and could not be changed from user space? If the kernel is compromised, then program shepherding will be compromised too. The mechanism tracking code origins relies heavily on write-protected memory pages, so the question that comes to mind is: why couldn’t those also be “inadvertently or maliciously changed”, if we have to worry about that for execute flags? I must be missing something.
The potential versatility of this technology is impressive, yet the authors test only one policy. Policies have to be written, tested, and approved; it is not clear to me why that particular policy was chosen or what compromises it implies.
The crux of the whole system is code interpretation, which, despite the use of advanced optimizations, slows execution. It would be interesting to see how it would fare inside the framework of a virtual machine (e.g., VMware). Enterprises are already embracing VMware and other virtual machine solutions because they simplify the management of hardware, software, and disaster recovery. With the price of sandboxing already paid, this new sandboxing technology may not be so expensive after all. While it may not be as appealing as some solutions requiring hardware support, it may be easier to deploy.
No, not our esteemed director of research. I turned off my ELISA project, Enterprise-Level Information Security Assurance, due to lack of interest from the public at large. The idea behind this web application was to keep track of patches, supporting NIST’s recommendation to use such a system for patch management. I believe this indicates that the process was too heavy; people don’t want to spend that much effort and money managing patches.