Posts tagged Reviews

Review: The Limits of Privacy

It has been argued that, since the 1960s, an emphasis on individualism and personal autonomy has shaped public policy debates, including debates about the right to personal privacy.  While many scholars and advocacy groups claim that privacy is under siege, an alternate view exists, one in which privacy is weighed against other public interests.  In The Limits of Privacy, Amitai Etzioni espouses a communitarian approach to determining the relative value and, as the title suggests, the limits of privacy.  Privacy, the author argues, is not an absolute right but one that must be carefully measured against the “common good,” which Etzioni defines as public health and safety.  At the heart of this book is the question of whether and when we are justified in implementing measures that diminish privacy in the service of the common good.

To answer this question and to identify criteria for evaluating the relative trade-offs between privacy and the common good, Etzioni examines several cases in which privacy, cast as an individual right, conflicts with societal responsibilities.  Five public policy issues are examined in detail: the HIV testing of newborn babies, Megan’s Laws, encryption and government wiretapping, biometric national ID cards, and the privacy of medical records.  Through his analysis, Etzioni argues that, in most cases, champions of privacy have done more harm than good by stifling innovation and curbing necessary democratic discussion about privacy.  A notable exception is the case of personal medical records:  The author notes that, while “Big Brother” is normally associated with privacy violations, in the case of medical records, unregulated private industry, which Etzioni aptly dubs “Big Bucks,” is the more pertinent and immediate threat.

Etzioni’s analysis, while flawed in several respects (e.g., he largely ignores evidence suggesting that national IDs would do more harm than good from a security perspective), yields four criteria for examining the tension between liberty and the public interest, or in this case between privacy and public health and safety.  The four criteria are as follows:

  • First, society should take steps to limit privacy only if it faces a “well-documented and macroscopic threat” to the common good;
  • second, society should identify and try all means that do not endanger privacy before restricting it;
  • third, privacy intrusions should be as minimal as possible;
  • and fourth, the undesirable side effects of privacy-limiting measures should be treated (e.g., if a patient’s medical record must be digitized and shared, the confidentiality of the record must be guaranteed).

The Limits of Privacy is necessary reading for anyone involved in accepting, shaping, debating, or enacting privacy policies, at both the organizational and the public-policy level.  While many readers, including this reviewer, will disagree with many of Etzioni’s proposed solutions to the problems he examines, his four criteria are useful to anyone attempting to understand the intricacies involved.  Likewise, while Etzioni’s views run contrary to those of many of his peers, whose arguments he credits in his analysis, his case for justifiable invasions of privacy is a useful foil for privacy advocates and a useful reminder that privacy issues will always present real and costly trade-offs.

Review: Secure Execution via Program Shepherding

Kiriansky et al. (2002) wrote an interesting paper on what they call “program shepherding”.  The basic idea is to control how the program counter changes and where it points.  The PC should not point into data areas (somewhat similar in concept to non-executable stacks or memory pages), and it should enter library code only through approved entry points.  In principle, the technique could also enforce that the return target of a function is the instruction located right after the call.
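
To make that policy concrete, here is a small Python sketch of the kind of checks such a monitor might apply at each control transfer.  It is a toy illustration only: the address ranges, the approved-entry set, and the function names are my own assumptions, not the authors’ implementation.

    # Illustrative only: a toy model of the control-transfer policy sketched above.
    # Address ranges, the approved-entry set, and function names are hypothetical.

    APP_CODE = [(0x400000, 0x450000)]              # assumed application code range
    LIB_CODE = [(0x7f0000000000, 0x7f0000100000)]  # assumed shared-library code range
    APPROVED_LIB_ENTRIES = {0x7f0000001000}        # assumed exported entry points
    shadow_returns = []                            # return addresses recorded at calls


    class ShepherdingViolation(Exception):
        """Raised when a control transfer breaks the policy."""


    def in_range(addr, regions):
        return any(lo <= addr < hi for lo, hi in regions)


    def check_branch(target):
        """The PC must stay in code and may enter a library only through an
        approved entry point; it must never land in data (stack, heap, ...)."""
        if in_range(target, APP_CODE):
            return
        if in_range(target, LIB_CODE):
            if target not in APPROVED_LIB_ENTRIES:
                raise ShepherdingViolation(f"library entered at {hex(target)}")
            return
        raise ShepherdingViolation(f"transfer into non-code address {hex(target)}")


    def on_call(call_site, insn_len, target):
        """At a call, remember the only legitimate return target, then vet the call."""
        shadow_returns.append(call_site + insn_len)
        check_branch(target)


    def on_return(target):
        """A return may only go to the instruction right after the matching call."""
        expected = shadow_returns.pop() if shadow_returns else None
        if target != expected:
            raise ShepherdingViolation(f"return to {hex(target)}, expected {expected}")

Matching each return against a record of the call that produced it is, in effect, the restricted-return rule the paper describes as enforceable in principle.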

Their solution keeps track of “code origins”, which resembles multi-level taint tracking.  The authors argue that this is better than execute flags on memory pages because those could be “inadvertently or maliciously changed” (and their scheme has three states instead of only two).  I thought those flags were managed by the kernel and could not be changed from user space?  If the kernel is compromised, then program shepherding will be compromised too.  The mechanism tracking code origins relies heavily on write-protected memory pages, so the question that comes to mind is why those couldn’t also be “inadvertently or maliciously changed”, if we have to worry about that for execute flags.  I must be missing something.
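
As a thought experiment, here is a toy Python sketch of what three-state code-origin tracking might look like; the state names, the page map, and the policy hook are my own illustration, not the paper’s actual mechanism.

    # Illustrative only: a toy model of three-state code-origin tracking.
    from enum import Enum, auto


    class Origin(Enum):
        FROM_IMAGE = auto()      # loaded unmodified from the executable or a library
        DYN_GENERATED = auto()   # generated at run time, unmodified since generation
        MODIFIED = auto()        # written to after loading or generation


    origins = {}  # hypothetical map: page address -> Origin


    def on_page_loaded(page):
        origins[page] = Origin.FROM_IMAGE


    def on_code_generated(page):
        origins[page] = Origin.DYN_GENERATED


    def on_page_written(page):
        # Any write demotes the page: it can no longer be trusted as original code.
        origins[page] = Origin.MODIFIED


    def may_execute(page, allow_dynamic_code=False):
        """Consulted before code from `page` is allowed to run under the monitor."""
        origin = origins.get(page, Origin.MODIFIED)
        if origin is Origin.FROM_IMAGE:
            return True
        if origin is Origin.DYN_GENERATED:
            return allow_dynamic_code
        return False   # modified code never runs under this toy policy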

The potential versatility of this technology is impressive, yet the authors test only one policy.  Policies have to be written, tested, and approved; it is not clear to me why that particular policy was chosen or what compromises it implies.

The crux of the whole system is code interpretation, which, despite the use of advanced optimizations, slows execution.  It would be interesting to see how it would fare inside the framework of a virtual machine (e.g., VMware).  Enterprises are already embracing VMware and other virtual-machine solutions for their easier management of hardware, software, and disaster recovery.  With the price of sandboxing already paid, adding this new sandboxing technology may not be so expensive after all.  While it may not be as appealing as some solutions requiring hardware support, it may be easier to deploy.