Posts in Reviews

VMworld 2006:  Teaching (security) using virtual labs

This talk by Marcus MacNeill (Surgient) discussed the Surgient Virtual Training Lab used by CERT-US to train military personnel in security best practices, etc…  I was disappointed because the talk didn’t discuss the challenges of teaching security or the lessons CERT has learned doing so, but instead focused on how the product could be used in a teaching environment.  Not surprisingly, the Surgient product resembles both VMware’s Lab Manager and ReAssure.  However, the Surgient product doesn’t appear to support the sharing of images, or stopping and restarting work (e.g., development work by users); if it does, it wasn’t mentioned.  They mentioned that they had patented technologies involved, which is disturbing (raise your hand if you like software patents).  ReAssure meets (or will soon, thanks to the VIX API) all of the requirements he discussed for teaching, except for student shadowing (seeing what a student is attempting to do).  So, I would be very interested in seeing teaching labs using ReAssure as a support infrastructure.  There are of course other teaching labs using virtualization that have been developed at other universities and colleges;  the challenge is to design courses and exercises that are portable and reusable.  We can all gain by sharing these, but for that we need a common infrastructure on which all these exercises would be valid.

OSCON 2006: Where’s the Security?

OSCON 2006 was a lot of fun for a lot of reasons, and was overall a very positive experience.  There were a few things that bugged me, though.

I met a lot of cool people at OSCON.  There are too many folks to list here without either getting really boring or forgetting someone, but I was happy to put a lot of faces to names and exchange ideas with some Very Smart People.  The PHP Security Hoedown BOF that I moderated was especially good in that respect, I thought.  There were also a lot of good sessions, especially Theo Schlossnagle’s Big Bad PostgreSQL: A Case Study, Chris Shiflett’s PHP Security Testing, and the PHP Lightning Talks (“PHP-Nuke is a honeypot” - thank you for the best quote of the convention, Zak Greant).

On the other hand, I was very surprised that the Security track at OSCON was almost nonexistent.  There were four sessions and one tutorial; for a 5-day event with lots of sessions going on at the same time, that seems like a really poor showing.  The only other tracks that had security-related sessions were:

  • Linux (including one shared with the Security track)
  • PHP

which leaves us with the following tracks with no security-oriented sessions:

  • Business
  • Databases
  • Desktop Apps
  • Emerging Topics
  • Java
  • JavaScript/Ajax
  • Perl
  • Products and Services
  • Programming
  • Python
  • Ruby
  • Web Apps
  • Windows

I can certainly think of a few pertinent security topics for each of these tracks.  I’m not affiliated with O’Reilly, and I have no idea whether the OSCON planners just didn’t get very many security-related proposals, or they felt that attendees wouldn’t be interested in them.  Either way, it’s worrisome.

Security is an essential part of any kind of development: as fundamental as interface design or performance.  Developers are stewards of the data of their users, and if we don’t take that responsibility seriously, all our sweet gradient backgrounds and performance optimizations are pointless.  So to see, for one reason or another, security relegated to steerage at OSCON was disappointing.  I hope O’Reilly works hard to correct this next year, and I’m going to encourage other CERIAS folk like Pascal Meunier and Keith Watson to send in proposals for 2007.

Useful Awareness Videos

The results are in from the EDUCAUSE Security Task Force’s Computer Security Awareness Video Contest.  Topics covered include spyware, phishing, and patching.  The winning video,  Superhighway Safety, uses a simple running metaphor, a steady beat, and stark visual effects to concisely convey the dangers of online computing as well as the steps one can take to protect his or her computer and personal information.

The videos are available for educational, noncommercial use, provided that each is identified as being a winning entry in the contest.  In addition to being great educational/awareness tools, they should serve as inspiration for K-12 schools as well as colleges and universities.

Managing Web Browser risks with the NoScript extension

It is very risky to enable all client-side scripting technologies when browsing the web (plugins, ActiveX, JavaScript, Flash, etc…).  I installed the “NoScript” extension for Firefox, which allows JavaScript to run only on whitelisted sites.  It is a wonderful idea, except that it comes with a list of pre-enabled sites, some of which you can’t delete (the arrogance of dictating unerasable sites!), and the defaults do not block Flash and other plugins.  Moreover, it is only as secure as DNS, unless you require the “full addresses” option, through which I presume you could require an https (SSL) URL.  Unfortunately, there is no way to enable “base 2nd level domains” *and* require SSL, to say, for example, that I want to trust all *.purdue.edu sites that I contact through SSL and that have valid SSL certificates.  It is better than nothing, but it needs SSL support to be really useful.  Most people don’t understand the limitations and vulnerabilities of DNS, or the need for SSL, and will therefore have an unwarranted feeling of security while using this extension.
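The combination I wish NoScript offered can be sketched in a few lines.  This is my own illustration, not NoScript’s actual code; the domain list and the naive base-domain logic are assumptions for the example.  The point is that matching on the base 2nd-level domain only helps if you *also* require HTTPS, since a plain-HTTP match can be satisfied by a DNS-spoofed host:

```python
from urllib.parse import urlparse

# Hypothetical whitelist entry, taken from the example in the post.
TRUSTED_BASE_DOMAINS = {"purdue.edu"}

def base_domain(hostname: str) -> str:
    """Naive "base 2nd-level domain": the last two labels.
    (Real extensions must handle ccTLDs like .co.uk; ignored here.)"""
    parts = hostname.lower().rstrip(".").split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else hostname.lower()

def scripts_allowed(url: str) -> bool:
    """Allow scripts only for whitelisted base domains reached over TLS."""
    u = urlparse(url)
    if u.scheme != "https":
        # Without TLS (and certificate validation by the browser),
        # a DNS spoofer can impersonate any whitelisted hostname.
        return False
    return base_domain(u.hostname or "") in TRUSTED_BASE_DOMAINS

print(scripts_allowed("https://www.cs.purdue.edu/page"))  # True
print(scripts_allowed("http://www.cs.purdue.edu/page"))   # False: no TLS
print(scripts_allowed("https://evil.example.com/"))       # False: not trusted
```

The scheme check stands in for what the browser’s certificate validation actually guarantees; the whitelist match alone, as the post notes, is only as trustworthy as DNS.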

Review: The Limits of Privacy

It has been argued that, since the 1960’s, an emphasis on individualism and personal autonomy has shaped public policy debates, including debates about the right to personal privacy.  While many scholars and advocacy groups claim that privacy is under siege, an alternate view of privacy exists, one in which it is weighed against other public interests.  In The Limits of Privacy, Amitai Etzioni espouses a communitarian approach to determining the relative value and, as the title suggests, the limits of privacy.  Privacy, the author argues, is not an absolute right, but is a right that must be carefully measured against the “common good,” which for Etzioni is defined as public health and safety.  At the heart of this book is the question of if and when we are justified in implementing measures that diminish privacy in the service of the common good.

To answer this question and to identify criteria for evaluating the relative trade-offs between privacy and the common good, Etzioni examines several examples in which privacy, depicted as an individual right, is in conflict with societal responsibilities.  Five public policy issues—namely the HIV testing of newborn babies, Megan’s Laws, encryption and government wiretapping, biometric national ID cards, and the privacy of medical records—are examined in detail.  Through his analysis, Etzioni attempts to prove that, in most cases, champions of privacy have actually done more harm than good by stifling innovation and curbing necessary democratic discussions about privacy.  A notable exception is in the case of personal medical records:  The author notes that, while “Big Brother” is normally associated with privacy violation, in the case of medical records, unregulated private industry, which Etzioni aptly coins “Big Bucks,” is a pertinent and immediate threat.

Etzioni’s analysis, while flawed in several respects (e.g. Etzioni largely ignores evidence suggesting that national IDs will do more harm than good from a security perspective), results in four criteria that can be used in examining the tension between liberty and the public interest, or in this case privacy and public health and safety.  The four criteria are as follows:

  • First, society should take steps to limit privacy only if it faces a “well-documented and macroscopic threat” to the common good;
  • second, society should identify and try any and all means that do not endanger privacy before restricting it;
  • third, privacy intrusions should have minimal impact;
  • and fourth, the undesirable side effects of privacy violations for the common good should be treated (i.e., if a patient’s medical record must be digitized and shared, the confidentiality of the record must be guaranteed).

The Limits of Privacy is necessary reading for anyone involved in accepting, shaping, debating, or enacting privacy policies, at both the organizational and public-policy levels.  While many readers, including this reviewer, will disagree with some of Etzioni’s proposed solutions to the problems he examines, his four criteria are useful for anyone attempting to understand the intricacies involved.  Likewise, while Etzioni’s views run contrary to those of many of his peers, whose arguments he credits in his analysis, his arguments for justifiable invasions of privacy are a useful foil for privacy advocates and a useful reminder that privacy issues will always present real and costly trade-offs.

Review:  Secure Execution via Program Shepherding

Kiriansky et al. (2002) wrote an interesting paper on what they call “program shepherding”.  The basic idea is to control how the program counter changes and where it points.  The PC should not point to data areas (this is somewhat similar in concept to non-executable stacks or memory pages).  The PC should enter library code through approved entry points only.  In principle, the technique could also enforce that the return target of a function is the instruction located right after the call.
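The three checks described above can be modeled as a single predicate over each control transfer.  The following is a toy sketch of my own (not the authors’ implementation, which interposes on real machine code); the memory regions, entry points, and call-site table are hypothetical:

```python
# Toy model of shepherding-style checks on a control transfer
# (kind, source address, destination address).  All addresses are invented.

CODE_REGION = range(0x1000, 0x2000)   # application code
DATA_REGION = range(0x4000, 0x5000)   # writable data (stack/heap)
LIB_REGION  = range(0x8000, 0x9000)   # shared library code

LIB_ENTRY_POINTS = {0x8000, 0x8040}   # approved exported entry points
CALL_SITES = {0x1100: 5}              # call instruction address -> its length

def transfer_allowed(kind: str, src: int, dst: int) -> bool:
    # Rule 1: the PC must never point into data areas.
    if dst in DATA_REGION:
        return False
    # Rule 2: library code may only be entered via approved entry points.
    if dst in LIB_REGION and src not in LIB_REGION:
        return dst in LIB_ENTRY_POINTS
    # Rule 3: a return must land right after some call instruction.
    if kind == "ret":
        return any(dst == call + length for call, length in CALL_SITES.items())
    return True

print(transfer_allowed("call", 0x1100, 0x8000))  # True: approved library entry
print(transfer_allowed("jmp",  0x1100, 0x4100))  # False: jumping into data
print(transfer_allowed("ret",  0x8050, 0x1105))  # True: 0x1100 + 5
```

In the real system these checks are applied by the runtime code interpreter on every indirect branch, which is where the performance cost discussed below comes from.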

Their solution keeps track of “code origins”, which resembles multi-level taint tracking.  The authors argue that this is better than execute flags on memory pages, because those could be “inadvertently or maliciously changed” (and code origins have three states instead of only two).  I thought those flags were managed by the kernel and could not be changed from user space?  If the kernel is compromised, then program shepherding will be compromised too.  The mechanism tracking code origins relies heavily on write-protected memory pages, so the question that comes to mind is: why couldn’t those also be “inadvertently or maliciously changed”, if we have to worry about that for execute flags?  I must be missing something.

The potential versatility of this technology is impressive, yet the authors test only one policy.  Policies have to be written, tested, and approved;  it is not clear to me why that particular policy was chosen or what compromises it implies.

The crux of the whole system is code interpretation, which, despite the use of advanced optimizations, slows execution.  It would be interesting to see how it would fare inside the framework of a virtual machine (e.g., VMware).  Enterprises are already embracing VMware and other virtual machine solutions because they ease the management of hardware, software, and disaster recovery.  With a price already paid for sandboxing, this new sandboxing technology may not be so expensive after all.  While it may not be as appealing as some solutions requiring hardware support, it may be easier to deploy.