Posts in Secure IT Practices

Word documents being used in new attacks

I have repeatedly pointed out (e.g., this post) to people that sending Word files as attachments is a bad idea. This has been used many, many times to circulate viruses, worms, and more. People continue to push back because (basically) it is convenient for them. How often have we heard that convenience trumps good security (and good sense)?

Now comes this story of yet another attack being spread with Word documents.

There are multiple reasons why I don’t accept Word documents in email. This is simply one of the better reasons.

If you want to establish a sound security posture at your organization, one of the things you should mandate is no circulation of executable formats—either out or in. “.doc” files are in this category. I am unsure if the new “.docx” format is fully immune to these kinds of things but it seems “.rtf” is.
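To make such a mandate concrete, a mail gateway or desktop filter can simply refuse attachments whose extensions indicate executable or macro-capable formats. The Ruby sketch below is purely illustrative; the function name and the blocklist are hypothetical, and a real deployment would need a far more careful list plus MIME-type checks rather than extension matching alone.

   # Hypothetical sketch: reject attachments whose extensions indicate
   # executable or macro-capable formats. The blocklist is illustrative only.
   BLOCKED_EXTENSIONS = %w[.doc .dot .xls .ppt .exe .scr .pif .bat .vbs].freeze

   def attachment_allowed?(filename)
      ext = File.extname(filename).downcase   # e.g., "report.DOC" -> ".doc"
      !BLOCKED_EXTENSIONS.include?(ext)
   end

   puts attachment_allowed?('report.doc')   # => false
   puts attachment_allowed?('report.rtf')   # => true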

 

Failures in the Supply Chain

[This is derived from a posting of mine to Dave Farber’s Interesting People list.]

There is an article in the October Businessweek that describes the problem of counterfeit electronic components being purchased and used in critical Defense-related products.

This is not a new threat. But first let’s reflect on the past.

Historically, the military set a number of standards (MIL-SPEC) to ensure that materials they obtained were of an appropriate level of quality, as well as interoperable with other items. The standards helped ensure a consistency for everything from food to boots to tanks to software, as well as ensuring performance standards (quality).

The standards process was not without problems, however. Among issues often mentioned were:

  • Standards were sometimes not revised often enough to reflect changes in technology. The result was that the military often had to acquire and use items that were generations behind the commercial marketplace (especially in software and computers);
  • Knowing and complying with so many standards often cost companies considerable extra time and effort in supplying items, thus raising their cost well above comparable commercial equivalents;
  • Incompatible standards across military agencies and services, especially when compared with commercial items used by civilian agencies, led to waste, increased cost, and a lack of flexibility in implementation;
  • Imposition of rigid standards cut down on innovation and on rapid development/acquisition/deployment cycles;
  • The rigidity and complexity of the standards effectively shut out new vendors, especially small vendors, because they could not match the standards-compliance efforts of large, entrenched defense vendors.

Thus, in June of 1994, William Perry, the then Secretary of Defense, issued a set of orders that effectively provided a pathway to move away from those standards and adopt commercial standards and performance goals in their place (cf. the Wikipedia article on MIL-SPEC). One of the rationales expressed then, especially as regards computing software and hardware, was that the competition of the marketplace would lead to better quality products. (Ironically, the lack of vendor-neutral standards then led to a situation where we have large monocultures of software/hardware platforms throughout government, and the resultant lack of meaningful competition has almost certainly not served the goals of better quality and security.)

In some cases, the elimination of standards has indeed helped keep down costs and improve innovation. I have been told, anecdotally, that stealth technology might not have been fielded had those aircraft been forced within the old MIL-SPEC regime.

As a matter of cost and speed, many MIL-SPEC standards seem to have been abandoned in favor of choosing COTS products whenever possible, without proper risk analysis. Only recently have policy-makers begun to realize some of the far-reaching problems that have resulted from the rush to abandon those standards.

As the Businessweek article details, counterfeit items and items with falsified (or poorly conducted) quality control have been finding their way into critical systems, including avionics and weapons control. The current nature of development means that many of those systems are assembled from components and subsystems supplied by other contractors, so a fully-reputable supplier may end up supplying a faulty system because of a component supplied by a vendor with which they have no direct relationship. One notable example of this was the “Cisco Raider” effort from a couple of years ago where counterfeit Cisco router boards were being sold in the US.

As noted in several press articles (such as the ones linked above), there is considerable price motive to supply less capable, “gray market” goods in large bids. The middlemen either do not know or do not care where the parts come from or where they are being used—they simply know they are making money. The problem is certainly not limited to Defense-related parts, of course. Fake “Rolex” watches that don’t keep time, fake designer shoes that fall apart in the rain, and fake drugs that either do nothing or actually cause harm are also part of the “gray market.” Adulteration of items or use of prohibited materials is yet another aspect of the problem: think “lead paint” and “melamine” for examples. Of course, this isn’t a US-only problem; people around the world are victimized by gray-market, adulterated, and counterfeit goods.

These incidents actually illustrate some of the unanticipated future effects of abandoning strong standards. One of the principal values of the MIL-SPEC standards was that they established a strict chain of accountability for products. I suspect that little thought has been given by policy-makers to the fact that there is considerable flow of items across borders from countries where manufacturing expertise and enforcement of both IP laws and consumer-protection statutes may not be very stringent. Buying goods from countries where IP violations are rampant (if there is little fear over copying DVDs, then there is little fear over stamping locally-produced items as “Cisco”), and where bribes are commonplace, is not a good strategy for uniform quality.

Of course, there are even more problems than simply quality. Not every country and group has the same political and social goals as we do in the US (or any other country—this is a general argument). As such, if they are in a position to produce and provide items that may be integrated into our defense systems or critical infrastructure, it may be in their interests to produce faulty goods—or carefully doctored goods. Software with hidden “features” or control components with hidden states could result in catastrophe. That isn’t fear-mongering—we know of cases where this was done, such as to the Soviets in the 1980s. Even if the host country isn’t subtly altering the components, it may not have the resources to protect the items being produced from alteration by third parties. After all, if the labor is cheaper in country X, then it will also be cheaper to bribe the technicians and workers to make changes to what they are producing.

The solution is to go back to setting high standards, requiring authentication of the supply chain, and performing better evaluation of random samples. Unfortunately, this is expensive, and we’re not in a state nationally where extra expense (except to line the pockets of Big Oil and Banking) is well tolerated by government. Furthermore, this alters the model where many small vendors acting as middlemen are able to get a “piece of the action.” Their complaints to elected representatives, who may not understand the technical complexities, add even further pressure against change.

In some cases, the risk posed in acquisition of items may warrant subsidizing the re-establishment of some manufacturing domestically (e.g., chip fabs). This doesn’t need to be across the board, but it does require judicious risk analysis to determine where critical points are—or will be in the future. We must realize that the rapid changes in technology may introduce new patterns of production and acquisition that we should plan for now. For instance, once elements of nanotechnology become security-critical, we need to ensure that we have sufficient sources of controlled, quality production and testing.

I’m not going to hold my breath over change, however. Some of us have been complaining about issues such as this for decades. The usual response is that we are making a big deal out of “rare events” or are displaying xenophobia. The sheer expense frightens off many from even giving it more than a cursory thought. I know I have been dismissed as an “over-imaginative academic” more times than I can count when I point out the weaknesses.

One of the factors that allegedly led to the decline of the Roman empire was the use of lead in pipes, and lead salts to make cheap wine more palatable for the masses. The Romans knew there was a health problem associated with lead, but the vendors saw more profit from using it.

Once we have sufficiently poisoned our own infrastructure to save money and make the masses happier, how long do we last?


 

PHPSecInfo talk at OSCON 2008


If you’re at OSCON, and you love security, you may or may not enjoy my talk on PHPSecInfo, a security auditing tool for the PHP environment. I’m actually going to try to show some new code, so if you’ve seen it before, you can see it again – for the first time.

The talk is at 1:45pm Thursday, 07/24/2008.

Virtualization Is Successful Because Operating Systems Are Weak

It occurred to me that virtual machine monitors (VMMs) provide similar functionality to that of operating systems.  Virtualization supports functions such as these:

  1. Availability
    • Minimized downtime for patching OSes and applications
    • Restart a crashed OS or server

  2. Scalability
    • More or different images as demand changes

  3. Isolation and compartmentalization
  4. Better hardware utilization
  5. Hardware abstraction for OSes
    • Support legacy platforms

Compare it to the list of operating system duties:

  1. Availability
    • Minimized downtime for patching applications
    • Restart crashed applications

  2. Scalability
    • More or different processes as demand changes

  3. Isolation and compartmentalization
    • Protected memory
    • Accounts, capabilities

  4. Better hardware utilization (with processes)
  5. Hardware abstraction for applications

The similarity suggests that virtualization solutions compete with operating systems.  I now believe that a part of their success must be because operating systems do not satisfy these needs well enough, not taking into account the capability to run legacy operating systems or entirely different operating systems simultaneously.  Typical operating systems lack security, reliability and ease of maintenance.  They have drivers in kernel space;  Windows Vista thankfully now has them in user space, and Linux is moving in that direction.  The complexity is staggering.  This is reflected in the security guidance;  hardening guides and “benchmarks” (essentially an evaluation of configuration settings) are long and complex.  The attempt to solve the federal IT maintenance and compliance problem created the SCAP and XCCDF standards, which are currently ambiguously specified, buggy and very complex.  The result of all this is intensive, stressful and inefficient maintenance in an environment of numerous and unending vulnerability advisories and patches.

What it looks like is that we have sinking boats, so we’re putting them inside a bigger, more powerful boat, virtualization.  In reality, virtualization typically depends on another, full-blown operating system. 
VMware ESX Server runs its own OS with drivers.  Xen and offerings based on it have a full, general purpose OS in domain 0, in control and command of the VMM (notwithstanding disaggregation).  Microsoft’s “Hyper-V” requires a full-blown Windows operating system to run it.  So what we’re doing is really exchanging an untrusted OS for another that we should trust more for some reason.  This other OS also needs patches, configuration and maintenance.  Now we have multiple OSes to maintain!  What did we gain?  We don’t trust OSes but we trust “virtualization” that depends on more OSes?  At least ESX is “only” 50 MB, simpler and smaller than the others, but the number of defects/MB of binary code as measured by patches issued is not convincing.

I’m now not convinced that a virtualization solution + guest OS is significantly more secure or functional than just one well-designed OS could be, in theory.  Defense in depth is good, but the extent of the spread of virtualization may be an admission that we don’t trust operating systems enough to let them stand on their own.  The practice of wiping and reinstalling an OS after an application or an account is compromised, or deploying a new image by default suggests that there is little trust in the depth provided by current OSes. 

As for ease of management and availability vs. patching, I don’t see why operating systems could not be managed in a smart manner just as ESX is, migrating applications as necessary.  ESX is an operating system anyway…  I believe that all the special things that a virtualization solution does for functionality and security, as well as the “new” opportunities being researched, could be done just as well by a trustworthy, properly designed OS;  there may be a thesis or two in figuring out how to implement them back in an operating system. 

What virtualization vendors are really offering is a clever way to smoothly replace one operating system with another.  This may be how an OS monopoly could be dislodged, and it would perhaps explain the virtualization-unfriendly clauses in the licensing options for Vista:  virtualization could become a threat to the dominance of Windows if application developers started coding for the underlying OS instead of the guest.  Of course, even with a better OS we’d still need virtualization for testbeds like ReAssure, and for legacy applications.  Perhaps ReAssure could help test new, better operating systems.
(This text is the essence of my presentation in the panel on virtualization at the 2008 CERIAS symposium).

Related reading:
Heiser G et al. (2007) Towards trustworthy computing systems: Taking microkernels to the next level.  ACM Operating Systems Review, 41
Tanenbaum AS, Herder JN and Bos H (2006) Can we make operating systems reliable and secure?  Computer, 39

Another Round on Passwords

The EDUCAUSE security mailing list has yet another discussion on password policies.  I’ve blogged about this general issue several times in the past, but maybe it is worth revisiting.

Someone on the list wrote:

Here is my question - does anyone have the data on how many times a hack (attack) has occurred associated to breaking the “launch codes” from outside of the organization?  The last information I gleaned from the FBI reports (several years ago) indicated that 70 percent of hackings (attacks) were internal.

My most recent experience with intrusions has had nothing to do with a compromised password, rather an exploit of some vulnerability in the OS, database, or application.

I replied:

I track these things, and I cannot recall the last time I saw any report of an incident caused by a guessed password.  Most common incidents are phishing, trojans, snooping, physical theft of sensitive media, and remote exploitation of bugs.

People devote huge amounts of effort to passwords because it is one of the few things they think they can control. 

Picking stronger passwords won’t stop phishing.  It won’t stop users downloading trojans.  It won’t stop capture of sensitive transmissions.  It won’t bring back a stolen laptop (although if the laptop has proper encryption it *might* protect the data).  And passwords won’t ensure that patches are in place and that flaws are absent.

Creating and forcing strong password policies is akin to being the bosun ensuring that everyone on the Titanic has locked their staterooms before they abandon ship.  It doesn’t stop the ship from sinking or save any lives, but it sure does make him look like he’s doing something important…..

That isn’t to say that we should be cavalier about setting passwords.  It is important to try to set strong passwords, but once reasonably good ones are set in most environments the attacks are going to come from other places—password sniffing, exploitation of bugs in the software, and implantation of trojan software.

As a field, we spend waaaaay too much time and resources on palliative measures rather than fundamental cures.  In most cases, fiddling with password rules is a prime example.  A few weeks ago, I blogged about a related issue.

Security should be based on sound risk assessment, and in most environments weak passwords don’t present the most significant risk.

What did you really expect?

A news story that hit the wires last week was that someone with a history of breaking into systems, who had “reformed” and acted as a security consultant, was arrested for new criminal behavior.  The press and blogosphere seemed to treat this as surprising.  They shouldn’t have.

I have been speaking and writing for nearly two decades on this general issue, as have others (William Hugh Murray, a pioneer and thought leader in security,  is one who comes to mind).  Firms that hire “reformed” hackers to audit or guard their systems are not acting prudently any more than if they hired a “reformed” pedophile to babysit their kids.  First of all, the ability to hack into a system involves a skill set that is not identical to that required to design a secure system or to perform an audit.  Considering how weak many systems are, and how many attack tools are available, “hackers” have not necessarily been particularly skilled.  (The same is true of “experts” who discover attacks and weaknesses in existing systems and then publish exploits, by the way—that behavior does not establish the bona fides for real expertise.  If anything, it establishes a disregard for the community it endangers.)

More importantly, people who demonstrate a questionable level of trustworthiness and judgement at any point by committing criminal acts present a risk later on.  Certainly it is possible that they will learn the error of their ways and reform.  However, it is also the case that they may slip later and revert to their old ways.  Putting some of them in situations of trust with access to items of value is almost certainly too much temptation.  This has been established time and again in studies of criminals of all types, especially those who commit fraud.  So, why would a prudent manager take a risk when better alternatives are available?

Even worse, circulating stories of criminals who end up as highly-paid consultants are counterproductive, even if they are rarely true.  That is the kind of story that may tempt some without strong ethics to commit crimes as a shortcut to fame and riches.  Additionally, it is insulting to the individuals who work hard, study intently, and maintain a high standard of conduct in their careers—hiring criminals basically states that the honest, hardworking real experts are fools.  Is that the message we really want to put forward?

Luckily, most responsible managers now understand, even if the press and general public don’t, that criminals are simply that—criminals.  They may have served their sentences, which now makes them former criminals…but not innocent.  Pursuing criminal activity is not—and should not be—a job qualification or career path in civilized society.  There are many historical precedents we can turn to, including hiring pirates as privateers and train robbers as train guards.  Some took the opportunity to go straight, but the instances of those who abused trust and made off with what they were protecting illustrate that it is a big risk to take.  It also is something we have learned to avoid.  It is long past time for those of us in computing to get with the program.

So, what of the argument that there aren’t enough real experts, or they cost too much to hire?  Well, what is their real value? If society wants highly-trained and trustworthy people to work in security, then society needs to devote more resources to support the development of curriculum and professional standards.  And it needs to provide reasonable salaries to those people, both to encourage and reward their behavior and expertise.  We’re seeing more of that now than a dozen years ago, but it is still the case that too many managers (and government officials) want security on the cheap, and then act surprised when they get hacked.  I suppose they also buy their Rolex and Breitling watches for $50 from some guy in a parking lot and then act surprised and violated when the watch stops a week later.  What were they really expecting?

Fun with Internet Video


Here’s an interesting story about what people can do if they gain access to streaming video at a poorly-protected site.  If someone on the other end of the phone is really convincing, what could she get the victims to do? 

FBI: Strip Or Get Bombed Threat Spreads - Local News Story - KPHO Phoenix:

Items in the News

The Greek Cell Phone Incident
A great story involving computers and software, even though the main hack was against cell phones:
IEEE Spectrum: The Athens Affair.  From this we can learn all sorts of lessons about conducting a forensic investigation, retaining logs, wiretapping phones, and more.

Now, imagine VoIP and 802.11 networking and vulnerabilities in routers and… the possibilities get even more interesting.  I suspect that there’s a lot more eavesdropping going on than most of us imagine, and certainly more than we discover.

NRC Report Released
Last week, the National Research Council announced the release of a new report: Towards a Safer and More Secure Cyberspace.  The report is notable in a number of ways, and should be read carefully by anyone interested in cyber security.  I think the authors did a great job with the material, and they listened to input from many sources.

There are 2 items I specifically wish to note:

  1. I really dislike the “Cyber Security Bill of Rights” listed in the report.  It isn’t that I dislike the goals they represent—those are great.  The problem is that I dislike the “bill of rights” notion attached to them.  After all, who says they are “rights”?  By what provenance are they granted?  And to what extremes do we go to enforce them?  I believe the goals are sound, and we should definitely work towards them, but let’s not call them “rights.”
  2. Check out Appendix B.  Note all the other studies that have been done in recent years pointing out that we are in for greater and greater problems unless we start making some changes.  I’ve been involved with several of those efforts as an author—including the PITAC report, the Infosec Research Council Hard Problems list, and the CRA Grand Challenges. Maybe the fact that I had no hand in authoring this report means it will be taken seriously, unlike all the rest (grin).  More to the point, people who put off the pain and expense of trying to fix things because “Nothing really terrible has happened yet” do not understand history, human nature, or the increasing drag on the economy and privacy from current problems.  The trends are fairly clear in this report: things are not getting better.

Evolution of Computer Crime
Speaking of my alleged expertise at augury, I noted something in the news recently that confirmed a prediction I made nearly 8 years ago at a couple of invited talks: that online criminals would begin to compete for “turf.”  The evolution of online crime is such that the “neighborhood” where criminals operate overlaps with others.  If you want the exclusive racket on phishing, DDOS extortion, and other such criminal behavior, you need to eliminate (or absorb) the competition in your neighborhood.  But what does that imply when your “turf” is the world-wide Internet?

The next step is seeing some of this spill over into the physical world.  Some of the criminal element online is backed up by more traditional organized crime in “meat space.”  They will have no compunction about threatening—or disabling—the competition if they locate them in the real world.  And they may well do that because they also have developed sources inside law enforcement agencies and they have financial resources at their disposal.  I haven’t seen this reported in the news (yet), but I imagine it happening within the next 2-3 years.

Of course, 8 years ago, most of my audiences didn’t believe that we’d see significant crime on the net—they didn’t see the possibility.  They were more worried about casual hacking and virus writing.  As I said above, however, one only needs to study human nature and history, and the inevitability of some things becomes clear, even if the mechanisms aren’t yet apparent.

The Irony Department
GAO reported a little over a week ago that DHS had over 800 attacks on their computers in two years.  I note that the report is of detected attacks.  I had one top person in DC (who will remain nameless) refer to DHS as “A train wreck crossed with a nightmare, run by inexperienced political hacks” when referring to things like TSA, the DHS cyber operations, and other notable problems.  For years I (and many others) have been telling people in government that they need to set an example for the rest of the country when it comes to cyber security.  It seems they’ve been listening, and we’ve been negligent.  From now on, we need to stress that they need to set a good example.


Diversity

In my last post, I wrote about the problems brought about by complexity.  Clearly, one should not take the mantra of “simplification” too far, and end up with a situation where everything is uniform, simple, and (perhaps) inefficient.  In particular, simplification shouldn’t be taken to the point where diversity is sacrificed for simple uniformity.


Nature penalizes monocultures in biological systems.  Monocultures are devastated by disease and predators because they have insufficient diversity to resist.  The Irish potato famine, the emerald ash borer, and the decimation of the Aztecs by smallpox are all examples of what happens when diversity is not present.  Nature promotes diversity to ensure robust populations.

We all practice diversity in our everyday lives.  Diversity of motor vehicles, for instance, supports fitness for purpose—a Camaro is not useful for hauling dozens of large boxes of materials.  For that, we use a truck.  However, for one person to get from point A to point B in an economical fashion, a truck is not the best choice.  It might be cheaper and require less training to use the same vehicle for everything, but there are advantages to diversity.  Or consider tableware—we have (perhaps) too many fork and spoon types in a formal place setting, but try eating soup with a fork and you discover that some differentiation is useful!

In computing, competition has resulted in advances in hardware and software design.  Choice among products has kept different approaches moving forward.  Competition for research awards from DARPA and NSF has encouraged deeper thought and more focused proposals (and resultant development).  Diversity in operating systems and programming languages brought many advancements in the era 1950-2000.  However, expenses and attempts to cut staff have led to widespread homogenization of OS, applications, and languages over approximately the last decade.

Despite the many clear benefits of promoting diversity, too many organizations have adopted practices that prevent diversity of software and computing platforms.  The OMB/DoD Common Desktop initiative, for example, steers personnel toward a monoculture that is more maintainable day-to-day, but which is probably more vulnerable to zero-day attacks and malware.

Disadvantages of homogeneity:

  • greater susceptibility to zero-day vulnerabilities and attacks
  • “box canyon” effect of being locked into a vendor for future releases
  • reduced competition to improve quality
  • reduced competition to reduce price and/or improve services
  • reduced number of algorithms and approaches that may be explored
  • reduced fitness for particular tasks
  • simpler for adversaries to map and understand networks and computer use
  • increased likelihood that users will install unauthorized software/hardware from outside sources


Advantages of homogeneity:

  • larger volume for purchases
  • only one form of tool, training, etc needed for support
  • better chance of compatible upgrade path
  • interchangeability of users and admins
  • more opportunities for reuse of systems

Disadvantages of heterogeneity:

  • more complexity so possibly more vulnerabilities
  • may not be as interoperable
  • may require more training to administer
  • may not be reusable to the same extent as homogeneous systems


Advantages of heterogeneity:

  • greater resistance to malware when diversity is present at a sufficient level
  • highly unlikely that all systems will be vulnerable to a single new attack
  • increased competition among vendors to improve price, quality and performance
  • greater choice of algorithms and tools for particular tasks
  • more emphasis on standards for interoperability
  • greater likelihood of customization and optimization for particular tasking
  • greater capability for replacement systems if a vendor discontinues a product or support


Reviewing the above lists makes clear that entities concerned with self-continuation and operation will promote diversity, despite some extra expense and effort.  The potential disadvantages of diversity are all things that can be countered with planning or budget.  The downsides of monocultures, however, cannot be so easily addressed.

Dan Geer recently wrote an interesting article about diversity for Queue magazine.  It is worth a read.

The simplified conclusion: diversity is good to have.

Optional Client-Side Input Validation That Matches Server-Side Validation

It is common practice to make forms more user-friendly by giving immediate feedback on the inputs with client-side scripting.  Everyone with a bit of secure programming knowledge knows, however, that the server side needs to do the final input validation.  If the two validations are not equivalent, then an input that passes client-side validation may be rejected later, confusing and annoying the customer, or the client-side validation may be needlessly restrictive.  Another problem is when the form stops working if JavaScript is disabled, due to the way input validation was attempted.

I was delighted to discover that the regular expression syntaxes of JavaScript and Ruby match, differing only in greedy vs. non-greedy behavior and not in whether a match is possible.  This means that regular expressions describing a white list of correct inputs can be used for both (this probably works for Perl, Python and PHP as well, but I haven’t checked).
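As a minimal, self-contained illustration (this is not the ReAssure code; the helper names here are hypothetical), the same pattern object can drive both the Ruby-side check and the string of JavaScript that gets sent to the browser:

   # Hypothetical sketch: one whitelist pattern shared by server and client.
   pattern = /^\d+$/                      # digits only

   def server_side_ok?(value, pattern)
      !value.nil? && !pattern.match(value).nil?
   end

   def client_side_js(pattern)
      "if (this.value.search(/#{pattern.source}/) < 0) " +
      "{this.className = 'bad'} else {this.className = 'good'};"
   end

   puts server_side_ok?('42', pattern)    # => true
   puts server_side_ok?('42a', pattern)   # => false
   puts client_side_js(pattern)           # emits the same /^\d+$/ for the browser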

In the code for ReAssure, all inputs are defined by classes that create the html for forms, as well as perform input validation.  This means that the regular expression can be defined in a single place, when the class is instantiated:

   def initialize(...)
      (...)
      @regexp = Regexp.new(/^\d+$/) # positive integer
   end

This regular expression can be used to perform the initial server-side input validation:

   def validate(input)
      if input == nil
         unescaped = default()                            # no input supplied: use the field's default
      else
         unescaped = CGI.unescapeHTML(input.to_s.strip)   # normalize before matching
      end
      unescaped.scan(@regexp) { |match|
         return @value = match.untaint                    # accept only what the whitelist pattern matches
      }
      if input != ''
         raise 'Input "' + @ui_name + '" is not valid'    # non-empty input that failed the whitelist
      end
   end

To perform client-side input validation, the onblur event is used to trigger validation when focus is lost.  The idea is to make the input red and bold (for color-blind people) when validation fails, and green when it passes.  The onfocus event is used to restore the input to a neutral state while editing (this is the Ruby code that generates the form html):

   def form
      $cgi.input('NAME'=>@name, 'VALUE'=>to_html(), 'onblur' => onblur(),
          'onfocus' => onfocus())
   end

   def onblur()
      return "if (this.value.search(/" + @regexp.source + "/) < 0) 
          {this.className = 'bad'} else {this.className = 'good'};"
   end

   def onfocus()
      return "this.className = 'normal';"
   end

where the classes “bad”, “good” and “normal” are specified in a style sheet (CSS). 
There are cases when more validation may happen later on the server side, e.g., if an integer must match an existing key in a database that the user may be allowed to reference.  Could the extra validation create a mismatch?  Perhaps.  However, in these cases the client-side interface should probably be a pre-screened list and not a free-form input, so the client would have to be malicious to fail server-side validation.  It is also possible to add range (for numbers) and size (for strings) constraints in the “onblur” JavaScript.  In the case of a password field, the JavaScript contains several checks matching the rules on the server side.  So, a lone regular expression may not be sufficient for complete input validation, but it is a good starting point.
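For example, a numeric range constraint could be appended to the generated handler.  The sketch below is hedged, not the actual ReAssure code; onblur_with_range is a hypothetical helper that extends the regex test shown earlier:

   # Hypothetical variant of onblur(): the regex check is followed by a
   # numeric range test before the field is marked good.
   def onblur_with_range(regexp, min, max)
      "var v = this.value; " +
      "if (v.search(/#{regexp.source}/) < 0 || Number(v) < #{min} || Number(v) > #{max}) " +
      "{this.className = 'bad'} else {this.className = 'good'};"
   end

   puts onblur_with_range(/^\d+$/, 1, 65535)   # e.g., a TCP port field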

Note that the form still works even if JavaScript is disabled!  As you can see, it is easy to perform client-side validation without forcing everyone to turn on JavaScript (wink).