Posts in Secure IT Practices

The biggest mistake of Myspace

Myspace, the super-popular web site that your kid uses and you don’t, was once again hit by a worm, this time utilizing Macromedia Flash as its primary vector.  This was a reminder for me of just how badly Myspace has screwed up when it comes to input filtering:

  • They use a “blacklist” approach, disallowing customized markup that they know could be an issue.  How confident are you that they covered all their bases, and could anticipate future problems?  I don’t trust my own code that much, let alone theirs.
  • They allow embed HTML tags.  That means letting folks embed arbitrary content that utilizes plugins, like… Flash.  While Myspace filters Javascript, they seem to have forgotten that Flash can interact with Javascript and manipulate the DOM.  If you’re a Myspace user, you may have noticed Javascript alert()-style pop-up windows appearing on some profiles—those are generated by embedding an offsite Flash program into a profile, which then emits Javascript code.  (A sketch of the safer whitelist alternative follows this list.)
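
For contrast, here is a minimal Python sketch of the whitelist alternative.  The language is my choice (Myspace’s server-side stack isn’t public), and the allowed tags and attributes are illustrative assumptions, not anyone’s actual rules:

```python
# Minimal sketch of whitelist-based markup filtering (Python stdlib only).
# Anything not explicitly allowed is dropped: the opposite of a blacklist.
from html.parser import HTMLParser
from html import escape

ALLOWED = {            # tag -> attributes permitted on it (illustrative)
    "b": set(), "i": set(), "p": set(),
    "a": {"href"},     # note: no embed, object, or script, ever
}

class WhitelistFilter(HTMLParser):
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag not in ALLOWED:
            return                      # unknown tag: silently dropped
        safe = []
        for name, value in attrs:
            value = value or ""
            # keep only whitelisted attributes, and refuse script URLs
            if name in ALLOWED[tag] and not value.lower().startswith("javascript:"):
                safe.append(' %s="%s"' % (name, escape(value, quote=True)))
        self.out.append("<%s%s>" % (tag, "".join(safe)))

    def handle_endtag(self, tag):
        if tag in ALLOWED:
            self.out.append("</%s>" % tag)

    def handle_data(self, data):
        self.out.append(escape(data))   # all plain text gets entity-escaped

def sanitize(markup):
    f = WhitelistFilter()
    f.feed(markup)
    f.close()
    return "".join(f.out)

print(sanitize('<b>hi</b><embed src="evil.swf">'))   # -> <b>hi</b>
```

The point of the approach: the filter never has to anticipate the next dangerous tag.  The embed tag is rejected not because someone thought of it, but because nobody explicitly allowed it.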

Even if they can plug these holes, it’s unlikely that anything short of a full rewrite or refactoring of their profile customization system can ever be considered even moderately secure.

So will Myspace get their act together and modify their input filtering approaches? Very unlikely.  A large portion of Myspace’s appeal relies upon the customization techniques that allow users to decorate their pages with all manner of obnoxious flashing, glittery animations and videos.  Millions of users use cobbled-together hacks to twist their profiles into something fancier than the default, and a substantial cottage industry has sprung up around the subject.  Doing proper input filtering means undoing much of that.

Even if relatively secure equivalent techniques are offered, Myspace would certainly find themselves with a disgruntled user base that’s more likely to bail to a competitor.  That’s an incredibly risky move in the social networking market, and will likely lead Myspace to continue plugging holes rather than building a dam that works.

This is why you can’t design web applications with security as an afterthought.  Myspace has, and I think it will prove to be their biggest mistake.

Hacking the MacBook for Biometric Security

Via Infinite Loop, I came across an interesting post from a hardcore MacBook Pro user who bellied up to the bar and retrofitted a Sony fingerprint scanner into his precious Apple laptop.  No indication that the hardware actually interfaces with OS X at all, but it’s pretty cool, and maybe Apple will get some inspiration from this. 8)

OSCON 2006: PHP Security BOF

So who’s going to OSCON 2006?  I am, and if you are too, drop me a line so we can meet up.  I’m also going to be “moderating” a PHP Security BOF meet, so if you have some interest in PHP Security or secure web dev in general, come by and participate in the chaos.

If you’re planning on going, make sure to check out the official wiki and the OSCamp wiki.

Security expert recommends ‘Net diversity - Network World

I recently did an interview with Network World magazine.  The topics discussed might well be of interest to readers of this blog.

More Useful Firefox Security Extensions

As promised, I’m following up my previous post about security extensions for Firefox with suggestions from readers.  Some of these are basically different solutions to similar problems—which is great, because some users will prefer one approach over another.  A couple of these are very useful, though, and should be considered essential parts of a secure browsing platform.  And one seems very useful, but raises privacy issues that are a little troubling.

(An aside: I wonder if a “more secure” version of Firefox is being built and distributed by someone, one that includes some of these extensions out of the box.  If so, give us a heads-up.)

  • McAfee SiteAdvisor

    McAfee SiteAdvisor, started at MIT, is a project to classify the “safety” of a site into green (safe), yellow (caution) and red (warning) categories.  Testing is done by a system of bot programs that interact with web sites, doing things like submitting email signup forms, testing downloads for adware and viruses, and looking at the safety levels of linked sites.  Users can also submit reports manually.

    The safety level of a site is displayed as a button in Firefox’s status bar, which I’m not sure is the best place.  My eyes tend to spend more time in the top half of my browser window (maybe because I have a 1920x1200 display), so more often than not I found myself forgetting that I had SiteAdvisor installed.  I would have appreciated an option to display it as a toolbar, like Netcraft’s extension.

    [Screenshot: McAfee SiteAdvisor info in Google search results]

    I did, however, really dig the integration with search result pages from Google, Yahoo! and MSN.  Links to result pages—even sponsored links—have a green, yellow or red icon appended to the end, and mousing over the icon displays a popup with additional info.  This was very clear and easy to grasp without being intrusive or overbearing.

    (McAfee also maintains a SiteAdvisor blog that’s quite interesting.)

  • Stanford SafeCache and SafeHistory

    SafeCache and SafeHistory are extensions developed to address techniques that let sites track users via browser features that don’t apply a “same origin” policy: specifically, the browser cache and browsing history.  Details of this problem are available in Protecting Browser State from Web Privacy Attacks, a report from the Stanford Security Lab.  It’s a good read.

    The SafeCache and SafeHistory extensions apply a proper “same origin” policy to these features, only allowing access to scripts that originate from the same domain as the cached content/history info.  This isn’t perfect, as “cooperative” tracking, where two sites pass info back and forth between each other, isn’t addressed, but it’s certainly better than the current situation for out-of-the-box browser installs.  Honestly, I think this should be a default part of every browser install, because it’s a significant security hole that needs to be addressed.  I hope the Firefox, IE, Safari and Opera devs are working on these problems.

  • Netcraft Toolbar

    The Netcraft Toolbar is a useful anti-phishing tool.  A “risk rating” is calculated for your current site’s domain based on criteria like the age of the domain, known phishing sites within the domain, the ISP’s history re: phishing sites, and the like.  Additional info, such as the site’s age and ISP, is displayed in the toolbar, linked to more detailed data on Netcraft’s site.

    What’s a bit worrisome about the Netcraft Toolbar is its site popularity ranking functionality.  Netcraft appears to keep a database of sites visited by toolbar users to provide popularity data.  Their privacy policy does state that no personal information is collected, but it’s something users should be aware of before installing.

  • PasswordMaker

    The plethora of web-based accounts we maintain can get out of hand quickly, and maintaining separate passwords for each one becomes pretty challenging.  PasswordMaker is an interesting solution to this problem, in that it doesn’t store passwords anywhere; instead, it takes a single master password and generates a site-specific password based on 10 criteria, including personal encryption settings and the site itself.  The combination of these criteria makes for an enormous number of possibilities, so typical attacks are not likely to be effective (see their FAQ for more info).  Site passwords are generated on the fly, and are proactively wiped from RAM.  By default it doesn’t store your master password either.  (A rough sketch of this style of derivation appears after this list.)

    The program itself isn’t too hard to use, although you’ll probably need to help Grandma get used to it when you set it up for her.  I was able to get it working pretty quickly with some of my existing web app accounts.

    Source code is available (it uses an LGPL license), and versions of PasswordMaker exist for IE, all Mozilla browsers, a Yahoo! Widget, CLI, PHP, and mobile devices.  You can also use an online version if none of those fit the bill.

  • Form SSL Indicator (Greasemonkey script)

    This is a handy Greasemonkey script that scratches an important itch: indicating if a form’s action target is SSL-encrypted.  I liked the implementation here better than the FormFox extension, which pops up a title/alt-style label if you hover over the submission button for a moment.  This script pops up an indicator immediately, and I appreciate the responsiveness.  Still, I wish that the submission button would just have a lock icon layered over it for quicker visual recognition, and this doesn’t do anything for forms with no submission button.

  • Cookie Button

    The Cookie Button extension is really three extensions that offer the same functionality, but in different interface contexts.  All three allow you to quickly see and change the current cookie permissions on a site, with one displaying a Navigation Toolbar button, one adding a right-click context menu, and the third showing a status bar button.  I’m not sure I entirely understand the need to separate these into three different extensions, but it does allow the user to pick the one that best fits his or her interface habits.

  • Prefbar

    Firefox has a bevy of “hidden” preferences, and Prefbar brings them out into the open.  Many of these settings are really a matter of user preference and browser performance, but some—like toggles for Javascript, Flash, and User Agent settings—are handy and made much more accessible with this extension.  My personal fav is turning on “Cookie Warning,” which tells you whenever a cookie is being set or modified.  This was one thing I liked about IE’s cookie handling, and I missed it in Firefox.  It’s there in the cookie prefs (set “Keep Cookies” to “ask me every time”), but I didn’t realize how to set it until I researched it a bit—Prefbar made it a lot clearer.

    One little annoyance I found with Prefbar was that it doesn’t seem to “group” itself with the rest of your toolbars.  I like to right-click on the Navigation Toolbar to swap in and out the 5 or 6 toolbars I have installed, but Prefbar refuses to show up in this list, instead mapping itself to F8 (which will annoy folks who use that key for other functions, like Exposé on OS X), and appearing in the View menu.  *grumble*

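For those wondering how a scheme like PasswordMaker can work without storing anything, here is a rough Python sketch of the general idea: derive each site’s password from the master password plus the site itself.  The HMAC-SHA256 construction and character set below are my own illustrative assumptions; PasswordMaker’s actual algorithm has its own configurable settings:

```python
# Sketch of deterministic per-site password derivation, in the spirit of
# PasswordMaker.  HMAC-SHA256 and this alphabet are illustrative assumptions;
# the real extension uses its own (configurable) algorithm and settings.
import hashlib
import hmac

ALPHABET = ("abcdefghijklmnopqrstuvwxyz"
            "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
            "0123456789!@#$%")

def site_password(master, site, length=12):
    """Derive a site-specific password on the fly; nothing is stored."""
    digest = hmac.new(master.encode(), site.encode(), hashlib.sha256).digest()
    # Map digest bytes onto the allowed characters (modulo bias is
    # ignored here for brevity).
    return "".join(ALPHABET[b % len(ALPHABET)] for b in digest[:length])

# The same inputs always regenerate the same password, while different
# sites yield unrelated passwords.
print(site_password("correct horse", "example.com"))
print(site_password("correct horse", "example.org"))
```

The obvious trade-off is that everything now hinges on the strength and secrecy of that single master password.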

As before, if you have suggestions for useful security/privacy-related add-ons for Firefox, please let me know.

Reporting Vulnerabilities is for the Brave

I was involved in disclosing a vulnerability found by a student to a production web site using custom software (i.e., we didn’t have access to the source code or configuration information).  As luck would have it, the web site got hacked.  I had to talk to a detective in the resulting police investigation.  Nothing bad happened to me, but it could have, for two reasons. 

The first reason is that whenever you do something “unnecessary”, such as reporting a vulnerability, police wonder why, and how you found out.  Police also wonder: if you found one vulnerability, could you have found more and not reported them?  Who did you disclose that information to?  Did you get into the web site, and do anything there that you shouldn’t have?  It’s normal for the police to think that way.  They have to.  Unfortunately, it makes reporting any problems very unappealing.

A typical difficulty encountered by vulnerability researchers is that administrators or programmers often deny that a problem is exploitable or is of any consequence, and request a proof.  This got Eric McCarty in trouble—the proof is automatically a proof that you breached the law, and can be used to prosecute you!  Thankfully, the administrators of the web site believed our report without trapping us by requesting a proof in the form of an exploit, and fixed it in record time.  We could have been in trouble if we had believed that a request for a proof was an authorization to perform penetration testing.  I believe that I would have requested a signed authorization before doing it, but it is easy to imagine a well-meaning student not being as cautious (or I could have forgotten to request the written authorization, or they could have refused to provide it…).  Because the vulnerability was fixed in record time, we were also protected from being accused of the subsequent break-in, which happened after the fix and therefore had to use some other means.  If there had been an overlap in time, we could have become suspects.

The second reason that bad things could have happened to me is that I’m stubborn and believe that in a university setting, it should be acceptable for students who stumble across a problem to report vulnerabilities anonymously through an approved person (e.g., a staff member or faculty) and mechanism.  Why anonymously?  Because student vulnerability reporters are akin to whistleblowers.  They are quite vulnerable to retaliation from the administrators of web sites (especially if it’s a faculty web site that is used for grading).  In addition, student vulnerability reporters need to be protected from the previously described situation, where they can become suspects and possibly be unjustly accused simply because someone else exploited the web site around the same time that they reported the problem.  Unlike security professionals, they do not understand the risks they take by reporting vulnerabilities (some security professionals don’t yet, either).  They may try to confirm that a web site is actually vulnerable by creating an exploit, without ill intentions.  Students can be guided to avoid those mistakes by having a resource person to help them report vulnerabilities.

So, as a stubborn idealist, I clashed with the detective by refusing to identify the student who had originally found the problem.  I knew the student well enough to vouch for him, and I knew that the vulnerability we found could not have been the one that was exploited.  I was quickly threatened with the possibility of court orders, and the number of felony counts in the incident was brandished as justification for revealing the name of the student.  My superiors also requested that I cooperate with the detective.  Was this worth losing my job?  Was this worth the hassle of responding to court orders and subpoenas, and possibly having my computers (work and personal) seized?  Thankfully, the student bravely decided to step forward and defused the situation.

As a consequence of that experience, I intend to provide the following instructions to students (until something changes):

  1. If you find strange behaviors that may indicate that a web site is vulnerable, don’t try to confirm if it’s actually vulnerable.
  2. Try to avoid using that system as much as is reasonable.
  3. Don’t tell anyone (including me), don’t try to impress anyone, don’t brag that you’re smart because you found an issue, and don’t make innuendos.  However much I wish I could, I can’t guarantee your anonymity or protect you from police questioning (where you may incriminate yourself), a police investigation gone awry, or miscarriages of justice.  We all want to do the right thing, and to help people we perceive as being in danger.  However, you shouldn’t help when it puts you at the same or greater risk.  The risk of being accused of felonies and having to defend yourself in court (as if you had the money to hire a lawyer—you’re a student!) is just too high.  Moreover, this is a web site, an application; real people are not in physical danger.  Forget about it.
  4. Delete any evidence that you knew about this problem.  You are not responsible for that web site, it’s not your problem—you have no reason to keep any such evidence.  Go on with your life.
  5. If you decide to report it against my advice, don’t tell or ask me anything about it.  I’ve exhausted my limited pool of bravery—as other people would put it, I’ve experienced a chilling effect.  Despite the possible benefits to the university and society at large, I’m intimidated by the possible consequences to my career, bank account and sanity.  I agree with HD Moore, as far as production web sites are concerned: “There is no way to report a vulnerability safely”.



Edit (5/24/06): Most of the comments below are interesting, and I’m glad you took the time to respond.  After an email exchange with CERT/CC, I believe that they can genuinely help by shielding you from having to answer questions from, and deal directly with, law enforcement, as well as from the pressures of an employer.  There is a limit to the protection they can provide, and past that limit you may be in trouble, but it is a valuable service.

Using mod_security to block PHP injection attacks

mod_security is an essential tool for securing any Apache-based hosting environment.  The Pathfinder High Performance Infrastructure blog has posted a good starter piece on using mod_security to block email injection.

One of the more common problems with PHP-based applications is that they can allow the injection of malicious content, such as SQL or email spam. In some cases we find that over 95% of a client’s ISP traffic is coming from spam injection. The solution? Grab an industrial size helping of Apache mod_security.
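
mod_security expresses this kind of check declaratively, as filter rules in the Apache configuration.  To make the underlying idea concrete, here is a hedged Python sketch of what an email-injection filter looks for: CR/LF sequences that smuggle extra SMTP headers into form fields.  The pattern and sample values are illustrative:

```python
# Sketch of the check an email-injection filter performs: reject form input
# containing a CR/LF followed by an SMTP header, which attackers use to turn
# a contact form into a spam relay.  The pattern here is illustrative.
import re

HEADER_INJECTION = re.compile(r"[\r\n]\s*(to|cc|bcc|content-type):", re.I)

def is_injection_attempt(value):
    return bool(HEADER_INJECTION.search(value))

# A legitimate subject line passes; a smuggled Bcc: header does not.
assert not is_injection_attempt("Question about my order")
assert is_injection_attempt("Hi\r\nBcc: victim1@example.com,victim2@example.com")
print("checks passed")
```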

BTW, the Web Security Blog of Ivan Ristic (the developer of mod_security) is well worth a spot in your blogroll.

(Edit: fixed title.  Duh.)

Passwords and human memory

Today, I found a pointer to this short news story: Password Security is Her Game.  Here’s a quote from that story:

Many users have half a dozen passwords to remember. That’s why the most common password is “password.” The usual solution is to write it down. But how secure is that? Practicality wins. The probability of remembering six passwords is not that great. Half the people who say they never write down their passwords need to have their passwords reset because of forgetting.

I wasn’t going to post anything else on passwords so soon, but this seemed particularly pertinent.  Plus, the researcher is a Purdue alumna. *grin*

Passwords and Myth

When I posted earlier about passwords and best practices, I had no idea it would elicit such a response!  So, now that my class’s final exams and papers are graded, I will return to the topic and attempt to address some of the points raised in comments—or, at least those comments that were related to the original blog entry.

Best Practices
It was certainly not my intent to disparage all best practices.  I was merely observing that sometimes best practices are viewed as a panacea.  It is important for people to understand the origins of the best practices they espouse, and whether they are indeed “best”!  Sometimes, excellent practices are adopted outside their realm of proper application, or are used too long without proper (re)evaluation of the underlying conditions.  “Best practices” are designed for the average case, but are not meant to be blindly applied in every case—reason should be applied to each situation, but often isn’t.  And all too often, folklore and superstition are accepted as “best practice” because they “seem” correct, or coincidentally produce desired results.

Consider an example of the first of these (understanding the realm of application): showing an ID to get inside a closed facility, proving that you are a current employee of the company or agency.  That is excellent security practice…until you move it to the lobby of every office building!  At that point, too many guards aren’t really checking the cards to see if someone is really who they claim to be.  Instead of watching for suspicious behavior, many guards now simply look for a laminated card with a picture on it, and something that looks like an official seal.  Security in many places has degraded by accepting “best practice” without understanding where it is really best.

The second case (blind application without reasoning) is illustrated by many of the things that TSA does in airline passenger screening.  One example, told to me by a Federal law enforcement agent, is when he showed his badge and papers while passing through security.  They didn’t make him take out his weapon when going through the metal detector…but then they insisted that he run his shoes through the X-ray machine!  They had rules that allowed them to let a law enforcement agent with a semiautomatic handgun through the checkpoint, but they couldn’t appropriately reason about why they had a rule about screening shoes and apply it to this case!  (Of course, several aspects of TSA screening are poorly considered, but that may be a topic for a later post.)

The third case—folklore and superstition accepted as best practice—is rampant in information security, and I intend to say more about this in later postings.

My post about password security was based on the fact that the “change passwords once a month” rule is based on very old practice, and doesn’t really help now in many real-world environments.  In fact, it may result in weaker security in many cases, as users try to find a way around the rules.  At the least, the average user will have the impression reinforced that “Those security guys are idiots and their goal seems to be to make my life more difficult.”  That doesn’t help build a cooperative working environment where the user population is part of the security infrastructure!

Risk Assessment
Donn Parker was one of the first people to argue persuasively that traditional risk assessment would not work in modern IT, and that sound design and best practice would have to do.  I greatly respect Donn’s long experience and opinions, but I don’t completely agree.  In many cases it is possible, using recent experience and expert knowledge, to appropriately estimate risk and loss to quartiles or deciles.  Although imperfect, it can help in making choices and understanding priorities.  When there is insufficient experience and knowledge, I agree with Donn that relying on sound practice is the next best thing; of course, sound design should be used at all times!

Some readers commented that they didn’t have the money to do a risk evaluation. Resolving a question such as password change frequency does not require a full-blown audit and risk analysis.  But, as with my previous comment, if you don’t have the resources, experience or knowledge, then pick sound practice—but put in some effort to understand what is sound!

Password Vaults
A number of responses (including several private responses) were directed to the growing number of passwords, PINs, serial numbers and employee IDs we are expected to remember.  Good security practice suggests that authenticators used in different realms of privilege be unique and uncorrelated.  Good privacy practice suggests that we develop independent identifiers for different uses to prevent correlation.  The two combined result in too many things to remember for those of us whose brains are full (to indirectly pay homage to an old Larson cartoon), and especially for the average person who is overly-taxed when remembering anything beyond who was voted off of American Idol this week.  Now, add frequent requirements to change some of those values, and the situation becomes well-nigh impossible.

Several readers mentioned password vault programs that they use, either on PDAs or the WWW.  I was asked my opinion of some of these.

I use several password vaults myself.  They have 4 characteristics that I believe are important:

  1. The programs use published, strong ciphers (e.g., AES) to encrypt the contents.  I don’t need to worry about some random person getting the encrypted database and then decrypting all my keys.  (A minimal sketch of this idea follows the list.)
  2. The programs are cross-platform so that I can use the same program on my PDA, my laptop, and my home system.  This keeps me from creating keys and passwords then forgetting them because I don’t have the vault program at hand.
  3. The different versions of the program sync with each other, and allow the database to be backed up.  If I lose my PDA, I’m not completely locked out of everything—I can do a restore, decrypt, and carry on as before.
  4. I don’t store the database and the encryption routines on someone else’s machine.  That way, I don’t have to worry about the owner of a remote site altering the encryption routines, or making a surreptitious copy of my keys.  It is still possible for someone to intercept my interaction with the program on my local machine, but I have other mechanisms in place to monitor and verify those.
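
As a concrete illustration of point 1, here is a minimal Python sketch of vault encryption under a published cipher, using the third-party cryptography package.  That package is my choice for illustration, not necessarily what any particular vault product uses: a key is derived from the master password with PBKDF2, and the contents are sealed with Fernet (AES-CBC plus an HMAC):

```python
# Minimal sketch of point 1: vault contents protected by a published cipher.
# Uses the third-party "cryptography" package (pip install cryptography);
# real vault products have their own formats, so this is illustrative only.
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_master(master, salt):
    """Stretch a master password into a Fernet key via PBKDF2."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(master.encode()))

salt = os.urandom(16)                     # stored alongside the vault file
vault = Fernet(key_from_master("my master password", salt))

blob = vault.encrypt(b"example.com: hunter2\n")   # opaque without the key
print(vault.decrypt(blob))                        # recoverable only with it
```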

Needless to say, I don’t use a web-based password vault service, nor would I necessarily recommend it to anyone who has sensitive passwords.

One other thing—I escrow some of my passwords.  No, I’m not talking about the ill-fated government key escrow scheme that gave the idea a bad name.  I am referring to self-escrow.  Some of my important passwords at work, which would need to be recovered by the staff if I were to be abducted (again *grin*) by a UFO crew, have been encrypted and escrowed in a safe place that can be accessed in an emergency.  As more things get locked up with extreme encryption, it is all the more critical that we each consider self-escrow.

So, What’s the Frequency, Kenneth?
How often should passwords be changed?  Many of you asked that, and many of you volunteered your own experience, ranging from monthly to “hardly ever.”  These times were backed up with anecdotes.  Of course, this simply serves to reinforce my comment that the time period should be based on a risk assessment of your particular environment, including access to the system, strength of mechanism, usage, sensitivity of protected information, security of the underlying system, and sophistication of the users…to name a few factors.

Basically, I would suggest you start with an assumption that passwords should be changed every quarter.  If the passwords are used over a lightly protected communications link, then change them more often.  If someone could break the password and use the account without being noticed, then further accelerate the change interval.  If users get guidance on strong password selection, and are motivated to help ensure good security, then maybe you can extend the time period.  In many cases, careful analysis will reveal that any reuse of passwords is risky.  Instead of dismissing that and imposing monthly password changes, use that knowledge to address the underlying problems.
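
Reduced to a toy rule of thumb, that logic looks something like the following.  The factors and weights are invented for illustration, and are no substitute for an actual risk assessment:

```python
# Toy encoding of the heuristic above.  Factors and weights are invented
# for illustration; a real policy needs a real risk assessment.
def change_interval_days(weak_link=False, undetected_use=False,
                         trained_users=False):
    days = 90                  # start from the quarterly assumption
    if weak_link:              # lightly protected communications link
        days //= 2
    if undetected_use:         # a broken password could be used unnoticed
        days //= 2
    if trained_users:          # strong selection guidance, motivated users
        days *= 2
    return days

print(change_interval_days(weak_link=True))        # 45: change more often
print(change_interval_days(trained_users=True))    # 180: can extend
```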

Several of you mentioned the problem of people sharing passwords and only finding out about it after a mandatory password change.  If that’s the case, you have deeper problems than stale passwords!

I continue to advocate use of a one-time password token for highly sensitive or at-risk resources.  Otherwise, use your judgement and professional evaluation of the risks and benefits of change frequencies.

Security Myths and Passwords

(This is an updated version of a contribution I made to the Educause security mailing list last week.)

In the practice of security we have accumulated a number of “rules of thumb” that many people accept without careful consideration.  Some of these get included in policies, and thus may get propagated to environments they were not meant to address.  It is also the case that as technology changes, the unstated assumptions underlying these bits of conventional wisdom also change.  The result is a stale policy that may no longer be effective…or possibly even dangerous.

Policies requiring regular password changes (e.g., monthly) are an example of exactly this form of infosec folk wisdom.

From a high-level perspective, let me observe that one problem with any widespread change policy is that it fails to take into account the various threats and other defenses that may be in place.  Policies should always be based on a sound understanding of risks, vulnerabilities, and defenses.  “Best practice” is intended as a default policy for those who don’t have the necessary data or training to do a reasonable risk assessment.

Consider the underlying role of passwords: authentication.  Good authentication is intended to support access control, accountability and (in some cases) accounting.  Passwords provide a cost-effective and user-familiar form of authentication.  However, they have a number of failure modes depending on where they are used and the threats arrayed against them.  Failure modes include disclosure, inference, exposure, loss, guessing, cracking, and snooping.  In the most general case, passwords (such as the security numbers on credit cards, or mother’s maiden name) are not sufficiently secret and are simply too weak to use as strong authenticators.  I’ll skip this case, although it is far too pervasive to actually ignore.

Disclosure is a systemic threat on the platforms involved, as well as in the operational methods used to generate and transmit the passwords.  This cannot be addressed through changing the password.  Instead, the methods used to generate and distribute passwords need to be examined to ensure that the passwords are not disclosed to the wrong parties.  Most operating systems are currently designed so that passwords are not stored “in the clear,” and this reduces the chance of disclosure.  Unfortunately, some 3rd-party applications (including web-based systems) fail to adequately guard the passwords as they are entered, stored, or compared, resulting in potential disclosure.

Another form of disclosure is when the holder of the password discloses the password on purpose.  This is an education and enforcement issue.  (Anecdote:  at one location where a new policy was announced that passwords must be changed every month, a senior administrator was heard to moan “Do you know how much time I’m going to waste each month ensuring that everyone on my staff knows my new password?”)

Inference occurs when there is a pattern to the way the passwords are generated/chosen and thus can be inferred.  For instance, knowing that someone uses the same password with a different last character for each machine allows passwords to be inferred, especially if coupled with disclosure of one.  Another example is where generated passwords are employed and the generation algorithm is predictable.

Exposure is the case where accident or unintended behavior results in a sporadic release of a password.  As an example, think of someone accidentally typing her password as the user name in login, and it is captured in the audit trail.  Another example is when someone accidentally types his password during a demonstration and it is exposed on a projection screen to a class.

Loss is when someone forgets his or her password, or (otherwise) loses whatever is used to remind/recreate the password.  This introduces overhead to recover the password, and may induce the user to keep extra reminders/copies of the password around—leading to greater exposure—or to use more memorable passwords—leading to more effective guessing attacks.  It is also the case that frequent loss opens up opportunities for eavesdropping and social engineering attacks on the reset system as it becomes more frequently used: safeguards on reset may be relaxed because they introduce too much delay on a system under load.

Guessing is self-explanatory, and is limited to choices that can plausibly be guessed.  After a certain limited number of choices, guessing can only transform into a cracking attempt.

Cracking is when an intermediate form of the password (e.g., an encrypted form stored in the authentication database) is captured and attacked algorithmically, or where iterated attempts are made to generate the password algorithmically.  The efficacy of this approach is determined by the strength of the obfuscation used (e.g., encryption), the checks on bad attempts, and the power and scope of the resources brought to bear (e.g., parallel computing, multi-lingual databases).

Snooping (eavesdropping) is when someone intercepts a communication employing the password, either in cleartext or in some intermediate form.  The password is then extracted.  Network sniffing and keyloggers are both forms of snooping.  Various technical measures, such as network encryption, can help reduce the threat.

Now, looking back over those, periodic password changing really only reduces the threats posed by guessing, and by weak cracking attempts.  If any of the other attack methods succeed, the password needs to be changed immediately to be protected—a periodic change is likely to be too late to effectively protect the target system.  Furthermore, the other attacks are not really blunted by periodic password changes.  Guessing can be countered by enforcing good password selection, but this then increases the likelihood of loss by users forgetting the passwords.  The only remaining threat is that periodic changes can negate cracking attempts, on average.  However, that assumes that the passwords choices are appropriately random, the algorithms used to obfuscate them (e.g., encryption) are appropriately strong, and that the attackers do not have adequate computing/algorithmic resources to break the passwords during the period of use.  This is not a sound assumption given the availability of large-scale bot nets, vector computers, grid computing, and so on—at least over any reasonable period of time.

In summary, forcing periodic password changes given today’s resources is unlikely to significantly reduce the overall threat—unless the password is immediately changed after each use.  This is precisely the nature of one-time passwords or tokens, and these are clearly the better method to use for authentication, although they do introduce additional cost and, in some cases, increase the chance of certain forms of lost “password.”

So where did the “change passwords once a month” dictum come from?  Back in the days when people were using mainframes without networking, the biggest uncontrolled authentication concern was cracking.  Resources, however, were limited.  As best as I can find, some DoD contractors did some back-of-the-envelope calculation about how long it would take to run through all the possible passwords using their mainframe, and the result was several months.  So, they (somewhat reasonably) set a password change period of 1 month as a means to defeat systematic cracking attempts.  This was then enshrined in policy, which got published, and largely accepted by others over the years.  As time went on, auditors began to look for this and ended up building it into their “best practice” that they expected.  It also got written into several lists of security recommendations.

This is DESPITE the fact that any reasonable analysis shows that a monthly password change has little or no end impact on improving security!    It is a “best practice” based on experience 30 years ago with non-networked mainframes in a DoD environment—hardly a match for today’s systems, especially in academia!
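
For the curious, here is what that back-of-the-envelope arithmetic looks like.  The figures below are illustrative assumptions; the original numbers aren’t recorded here:

```python
# Back-of-the-envelope cracking window.  All figures are illustrative
# assumptions, not the actual numbers from the DoD-era calculation.
def days_to_sweep(alphabet_size, length, guesses_per_sec):
    keyspace = alphabet_size ** length        # all possible passwords
    return keyspace / guesses_per_sec / 86400

# A mainframe-era scenario that lands near "about a month":
print("%.0f days" % days_to_sweep(36, 8, 1e6))    # ~33 days -> monthly changes

# The same password space against more modern resources:
print("%.1f days" % days_to_sweep(62, 8, 1e9))    # ~2.5 days -> policy is moot
```

Numbers in that ballpark make a monthly change sound sensible against a lone mainframe, and pointless against an attacker who can sweep the space in days.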

The best approach is to determine where the threats are, and choose defenses accordingly.  Most important is to realize that all systems are not the same!  Some systems with very sensitive data should probably be protected with two-factor authentication: tokens and/or biometrics.  Other systems/accounts, with low value, can still be protected by plain passwords with a flexible period for change.  Of course, that assumes that the OS is strong enough to protect against overall compromise once a low-privilege account is compromised…not always a good bet in today’s operating environment!

And, btw, I’ve got some accounts where I’ve used the same password for several years with nary an incident.  But in the spirit of good practice, that’s all I’m going to say about the passwords, the accounts, or how I know they are still safe. *grin*

One of my favorite Dilbert cartoons (from 9/10/05) ends with the pointy-haired boss saying “...and starting today, all passwords must contain letters, numbers, doodles, sign language and squirrel noises.”  Sound familiar to anyone?

[A follow-up post is available.]