
CERIAS Blog - April 2006


Security Myths and Passwords

(This is an updated version of a contribution I made to the Educause security mailing list last week.)

In the practice of security we have accumulated a number of “rules of thumb” that many people accept without careful consideration.  Some of these get included in policies, and thus may get propagated to environments they were not meant to address.  It is also the case that as technology changes, the unstated assumptions behind these bits of conventional wisdom also change.  The result is a stale policy that may no longer be effective…or possibly even dangerous.

Policies requiring regular password changes (e.g., monthly) are an example of exactly this form of infosec folk wisdom.

From a high-level perspective, let me observe that one problem with any widespread change policy is that it fails to take into account the various threats and other defenses that may be in place.  Policies should always be based on a sound understanding of risks, vulnerabilities, and defenses.  “Best practice” is intended as a default policy for those who don’t have the necessary data or training to do a reasonable risk assessment.

Consider the underlying role of passwords: authentication.  Good authentication is intended to support access control, accountability and (in some cases) accounting.  Passwords provide a cost-effective and user-familiar form of authentication.  However, they have a number of failure modes depending on where they are used and the threats arrayed against them.  Failure modes include disclosure, inference, exposure, loss, guessing, cracking, and snooping.  In the most general case, passwords (such as the security numbers on credit cards, or mother’s maiden name) are not sufficiently secret and are simply too weak to use as strong authenticators.  I’ll skip this case, although it is far too pervasive to actually ignore.

Disclosure is a systemic threat on the platforms involved, as well as in the operational methods used to generate and transmit the passwords.  This cannot be addressed by changing the password.  Instead, the methods used to generate and distribute passwords need to be examined to ensure that the passwords are not disclosed to the wrong parties.  Most operating systems are currently designed so that passwords are not stored “in the clear,” and this reduces the chance of disclosure.  Unfortunately, some third-party applications (including web-based systems) fail to adequately guard the passwords as they are entered, stored, or compared, resulting in potential disclosure.
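
To make “not stored in the clear” concrete, here is a minimal sketch of the usual approach: the system keeps only a random salt and an iterated hash, and recomputes the hash at each login.  (A sketch in modern Python; the hash function, salt size, and iteration count are illustrative choices, not a recommendation for any particular system.)

    import hashlib
    import hmac
    import os

    def store_password(password: str) -> tuple[bytes, bytes]:
        """Keep only a random salt and an iterated hash, never the cleartext."""
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return salt, digest  # these two values go in the authentication database

    def check_password(password: str, salt: bytes, digest: bytes) -> bool:
        """Recompute the hash at login and compare in constant time."""
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return hmac.compare_digest(candidate, digest)

    salt, digest = store_password("correct horse")
    assert check_password("correct horse", salt, digest)
    assert not check_password("wrong guess", salt, digest)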

Another form of disclosure is when the holder of the password discloses the password on purpose.  This is an education and enforcement issue.  (Anecdote:  at one location where a new policy was announced that passwords must be changed every month, a senior administrator was heard to moan “Do you know how much time I’m going to waste each month ensuring that everyone on my staff knows my new password?”)

Inference occurs when there is a pattern to the way passwords are generated or chosen, so that one password can be inferred from knowledge of others.  For instance, knowing that someone uses the same password with a different last character for each machine allows passwords to be inferred, especially if coupled with disclosure of one.  Another example is where generated passwords are employed and the generation algorithm is predictable.
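
As a toy illustration of the second case, consider a generator seeded with the current time: anyone who can bound when a password was issued can regenerate it.  (A minimal Python sketch; the generator and the one-hour search window are invented for illustration.)

    import random
    import time

    def generate_password(seed: int) -> str:
        """A deliberately bad generator: the seed fully determines the output."""
        rng = random.Random(seed)
        return "".join(rng.choice("abcdefghijklmnopqrstuvwxyz") for _ in range(8))

    # Suppose the issuer seeds with the current time...
    issued = generate_password(int(time.time()))

    # ...then an attacker who can bound when the password was issued can
    # regenerate every candidate in that window and try them all.
    now = int(time.time())
    candidates = {generate_password(s) for s in range(now - 3600, now + 1)}
    print(issued in candidates)  # True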

Exposure is the case where accident or unintended behavior results in a sporadic release of a password.  As an example, think of someone accidentally typing her password as the user name in login, and it is captured in the audit trail.  Another example is when someone accidentally types his password during a demonstration and it is exposed on a projection screen to a class.

Loss is when someone forgets his or her password, or (otherwise) loses whatever is used to remind/recreate the password.  This introduces overhead to recover the password, and may induce the user to keep extra reminders/copies of the password around—leading to greater exposure—or to use more memorable passwords—leading to more effective guessing attacks.  It is also the case that frequent loss opens up opportunities for eavesdropping and social engineering attacks on the reset system as it becomes more frequently used: safeguards on reset may be relaxed because they introduce too much delay on a system under load.

Guessing is largely self-explanatory: an attacker tries likely candidates such as names, dates, and common words.  It only succeeds against passwords drawn from a small, predictable set; past a certain number of attempts, guessing effectively becomes a cracking attempt.

Cracking is when an intermediate form of the password (e.g., an encrypted form stored in the authentication database) is captured and attacked algorithmically, or where iterated attempts are made to generate the password algorithmically.  The efficacy of this approach is determined by the strength of the obfuscation used (e.g., encryption), the checks on bad attempts, and the power and scope of the resources brought to bear (e.g., parallel computing, multi-lingual databases).
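
A minimal sketch of the captured-database case: the attacker hashes dictionary words and simple variants and looks for matches.  (Python; the digests, word list, and variant rules are tiny illustrative stand-ins for real cracking tools and multi-million-entry dictionaries.)

    import hashlib

    # Stand-ins for digests captured from an authentication database that
    # stored unsalted SHA-256 hashes (a weak obfuscation choice).
    captured = {
        hashlib.sha256(b"letmein").hexdigest(),
        hashlib.sha256(b"Purdue1").hexdigest(),
    }

    wordlist = ["password", "letmein", "qwerty", "purdue"]  # tiny illustrative dictionary

    for word in wordlist:
        for variant in (word, word.capitalize(), word + "1", word.capitalize() + "1"):
            if hashlib.sha256(variant.encode()).hexdigest() in captured:
                print("cracked:", variant)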

Snooping (eavesdropping) is when someone intercepts a communication employing the password, either in cleartext or in some intermediate form.  The password is then extracted.  Network sniffing and keyloggers are both forms of snooping.  Various technical measures, such as network encryption, can help reduce the threat.

Now, looking back over those, periodic password changing really only reduces the threats posed by guessing, and by weak cracking attempts.  If any of the other attack methods succeed, the password needs to be changed immediately to protect the target system—a periodic change is likely to come too late.  Furthermore, the other attacks are not really blunted by periodic password changes.  Guessing can be countered by enforcing good password selection, but this then increases the likelihood of loss by users forgetting the passwords.  The only remaining threat is that periodic changes can negate cracking attempts, on average.  However, that assumes that the password choices are appropriately random, the algorithms used to obfuscate them (e.g., encryption) are appropriately strong, and that the attackers do not have adequate computing/algorithmic resources to break the passwords during the period of use.  This is not a sound assumption given the availability of large-scale botnets, vector computers, grid computing, and so on—at least over any reasonable period of time.
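
The arithmetic is easy to check.  The sketch below (Python) computes how long an exhaustive search of an 8-character, lowercase-only password space takes at several guessing rates; the rates are assumptions chosen for illustration, since attacker capability is the load-bearing variable.

    SECONDS_PER_DAY = 86_400

    def days_to_exhaust(alphabet_size: int, length: int, guesses_per_second: float) -> float:
        """Days needed to try every password in the space at the given rate."""
        return alphabet_size ** length / guesses_per_second / SECONDS_PER_DAY

    # 26**8 is about 2.1e11 candidate passwords.
    for rate in (1e4, 1e6, 1e9):
        print(f"{rate:10.0e} guesses/s -> {days_to_exhaust(26, 8, rate):10.1f} days")
    # Roughly 242 days at 1e4 guesses/s (the mainframe-era "several months"),
    # about 2.4 days at 1e6/s, and about 3.5 minutes at 1e9/s -- at which
    # point a 30-day change period is moot.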

In summary, forcing periodic password changes given today’s resources is unlikely to significantly reduce the overall threat—unless the password is immediately changed after each use.  This is precisely the nature of one-time passwords or tokens, and these are clearly the better method to use for authentication, although they do introduce additional cost and, in some cases, increase the chance of certain forms of lost “password.”
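
For the curious, the core of such a scheme is small.  The sketch below follows the HMAC-based one-time password construction of RFC 4226: a shared secret plus a per-use counter yields a fresh short code each time.  (Python; the key shown is the RFC’s published test key, and a real deployment keeps the secret in a token and on the server, not in a script.)

    import hashlib
    import hmac
    import struct

    def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
        """One-time password per RFC 4226: each counter value yields a fresh code."""
        mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                               # dynamic truncation
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    secret = b"12345678901234567890"  # the RFC 4226 test key
    print([hotp(secret, c) for c in range(3)])  # ['755224', '287082', '359152']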

So where did the “change passwords once a month” dictum come from?  Back in the days when people were using mainframes without networking, the biggest uncontrolled authentication concern was cracking.  Resources, however, were limited.  As best I can determine, some DoD contractors did a back-of-the-envelope calculation of how long it would take to run through all the possible passwords using their mainframe, and the result was several months.  So, they (somewhat reasonably) set a password change period of one month as a means to defeat systematic cracking attempts.  This was then enshrined in policy, which got published and was largely accepted by others over the years.  As time went on, auditors began to look for this and ended up building it into the “best practice” they expected to see.  It also got written into several lists of security recommendations.

This is DESPITE the fact that any reasonable analysis shows that a monthly password change has little or no impact on improving security!  It is a “best practice” based on experience 30 years ago with non-networked mainframes in a DoD environment—hardly a match for today’s systems, especially in academia!

The best approach is to determine where the threats are, and choose defenses accordingly.  Most important is to realize that all systems are not the same!  Some systems with very sensitive data should probably be protected with two-factor authentication: tokens and/or biometrics.  Other systems/accounts, with low value, can still be protected by plain passwords with a flexible period for change.  Of course, that assumes that the OS is strong enough to protect against overall compromise once a low-privilege account is compromised…not always a good bet in today’s operating environment!

And, btw, I’ve got some accounts where I’ve used the same password for several years with nary an incident.  But in the spirit of good practice, that’s all I’m going to say about the passwords, the accounts, or how I know they are still safe. :-)

One of my favorite Dilbert cartoons (from 9/10/05) ends with the pointy-haired boss saying “...and starting today, all passwords must contain letters, numbers, doodles, sign language and squirrel noises.”  Sound familiar to anyone?

[A follow-up post is available.]

What is Secure Software Engineering?

A popular saying is that “Reliable software does what it is supposed to do.  Secure software does that and nothing else” (Ivan Arce).  However, how do we get there, and can we claim that we have achieved the practice of an engineering science?  The plethora of vulnerabilities found every year (thousands, and that’s just in software that matters or is publicly known) suggests not.  Does that mean that we don’t know how, or that it is just not put into practice for reasons of ignorance, education, costs, market pressures, or something else?

The distinction between artisanal work and engineering work is well expressed in the SEI (Software Engineering Institute) work on capability maturity models.  Levels of maturity range from 1 to 5:

  1. Ad-hoc, individual efforts and heroics
  2. Repeatable
  3. Defined
  4. Managed
  5. Optimizing (Science)

 
Artisanal work is individual work, entirely dependent on the (unique) skills of the individual and his or her personal level of organization.  Engineering work aims to be objective, independent of any one individual’s perception, and does not require unique skills.  It should be reproducible, predictable, and systematic.

In this context, it occurred to me that the security community often suggests using methods that have artisanal characteristics.  We are also somewhat hypocritical (in a loose sense of the term: not deceitful, just not thinking things through critically enough).  The methods that are suggested to increase security actually rely on practices we decry.  What am I talking about?  I am talking about black lists.

A common design error is to create a list of “bad” inputs, bad characters, or other undesirable things.  This is a black list; it often fails because the enumeration is incomplete, or because the removal of bad characters from the input can produce another bad input that is not caught (and so on, recursively).  More often than not, there is a way to circumvent or fool the black list mechanism.  Black lists also fail because they are based on previous experience, and only enumerate *known* bad input.  The recommended practice is the creation of white lists, which enumerate known good input.  Everything else is rejected.
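
A minimal sketch of both halves of that argument (in Python, with an invented sanitizer): stripping a known-bad token once can splice the surrounding characters into a fresh instance of the very token that was removed, while a white list simply rejects anything outside the known-good grammar.

    import re

    def black_list_sanitize(value: str) -> str:
        """Strip a known-bad token.  One pass is not enough: removing the inner
        token below splices the surrounding characters into a new one."""
        return value.replace("<script>", "")

    print(black_list_sanitize("<scr<script>ipt>alert(1)"))  # '<script>alert(1)' survives

    def white_list_validate(value: str) -> str:
        """Accept only input matching the known-good grammar; reject all else."""
        if not re.fullmatch(r"[A-Za-z0-9_]{1,32}", value):
            raise ValueError("input is not in the allowed alphabet")
        return value

    print(white_list_validate("spaf_2006"))  # accepted
    # white_list_validate("<script>")        # rejected with ValueError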

When I teach secure programming, I go through often-repeated mistakes and show students how to avoid them.  Books on secure programming show lists upon lists of “sins” and errors to avoid.  Those are black lists that we are, in effect, creating in the minds of readers and students!  It doesn’t stop there.  Recommended development methods (solutions for repeated mistakes) also often take the form of black lists.  For example, risk assessment and threat modeling require expert artisans to imagine, based on past experience, the likely avenues of attack and the possible damage and other consequences.  The results of those activities depend on unique skill sets, are irreproducible (ask different people and you will get different answers), and attempt to enumerate known bad things.  They build black lists into the design of software development projects.

Risk assessment and threat modeling are appropriate for insurance purposes in the physical world, because the laws of physics and gravity on earth aren’t going to change tomorrow.  The experience is collected at geographical, geological and national levels, tabulated and analyzed for decades.  However, in software engineering, black lists are doomed to failure, because they are based on past experience, and need to face intelligent attackers inventing new attacks.  How good can that be for the future of secure software engineering?

Precious few people emphasize development and software configuration methods that result (with guarantees) in the creation of provably correct code.  This of course leads into formal methods (and languages like SPARK and the correctness-by-construction approach), but not necessarily so.  For example, I was recently educated on the existence of a software solution called AppArmor (SUSE Linux, Crispin Cowan et al.).  This solution is based on fairly fine-grained capabilities, and on granting to an application only the capabilities it is known to require; all the others are denied.  This corresponds to building a white list of what an application is allowed to do; the developers even say that it can contain and limit a process running as root.  Now, it may still be possible for some malicious activity to take place within the limits of the granted capabilities (if an application was compromised), but the scope is greatly limited.  The white list can be developed simply by exercising an application throughout its normal states and functions, in normal usage.  Then the list of capabilities is frozen and provides protection against unexpected conditions.
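
The enforcement idea is easy to sketch.  The toy Python below is an analogue of the approach, not AppArmor’s actual mechanism or profile syntax: operations are checked against a frozen white list, and everything else is denied by default.

    class CapabilityError(PermissionError):
        pass

    class ConfinedApp:
        """Deny by default: only operations on the frozen white list may proceed."""

        def __init__(self, allowed: frozenset):
            self.allowed = allowed

        def perform(self, operation: str) -> None:
            if operation not in self.allowed:
                raise CapabilityError("denied: " + operation)
            print("permitted:", operation)

    # A "profile" learned by exercising the application normally, then frozen.
    profile = frozenset({"read:/etc/app.conf", "write:/var/log/app.log"})
    app = ConfinedApp(profile)

    app.perform("read:/etc/app.conf")  # on the list: allowed
    try:
        app.perform("write:/etc/passwd")  # not on the list: denied, even for root
    except CapabilityError as err:
        print(err)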

We need to come up with more white list methods for both development and configuration, and move away from black lists.  This is the only way that secure software development will become secure software engineering.

Edit (4/16/06): Someone pointed out the site http://blogs.msdn.com/threatmodeling/ to me.  It is interesting because it shows awareness of the challenge of getting from an art to a science.  It also attempts to abstract the expert knowledge into an “attack library”, which makes explicit its black list nature.  However, they don’t openly acknowledge the limitations of black lists.  While we don’t currently have a white list design methodology that can replace threat modeling (it is useful!), it’s regrettable that the best anyone can come up with is a black list.

Also, it occurred to me since writing this post that AppArmor isn’t quite a pure white list methodology, strictly speaking.  Instead of being a list of known *safe* capabilities, it is a list of *required* capabilities.  The difference is that the list of required capabilities, due to the granularity of capabilities and the complexity emerging from composing different capabilities together, is a superset of what is safe for the application to be able to do.  What to call it then?  I am thinking of “permissive white list” for a white list that allows more than necessary, vs a “restrictive white list” for a white list that possibly prevents some safe actions, and an “exact white list” when the white list matches exactly what is safe to do, no more and no less.
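
In set terms (with invented operation names), the three flavors compare to the set of actions that are actually safe as follows:

    # The set of actions that are actually safe for the application (invented names).
    safe = {"read_config", "append_log", "serve_http"}

    permissive  = safe | {"write_tmp"}   # superset of safe: allows more than necessary
    restrictive = safe - {"serve_http"}  # subset of safe: forbids some safe actions
    exact       = set(safe)              # matches exactly what is safe, no more, no less

    assert permissive >= safe and restrictive <= safe and exact == safe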

ID Theft Resource

CNET has published an excellent resource for protecting oneself from identity theft.  The site includes an ID theft FAQ with many good tips, a roundtable debate, and a few little multimedia gems.

One of my favorite pieces is the pie graph in the sidebar that illustrates the risks of ID theft.  The most prevalent risks still come from offline.

I gave an ID Theft talk several months ago, and the audience was looking for any way to protect themselves online, to the point of absurdity.  But when I suggested that they cut down on all the stuff they carry in their wallets and/or purses, they nearly revolted: “What if I need to do XYZ and don’t have my ID/credit card/library card/customer card/social security card/insurance card/etc.?”

To me, this illustrates that we have a long way to go in educating users about risks.  It also illustrates that we need to push back against all the noise created by the infotainment industries, which perpetuate the online myth and ignore the brick-and-mortar threats.

 

What is Higher Education’s Role in Regards to ID Theft?

A recent study by the US Justice Department notes that households headed by individuals between the ages of 18 and 24 are the most likely to experience identity theft.  The report does not investigate why this age group is more susceptible, so I’ve started a list:

  • Willingness To Share Information: If myspace, facebook, and the numerous blog sites like livejournal are any indication, younger adults tend to be more open about providing personal information.  While these sites may not be used by identity thieves, they nonetheless illustrate students’ willingness to divulge intimate details of their personal lives.  Students might be more forthcoming with their SSN, account information and credit card numbers than are their elders.
  • Financial Inexperience: Many college students are out on their own for the first time.  Many also are in “control” of their finances for the first time.  With that inexperience comes a lack of knowledge about tracking expenditures and balancing checkbooks.  College students are an easier target for identity thieves, who can ring up several purchases before being noticed.
  • Access to Credit: A walk around campus during the first few weeks of the year also reveals another contributing factor. Students are lured into applying for credit cards by attractive young men and women handing out free T-shirts and other junk.  It is not unusual for a college freshman to have three or four credit cards with limits of $1000 to $5000.
  • Lost Credit Cards and Numbers: This might be a stretch, but I know many college students who periodically lose their wallets, purses, etc., and who do not act quickly to cancel their debit and credit cards.  I also know many who have accidentally left a campus bar without closing their tab.  It would be trivial to get access to someone else’s card at these establishments.  Along with this reason comes access to friends’ and roommates’ cards.

I’m sure there are many more contributing factors.  What interests me is determining the appropriate role of the university in helping to prevent identity theft among this age group.  Most colleges and universities now engage in information security awareness and training initiatives with the goal of protecting the university’s infrastructure and the privacy of information covered by regulations such as FERPA, HIPAA, and so on.  Should higher education institutions extend infosec awareness campaigns so that they deal with issues of personal privacy protection and identity theft?  What are the benefits to universities?  What are their responsibilities to their students?

For educational organizations interested in educating students about the risks of identity theft, the U.S. Department of Education has a website devoted to the topic as does EDUCAUSE.

 

Useful Awareness Videos

The results are in from the EDUCAUSE Security Task Force’s Computer Security Awareness Video Contest.  Topics covered include spyware, phishing, and patching.  The winning video, Superhighway Safety, uses a simple running metaphor, a steady beat, and stark visual effects to concisely convey the dangers of online computing as well as the steps one can take to protect his or her computer and personal information.

The videos are available for educational, noncommercial use, provided that each is identified as being a winning entry in the contest.  In addition to being great educational/awareness tools, they should serve as inspiration for K-12 schools as well as colleges and universities.
