Posts in Infosec Education

PHPSecInfo v0.2 now available

PHPSecInfo Screenshot

The newest version of PHPSecInfo, version 0.2, is now available.  Here are the major changes:

  • Added “more info” links to the output.  These lead to pages on the phpsec.org site giving more details on each test and what to do if you have a problem
  • Modified CSS to improve readability and avoid license issue with PHP (the old CSS was derived from the output of phpinfo())
  • New test: PhpSecInfo_Test_Session_Save_Path (see the sketch after this list)
  • Added display of “current” and “recommended” settings in test result output
  • Various minor changes and bug fixes; see the CHANGELOG for details
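
As a rough idea of what the new session save path test looks for, here is a standalone PHP sketch; it does not use PHPSecInfo’s actual classes or API, and the messages and checks are illustrative only.  It reports the “current” setting in the same spirit as the v0.2 output and flags the common risky case of session files kept in a directory that all local users can access:

<?php
// Standalone sketch, not PHPSecInfo code: inspect session.save_path and
// report a "current" vs "recommended" style result, similar in spirit to
// the information PHPSecInfo v0.2 displays.
$current = ini_get('session.save_path');
$recommended = 'a directory readable and writable only by the web server user';

if ($current === '' || $current === false) {
    // An empty value usually means the system temp directory (e.g., /tmp) is used.
    echo "session.save_path is not set; sessions likely go to the system temp directory.\n";
} elseif (is_dir($current) && (fileperms($current) & 0007)) {
    // World-accessible directory: other local users may be able to read or
    // hijack session files stored there.
    echo "session.save_path ($current) is accessible to all local users.\n";
} else {
    echo "session.save_path looks reasonable: $current\n";
}

echo "Current: " . var_export($current, true) . "\n";
echo "Recommended: $recommended\n";
?>

A real test would also need to handle the “N;/path” depth-prefixed form of session.save_path and platform differences, but the idea is the same: show the current value and compare it against a recommended configuration.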

Download now

Join the mailing list

What’s New at CERIAS

I haven’t posted an update lately of new content on our site, so here’s a bit of a make-up post:

  • CERIAS Reports & Papers
  • CERIAS Hotlist
  • CERIAS News
  • CERIAS Security Seminar Podcast

VMworld 2006:  Teaching (security) using virtual labs

This talk by Marcus MacNeill (Surgient) discussed the Surgient Virtual Training Lab used by CERT-US to train military personnel in security best practices, etc.  I was disappointed because the talk didn’t discuss the challenges of teaching security and the lessons learned by CERT doing so, but instead focused on how the product could be used in a teaching environment.

Not surprisingly, the Surgient product resembles both VMware’s Lab Manager and ReAssure.  However, from what I saw, the Surgient product doesn’t support sharing images or stopping and restarting work (e.g., development work by users); if it does, it wasn’t mentioned.  They mentioned that they have patented technologies involved, which is disturbing (raise your hand if you like software patents).  ReAssure meets (or will soon, thanks to the VIX API) all of the requirements he discussed for teaching, except for student shadowing (seeing what a student is attempting to do).  So, I would be very interested in seeing teaching labs use ReAssure as a support infrastructure.

There are of course other teaching labs using virtualization that have been developed at other universities and colleges; the challenge is to design courses and exercises that are portable and reusable.  We can all gain by sharing these, but for that we need a common infrastructure on which all these exercises would be valid.

The New Security Seminar Podcast

We’ve made some significant changes to how people can view our Security Seminar Series:

  • We’re now offering h.264/mp4 versions of the seminar videos, both as downloadable files and in a spanking-new video podcast.  Look us up in iTunes or the Democracy channel guide, and you’ll find us.  The 320x240 videos are not only higher-quality than what we’ve previously offered, but are also playable on portable players that support h.264 (we’ve tested this on 5G iPods).
  • We will also look at encoding all of our previously recorded seminars to h.264/mp4 in the next few months.  Those that we have on DVD will be easy, but the ones more than a couple of years old we only have on VHS, so they will likely take a lot longer.
  • In the near future — at the latest by summer 2007 — we will stop encoding our videos in RealMedia format.  The popularity of Real has faded a lot over the years, and most folks (including us) aren’t interested in installing it.  This would leave us without a streaming video format, but we’re not sure there’s a lot of demand for one now.  If there is, we will likely go with an embedded Flash video player rather than something like Windows Media.

If there is strong interest in providing other video formats, please let us know.  We may consider moving to 640x480 resolution for our videos now that iPods support the larger size, but we don’t want to push the file size too high and make for lengthy downloads.

If you have problems or feedback, please let us know in the comments section.

OSCON 2006: PHP Security BOF

So who’s going to OSCON 2006?  I am, and if you are too, drop me a line so we can meet up.  I’m also going to be “moderating” a PHP Security BOF meet, so if you have some interest in PHP Security or secure web dev in general, come by and participate in the chaos.

If you’re planning on going, make sure to check out the official wiki and the OSCamp wiki.

Free End-User Multimedia Training for Teachers

CERIAS is pleased to announce the launch of a new initiative to increase the security of K-12 information systems nationwide.  We’ve developed a comprehensive set of self-paced multimedia training modules for K-12 educators and support staff titled Keeping Information Safe: Practices for K-12 Schools.  The goal of these modules is to increase the security of K-12 school information systems and the privacy of student data by increasing teacher awareness of pertinent threats and vulnerabilities as well as their responsibilities in keeping information safe.

The modules are available free of charge to K-12 teachers, institutions, and outreach organizations.

Reporting Vulnerabilities is for the Brave

I was involved in disclosing a vulnerability, found by a student, to the operators of a production web site running custom software (i.e., we didn’t have access to the source code or configuration information).  As luck would have it, the web site got hacked.  I had to talk to a detective in the resulting police investigation.  Nothing bad happened to me, but it could have, for two reasons.

The first reason is that whenever you do something “unnecessary”, such as reporting a vulnerability, police wonder why, and how you found out.  Police also wonder: if you found one vulnerability, could you have found more and not reported them?  Who did you disclose that information to?  Did you get into the web site, and do anything there that you shouldn’t have?  It’s normal for the police to think that way.  They have to.  Unfortunately, it makes reporting any problems very unappealing.

A typical difficulty encountered by vulnerability researchers is that administrators or programmers often deny that a problem is exploitable or of any consequence, and request a proof.  This got Eric McCarty in trouble—the proof is automatically proof that you breached the law, and can be used to prosecute you!  Thankfully, the administrators of the web site believed our report without trapping us by requesting a proof in the form of an exploit, and they fixed the problem in record time.  We could have been in trouble if we had believed that a request for a proof was an authorization to perform penetration testing.  I believe that I would have requested a signed authorization before doing it, but it is easy to imagine a well-meaning student being less cautious (or I could have forgotten to request the written authorization, or they could have refused to provide it…).  Because the vulnerability was fixed in record time, we were also protected from being accused of the subsequent break-in, which happened after the fix and therefore had to use some other means.  If there had been an overlap in time, we could have become suspects.

The second reason that bad things could have happened to me is that I’m stubborn and believe that in a university setting, it should be acceptable for students who stumble across a problem to report vulnerabilities anonymously through an approved person (e.g., a staff member or faculty) and mechanism.  Why anonymously?  Because student vulnerability reporters are akin to whistleblowers.  They are quite vulnerable to retaliation from the administrators of web sites (especially if it’s a faculty web site that is used for grading).  In addition, student vulnerability reporters need to be protected from the situation described above, where they can become suspects, and possibly be unjustly accused, simply because someone else exploited the web site around the same time that they reported the problem.  Unlike security professionals, they do not understand the risks they take by reporting vulnerabilities (some security professionals don’t yet, either).  They may try to confirm that a web site is actually vulnerable by creating an exploit, without ill intentions.  Students can be guided to avoid those mistakes by having a resource person to help them report vulnerabilities.

So, as a stubborn idealist, I clashed with the detective by refusing to identify the student who had originally found the problem.  I knew the student well enough to vouch for him, and I knew that the vulnerability we found could not have been the one that was exploited.  I was quickly threatened with the possibility of court orders, and the number of felony counts in the incident was brandished as justification for revealing the name of the student.  My superiors also requested that I cooperate with the detective.  Was this worth losing my job?  Was this worth the hassle of responding to court orders and subpoenas, and possibly having my computers (work and personal) seized?  Thankfully, the student bravely decided to step forward and defused the situation.

As a consequence of that experience, I intend to provide the following instructions to students (until something changes):

  1. If you find strange behaviors that may indicate that a web site is vulnerable, don’t try to confirm whether it’s actually vulnerable.
  2. Try to avoid using that system as much as is reasonable.
  3. Don’t tell anyone (including me), don’t try to impress anyone, don’t brag that you’re smart because you found an issue, and don’t make innuendos.  However much I wish I could, I can’t preserve your anonymity and protect you from police questioning (where you may incriminate yourself), a police investigation gone awry, or a miscarriage of justice.  We all want to do the right thing and help people we perceive as being in danger.  However, you shouldn’t help when it puts you at the same or greater risk.  The risk of being accused of felonies and having to defend yourself in court (as if you had the money to hire a lawyer—you’re a student!) is just too high.  Moreover, this is a web site, an application; real people are not in physical danger.  Forget about it.
  4. Delete any evidence that you knew about this problem.  You are not responsible for that web site, it’s not your problem—you have no reason to keep any such evidence.  Go on with your life.
  5. If you decide to report it against my advice, don’t tell or ask me anything about it.  I’ve exhausted my limited pool of bravery—as other people would put it, I’ve experienced a chilling effect.  Despite the possible benefits to the university and society at large, I’m intimidated by the possible consequences to my career, bank account and sanity.  I agree with HD Moore, as far as production web sites are concerned: “There is no way to report a vulnerability safely”.



Edit (5/24/06): Most of the comments below are interesting, and I’m glad you took the time to respond.  After an email exchange with CERT/CC, I believe that they can genuinely help by shielding you from having to answer questions from and directly deal with law enforcement, as well as from the pressures of an employer.  There is a limit to the protection that they can provide, and past that limit you may be in trouble, but it is a valuable service. 

What is Secure Software Engineering?

A popular saying is that “Reliable software does what it is supposed to do.  Secure software does that and nothing else” (Ivan Arce).  However, how do we get there, and can we claim that we have achieved the practice of an engineering science?  The plethora of vulnerabilities found every year (thousands, and that’s just in software that matters or is publicly known) suggests not.  Does that mean that we don’t know how, or that it is just not put into practice for reasons of ignorance, education, costs, market pressures, or something else?

The distinction between artisanal work and engineering work is well expressed in the SEI (Software Engineering Institute) work on capability maturity models.  Levels of maturity range from 1 to 5: 

  1. Ad-hoc, individual efforts and heroics
  2. Repeatable
  3. Defined
  4. Managed
  5. Optimizing (Science)

Artisanal work is individual work, entirely dependent on the (unique) skills of the individual and on their personal level of organization.  Engineering work aims to be objective, independent of any one individual’s perception, and does not require unique skills.  It should be reproducible, predictable and systematic.

In this context, it occurred to me that the security community often suggests using methods that have artisanal characteristics.  We are also somewhat hypocritical (in the academic sense of the term: not deceitful, just not thinking things through critically enough).  The methods that are suggested to increase security actually rely on practices we decry.  What am I talking about?  I am talking about black lists.

A common design error is to create a list of “bad” inputs, bad characters, or other undesirable things.  This is a black list; it often fails because the enumeration is incomplete, or because the removal of bad characters from the input can produce another bad input which is not caught (and so on, recursively).  More often than not, there is a way to circumvent or fool the black list mechanism.  Black lists also fail because they are based on previous experience, and only enumerate *known* bad input.  The recommended practice is the creation of white lists, which enumerate known good input.  Everything else is rejected.
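
To make the contrast concrete, here is a minimal PHP sketch; the function names and the username rule are made up for illustration, and the blacklist half is shown only to demonstrate the failure mode:

<?php
// Black list: strip characters we believe are dangerous.  This tends to fail:
// the enumeration is incomplete (dangerous characters and encodings that are
// not listed get through), and it silently mangles legitimate input instead
// of rejecting it.
function blacklist_filter($input)
{
    $bad = array("'", '"', '<', '>', ';');
    return str_replace($bad, '', $input);
}

// White list: accept only input matching a known-good pattern (here, a
// hypothetical rule of 3 to 16 alphanumeric characters or underscores).
// Everything else is rejected outright.
function whitelist_validate($input)
{
    return preg_match('/^[A-Za-z0-9_]{3,16}$/', $input) === 1;
}

$username = isset($_GET['username']) ? $_GET['username'] : '';

if (whitelist_validate($username)) {
    echo 'Accepted: ' . htmlspecialchars($username);
} else {
    echo 'Rejected.';
}
?>

The whitelist version rejects anything it does not recognize, so inputs the developer never anticipated are handled safely by default, which is exactly what the blacklist cannot promise.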

When I teach secure programming, I go through often-repeated mistakes and show students how to avoid them.  Books on secure programming show lists upon lists of “sins” and errors to avoid.  Those are black lists that we are, in effect, creating in the minds of readers and students!  It doesn’t stop there.  Recommended development methods (solutions for repeated mistakes) also often take the form of black lists.  For example, risk assessment and threat modeling require expert artisans to imagine, based on past experience, the likely avenues of attack, the possible damage, and other consequences.  The results of those activities depend on unique skill sets, are irreproducible (ask different people and you will get different answers), and attempt to enumerate known bad things.  They build black lists into the design of software development projects.

Risk assessment and threat modeling are appropriate for insurance purposes in the physical world, because the laws of physics and gravity on earth aren’t going to change tomorrow.  The experience is collected at geographical, geological and national levels, and has been tabulated and analyzed for decades.  In software engineering, however, black lists are doomed to failure, because they are based on past experience yet must face intelligent attackers inventing new attacks.  How good can that be for the future of secure software engineering?

Precious few people emphasize development and software configuration methods that result (with guarantees) in the creation of provably correct code.  This of course leads into formal methods (and languages like SPARK and the correctness-by-construction approach), but not necessarily so.  For example, I was recently educated on the existence of a software solution called AppArmor (SUSE Linux, Crispin Cowan et al.).  This solution is based on fairly fine-grained capabilities, and on granting an application only the capabilities it is known to require; all the others are denied.  This corresponds to building a white list of what an application is allowed to do; the developers even say that it can contain and limit a process running as root.  Now, it may still be possible for some malicious activity to take place within the limits of the granted capabilities (if an application was compromised), but its scope is greatly limited.  The white list can be developed simply by exercising an application throughout its normal states and functions, in normal usage.  Then the list of capabilities is frozen and provides protection against unexpected conditions.

We need to come up with more white list methods for both development and configuration, and move away from black lists.  This is the only way that secure software development will become secure software engineering.

Edit (4/16/06): Someone pointed out the site http://blogs.msdn.com/threatmodeling/ to me.  It is interesting because it shows awareness of the challenge of getting from an art to a science.  It also attempts to abstract the expert knowledge into an “attack library”, which makes its black list nature explicit.  However, they don’t openly acknowledge the limitations of black lists.  While we don’t currently have a white list design methodology that can replace threat modeling (it is useful!), it’s regrettable that the best anyone can come up with is a black list.

Also, it occurred to me since writing this post that AppArmor isn’t quite a pure white list methodology, strictly speaking.  Instead of being a list of known *safe* capabilities, it is a list of *required* capabilities.  The difference is that the list of required capabilities, due to the granularity of capabilities and the complexity emerging from composing different capabilities together, is a superset of what is safe for the application to be able to do.  What to call it then?  I am thinking of “permissive white list” for a white list that allows more than necessary, vs a “restrictive white list” for a white list that possibly prevents some safe actions, and an “exact white list” when the white list matches exactly what is safe to do, no more and no less.

Useful Awareness Videos

The results are in from the EDUCAUSE Security Task Force’s Computer Security Awareness Video Contest.  Topics covered include spyware, phishing, and patching.  The winning video, Superhighway Safety, uses a simple running metaphor, a steady beat, and stark visual effects to concisely convey the dangers of online computing as well as the steps one can take to protect his or her computer and personal information.

The videos are available for educational, noncommercial use, provided that each is identified as being a winning entry in the contest.  In addition to being great educational/awareness tools, they should serve as inspiration for K-12 schools as well as colleges and universities.

Web App Security - The New Battlefront

Well, we’re all pretty beat from this year’s Symposium, but things went off pretty well.  Along with lots of running around to make sure posters showed up and such, I was able to give a presentation called Web Application Security - The New Battlefront.  People must like ridiculous titles like that, because turnout was pretty good.  Anyway, I covered the current trend away from OS attacks and vandalism and toward application attacks for financial gain, which includes attacks on web apps.  We went over the major types of attacks, and I gave a brief summary of what I feel needs to be done in education, tool development, and app auditing to improve the rather poor state of affairs.  I’ll expand on these topics more in the future, but you can see my slides and watch the video for now:
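
The slides and video cover the specifics, but as a quick illustration of the kind of application-level attack involved, here is a minimal PHP sketch; the connection settings, table, and column names are hypothetical.  It contrasts a query built by string concatenation, which is open to SQL injection, with a parameterized query using PDO:

<?php
// Hypothetical connection settings, for illustration only.
$db = new PDO('mysql:host=localhost;dbname=example', 'user', 'password');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$username = isset($_GET['username']) ? $_GET['username'] : '';

// Vulnerable pattern (left commented out): attacker-controlled input is
// concatenated into the SQL string, so input like  ' OR '1'='1  changes
// the meaning of the query.
// $rows = $db->query("SELECT id, email FROM users WHERE username = '$username'");

// Safer: a parameterized query keeps the data separate from the SQL text,
// so the input is never interpreted as SQL.
$stmt = $db->prepare('SELECT id, email FROM users WHERE username = :username');
$stmt->execute(array(':username' => $username));
$user = $stmt->fetch(PDO::FETCH_ASSOC);

if ($user) {
    echo 'Found user #' . (int) $user['id'];
} else {
    echo 'No such user.';
}
?>

Parameterized queries address only one attack class, of course; the education, tooling, and auditing work mentioned above is about making this kind of defense the default rather than the exception.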