The Center for Education and Research in Information Assurance and Security (CERIAS)

CERIAS Blog

The PHP App Insecurity Top 20

I’ve spent some of my down time in the past couple weeks working with the NIST NVD data to get stats on PHP application vulnerabilities.  What follows is a breakdown of the 20 PHP-based applications that had the highest aggregate vulnerability scores (NIST assigns a score from 1-10 for the severity of each entry), and the highest total number of vulnerabilities, over the past 12 months.  Of the two, I feel that the aggregate score is a better indicator of security issues.

A few caveats:

  • The data here covers the period between April 1, 2006 and April 1, 2007.
  • This obviously only includes reported vulnerabilities.  There are surely a lot more applications that are very insecure, but for one reason or another haven’t had as many reports.
  • I chose 20 as the cutoff mainly for the sake of making the data a little easier to swallow (and chart nicely). There are about 1,800 distinct apps in the NIST NVD that are (as far as I could determine) PHP-based.
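To make the methodology concrete, here is a minimal sketch of the aggregation described above. It assumes the NVD entries have already been filtered down to PHP applications and flattened into (application, CVSS score) pairs; that layout is a convenience for illustration, not the actual NVD schema.

```python
from collections import defaultdict

def rank_php_apps(entries, top_n=20):
    """Rank applications by aggregate CVSS score and by entry count.

    `entries` is an iterable of (app_name, cvss_score) pairs -- a
    hypothetical flattening of the NVD data, not its real schema.
    """
    total_score = defaultdict(float)
    entry_count = defaultdict(int)
    for app, score in entries:
        total_score[app] += score   # aggregate severity per application
        entry_count[app] += 1       # raw number of reported entries
    by_score = sorted(total_score.items(), key=lambda kv: kv[1], reverse=True)
    by_count = sorted(entry_count.items(), key=lambda kv: kv[1], reverse=True)
    return by_score[:top_n], by_count[:top_n]

# Tiny illustration with made-up data: appA has fewer but more severe
# entries, appB has more but milder ones, so the two rankings differ.
sample = [("appA", 7.5), ("appA", 9.0), ("appB", 4.0),
          ("appB", 4.0), ("appB", 4.0), ("appC", 10.0)]
by_score, by_count = rank_php_apps(sample, top_n=3)
```

Note how the two rankings can disagree, which is why I report both, and why I consider the aggregate score the better indicator.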

Without further ado, here are the tepid Excel charts:

NIST NVD Data - April 1 2006 to April 1 2007 - PHP Apps by Score Count

NIST NVD Data - April 1 2006 to April 1 2007 - PHP Apps by Entry Count

A couple notes:

  • There are 25 entries in the top “20” by vulnerability count, due to ties in vulnerability count.
  • I’d never even heard of MyBulletinBoard, the top entry in both lists.  It hasn’t had any vulnerabilities in the NVD since September of 2006, which says something about how numerous and severe the entries between April and September 2006 were.  This appears to be the same product as “MyBB,” so perhaps the situation has improved, as MyBB only has one NVD entry in the entire period (CVE-2007-0544).
  • WordPress has had a bad start to 2007, with numerous vulnerabilities that significantly increased its ranking.  March 2007 was particularly bad, with 7 new vulnerabilities reported.
  • Bulletin board/forum software is by far the most common type of application in the top 20.  A couple forum apps that have very low numbers of vulnerability reports: Vanilla and FUDForum.

I do intend to keep this data up-to-date if people find it interesting, so let me know if you’d like me to do so, or if you’d like to see other types of analysis.

[tags]php, security, application security, vulnerabilities, nist, nvd, statistics[/tags]

 

What security push?

[tags]Vista, Windows, security, flaws, Microsoft[/tags]

Update: additions made 4/19 and 4/24; see the end of the post.

Back in 2002, Microsoft performed a “security standdown” that Bill Gates publicly stated cost the company over $100 million.  That extreme measure was taken because of numerous security flaws popping up in Microsoft products, steadily chipping away at MS’s reputation, customer safety, and internal resources.  (I was told by one MS staffer that response to major security flaws often cost close to $1 million each for staff time, product changes, customer response, etc.  I don’t know if that is true, but the reality certainly was/is a substantial number.)

Without a doubt, people inside Microsoft took the issue seriously.  They put all their personnel through a security course, invested heavily in new testing technologies, and even went so far as to convene an advisory board of outside experts (the TCAAB)—including some who have not always been favorably disposed towards MS security efforts.  Security of the Microsoft code base suddenly became a Very Big Deal.

Fast forward 5 years: When Vista was released a few months ago, we saw lots of announcements that it was the most secure version of Windows ever, but that metric was not otherwise qualified; a cynic might comment that such an achievement would not be difficult.  The user population has become habituated to the monthly release of security patches for existing products, with the occasional emergency patch.  Bundling all the patches together undoubtedly helps reduce the overhead in producing them, but it also serves to obscure how many different flaws are contained inside each patch set.  The number of flaws may not have decreased all that much from years past.

Meanwhile, reports from inside MS indicate that there was no comprehensive testing of personnel to see how the security training worked and no follow-on training.  The code base for new products has continued to grow, thus opening new possibilities for flaws and misconfiguration.  The academic advisory board may still exist, but I can’t find a recent mention of it on the Microsoft web pages, and some of the people I know who were on it (myself included) were dismissed over a year ago.  The external research program at MSR that connected with academic institutions doing information security research seems to have largely evaporated—the WWW page for the effort lists John Spencer as contact, and he retired from Microsoft last year.  The upcoming Microsoft Research Faculty Summit has 9 research tracks, and none of them are in security.

Microsoft seems to project the attitude that they have solved the security problem.

If that’s so, why are we still seeing significant security flaws that affect not only their old software, but also new software written under the new, extra-special security regime, such as Vista and Longhorn?  The ANI flaw and the recent DNS flaw are both glaring examples of major problems that shouldn’t have been in the current code: the ANI flaw is very similar to a years-old flaw that was already known inside Microsoft, and the DNS flaw is yet another buffer overflow!  There are even reports that there may be dozens (or hundreds) of patches awaiting distribution for Vista.

Undoubtedly, the $100 million spent back in 2002 was worth something—the code quality has definitely improved.  There is greater awareness inside Microsoft about security and privacy issues.  I also know for a fact that there are a lot of bright, talented and very motivated people inside Microsoft who care about these issues.  But questions remain: did Microsoft get its money’s worth?  Did it invest wisely and if so, why are we still seeing so many (and so many silly) security flaws?  Why does it seem that security is no longer a priority?  What does that portend for Vista, Longhorn, and Office 2007?  (And if you read the “standdown” article, one wonders also about Mr. Nash’s posterior. grin )

I have great respect for many of the things Microsoft has done, and admiration for many of the people who work there.  I simply wish they had some upper management who would realize that security (and privacy) are ongoing process needs, not one-time problems to overcome with a “campaign.”

What do you think?

[posted with ecto]

Update 4/19: The TCAAB does still continue to exist, apparently, but with a greater focus on privacy issues than security.  I do not know who the current members might be.

Update 4/24: I have heard informally from someone inside Microsoft in response to this post.  He pointed out several issues that I think are valid and deserve airing here:

  1. Security training of personnel is ongoing.  It is still unclear to me whether they are employing good educational methods, including follow-up testing, to optimize their instruction.
  2. The TCAAB does indeed continue (and was meeting when I made the original post!).  It has undergone some changes since it was announced, but is largely the same as when it was formed.  What they are doing, and what effect they are having (if any), is unclear.
  3. Microsoft’s patch process is much smoother now, and bundled patches are easier to apply than lots of individual ones.  (However, there are still a lot of patches for things that shouldn’t be in the code.)
  4. The loss of outreach to academia by MSR does not imply they aren’t still doing research in security issues.

Many of my questions still remain unanswered, including Mr. Nash’s condition….

Insecure when run on Vista, thanks to symbolic links

I was surprised to learn a few weeks ago that Vista added symlink support to Windows.  While I found people rejoicing at the new feature, I anticipate with dread a number of vulnerability announcements in products that worked fine under XP but are now insecure in the presence of symlinks in the file system.  This should continue for some time, as Windows programmers may take a while to become familiar with the security issues that symlinks pose.  For example, in the CreateFile function call, “If FILE_FLAG_OPEN_REPARSE_POINT is not specified and:

  * If an existing file is opened and it is a symbolic link, the handle returned is a handle to the target.
  * If CREATE_ALWAYS, TRUNCATE_EXISTING, or FILE_FLAG_DELETE_ON_CLOSE are specified, the file affected is the target.”
(reference:  MSDN, Symbolic link effects on File system functions, at:  http://msdn2.microsoft.com/en-au/library/aa365682.aspx)

So, unless developers update their code to use that flag, their applications may suddenly operate on unintended files.  Granted, the intent of symbolic links is to be transparent to applications, and being aware of symbolic links is not something every application needs.  However, secure Windows applications (such as software installers and administrative tools) will now need to be ever more careful about race conditions that could enable an attacker to unexpectedly create symlinks.  They will also need to be more careful about relinquishing elevated privileges as often as possible. 
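The class of bug is easiest to see in a short, POSIX-flavored Python sketch (the file and directory names are made up for illustration).  A naive open follows the link to whatever the attacker pointed it at; a defensive open refuses to follow a symlink at the final path component, which is the same guard the CreateFile flag provides on Windows.

```python
import os
import tempfile

# Set up a scratch directory with a "sensitive" target file.
workdir = tempfile.mkdtemp()
target = os.path.join(workdir, "secret.txt")
link = os.path.join(workdir, "report.txt")

with open(target, "w") as f:
    f.write("sensitive data")

# An attacker plants a symlink where the victim expects a plain file,
# e.g. between the victim's existence check and its open() -- the race
# condition described above.
os.symlink(target, link)

# Naive open follows the link: the application silently operates on the
# symlink's *target*, not the path it thought it was opening.
with open(link) as f:
    followed = f.read()

# Defensive open: refuse to follow a symbolic link at the final component.
# (os.O_NOFOLLOW is POSIX; the analogous Windows guard is passing the
# reparse-point flag to CreateFile, as quoted above.)
try:
    fd = os.open(link, os.O_RDONLY | os.O_NOFOLLOW)
    os.close(fd)
    refused = False
except OSError:
    refused = True
```

After this runs, `followed` holds the target's contents and `refused` is true, showing both the hazard and the mitigation.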

In addition, it is easy to imagine security problems due to traps planted for administrators and special users, to trick them into overwriting unintended files.  UNIX administrators will be familiar with these issues, but now Windows administrators may learn painful lessons as well. 

Hopefully, this will be just a temporary problem that will mostly disappear as developers and administrators adjust to this new attack vector.  The questions are how quickly and how many vulnerabilities and incidents will happen in the meantime.  One thing seems certain to me:  MITRE’s CWE will have to add a category for that under “Windows Path Link problems”, ID 63.

The Vulnerability Protection Racket

TippingPoint’s Zero Day Initiative (ZDI) provides interesting data.  ZDI made its “disclosure pipeline” public on August 28, 2006.  As of today, it lists 49 vulnerabilities from independent researchers, which have been waiting on average 114 days for a fix, plus 12 vulnerabilities from TippingPoint’s own researchers.  With those included, the average waiting time for a fix is 122 days, or about 4 months!  Moreover, 56 of the 61 are high-severity vulnerabilities.  These are from high-profile vendors: Microsoft, HP, Novell, Apple, IBM Tivoli, Symantec, Computer Associates, Oracle…  Some high-severity issues have been languishing for more than 9 months.
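As a quick sanity check on those figures, the implied average wait for TippingPoint's own 12 findings can be backed out of the two reported averages with a weighted mean:

```python
# Figures quoted above: 49 independent-researcher vulnerabilities waiting
# an average of 114 days; 61 vulnerabilities total (adding TippingPoint's
# own 12) waiting an average of 122 days.
indep_n, indep_avg = 49, 114
total_n, total_avg = 61, 122
inhouse_n = total_n - indep_n  # 12

# Weighted mean: total_avg = (indep_n*indep_avg + inhouse_n*x) / total_n,
# solved for x, the average wait of the in-house findings.
inhouse_avg = (total_n * total_avg - indep_n * indep_avg) / inhouse_n
print(round(inhouse_avg, 1))  # about 154.7 days
```

In other words, TippingPoint's own discoveries have apparently been waiting even longer than the independent submissions.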

Hmm.  ZDI is supposed to be a “best-of-breed model for rewarding security researchers for responsibly disclosing discovered vulnerabilities.”  How is it responsible to take 9 months to fix a known but secret high-severity vulnerability?  It’s not directly ZDI’s fault that the vendors are taking so long, but it’s not providing much incentive to the vendors either.  This suggests that programs like ZDI’s have a pernicious effect.  They buy the information from researchers, who are then forbidden from disclosing the vulnerabilities.  More vulnerabilities are found due to the monetary incentive, but only people paying for protection services have any peace of mind.  The software vendors don’t care much, as the vulnerabilities remain secret.  The rest of us are worse off than before, because more vulnerabilities remain secret for an unreasonable length of time.

Interestingly, this is what was predicted several years ago in “Market for Software Vulnerabilities?  Think Again,” K. Kannan and R. Telang, Management Science 51 (2005), pp. 726–740.  The model predicted worse social consequences from these programs than no vulnerability handling at all, due to races with crackers, increased vulnerability volume, and unequal protection of targets.  This makes another conclusion of the paper interesting and likely valid:  CERT/CC offering rewards to vulnerability discoverers should provide the best outcomes, because information would be shared systematically and equally.  I would add that CERT/CC is also in a good position to find out if a vulnerability is being exploited in the wild, in which case it can release an advisory and make vulnerability information public sooner.  A vendor like TippingPoint has a conflict of interest in doing so, because it decreases the value of their protection services.

I tip my hat to TippingPoint for making their pipeline information public.  However, because they provide no deadlines to vendors or incentives for responsibly patching the vulnerabilities, the very existence of their service, and of similar ones from other vendors, hurts those who don’t subscribe.  That’s what makes vulnerability protection services a racket.

 

On standard configurations

[tags]monocultures, compliance, standard configurations, desktops, OMB[/tags]

Another set of news items, and another set of “nyah nyah” emails to me.  This time, the press has been covering a memo out of the OMB directing all Federal agencies to adopt a mandatory baseline configuration for Windows machines.  My correspondents have misinterpreted the import of this announcement to mean that the government is mandating a standard implementation of Windows on all Federal machines.  To the contrary, it is mandating a baseline security configuration for only those machines that are running Windows.  Other systems can still be used (and should be).

What’s the difference? Quite a bit. The OMB memo is about ensuring that a standard, secure baseline is the norm on any machine running Windows.  Because there are so many configuration options that can be set (and set poorly from a security standpoint), and so many security add-ons, it has not been uncommon for attacks to succeed because of weak configurations.  As noted in the memo, the Air Force pioneered some of this work in decreeing security baseline configurations.  By requiring that certain minimum security configuration settings be in place on every Windows machine, it saw a reduction in incidents.

From this, and other studies, including some great work at NIST to articulate useful policies, we get the OMB memo.

This is actually an excellent idea.  Unfortunately, the minimum is perhaps a bit too “minimum.”  For instance, replacing IE 6 under XP with Firefox would probably be a step up in security.  However, to support common applications and uses, the mandated configuration can only go so far without requiring lots of extra (costly) work or simply breaking things.  And if too many things get broken, people will find ways around the secure configuration—after all, they need to get their work done!  (This is often overlooked by novice managers focused on “fixing” security.)

Considering the historical problems with Linux and some other systems, and the complexity of their configuration, minimum configurations for those platforms might not be a bad idea, either.  However, they are not yet used in large enough numbers to prompt such a policy.  Any mechanism or configuration where the complexity is beyond the ken of the average user should have a set, minimum, safe configuration. 

Note my use of the term “minimum” repeatedly.  If the people in charge of enforcing this new policy prevent clueful people from setting stronger configurations, then that is a huge problem.  Furthermore, if there are no provisions for understanding when the minimum configuration might lead to weakness or problems and needs to be changed, that would also be awful.  As with any policy, implementation can be good or be terrible.

Of course, mandating the use of Windows (2000, XP, Vista or otherwise) on all desktops would not be a good idea for anyone other than Microsoft and those who know no other system.  In fact, mandating the use of ANY OS would be a bad idea.  Promoting diversity and heterogeneity is valuable for many reasons, not least of which are:

  1. limit the damage possible from attacks targeting a new or unpatched vulnerability
  2. limit the damage possible from a planted vulnerability
  3. limit the spread of automated attacks (malware)
  4. increase likelihood of detection of attacks of all kinds
  5. provide incentive in the marketplace for competition and innovation among vendors & solutions
  6. enhance capability to quickly switch to another platform in the event a vendor takes a turn harmful to local interests
  7. encourage innovation and competition in the design and structure of 3rd-party solutions
  8. support agility—allow testing and use of new tools and technologies that may be developed for other platforms

These advantages are not offset by savings in training or bulk purchasing, as some people would claim.  They are second-order effects and difficult to measure directly, but their absence is noted… usually too late.

But what about interoperability?  That is where standards and market pressure come to bear.  If we have a heterogeneous environment, then the market should help ensure that standards are developed and adhered to so as to support different solutions.  That supports competition, which is good for the consumer and the marketplace.

And security with innovation and choice should really be the minimum configuration we all seek.

[posted with ecto]