For the past few years, PHP security experts have been pounding on the heads of sysadmins to turn off register_globals. While default installs of PHP turn it off, some popular web apps (especially older versions) insist on using it, so some webhost sysadmins will turn it on, presumably to make things go smoothly for their customers. Oops!
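To see why the setting is so dangerous, here is a minimal sketch of the classic pitfall (my own contrived example; `auth_user()` and `show_admin_panel()` are hypothetical): with register_globals enabled, every request parameter becomes a PHP variable, so an attacker can pre-set any variable the script forgets to initialize.

```php
<?php
// Contrived sketch of the register_globals pitfall.
// With register_globals=on, requesting /admin.php?authorized=1
// initializes $authorized to "1" before any of this code runs.
if (auth_user($_POST['user'], $_POST['pass'])) {  // hypothetical check
    $authorized = true;
}

// $authorized was never initialized to false, so the attacker's
// query-string value survives and the check below passes.
if ($authorized) {
    show_admin_panel();  // hypothetical privileged action
}
```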
CVE-2007-0233, what seems like the 300th WordPress vulnerability in the last two weeks, reports an SQL injection vulnerability in WordPress 2.0.6 (which was released only 11 days ago). The exploit appears to rely on register_globals being enabled, though:
```
funkatron@foo > php xpl.php foo.com /wp/
---------------------------------------------------------------------------
Wordpress <= 2.0.6 wp-trackback.php Zend_Hash_Del_Key_Or_Index /
sql injection admin hash disclosure exploit
(needs register_globals=on, 4 <= PHP < 4.4.3, < 5.1.4)
by rgod
dork: "is proudly powered by WordPress"
mail: retrog at alice dot it
site: http://retrogod.altervista.org
---------------------------------------------------------------------------
pwd hash ->
admin user ->
exploit failed…
```

This is a good example of why web app security (and any security, for that matter) must be multilayered: at the hardware level, the server daemon level, the language environment level, and the code level. So, for the love of god, STOP ENABLING REGISTER_GLOBALS, upgrade to WordPress 2.0.7, and (shameless plug) use PhpSecInfo to audit your PHP environment.
One of the most convincing arguments for full disclosure is that while the polite security researcher is waiting for the vendor to issue a patch, that vulnerability MAY have been sold and used to exploit systems, so all individuals in charge of administering a system have a right to know ALL the details so that they can protect themselves, and that right trumps all other rights.
That argument rests upon the premise that if one person found the vulnerability, it is possible for others to find it as well. The key word here is “possible”, not “likely”, or so I thought when I started writing this post. After all, vulnerabilities can be hard to find, which is a reason why products are released with vulnerabilities. How likely is it that two security researchers will find the same vulnerability?
Mathematically speaking, the chance that two successful security researchers (malicious or not) will find the same flaw is similar to the birthday problem. Let's assume that there are X security researchers, each finding one vulnerability out of N vulnerabilities available to be found. In 2006, 6,560 vulnerabilities were found, and 4,876 in 2005 (according to the National Vulnerability Database). Let's assume that the number of vulnerabilities available to be found in a year is about 10,000; this is almost surely an underestimate. I'll assume that all of these are equally likely to be found. An additional twist on the birthday problem is that people are entering and leaving the room; not all X are present at the same time. This is because what we worry about is two researchers finding the same vulnerability within the grace period given to a vendor.
If there are more successful researchers in the room than vulnerabilities, then necessarily there has been a collision. Let's say that the grace period given to a vendor is one month, so the number of researchers active within any one window is Y = X/12. With N = 10,000, there would then need to be 120,000 successful security researchers in a year for a collision to be guaranteed. For fewer researchers, the likelihood of two vulnerabilities being the same is approximately 1 − exp(−Y(Y−1)/(2N)): there are Y(Y−1)/2 pairs of researchers, and each pair collides with probability 1/N (cf. the birthday problem approximation on Wikipedia). Let's assume that there are 5,000 successful researchers in a given year, to match the average number of vulnerabilities reported in 2005 and 2006. The probability that two researchers find the same vulnerability within a given grace period is then:
| Grace Period | Probability |
|---|---|
| 1 month | 0.9998 |
| 1 week | 0.37 |
| 1 day | 0.01 |
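These numbers are easy to reproduce. Here is a small PHP sketch (my own, using the approximation above with N = 10,000 and X = 5,000, and taking the ceiling of Y as done later in this post):

```php
<?php
// Reproduce the table above: $n assumed vulnerabilities, $x researchers
// per year, $y researchers active within one grace period.
$n = 10000;
$x = 5000;
$periods = array('1 month' => 12, '1 week' => 52, '1 day' => 365);
foreach ($periods as $label => $per_year) {
    $y = ceil($x / $per_year);
    $p = 1 - exp(-$y * ($y - 1) / (2 * $n));
    printf("%-8s %.4f\n", $label, $p);
}
// Prints roughly 0.9998, 0.3722 and 0.0091, matching the table
// after rounding.
```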
In other words, nowadays the grace period given to a vendor should be on the order of one or two days, if we only take this risk into account. Has it always been like this?
Let's assume that in any given year, there are twice as many vulnerabilities available to be found as there are reported vulnerabilities. If we set N = 2X and fix the grace period to one week, what was the probability of collision in different years? The formula becomes 1 − exp(−Y(Y−1)/(4X)), where Y is the ceiling of X/52.
| Year | Vulnerabilities Reported | Probability |
|---|---|---|
| 1988-1996 | 0 | |
| 1997 | 252 | 0.02 |
| 1998 | 246 | 0.02 |
| 1999 | 918 | 0.08 |
| 2000 | 1018 | 0.09 |
| 2001 | 1672 | 0.15 |
| 2002 | 1959 | 0.16 |
| 2003 | 1281 | 0.11 |
| 2004 | 2363 | 0.20 |
| 2005 | 4876 | 0.36 |
| 2006 | 6560 | 0.46 |
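Again, a quick sketch reproduces the probability column (my own code, with N = 2X folded into the exponent as 4X):

```php
<?php
// Probability of a one-week collision per year, assuming N = 2X.
$reported = array(
    1997 => 252,  1998 => 246,  1999 => 918,  2000 => 1018,
    2001 => 1672, 2002 => 1959, 2003 => 1281, 2004 => 2363,
    2005 => 4876, 2006 => 6560,
);
foreach ($reported as $year => $x) {
    $y = ceil($x / 52);                       // researchers per week
    $p = 1 - exp(-$y * ($y - 1) / (4 * $x));  // 2N = 4X
    printf("%d  %.2f\n", $year, $p);
}
```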
So, according to this table, a grace period of one week would have seemed an acceptable policy before 2000, perhaps fair in 2000-2003, but is now unacceptably long. These calculations are of course very approximate, but they should be useful enough to serve as guidelines. They show, much to my chagrin, that people arguing for the full and immediate disclosure of vulnerabilities may have a point.
In any case, as a matter of national and international cyber-security, we can't afford to let vendors idly waste time before producing patches; vendors need to take responsibility, even if the vulnerability is not publicly known. This exercise also illustrates why a patch-it-later attitude could have seemed almost excusable years ago, but not now. These figures are a serious problem for managing security with patches, as opposed to secure coding from the start: I believe it is no longer feasible for traditional software development processes to issue patches before the threat of malicious disclosure and exploits becomes significant. Finally, the grace period that we can afford to give vendors may be too short for them to issue patches, but that doesn't mean it should be zero.
Note: the astute reader will remark that the above statistics are for any two vulnerabilities to match, whereas for patching we are talking about one specific vulnerability being discovered independently. The odds of that specific occurrence are much smaller. However, a systematic policy of managing by patches must consider all vulnerabilities together, which brings us back to the above calculations.
Vulnerability disclosure is such a painful issue. However, some people are trying to make it as painful as possible. They slap and kick people with the release of 0-day vulnerabilities, and tell them it’s for their own good. In their fantasies, sometime in the future, we’ll be thanking them. In reality, they make me feel sympathy for the vendors.
They cite disillusionment with the “responsible disclosure” process. They believe that this process somehow forces them to wait indefinitely on the pleasure of the vendor. While it is true that many vendors won't and don't fix known issues unless those issues become public or a public disclosure is threatened, it bemuses me that these people are unwilling to give the vendor a chance and wait even a few weeks. They use the excuse of a few bad vendors, a few occurrences of delayed fixes, even “user smugness”, to systematically treat vendors and their clients badly. This shows recklessness, impatience, intransigence, bad judgment, and a lack of discernment.
I agree that reporting vulnerabilities correctly is a thankless task. Besides my previous adventure with a web application, when I reported a few vulnerabilities to CERT/CC, I never received a reply, not even an automated receipt; it was like sending messages into a black hole. Some vendors become defensive and unpleasant instead. However, none of that justifies skipping courtesy: give the other side the first opportunity to behave badly. If you don't do at least that, then you are part of the problem. As in many real-life confrontations, the first one to use his fists is the loser.
What these security vigilantes are really doing is taking the vendor's clients hostage to make an ideological point. That is, they use the threat of security exploits to coerce or intimidate vendors and society for the sake of their objectives. They believe that the ends justify the means. Blackmail is done for personal gain, so what they are doing doesn't fit the blackmail category, and it's more than simple bullying. Whereas the word “terrorism” has been overused and brandished too often as a scarecrow, compare the above to the definition of terrorism. I realize that using this word, even correctly, can raise a lot of objections. But if you accept that a weaker form of terrorism replaces physical violence with other threats, then it would be fair to call these people “small-time terrorists” (0-day pun intended). Whatever you want to call them, in my opinion they are no longer just vigilantes, and certainly not heroes. The only thing that can be said for them is that at least they didn't try to profit directly from the disclosures.
Finally, let me make clear that I want to be informed, and I want disclosures to happen. However, I'm certain that uncivil 0-day disclosures aren't part of the answer. There is interesting coverage of this and related issues at CNET.
So far this year, a number of vulnerabilities have been discovered in Microsoft's Word. Three critical (“zero day”) vulnerabilities have been discovered this month alone, and they remain unpatched as of this writing. (Vulnerability 1, Vulnerability 2, and Vulnerability 3.) These are hardly the first vulnerabilities reported for Word; there has been quite a history of problems associated with Word documents containing malformed (or maliciously formed) content.
For years now, I have had my mailer configured to reject Word documents when they are sent to me in email and also send back an explanatory “bounce” message. In part, this is because I have not had Word installed on my system, nor do I normally use it. As such, Word documents sent to me in email have largely been so much binary noise. Yes, I could install some converters that do a halfway reasonable job of converting Word documents, or I could install something like OpenOffice to read Word files without installing Word itself, but that would continue to (tacitly) encourage dangerous behavior by my correspondents.
People who send me Word documents tend to get a bounce message that points out that Word documents:

- can carry macro viruses and other malicious content;
- are in a proprietary format that not every recipient can (or should have to) read;
- are far bulkier than the equivalent plain text.
If you want more details on this, including links to other essays, see my explanatory bounce text, as cited above.
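For the curious, the mechanics of such a filter are simple to approximate. Here is a rough PHP sketch (my own illustration, not my actual setup): an MTA such as sendmail can pipe each incoming message through a script, and a nonzero exit status causes the message to bounce.

```php
#!/usr/bin/env php
<?php
// Rough sketch: reject mail carrying Word attachments. An MTA alias
// or procmail rule pipes the raw message to this script on stdin.
$message = stream_get_contents(STDIN);

$is_word = preg_match('/^Content-Type:\s*application\/msword/mi', $message)
        || preg_match('/filename="?[^";]*\.docx?"?/i', $message);

if ($is_word) {
    // Explanatory text that ends up in the bounce message.
    fwrite(STDERR, "Word attachments are not accepted here; " .
                   "please resend as plain text or PDF.\n");
    exit(67);  // EX_NOUSER: a nonzero sysexits code makes the MTA bounce
}

echo $message;  // pass clean messages through for delivery
```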
US-CERT has warned that people shouldn't open unexpected Word documents received in email. As a general policy, they actually warn against opening attachments such as Word documents even when they appear to come from people you know. This is because malicious software may have infected an acquaintance's machine and be sending you something infected, or the return address may be faked—the mail may not be from the user you think!
If there was a mad bomber sending out explosives in packages, and you got a box with your Aunt Sally’s name on it, would you possibly pause before opening it? Most people would, but inexplicably, those same people exhibit no hesitation in opening Word documents (and other executable content), thereby endangering their own machines—and often everyone in the same enterprise.
There is almost no reason to email Word documents!! They certainly should be used in email FAR LESS than they currently are.
If you need to send a simple memo or note in email, use plain text (or rich text, or even HTML). It is more likely to be readable on most platforms, it is compact, and plain text in particular cannot carry a malicious payload.
If you need to send something out that has special formatting or images, consider PDF. It may not be 100% safe (although I know of no current vulnerabilities), but it has historically been far safer than Word. Putting the document as an image or PDF on a local WWW site and mailing the URL is also reasonable.
If you must send Word documents back and forth (and there are word processors other than Word, by the way), then consider sending plain RTF instead. Or arrange a protocol so all parties know what is being sent and received, and be sure to use an up-to-date antivirus scanner! (See the CERT recommendations.)
Word 2007 uses an XML-based file format, which promises to be safer than the current binary format. That remains to be seen, of course. And it may be quite some time before it is installed and commonplace on enough machines to make a difference.
You can help make the community safer—stop sending Word documents in email, and consider bouncing back any Word documents sent to you! If enough of us do it, we might actually make the Internet a little bit safer.
An additional note
So, what do I use for word processing? For years, I have used TeX/LaTeX for papers. Before that I also used troff on Unix. I have used FrameMaker on both Mac and Unix, and wrote several books (including all three editions of Practical Unix Security et al.) with it. I used ClarisWorks on the Mac for some years, and now use Apple’s Pages for many of my papers and documents.
I have installed and used Word under two extraordinary circumstances. Once was for a large project proposal I was leading across five universities, where there was no other good common alternative that we could all use—or that everyone was willing to use. The second was when I was on the PITAC and heavily involved in producing the Cyber Security report.
However, I am back to using Pages on the Mac (which can import RTF and, I am told, Word), and LaTeX. I’ve written over 100 professional articles, 5 books, and I don’t know how many memos and letters, and I have avoided Word. It can be done.
Note that I have nothing against Microsoft, per se. However, I am against getting locked into any single solution, and I am especially troubled at the long history of vulnerabilities in Word…which are continuing to occur after years and years of problems. That is not a good record for the future.
Back in May, I commented here on a blog posting about the failings of current information security practices. Well, after several months, the author, Noam Eppel, has written a comprehensive and thoughtful response based on all the feedback and comments he received to that first article. That response is a bit long, but worth reading.
Basically, Noam's essays capture some of what I (and others) have been saying for a while—many people are in denial about how bad things are, in part because they may not really be seeing the “big picture.” Every few months I talk with hundreds of people in government, academia, and industry around the world, and the picture that emerges is as bad as—or worse than—Noam has outlined.
Underneath it all, people seem to believe that putting up barriers and patches on fundamentally bad designs will lead to secure systems. It has been shown again and again (and not only in IT) that this is mistaken. It requires rigorous design and testing, careful constraints on features and operation, and planned segregation and limitation of services to get close to secure operation. You can’t depend on best practices and people doing the right thing all the time. You can’t stay ahead of the bad guys by deploying patches to yesterday’s problems. Unfortunately, managers don’t want to make the hard decisions and pay the costs necessary to really get secure operations, and it is in the interests of almost all the vendors to encourage them down the path of third-party patching.
I may expand on some of those issues in later blog postings, depending on how worked up I get, and how the arthritis/RSI in my hands is doing (which is why I don’t write much for journals & magazines, either). In the meantime, go take a look at Noam’s response piece. And if you’re in the US, have a happy Thanksgiving.
Someone sent the following to me as an example of how to ensure secure passwords:
Microsoft claims this message is an error. However, I think we all can see this is simply a form of extreme password security of the sort I wrote about in this post.
In my earlier posts on passwords, I noted that I approach on-line password “vaults” with caution. I have no reason to doubt that the many password services, secure email services, and other encrypted network services are legitimate. However, I am unable to adequately verify that this is the case for anything I would truly want to protect. It is also possible that an employee has compromised the software, or that a rootkit has been installed—so even if the service was designed to be legitimate, it may nonetheless be compromised without the rightful owner's knowledge.
For a similar reason, I don’t use the same password at multiple sites—I use a different password for each, so if one site is “dishonest” (or compromised) I don’t lose security at all my sites.
For items that I don’t value very much, the convenience of an online vault service might outweigh my paranoia—but that hasn’t happened yet.
Today I ran across this:
MyBlackBook [ver 1.85 live] - Internet’s First Secure & Confidential Online Sex Log!
My first thought is “Wow! What a way to datamine information on potential hot dates!”
That quickly led to the realization that this is an *incredible* tool for collecting blackmail information. Even if the people operating it are legit (and I have no reason to believe they are anything but honest), this site will be a prime target for criminals.
It may also be a prime target for lawyers seeking information on personal damages, divorce actions, and more.
My bottom line: don't store things remotely online, even in “secure” storage, unless you wouldn't mind seeing them published in a blog somewhere—or worse. Of course, storing things only locally with poor security is not really that much better…
See this account of how someone modified some roadside signs that were password protected. Oops! Not the way to protect a password. Even the aliens know that.
Chris Shiflett has posted a good piece in his blog on the potential danger of cross-domain AJAX scripting (digg here). When Chris and I discussed this at OSCON, I was pretty surprised that anyone would think that violating the same-origin restrictions was in any way a good idea. His post gives a good example of how dangerous this would be.
Myspace, the super-popular web site that your kid uses and you don't, was once again hit by a worm, this time using Macromedia Flash as its primary vector. This was a reminder for me of just how badly Myspace has screwed up when it comes to input filtering.
Even if they can plug these holes, it's unlikely that anything short of a full rewrite/refactoring of their profile customization system can ever be considered even moderately secure.
So will Myspace get their act together and modify their input filtering approaches? Very unlikely. A large portion of Myspace’s appeal relies upon the customization techniques that allow users to decorate their pages with all manner of obnoxious flashing, glittery animations and videos. Millions of users use cobbled-together hacks to twist their profiles into something fancier than the default, and a substantial cottage industry has sprung up around the subject. Doing proper input filtering means undoing much of that.
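For contrast, proper input filtering is not conceptually hard—the hard part for Myspace is that it breaks existing profiles. A minimal sketch of the whitelist approach in PHP (my own illustration, not Myspace's code): escape everything, then re-allow only a tiny set of harmless tags.

```php
<?php
// Whitelist filtering sketch: neutralize all markup, then re-enable
// a small fixed set of attribute-free formatting tags.
function filter_profile_html($input) {
    $safe = htmlspecialchars($input, ENT_QUOTES, 'UTF-8');
    foreach (array('b', 'i', 'em', 'strong', 'p') as $tag) {
        $safe = str_replace(
            array("&lt;$tag&gt;", "&lt;/$tag&gt;"),
            array("<$tag>", "</$tag>"),
            $safe
        );
    }
    return $safe;
}

// <script>, <embed>, inline styles, etc. all stay escaped:
echo filter_profile_html('<b>hi</b><embed src="evil.swf">');
// prints: <b>hi</b>&lt;embed src=&quot;evil.swf&quot;&gt;
```

The design choice is what makes this painful for Myspace: anything not explicitly on the whitelist stops working, which is exactly the cobbled-together customization their users depend on.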
Even if relatively secure equivalent techniques are offered, Myspace would certainly find themselves with a disgruntled user base that’s more likely to bail to a competitor. That’s an incredibly risky move in the social networking market, and will likely lead Myspace to continue plugging holes rather than building a dam that works.
This is why you can't design web applications with security as an afterthought. Myspace did, and I think it will prove to be their biggest mistake.