As promised, I’m following up my previous post about security extensions for Firefox with suggestions from readers. Some of these are basically different solutions to similar problems—which is great, because some users will prefer one approach over another. A couple of these are very useful, though, and should be considered essential parts of a secure browsing platform. And one seems very useful, but raises privacy issues that are a little troubling.
(An aside: I wonder if a “more secure” version of Firefox is being built and distributed by someone, one that includes some of these extensions out of the box. If so, give us a heads-up.)
McAfee SiteAdvisor, started at MIT, is a project to classify the “safety” of a site into green (safe), yellow (caution) and red (warning) categories. Testing is done by a system of bot programs that interact with web sites, doing things like submitting email signup forms, testing downloads for adware and viruses, and looking at the safety levels of linked sites. Users can also submit reports manually.
The safety level of a site is displayed as a button in Firefox’s status bar, which I’m not sure was the best place. My eyes tend to spend more time in the top half of my browser window (maybe because I have a 1920x1200 display), so more often than not I found myself forgetting that I had SiteAdvisor installed. I would have appreciated an option to display it as a toolbar, like Netcraft’s extension.
I did, however, really dig the integration with search result pages from Google, Yahoo! and MSN. Links to result pages—even sponsored links—have a green, yellow or red icon appended to the end, and mousing over the icon displays a popup with additional info. This was very clear and easy to grasp without being intrusive or overbearing.
(McAfee also maintains a SiteAdvisor blog that’s quite interesting.)
SafeCache and SafeHistory are extensions developed to address methods by which users can be tracked via browser features that don’t apply a “same origin” policy: specifically, the browser cache and browsing history. Details of this problem are available in Same Origin Policy: Protecting Browser State from Web Privacy Attacks, a report from the Stanford Security Lab. It’s a good read.
The SafeCache and SafeHistory extensions apply a proper “same origin” policy to these features, only allowing access to scripts that originate from the same domain as the cached content/history info. This isn’t perfect, as “cooperative” tracking where two sites pass info back and forth between each other isn’t addressed, but it’s certainly better than the current situation for out of the box browser installs. Honestly, this is something I think should be a default part of every browser install, because it’s a significant security hole that needs to be addressed. I hope that the Firefox, IE, Safari and Opera devs are addressing these problems.
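The cache side of the fix is easy to picture. Here’s a minimal sketch (class and method names are mine, not SafeCache’s) of partitioning a cache by requesting origin, so one site can’t probe what another site’s visit left in the cache:

```python
# Hypothetical sketch of the SafeCache idea: the cache key includes the
# origin of the page that triggered the load, not just the resource URL,
# so a hit on site B never reveals what was cached while visiting site A.

class PartitionedCache:
    def __init__(self):
        self._store = {}

    def put(self, requesting_origin, resource_url, body):
        self._store[(requesting_origin, resource_url)] = body

    def get(self, requesting_origin, resource_url):
        # A hit is only possible for the origin that cached the resource.
        return self._store.get((requesting_origin, resource_url))

cache = PartitionedCache()
cache.put("https://site-a.example", "https://cdn.example/logo.png", b"png-bytes")

# site-a sees its own cached copy; site-b gets a miss, and so learns
# nothing about site-a's visitors.
assert cache.get("https://site-a.example", "https://cdn.example/logo.png") is not None
assert cache.get("https://site-b.example", "https://cdn.example/logo.png") is None
```

The cost, as with any partitioning scheme, is that shared resources get cached once per origin instead of once overall—a reasonable trade for closing the tracking channel.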
The Netcraft Toolbar is a useful anti-phishing tool. A “risk rating” is calculated for your current site’s domain based on criteria like the age of the domain, known phishing sites within the domain, the ISP’s history re: phishing sites, and the like. Additional info, such as the site’s age and ISP, is displayed in the toolbar, linked to more detailed data on Netcraft’s site.
The plethora of web-based accounts we maintain can get out of hand quickly, and maintaining separate passwords for each one becomes pretty challenging. PasswordMaker is an interesting solution to this problem, in that it doesn’t store passwords anywhere; instead, it takes a single master password and generates a site-specific password based on 10 criteria, including personal encryption settings and the site itself. The combination of these criteria makes for an enormous number of possibilities, so typical attacks are not likely to be effective (see their FAQ for more info). Site passwords are generated on the fly, and are proactively wiped from RAM. By default it doesn’t store your master password either.
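The store-nothing approach boils down to deterministic derivation. This is a rough sketch of the general idea—not PasswordMaker’s actual algorithm or settings—using an HMAC of the site’s domain keyed by the master password:

```python
import hashlib
import hmac

# Illustrative only: PasswordMaker uses its own combination of ten
# user-configurable criteria. This shows the core property: the same
# master password and domain always yield the same site password, so
# nothing ever needs to be stored.

def site_password(master: str, domain: str, length: int = 12) -> str:
    digest = hmac.new(master.encode(), domain.encode(), hashlib.sha256).hexdigest()
    return digest[:length]

# Deterministic per site, different across sites.
assert site_password("s3cret", "example.com") == site_password("s3cret", "example.com")
assert site_password("s3cret", "example.com") != site_password("s3cret", "example.org")
```

The trade-off is inherent to the scheme: if a site forces a password change, you have to vary one of the inputs, since the derivation would otherwise keep producing the old password.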
The program itself isn’t too hard to use, although you’ll probably need to help Grandma get used to it when you set it up for her. I was able to get it working pretty quickly with some of my existing web app accounts.
Source code is available (it uses an LGPL license), and versions of PasswordMaker exist for IE, all Mozilla browsers, a Yahoo! Widget, CLI, PHP, and mobile devices. You can also use an online version if none of those fit the bill.
This is a handy Greasemonkey script that scratches an important itch: indicating whether a form’s action target is SSL-encrypted. I liked the implementation here better than the FormFox extension, which pops up a title/alt-style label if you hover over the submission button for a moment; this script displays an indicator immediately, and I appreciate the responsiveness. Still, I wish the submission button would just have a lock icon layered over it for quicker visual recognition, and this doesn’t do anything for forms with no submission button.
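The check itself is simple enough to sketch. The real script runs in the browser and decorates the page; this standalone sketch (my own simplification—it ignores relative actions, which submit to the current origin) just extracts each form’s action and the verdict:

```python
from html.parser import HTMLParser

# Minimal sketch of the check: flag any form whose action attribute
# does not point at an https:// URL. Relative actions are treated as
# unencrypted here for simplicity, which the real script would resolve
# against the current page's origin.

class FormActionChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.results = []  # list of (action, is_encrypted) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "form":
            action = dict(attrs).get("action", "")
            self.results.append((action, action.startswith("https://")))

checker = FormActionChecker()
checker.feed('<form action="https://bank.example/login"></form>'
             '<form action="http://tracker.example/submit"></form>')
```

The point the script makes visible is that the URL in the address bar tells you nothing about where a form will *send* your data—only the action attribute does.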
The Cookie Button extension is really three extensions that offer the same functionality, but in different interface contexts. All three allow you to quickly see and change the current cookie permissions on a site, with one displaying a Navigation Toolbar button, one adding a right-click context menu, and the third showing a status bar button. I’m not sure I entirely understand the need to separate these into three different extensions, but it does allow the user to pick the one that best fits his or her interface habits.
One little annoyance I found with Prefbar was that it doesn’t seem to “group” itself with the rest of your toolbars. I like to right-click on the Navigation Toolbar to swap in and out the 5 or 6 toolbars I have installed, but Prefbar refuses to show up in this list, instead mapping itself to F8 (which will annoy folks who use that key for other functions, like Exposé on OS X) and appearing in the View menu. *grumble*
As before, if you have suggestions for useful security/privacy related addons for Firefox, please let me know.
I was involved in disclosing a vulnerability found by a student to a production web site using custom software (i.e., we didn’t have access to the source code or configuration information). As luck would have it, the web site got hacked. I had to talk to a detective in the resulting police investigation. Nothing bad happened to me, but it could have, for two reasons.
The first reason is that whenever you do something “unnecessary”, such as reporting a vulnerability, police wonder why, and how you found out. Police also wonder: if you found one vulnerability, could you have found more and not reported them? Who did you disclose that information to? Did you get into the web site, and do anything there that you shouldn’t have? It’s normal for the police to think that way. They have to. Unfortunately, it makes reporting any problems very unappealing.
A typical difficulty encountered by vulnerability researchers is that administrators or programmers often deny that a problem is exploitable or is of any consequence, and request a proof. This got Eric McCarty in trouble—the proof is automatically a proof that you breached the law, and can be used to prosecute you! Thankfully, the administrators of the web site believed our report without trapping us by requesting a proof in the form of an exploit and fixed it in record time. We could have been in trouble if we had believed that a request for a proof was an authorization to perform penetration testing. I believe that I would have requested a signed authorization before doing it, but it is easy to imagine a well-meaning student being not as cautious (or I could have forgotten to request the written authorization, or they could have refused to provide it…). Because the vulnerability was fixed in record time, it also protected us from being accused of the subsequent break-in, which happened after the vulnerability was fixed, and therefore had to use some other means. If there had been an overlap in time, we could have become suspects.
The second reason that bad things could have happened to me is that I’m stubborn and believe that in a university setting, it should be acceptable for students who stumble across a problem to report vulnerabilities anonymously through an approved person (e.g., a staff member or faculty) and mechanism. Why anonymously? Because student vulnerability reporters are akin to whistleblowers. They are quite vulnerable to retaliation from the administrators of web sites (especially if it’s a faculty web site that is used for grading). In addition, student vulnerability reporters need to be protected from the previously described situation, where they can become suspects and possibly unjustly accused simply because someone else exploited the web site around the same time that they reported the problem. Unlike security professionals, they do not understand the risks they take by reporting vulnerabilities (several security professionals don’t yet either). They may try to confirm that a web site is actually vulnerable by creating an exploit, without ill intentions. Students can be guided to avoid those mistakes by having a resource person to help them report vulnerabilities.
So, as a stubborn idealist I clashed with the detective by refusing to identify the student who had originally found the problem. I knew the student enough to vouch for him, and I knew that the vulnerability we found could not have been the one that was exploited. I was quickly threatened with the possibility of court orders, and the number of felony counts in the incident was brandished as justification for revealing the name of the student. My superiors also requested that I cooperate with the detective. Was this worth losing my job? Was this worth the hassle of responding to court orders, subpoenas, and possibly having my computers (work and personal) seized? Thankfully, the student bravely decided to step forward and defused the situation.
As a consequence of that experience, I intend to provide the following instructions to students (until something changes):
- If you find strange behaviors that may indicate that a web site is vulnerable, don’t try to confirm whether it’s actually vulnerable.
- Try to avoid using that system as much as is reasonable.
- Don’t tell anyone (including me), don’t try to impress anyone, don’t brag that you’re smart because you found an issue, and don’t make innuendos. However much I wish I could, I can’t keep your anonymity and protect you from police questioning (where you may incriminate yourself), a police investigation gone awry and miscarriages of justice. We all want to do the right thing, and help people we perceive as in danger. However, you shouldn’t help when it puts you at the same or greater risk. The risk of being accused of felonies and having to defend yourself in court (as if you had the money to hire a lawyer—you’re a student!) is just too high. Moreover, this is a web site, an application; real people are not in physical danger. Forget about it.
- Delete any evidence that you knew about this problem. You are not responsible for that web site, it’s not your problem—you have no reason to keep any such evidence. Go on with your life.
- If you decide to report it against my advice, don’t tell or ask me anything about it. I’ve exhausted my limited pool of bravery—as other people would put it, I’ve experienced a chilling effect. Despite the possible benefits to the university and society at large, I’m intimidated by the possible consequences to my career, bank account and sanity. I agree with HD Moore, as far as production web sites are concerned: “There is no way to report a vulnerability safely”.
Edit (5/24/06): Most of the comments below are interesting, and I’m glad you took the time to respond. After an email exchange with CERT/CC, I believe that they can genuinely help by shielding you from having to answer questions from and directly deal with law enforcement, as well as from the pressures of an employer. There is a limit to the protection that they can provide, and past that limit you may be in trouble, but it is a valuable service.
mod_security is an essential tool for securing any Apache-based hosting environment. The Pathfinder High Performance Infrastructure blog has posted a good starter piece on using mod_security to block email injections.
One of the more common problems with PHP-based applications is that they can allow the injection of malicious content, such as SQL or email spam. In some cases we find that over 95% of a client’s ISP traffic is coming from spam injection. The solution? Grab an industrial size helping of Apache mod_security.
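The pattern a mod_security rule catches in this case is easy to state on its own. Here’s a sketch in Python of the underlying check—reject any form field containing CRLF-injected mail headers—with the regex and function names being my own illustration, not taken from the Pathfinder post:

```python
import re

# Email injection works by smuggling extra mail headers (To:, Cc:, Bcc:,
# Content-Type:) into a form field via CR/LF characters, turning a contact
# form into a spam relay. This filter flags input containing that pattern.

HEADER_INJECTION = re.compile(r"[\r\n]\s*(to|cc|bcc|content-type)\s*:", re.IGNORECASE)

def is_email_injection(value: str) -> bool:
    return bool(HEADER_INJECTION.search(value))

assert is_email_injection("Nice site!\r\nBcc: victim1@example.com")
assert not is_email_injection("a normal message body")
```

A mod_security rule expresses the same regex declaratively at the server level, which is why it can protect a whole fleet of PHP applications without touching their code.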
BTW, the Web Security Blog by Ivan Ristic (the developer of mod_security) is well worth a spot in your blogroll.
(Edit: fixed title. Duh.)
This is a great blog posting: Security Absurdity: The Complete, Unquestionable, And Total Failure of Information Security. The data and links are comprehensive, and the message is right on. There is a tone of rant to the message, but it is justified.
I was thinking of writing something like this, but Noam has done it first, and maybe more completely in some areas than I would have. I probably would have also said something about the terrible state of Federal support for infosec research, however, and also mentioned the PITAC report on cyber security.