This story at the NYT web site (registration might be required; it seems kind of random to me) describes the prevalence of “piggybacking” on open wireless networks. Most of the article deals with the theft of bandwidth, although there are a couple of quotes from David Cole of Symantec about other dangers of people getting onto your LAN and accessing the Internet through it. Something that really struck me, though, was the following section about a woman who approached a man with a laptop camped outside her condo building:
When Ms. Ramirez asked the man what he was doing, he said he was stealing a wireless Internet connection because he did not have one at home. She was amused but later had an unsettling thought: “Oh my God. He could be stealing my signal.”
Yet some six months later, Ms. Ramirez still has not secured her network.
There are two problems highlighted here, I think:
- We haven’t done enough to make it clear why encrypting your wireless network is important.
- More importantly, wireless routers need to be secure out of the box. Users will not change their behavior unless the barrier to wireless network security is lowered as far as possible, and that includes shipping routers with:
- WPA encryption enabled
- a unique shared key
- a unique router admin password (the fact that millions of routers ship with the same default admin password is embarrassing)
- a unique SSID
- SSID broadcast disabled
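The defaults in the list above could be generated per unit at the factory and printed on the device label. As a rough sketch, here is what that generation might look like; the function name, the field names, and the `serial` parameter are all hypothetical, not any vendor's actual firmware interface:

```python
import secrets
import string

def generate_router_defaults(serial: str) -> dict:
    """Sketch: unique out-of-the-box settings for one router.

    `serial` stands in for a per-unit identifier printed on the label.
    """
    alphabet = string.ascii_letters + string.digits
    return {
        # Unique SSID derived from the serial plus a random suffix
        "ssid": f"router-{serial[-4:]}-{secrets.token_hex(2)}",
        # Unique shared key for WPA
        "wpa_passphrase": "".join(secrets.choice(alphabet) for _ in range(20)),
        # Unique admin password instead of a shared factory default
        "admin_password": "".join(secrets.choice(alphabet) for _ in range(12)),
        "wpa_enabled": True,        # encryption on from the start
        "ssid_broadcast": False,    # broadcast off, per the list above
    }
```

Because every unit gets different values, compromising one router's defaults tells an attacker nothing about the next one off the line.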
Think about it: if you purchased a car that came with non-functioning locks and keys, and it was your responsibility to get keys cut and locks programmed, would you be satisfied with your purchase? Would it be realistic to expect most consumers to do this on their own? I think not. But that’s what the manufacturers of consumer wireless equipment (and related products, like operating systems) ask of the average consumer. With expectations like that, is it really a surprise that most users choose not to bother, even when they know better?
As a web developer, Usability News from the Software Usability Research Lab at Wichita State is one of my favorite sites. Design for web apps can seem pretty arbitrary, but UN presents hard numbers to identify best practices, which comes in handy when you’re trying to explain to your boss why the search box shouldn’t be stuck at the bottom of the page (not that this has ever happened at CERIAS, mind you).
The Feb 2006 issue has lots of good bits, but particularly interesting from an infosec perspective are the results of a study on the gulf between what online users know about good password practice, and what they practice.
“It would seem to be a logical assumption that the practices and behaviors users engage in would be related to what they think they should do in order to create secure passwords. This does not seem to be the case as participants in the current study were able to identify many of the recommended practices, despite the fact that they did not use the practices themselves.”
Some interesting points from the study:
- More than half of users do not vary the complexity of their passwords depending on the nature of the data they protect
- More than half of users never change passwords if the system does not force them to do so. Nearly 3/4 of the users stated that they should change their passwords every 3 to 6 months, though
- Half of users believe they should use “special” characters in their passwords (like “&” and “$”), but only 5% do so
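The recommended practices mentioned above are easy to check mechanically. Here is a toy illustration of such a check; the thresholds and the set of "special" characters are my own assumptions, not the study's actual methodology:

```python
import re

def recommended_practice_gaps(password: str) -> list:
    """Return which common recommended practices a password misses.

    A toy sketch for illustration; not the study's instrument.
    """
    gaps = []
    if len(password) < 8:
        gaps.append("too short")
    if not re.search(r"[&$#@!%^*]", password):  # "special" characters
        gaps.append("no special characters")
    if not re.search(r"\d", password):
        gaps.append("no digits")
    if password.islower() or password.isupper():
        gaps.append("no mixed case")
    return gaps
```

The study's point stands regardless of the exact rule set: users can recite checks like these and still pick passwords that fail them.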
Games refuse to install in unprivileged accounts so that they can run their own spyware-like integrity checkers with full privileges (e.g., WoW, but others such as Lineage II do the same), and those checkers can even deny you the ability to terminate (kill) the game if it hangs (e.g., Lineage II). This is done supposedly to prevent cheating, but it gives the game companies full access to and control of your machine, which is objectionable. On top of that, these games are networked applications, meaning that any vulnerability in them could result in a complete (i.e., root or LocalSystem) compromise.
It is common knowledge that if a worm like MyTob compromises your system, you need to wipe the drive and reinstall everything. This is in part because these worms are so hard to remove, as they attack security software and will prevent firewalls and virus scanners from functioning properly. However there is also a trust issue—a rootkit could have been installed, so you can’t trust that computer anymore. So, if you do any sensitive work or are just afraid of losing your work in progress, you need a dedicated gaming or internet PC. Or do you?
After experiencing this, I am left to wonder: why aren’t all applications like a VMware “appliance” image, and the operating system like VMware Player? They should be. Efforts to engineer software security have obviously failed to contain the growth of vulnerabilities and security problems, and applying the same solutions to the same problems will keep producing the same failures. I’m not giving up on secure programming and secure software engineering, as I can see promising languages, development methods, and technologies appearing, but in the meantime I can’t trust my personal computers, and I need to compartmentalize by buying separate machines. This is expensive and inconvenient.

Virtual machines provide an alternative. In the past, storing entire images of operating systems for each application was unthinkable. Nowadays, storage is so cheap and abundant that the size of “appliance” images is no longer an issue. It is time to virtualize the entire machine: all I now require from the base operating system is to manage a file system and be able to launch VMware Player, with at least a browser appliance to bootstrap…

Well, not quite. Isolated appliances are not so useful; I want to be able to transfer documents from appliance to appliance. This is easily accomplished with a USB memory stick, or perhaps a virtual drive that I can mount when needed. This shared storage could become a new propagation vector for viruses, but one very limited in scope.
Virtual machine appliances, anyone?
Note (March 13, 2006): Virtual machines can’t defend against cross-site scripting vulnerabilities (XSS), so they are not a solution for all security problems.
We have had some success using domain name system lookups to block incoming mail messages that are likely to be junk mail. While many people (including us) use DNS real-time black lists, the checks below are done against values returned by regular forward and reverse lookups on the connecting IP address. We’ve stopped a large amount of traffic based on the name of the connecting host or due to “poorly” configured DNS without many complaints. The number of false positives, while non-zero, has so far been at levels acceptable to us.
The first test is a sanity check of sorts on the name of the connecting host. If no reverse lookup can be done on the IP address of the host, the message is blocked. A forward lookup is then done on the just-returned host name. If the forward and reverse lookups do not match, the message is blocked.
Then the hostname is checked against a regex that tries to match dial-up or home cable/DSL addresses. If it matches, the message is blocked. The expectation is that this will help block spam zombies on the less than ideally maintained home machines out there.
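The policy described in the last two paragraphs can be sketched as a small function. The lookups themselves would be done with a resolver (e.g., `socket.gethostbyaddr` and `socket.gethostbyname_ex`); this sketch takes their results as arguments so the decision logic is easy to test. The regex is my own guess at what a dynamic-address pattern might look like, since the post does not show the one actually used on our server:

```python
import re

# Hypothetical pattern for dial-up / home cable/DSL hostnames; the real
# server's regex is not shown in the post.
DYNAMIC_HOST_RE = re.compile(
    r"(^|[.-])(dsl|cable|dialup|dyn(amic)?|pool|ppp)[.-]"
    r"|(^|\.)ip-?\d{1,3}[.-]\d{1,3}[.-]\d{1,3}[.-]\d{1,3}\.",
    re.IGNORECASE,
)

def should_block(ip: str, reverse_name, forward_ips) -> bool:
    """Decide whether to refuse a connecting host, given its DNS results.

    `reverse_name` is the PTR result for `ip` (None if the lookup failed);
    `forward_ips` is the list of A records returned for that name.
    """
    if reverse_name is None:
        return True          # no reverse lookup possible: block
    if ip not in forward_ips:
        return True          # forward and reverse lookups don't match: block
    if DYNAMIC_HOST_RE.search(reverse_name):
        return True          # looks like a dynamic home address: block
    return False
```

Running the checks in this order means the cheap name-based tests reject a connection before any content scanning has to happen.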
These checks do generate false positives. There are some domains where outgoing mail comes from hosts with no name in DNS or all hosts of the domain have names of the form ip-NNN-NNN-NNN-NNN.domain. Some smaller companies or individuals are given addresses from their net providers of that form and are unable to change them. Apparently many net providers think it’s a good idea to do a reverse lookup on an IP address but then not have a valid forward lookup for the just returned name. To help with these cases, the SMTP rejection message includes a URL where one can request that their addresses be added to an exception list.
Now to numbers. During the average day last week our SMTP server got 8719 connection requests. The previously mentioned DNS tests resulted in the blocking of 4463 messages, or about 51% of all our incoming traffic. Since this happens before any virus scanning or spam scanning on the content of the messages, it saves quite a bit of CPU and IO time on the server. While this system isn’t perfect, it is so effective as a first pass filter that we put up with the few false positives that have been reported so far.
My s.o. and I watched WarGames last night, and I enjoyed it not only for the kitschy nostalgia of an 8-inch floppy disk, but for some of the lessons of good information security practices that we still have trouble remembering:
- Don’t write down your password. Matthew Broderick’s character is able to break into his high school’s computer system and alter his grades because he reads the password off the secretary’s desk every couple weeks.
- Don’t make high-security systems publicly accessible. The W.O.P.R. computer (wasn’t that a great name?) that controls the launch of the US nuclear arsenal is accessed over a public phone line. Firewalls, anyone? Bueller?
It does seem like folks are generally getting a lot better with #2, but #1 seems to be a tougher nut to crack. It’s understandable, because it’s much more of a human behavior issue, but sometimes you just wonder, have we learned nothing in 20 years?