In the news
[tags]news, cell phones, reports, security vulnerabilities, hacking, computer crime, research priorities, forensics, wiretaps[/tags]
The Greek Cell Phone Incident
A great story involving computers and software, even though the main hack was against cell phones:
IEEE Spectrum: The Athens Affair. From this we can learn all sorts of lessons about how to conduct a forensic investigation, retention of logs, wiretapping of phones, and more.
Now, imagine VoIP and 802.11 networking and vulnerabilities in routers and.... -- the possibilities get even more interesting. I suspect that there's a lot more eavesdropping going on than most of us imagine, and certainly more than we discover.
NRC Report Released
Last week, the National Research Council announced the release of a new report: Towards a Safer and More Secure Cyberspace. The report is notable in a number of ways, and should be read carefully by anyone interested in cyber security. I think the authors did a great job with the material, and they listened to input from many sources.
There are two items I specifically wish to note:
- I really dislike the “Cyber Security Bill of Rights” listed in the report. It isn't that I dislike the goals they represent -- those are great. The problem is the “bill of rights” notion attached to them. After all, who says they are “rights”? By what provenance are they granted? And to what extremes do we go to enforce them? I believe the goals are sound, and we should definitely work towards them, but let's not call them “rights.”
- Check out Appendix B. Note all the other studies that have been done in recent years pointing out that we are in for greater and greater problems unless we start making some changes. I've been involved with several of those efforts as an author -- including the PITAC report, the Infosec Research Council Hard Problems list, and the CRA Grand Challenges. Maybe the fact that I had no hand in authoring this report means it will be taken seriously, unlike all the rest. :-) More to the point, people who put off the pain and expense of trying to fix things because “Nothing really terrible has happened yet” do not understand history, human nature, or the increasing drag on the economy and privacy from current problems. The trends are fairly clear in this report: things are not getting better.
Evolution of Computer Crime
Speaking of my alleged expertise at augury, I noted something in the news recently that confirmed a prediction I made nearly 8 years ago at a couple of invited talks: that online criminals would begin to compete for “turf.” The evolution of online crime is such that the “neighborhood” where criminals operate overlaps with others. If you want the exclusive racket on phishing, DDoS extortion, and other such criminal behavior, you need to eliminate (or absorb) the competition in your neighborhood. But what does that imply when your “turf” is the world-wide Internet?
The next step is seeing some of this spill over into the physical world. Some of the criminal element online is backed up by more traditional organized crime in “meat space.” They will have no compunction about threatening -- or disabling -- the competition if they locate them in the real world. And they may well do that because they also have developed sources inside law enforcement agencies and they have financial resources at their disposal. I haven't seen this reported in the news (yet), but I imagine it happening within the next 2-3 years.
Of course, 8 years ago, most of my audiences didn't believe that we'd see significant crime on the net -- they didn't see the possibility. They were more worried about casual hacking and virus writing. As I said above, however, one only needs to study human nature and history, and the inevitability of some things becomes clear, even if the mechanisms aren't yet apparent.
The Irony Department
GAO reported a little over a week ago that DHS suffered over 800 attacks on its computers in two years. I note that the report counts only detected attacks. I had one top person in DC (who will remain nameless) refer to DHS as “A train wreck crossed with a nightmare, run by inexperienced political hacks” when referring to things like TSA, the DHS cyber operations, and other notable problems. For years I (and many others) have been telling people in government that they need to set an example for the rest of the country when it comes to cyber security. It seems they've been listening -- we were simply negligent in specifying what kind of example. From now on, we need to stress that they should set a good example.
[posted with ecto]
Diversity
[tags]diversity, complexity, monocultures[/tags]
In my last post, I wrote about the problems brought about by complexity. Clearly, one should not take the mantra of “simplification” too far, and end up with a situation where everything is uniform, simple, and (perhaps) inefficient. In particular, simplification shouldn't be taken to the point where diversity is sacrificed for simple uniformity.
Nature penalizes monocultures in biological systems. Monocultures are devastated by disease and predators because they have insufficient diversity to resist. The Irish potato famine, the emerald ash borer, and the decimation of the Aztecs by smallpox are all examples of what happens when diversity is not present. Nature thus promotes diversity to ensure a robust population.
We all practice diversity in our everyday lives. Diversity of motor vehicles, for instance, supports fitness for purpose -- a Camaro is not useful for hauling dozens of large boxes of materials. For that, we use a truck. However, for one person to get from point A to point B in an economical fashion, a truck is not the best choice. It might be cheaper and require less training to use the same vehicle for everything, but there are advantages to diversity. Or consider tableware -- we have (perhaps) too many fork and spoon types in a formal place setting, but try eating soup with a fork and you discover that some differentiation is useful!
In computing, competition has resulted in advances in hardware and software design. Choice among products has kept different approaches moving forward. Competition for research awards from DARPA and NSF has encouraged deeper thought and more focused proposals (and resultant development). Diversity in operating systems and programming languages brought many advancements in the era 1950-2000. However, expenses and attempts to cut staff have led to widespread homogenization of OS, applications, and languages over approximately the last decade.
Despite the many clear benefits of promoting diversity, too many organizations have adopted practices that prevent diversity of software and computing platforms. The OMB/DoD Common Desktop initiative, for example, steers government personnel towards a monoculture that is more maintainable day-to-day, but which is probably more vulnerable to zero-day attacks and malware.
Disadvantages of homogeneity:
- greater susceptibility to zero-day vulnerabilities and attacks
- “box canyon” effect of being locked into a vendor for future releases
- reduced competition to improve quality
- reduced competition to reduce price and/or improve services
- reduced number of algorithms and approaches that may be explored
- reduced fitness for particular tasks
- simpler for adversaries to map and understand networks and computer use
- increased likelihood that users will install unauthorized software/hardware from outside sources
Advantages of homogeneity:
- larger volume for purchases
- only one form of tool, training, etc. needed for support
- better chance of compatible upgrade path
- interchangeability of users and admins
- more opportunities for reuse of systems
Disadvantages of heterogeneity:
- more complexity, so possibly more vulnerabilities
- may not be as interoperable
- may require more training to administer
- may not be reusable to the same extent as homogeneous systems
Advantages of heterogeneity:
- when present at a sufficient level, greater resistance to malware
- highly unlikely that all systems will be vulnerable to a single new attack
- increased competition among vendors to improve price, quality and performance
- greater choice of algorithms and tools for particular tasks
- more emphasis on standards for interoperability
- greater likelihood of customization and optimization for particular tasking
- greater capability for replacement systems if a vendor discontinues a product or support
Reviewing the above lists makes clear that entities concerned with self-continuation and operation will promote diversity, despite some extra expense and effort. The potential disadvantages of diversity are all things that can be countered with planning or budget. The downsides of monocultures, however, cannot be so easily addressed.
Dan Geer recently wrote an interesting article about diversity for Queue Magazine. It is worth a read.
The simplified conclusion: diversity is good to have.
Optional Client-Side Input Validation That Matches Server-Side Validation
The idea is to define the validation pattern once and reuse its source on both the server and the client. The pattern is created in the constructor:
def initialize(...)
  (...)
  @regexp = /^\d+$/ # positive integer; the source is reused in the client-side JavaScript below
end
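One subtlety worth flagging (my note, not part of the original code): in Ruby, `^` and `$` anchor at line boundaries rather than string boundaries, so a multi-line input can contain one matching line amid other content. The whole-string anchors are `\A` and `\z` -- though JavaScript lacks them, which matters here because the same pattern source is reused client-side below. The `scan` in the server-side `validate` mitigates this by returning only the matched digits rather than the full input.

```ruby
# Ruby's ^ and $ match at line boundaries; \A and \z anchor the whole string.
line_anchored  = /^\d+$/
whole_anchored = /\A\d+\z/

sneaky = "42\nrm -rf /"
line_anchored.match?(sneaky)  # true: the "42" line alone satisfies ^...$
whole_anchored.match?(sneaky) # false: the trailing text is rejected
```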
This regular expression can be used to perform the initial server-side input validation:
def validate(input)
  if input.nil?
    unescaped = default()
  else
    # Normalize: strip whitespace and undo HTML escaping before matching
    unescaped = CGI.unescapeHTML(input.to_s.strip)
  end
  unescaped.scan(@regexp) { |match|
    # The anchored pattern yields at most one match; untaint marks it validated
    return @value = match.untaint
  }
  if input != ''
    raise 'Input "' + @ui_name + '" is not valid'
  end
end
To perform client-side input validation, the onblur event is used to trigger validation when focus is lost. The idea is to turn the input red and bold (the bold is for color-blind users) when validation fails, and green when it passes. The onfocus event restores the input to a neutral state while editing. This is the Ruby code that generates the form HTML:
def form
  $cgi.input('NAME' => @name, 'VALUE' => to_html(), 'onblur' => onblur(),
             'onfocus' => onfocus())
end
def onblur()
  return "if (this.value.search(/" + @regexp.source + "/) < 0)
    {this.className = 'bad'} else {this.className = 'good'};"
end
def onfocus()
  return "this.className = 'normal';"
end
where the classes "bad", "good" and "normal" are specified in a style sheet (CSS).
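For completeness, such a style sheet might look like the following minimal sketch (the class names come from the code above; the particular colors and styles are my assumptions):

```css
input.bad    { color: #c00; font-weight: bold; }   /* red and bold on failure */
input.good   { color: #060; }                      /* green on success */
input.normal { color: #000; font-weight: normal; } /* neutral while editing */
```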
There are cases when more validation may happen later on the server side, e.g., if an integer must match an existing key in a database that the user may be allowed to reference. Could the extra validation create a mismatch? Perhaps. However, in these cases the client-side interface should probably be a pre-screened list and not a free-form input, so the client would have to be malicious to fail server-side validation. It is also possible to add range (for numbers) and size (for strings) constraints in the "onblur" JavaScript. In the case of a password field, the JavaScript contains several checks matching the rules on the server side. So, a lone regular expression may not be sufficient for complete input validation, but it is a good starting point.
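As a hedged sketch of the range-constraint idea (the class and variable names here -- RangedField, @min, @max -- are my own illustration, not from the original code), the generated onblur handler could add a numeric bounds test after the pattern test:

```ruby
# Hypothetical sketch: a field that emits onblur JavaScript combining the
# shared regexp with a numeric range check. Names are illustrative only.
class RangedField
  def initialize(regexp, min, max)
    @regexp = regexp # same pattern source as the server-side check
    @min = min
    @max = max
  end

  # Pattern test first, then bounds test, using the same styling convention
  # as above ('bad' on failure, 'good' on success).
  def onblur
    "var v = this.value;" \
    "if (v.search(/#{@regexp.source}/) < 0 || " \
    "Number(v) < #{@min} || Number(v) > #{@max}) " \
    "{this.className = 'bad'} else {this.className = 'good'};"
  end
end
```

For instance, `RangedField.new(/^\d+$/, 1, 31)` would emit a handler that flags out-of-range day numbers before the form is ever submitted; the same bounds would still be re-checked on the server.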
Note that the form still works even if JavaScript is disabled! As you can see, it is easy to perform client-side validation without forcing everyone to turn on JavaScript ;)
Do you know where you’re going?
[tags]phishing, web redirection[/tags]
Jim Horning suggested a topic to me a few weeks ago as a result of some email I sent him.
First, as background, consider that phishing and related frauds are increasingly frequent criminal activities on the WWW. The basic mechanism is to fool someone into visiting a WWW page that looks like it belongs to a legitimate organization with which the user does business. The page has fields requesting sensitive information from the user, which is then used by the criminals to commit credit card fraud, bank fraud or identity theft.
Increasingly, we have seen that phishing email and sites are also set up to insert malware into susceptible hosts. IE on Windows is the prime target, but attacks are out there for many different browsers and systems. The malware that is dropped can include bot clients, screen scrapers (to capture keystrokes at legitimate pages), and HTML injectors (to modify legitimate pages to ask for additional information). It is important to try to keep any of this malware off your system. One aspect of this is to be careful clicking on URLs in your email, even if they seem to come from trusted sources, because email can be spoofed and mail can be sent by bots on known machines.
How do you check a URL? Well, there are some programs that help, but the low-tech way is to look at the raw text of a URL before you visit it, to ensure that it references the site and domain you expected.
But consider the case of short-cut URLs. There are many sites out there offering variations on this concept, with the two I have seen used most often being “TinyURL” and “SnipURL”. The idea is that if you have a very long URL that may get broken when sent in email, or that is simply too difficult to remember, you submit it to one of these services and you get a shortened URL back. With some services, you can even suggest a nickname. So, for example, short links to the top level of my blog are <http://tinyurl.com/2geym5>, <http://snipurl.com/1ms17> and <http://snurl.com/spafblog>.
So far, this is really helpful. As someone who has had URLs mangled in email, I like this functionality.
But now, let's look at the dark side. If Jim gets email that looks like it is from me, with a message that says “Hey Jim, get a load of this!” with one of these short URLs, he cannot tell by looking at the URL whether it points somewhere safe or not. If he visits it, it could be a site that is dangerous to visit (Well, most URLs I send out are dangerous in one way or another, but I mean dangerous to his computer. :-)). The folks at TinyURL have tried to address this by adding a feature so that if you visit <http://preview.tinyurl.com/2geym5> you will get a preview of what the URL resolves to; you can set this (with cookies) as your default. That helps some.
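This kind of check can also be scripted. Here is a minimal Ruby sketch (my own illustration -- expand_short_url is a hypothetical helper, not an API offered by these services) that asks the shortening service where a link leads by reading the Location header of its redirect response, without ever visiting the destination:

```ruby
require 'net/http'
require 'uri'

# Ask a URL-shortening service where a link points, without following the
# redirect. Net::HTTP.get_response does not follow redirects itself, so only
# the shortening service is contacted -- never the (possibly hostile) target.
def expand_short_url(short_url)
  response = Net::HTTP.get_response(URI(short_url))
  if response.is_a?(Net::HTTPRedirection)
    response['Location'] # the destination the service would send us to
  else
    nil # not a redirect; nothing to expand
  end
end
```

For example, `expand_short_url('http://tinyurl.com/2geym5')` should return the underlying blog URL, which you can then inspect before deciding whether to visit it.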
But now step deeper into paranoia. Suppose one of these sites was founded by fraudsters with the intent of luring people into using it. Or the site gets acquired by fraudsters, or hijacked. The code could be changed so that every time someone visits one of these URLs, some code at the redirect site determines the requesting browser, captures some information about the end system, then injects some malicious JavaScript or ActiveX before passing the connection to the “real” site. Done correctly, this would result in largely transparent compromise of the user system. According to the SnipURL statistics page, as of midnight on May 30th there have been nearly a billion clicks on their shortened URLs. That's a lot of potential compromises!
Of course, one of the factors to make this kind of attack work is for the victim to be running a vulnerable browser. Unfortunately, there have been many vulnerabilities found for both IE and Firefox, as well as some of the less well-known browsers. With users seeking more functionality in their browsers, and web designers seeking more latitude in what they deliver, we are likely to continue to see browser exploits. Thus, there is likely to be enough of a vulnerable population to make this worthwhile. (And what browser are you using to read this with?)
I should make it clear that I am not suggesting that any of these services really are being used maliciously or for purposes of fraud. I am a happy and frequent user of both TinyURL and SnipURL myself. I have no reason to suspect anything untoward from those sites, and I certainly don't mean to suggest anything sinister. (But note that neither can I offer any assurances about their motives, coding, or conduct.) Caveat emptor.
This post is simply intended as commentary on security practices. Thinking about security means looking more deeply into possible attack vectors. And one of the best ways to commit such attacks is to habituate people into believing something is safe, then exploiting that implicit trust relationship for bad purposes.
Hmm, reminds me of a woman I used to date. She wasn't what she appeared, either.... But that's a story for a different post.
Think OpenOffice is the solution? Think again.
[tags]viruses,OpenOffice,Word,Microsoft,Office,Powerpoint,Excel[/tags]
In my last post, I ranted about a government site making documents available only in Word. A few people have said to me “Get over it -- use OpenOffice instead of the Microsoft products.” The problem is that those are potentially dangerous too -- there is too much functionality (some of it may be undocumented, too) in Word (and Office) documents.
Now, we have a virus specific to OpenOffice. We've had viruses that run in emulators, too. Trying to be compatible with something fundamentally flawed is not a security solution. That's also my objection to virtualization as a “solution” to malware.
I don't mean to be unduly pejorative, but as the saying goes, even if you put lipstick on a pig, it is still a pig.
Word and the other Office components are useful programs, but if MS really cared about security, they would include a transport encoding that didn't include macros and potentially executable attachments -- and encourage its use! RTF is probably that encoding for text documents, but it is not obvious to the average user that it should be used instead of .doc format for exchanging files. And what is there for Excel, PowerPoint, etc.?