The Center for Education and Research in Information Assurance and Security (CERIAS)

More than passive defense


I was watching a video today (more on that later) that reminded me of some history. It also brought to mind that too few defenders these days build forensics capture into their systems to help identify intruders. They also don't have active defenses, countermeasures and chaff in place to slow down attackers and provide more warning of problems.

Back in the late 1980s and early 1990s, I quietly built some counterhacking and beaconing tools that I installed in a "fake front" machine on our local network. People who tried to break into it might get surprises and leave me log info about what they were up to, and things they downloaded would not do what they thought or might beacon me to indicate where the code went. This was long before honeypots were formalized, and before firewalls were in common use. Some of my experiences contributed to my writing the first few papers on software forensics (now called digital forensics), to the development of Tripwire, and to several of my Ph.D. students' thesis topics.

I didn't talk about that work much at the time for a variety of reasons, but I did present some of the ideas to students in classes over the years, and in some closed workshops. Tsutomu Shimomura, Dan Farmer and I traded some of our ideas on occasion, along with a few others; a DOD service branch contracted with a few companies to actually build some tools from my ideas, a few of which made it into commercial products. (And no, I never got any royalties or credit for them, either, or for my early work on firewalls, or security scanning, or.... I didn't apply for patents or start companies, unfortunately. It's interesting to see how much of the commercial industry is based around things I pioneered.)

I now regret not having actually written about my ideas at the time, but I was asked by several groups (including a few government agencies) not to do so because it might give away clues to attackers. A few of those groups were funding my grad students, so I complied. You can find a few hints of the ideas in the various editions of Practical Unix & Internet Security because I shared several of the ideas with my co-author, Simson Garfinkel, who had a lot of clever ideas of his own. He went on to found a company, Sandstorm Enterprises, to build and market some professional tools in roughly this space; I was a minor partner in that company. (Simson has continued to have lots of other great ideas, and is now doing wonderful things with disk forensics as a faculty member at the Naval Postgraduate School.)

Some of the ideas we all had back then continue to be reinvented, along with many new and improved approaches. Back in the 1980s, all my tools were in Unix (SunOS, mostly), but now there are possible options in many other systems, with Windows and Linux being the main problems. Of course, back in the 1980s the Internet wasn't used for commerce, Linux hadn't been developed, and Windows was not the widespread issue it is now. There also wasn't a WWW with its problems of cross-site scripting and SQL injection. Nonetheless, there were plenty of attackers, and more than enough unfound bugs in the software to enable attacks.

For the sake of history, I thought I'd document a few of the things I remember as working well, so the memories aren't lost forever. These are all circa 1989-1993:

  • Everything was built on a decoy system. I had control over my own Sun workstation. I configured it to have 2 separate accounts, with one (named spaf) exported via NFS to the CS department Sun, Pyramid and Sequent machines. My students and fellow faculty could access this directory, and I populated it with contents to make it look real. If I wanted to share something with my classes, I'd copy it to this account. A second account (rspaf) was on a separate partition, not exported, and locked down. It was my real account and where I got email. No system had rsh access in, nor any other indication that it existed — the support staff knew it was there, but almost no one else. (For real paranoia, I kept copies of sensitive files and source code on a Mac running OS 7, and it was off the network.)
  • My original idea for Tripwire came several years before Gene Kim arrived as a student and we built the actual Tripwire tool. I built a watcher program for bogus, "bait" mail files and attractively-named source files that would never be touched in normal operation. When accessed, their file times changed. The watcher "noticed" and would start taking repeated snapshots of active network connections and running processes. When lsof was written (at Purdue), I included it in the logging process. After a few minutes, my watcher would freeze the active network connections.
  • As a matter of good security practice, I disabled most of the network services spawned at startup or by inetd. In their places, I put programs that mimicked their output behavior, but logged everything sent to them. Thus, if someone tried to rsh into my system, they'd get output claiming an incorrect password for most accounts. For spaf and root, it would randomly either act as if it were connecting but very slowly, or print an error. At the same time, this would send me email and (later) a page, which would allow me to run other monitoring tools. The tcp wrappers system was later independently developed by Wietse Venema, and its twist option provided the same kind of functionality (still useful).
  • Some attackers would try to "slurp" up interesting sounding directories without looking at all the contents. For instance, a common tactic was to take any directory labeled "security" and the mail spool. For a while, attackers were particularly interested in getting my copy of the Morris Internet Worm, so any directory with "worm" in it would be copied out in its entirety. I built a small utility that would take any file (such as a mail file or binary) and make it HUGE using the Unix sparse file structure. On disk such a file might only be a few thousand bytes long, and you could read it normally, but any copy program thought it was gigabytes in length. Thus, any attempt to copy these files offsite would result in very long, uncompleted copies (which left network connections open to trace) and sometimes filled up the attackers' disks.
  • The previous idea was partially inspired by one of Tsutomu's tricks. The authd daemon, implementing the ident protocol (RFC 1413), was somewhat popular. It could be useful in a LAN, but in the Internet at large it was not trustworthy; to make this point, several admins had it always return some made-up names when queried. Tsutomu took this a step further and when a remote system connected to the ident port on his machine, it would get an unending string of random bits. Most authd clients would simply accept whatever they were given to write to the local log file. Connecting to Tsutomu's machine was therefore going to lead to a disk full problem. If the client ran as root, it would not be stopped by any limits or quotas — and it usually logged to the main system partition. Crash. As Tsutomu put it, "Be careful what you ask for or you may get more of it than you counted on."
  • I sprinkled altered program source on my fake account, including many that looked like hacking tools. The source code was always subtly altered to neuter the tool. For example, some Unix password cracking tools had permuted S-boxes so that, when presented with a password hash to attack, the program would either run a long time without a result, or would give a result that would not work on any target machine. I also had a partial copy of the Morris Worm there, with subtle permutations made to prevent it from spreading if compiled, and the beacon changed to ping me instead of Berkeley. (See my analysis paper if you don't get the reference.)
  • I also had some booby-trapped binaries with obfuscated content. One was named "God" and had embedded text strings for the usage message that implied that there were options to become root, take over the network, and other tasty actions. However, if run, it would disable all signals, and prompt the user with the name of every file in her home directory, one at a time. Any response to the prompt would result in "Deleted!" followed by the next prompt. Any attempt to stop the program would cause it to print "Entering automatic mode" where every half-second it would state that it was deleting the next file. Meanwhile, it would be sending me logging information about who was running it and where. If run on a Purdue machine, it didn't actually delete anything — I simply used it to see who was poking in my account and running things they didn't know anything about (usually, my students). It also gave them a good scare. If taken and run on a non-Purdue machine, well, it was not so benign.
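
The sparse-file decoy described above is easy to reproduce on any modern Unix. Here is a minimal sketch in Python — the function name, default size, and bait text are my inventions, not the original utility — showing how seeking far past the last written byte and writing a single NUL inflates a file's logical size without allocating data blocks:

```python
import os

# Hypothetical reconstruction of the "huge sparse decoy" trick; the name,
# default apparent size, and bait content are invented for this sketch.
def make_sparse_decoy(path, apparent_size=8 * 1024**3,
                      bait=b"BEGIN worm source archive\n"):
    """Write a decoy file that reads normally but claims a huge size.

    Seeking past the end of the file and writing one byte extends the
    file's logical length; on filesystems that support sparse files,
    the skipped region consumes no disk blocks. Readers see the bait
    at the front; naive copy programs see `apparent_size` bytes.
    """
    with open(path, "wb") as f:
        f.write(bait)                 # real, readable content at the front
        f.seek(apparent_size - 1)     # jump almost to the fake end
        f.write(b"\0")                # one byte fixes the logical size
    return os.path.getsize(path)     # stat() reports the huge logical size
```

Note that on a filesystem without sparse-file support, the skipped region is materialized as real zero blocks, so the decoy costs actual disk space there.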

There were many other tools and tripwires in place, of course, but the above were some of the most successful.
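
The bait-file watcher idea can be sketched in a few lines. This is a modern reconstruction, not the original code: it records each decoy file's access and modification times and reports which decoys have changed since the last poll; the caller supplies the actual response, such as snapshotting network connections and running processes, or sending a page.

```python
import os

class BaitWatcher:
    """Track the access/modify times of decoy ("bait") files.

    A minimal stand-in for the pre-Tripwire watcher described above:
    check() returns the paths whose file times changed since the last
    call, and the caller decides what to do about it.
    """

    def __init__(self, paths):
        self._seen = {p: self._times(p) for p in paths}

    @staticmethod
    def _times(path):
        st = os.stat(path)
        return (st.st_atime_ns, st.st_mtime_ns)

    def check(self):
        touched = []
        for path, old in self._seen.items():
            now = self._times(path)
            if now != old:           # someone read or wrote the bait
                self._seen[path] = now
                touched.append(path)
        return touched
```

A real deployment would call check() from a polling loop and would have to account for mounts where atime updates are disabled.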

What does successful mean? Well, they helped me to identify several penetrations in progress, and get info on the attackers. I also identified a few new attacks, including the very subtle library substitution that was documented in @Large: The Strange Case of the World's Biggest Internet Invasion. The substituted library, with its backdoor in place, had the identical size, dates, and simple checksum as the original, so as to evade tools such as COPS and rdist. Most victims never knew they had been compromised. My system caught the attack in progress. I was able to share details with the Sun response team — and thereafter they started using MD5 checksums on their patch releases. That incident also inspired some of my design of Tripwire.
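
It is worth making concrete why that substitution worked. A System V `sum`-style checksum is essentially the byte total of the file folded into 16 bits, so any edit that rearranges or compensates bytes — without changing the length or the total — is invisible to it, while a cryptographic hash such as MD5 changes completely. A small illustration (the code strings are invented for the example):

```python
import hashlib

def sysv_sum(data):
    """System V `sum`-style checksum: add every byte, fold into 16 bits."""
    total = sum(data)                          # plain byte total
    total = (total & 0xFFFF) + (total >> 16)   # fold carry bits back in
    return (total & 0xFFFF) + (total >> 16)

original = b"if (uid == 0) deny();\n"
backdoor = b"if (uid == 0) ydne();\n"   # same bytes rearranged: 'deny' -> 'ydne'

# Identical size and simple checksum -- but MD5 tells them apart.
assert len(original) == len(backdoor)
assert sysv_sum(original) == sysv_sum(backdoor)
assert hashlib.md5(original).hexdigest() != hashlib.md5(backdoor).hexdigest()
```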

In another case, I collected data on some people who had broken into my system to steal the Morris Worm source code. The attacks were documented in the book Underground. The author, Suelette Dreyfus, assisted by Julian Assange (yes, the Wikileaks one), never bothered to contact me to verify what she wrote. The book suggests that my real account was compromised and source code taken. However, it was the fake account that was entered, my security monitors froze the connection after a few minutes, and the software that was accessed was truncated and neutered. Furthermore, the flaws that were exploited to get in were not on my machine — they were on a machine operated by the CS staff. (Dreyfus got several other things wrong, but I'm not going to do a full critique.)

There were a half-dozen other incidents where I was able to identify new attacks (now known as zero-day exploits) and get the details to vendors. But after a while, interest dropped off in attacking my machine as new, more exciting opportunities for the kiddies came into play, such as botnets and DDoS attacks. And maybe the word spread that I didn't keep anything useful or interesting on my system. (I still don't.) It's also the case that I got much more interested in issues that don't involve the hands-on, bits & bytes parts of security — I'm now much more interested in fundamental science and policy aspects. I leave the hands-on aspects to the next generation. So, I'm not really a challenge now — especially as I do not administer my system anymore — it's done by staff.

I was reminded of all this when someone on Twitter posted the URL of a video taken at Notacon 2011 (Funnypots and Skiddy Baiting: Screwing with those that screw with you by Adrian "Iron Geek" Crenshaw). It is amusing and reminded me of the stories, above. It also showed that some of the same techniques we used 20 years ago are still applicable today.

Of course, that is also depressing. Now, nearly 20 years later, lots of things have changed, but unfortunately security is a bigger problem, and law enforcement is still struggling to keep up. Too many intrusions occur without being noticed, and too little information is available to track the perps.

There are a few takeaways from all the above that the reader is invited to consider:

  • Assume your systems will be penetrated if they are on the network. Things you don't control are likely to be broken. Therefore, plan ahead and keep the really sensitive items on a platform that is off the net. (If the RSA folks had done this, the SecurID breach might not have resulted in anything.)
  • Install localized tripwires and honeypots that can be monitored for evidence of intrusion. Don't depend on packaged solutions alone — they are known quantities that attackers can avoid or defeat.
  • Don't believe everything you read unless you know the story has been verified with original sources.
  • Consider encrypting or altering critical files so they aren't usable "as is" if taken.
  • Be sure you are proactively logging information that will be useful once you discover a compromise.

Also, you might watch Iron Geek's video to inspire some other ideas if you are interested in this general area — it's a good starting point. (And another, related and funny post on this general topic is here, but is possibly NSFW.)

In conclusion, I'll close with my 3 rules for successful security:

  1. Preparation in advance is always easier than clean up afterwards.
  2. Don't tell everything you know.


Posted by Simson Garfinkel
on Monday, July 4, 2011 at 07:35 PM

What a lovely post. I have only one thing to add:

If your system has something valuable and people want it, you should assume that it will be penetrated even if it is not on a network.

A little while ago, I was trying to explain to someone that his “disconnected” machine was actually quite connected — the links are high latency, but also high bandwidth. If the machine were actually disconnected, then we would have to write all of its software from scratch, and anything it computed would be useless, since no one would ever be able to view the output.

Posted by Brian Snow
on Monday, July 4, 2011 at 09:20 PM

Excellent Post! 

And good timing in posting it, given the current rush to make so many devices that were once stand-alone (cars, phones, medical devices, SCADA systems, etc.) web enabled…

Unless we wise-up, the cyber world is headed for frightening times…

Posted by Clive Robinson
on Tuesday, July 5, 2011 at 02:26 PM

Some of your comment reminded me of Bob Morris (Snr) who died a few days ago.

One of his “rules of effective computer security” was,

1, Don’t own a computer.
2, Don’t turn a computer on.
3, Don’t use a computer.

Which was a more explicit version of “Never underestimate the time and resources a determined adversary will devote to reading your communications”...

On a more up-to-date note about network “trap systems” such as honeypots: many people make the mistake of faking many machines on the network by using one physical machine running multiple virtual machines.

As all of the virtual machines have one thing in common (the single physical machine), unless great care is taken they will have certain characteristics that are the same across all the virtual machines.

One of these is the timestamps on network packets: because they all share the same base CPU crystal, the clock drift will be the same for all the virtual machines. This can be spotted with what appears at the trap network as the most rudimentary of script-kiddie enumeration techniques, and thus may not even get logged…

However, an astute attacker will now have reason to believe the network is not what it is pretending to be, and will not use their hard-earned zero-day on it. Thus the trap only catches the less skilled attackers.

Such are the issues of trying to catch the people you really want to catch, not the script kiddies and other wannabes and pretenders.

Posted by Gladys L.
on Monday, July 11, 2011 at 06:11 AM

Absolutely right. If you share all the knowledge you have - such as techniques and strategies - then you are just creating your greatest rival. Thank you for the wonderful share.

Posted by Christian Vicars
on Monday, July 11, 2011 at 11:25 AM

Well, I’ve spent the past hour or so reading posts here dating back to 2009 on security issues, starting with “Do we need a new internet”, and I agree with Gene Spafford in General (pardon the pun); his short answer was “NO!”.

However, fast forward to now… I find it amusing that we (society) have not really gotten any further ahead with security issues in cyberspace; a perfect example is the millions of online gamers whose credit card info was recently hacked and stolen!

As I’ve said since the www became public domain: “The internet started with the Government, and will end with the Government.” I’m just surprised it’s gone on (un-controlled) for well over 20 years now^^^

The only sure firewall (pun intended) security is to disconnect! Have one PC for online use with all the best or free zone alarms installed, and another PC/hard drive or flash drive (not connected to the www) for important files - locked up in a fireproof safe!

Sad but true - this is the only solution until the www gets $HUTDOWN !


Posted by Steve Lodin
on Tuesday, July 12, 2011 at 09:06 PM

I guess that explains a few things *grin*

Posted by Scott
on Tuesday, July 26, 2011 at 10:43 AM

Very insightful article.  Thanks!

I am using a USB device called an IronKey to do everything online… you can store data on it, and the personal edition has the Firefox browser built in, which gives you awesome encryption for your data and online shopping and browsing.

Don’t leave home without it.
