CERIAS Blog

COAST, Machine names, Sun, and Microsoft

I received considerable feedback from people who read the last post on the history of the COAST Lab. Several people asked for more history, and a few former students volunteered some memories.

I'll do a few posts with some specific recollections. If others want to send stories to me or enter them in the comments, we may document a little history. Eventually, I'll get around to the formation of CERIAS and some history of that effort.

COAST & Computers

In the earliest days, we had limited funding to apply to our research infrastructure; my priority for funding was student support. Everyone had an account on CS departmental machines, but we were limited in what we could do -- especially anything requiring kernel configuration. Recall that this was the era of 1992-1997, so neither cheap PCs running Linux nor VMs were available. We needed access to workstations and a server or two.

I had contacts at several companies, and Purdue -- having the oldest degree-granting CS department in the U.S. -- was also reasonably well-connected with vendors. I reached out to several of them.

HP stepped up to donate a workstation, but it was underpowered, and we didn't have the money for expansion. As I recall, HP at the time wasn't interested in making a donation beyond what they had already provided. Later, we also got a steep discount on an office laser printer. HP had some very clear divisions internally, so even though several groups wanted to engage, the ones with spending authority weren’t going to help.

I also recall donations of some Intel-based machines (from Intel). Other big vendors of the time -- Sequent, IBM, Pyramid, DEC -- indicated that they weren't concerned with security, so we got nothing from them. (Three of the four are now out of business, so go figure.) [Correction: in 1997 we were loaned a DEC Alpha workstation for about six months, but weren't allowed to keep it. It was the primary computation engine for the work that led to the Kerberos 4 flaw paper.]

Sun

The company that helped the most was Sun Microsystems. (The late) Emil Sarpa was one of the people at Sun who took particular interest in what we were doing, although there were quite a few others there who interacted with us. (Mark Graff, head of their response team, was one I remember in particular.)

I don't recall if Emil was among our first contacts at Sun, but he quickly became an internal champion for us as their Manager of External Research Relations. He helped arrange some donations of equipment in return for (a) research results, and (b) access to potential hires. (That has long been the standard quid pro quo for collaboration with universities.)

Over time, extending into the CERIAS years, we received many workstations, a server, a lab of Sun Rays, a SunScreen firewall, and even some Java rings and readers. In return, Sun got quite a few reports of issues they could fix in their systems, and dozens of hires.

Naming

With upwards of two dozen machines in the lab, we needed hostnames for all the computers. The CS department used names from Arthurian legend for its machines. We knew that the CS department at Wisconsin used names of cheeses, one university (Davis?) used names of wine varieties, and there were other themes in use elsewhere. I decided that we would use the names of places from myth, legend, and science fiction/fantasy. Not only were there many candidates, but the idea of us working from places that didn't exist seemed like a good inside joke. (This also related to my long-standing interest in using deception defensively.)

Thus, we started naming machines after non-existent places: yavin, narnia, dorsai, trantor, solaria, barnum, xanadu, atlantis, lilliput, and more. We had a few disagreements in the lab when new machines came in ("I want to have Endor!"), but they all resolved amicably. I bought an atlas of imaginary places to serve as additional source material. We never really lacked for new names. Many of those names are still in use today, although the machines have been replaced many times.

COAST received a server-class machine from Sun in the mid-1990s. It had lots more space and memory than anything we had seen before, so naturally, it was named "brobdingnag." It became our central file server and mail machine. However, it soon became apparent that some of the lab denizens couldn't recall how to spell it, and petitioned for an alias. Thus, an alternate name in the host table came into being: "basm," for "big-assed server machine." A server named "basm" still exists at CERIAS to this day.

We decided to use a different naming scheme for printers and named them after lands in the Oz mythos. Kansas, Oz, and Ix were the three I remember, but we had more.

Microsoft

A few machine names, in particular, have a story associated with them. One of the Intel machines we received was running Windows, and we named it "hades." (We were not Windows fans at the time.) A few years into COAST -- I don't recall exactly when -- we attracted the attention and support of Microsoft, in the form of David Ladd. He was, at that time, involved in academic outreach.

David was visiting us and saw all the Sun machines. He asked if we had anything running Windows. Someone pointed to "hades." He didn't say anything about that, but a few weeks later, we received two new Windows machines, fully configured. They went online as "nifilheim" and "tartarus." On his next visit, David quietly noted the machines. A few weeks later, two more showed up. I think those became "hel" and "duzkah." At his next visit, I observed that we were at a university, and I had access to scholars of history, religion, and sociology -- we were not going to run out of names for underworlds. I think we got a few more machines periodically to test us, but they all got named in the same scheme.

That isn't to imply that our relationship with Microsoft was adversarial! To the contrary, it was collaborative. In fall 1996, when Windows NT Server 4.0 came out, I offered a special-topics penetration-testing class. About two dozen people enrolled. Under NDA with Microsoft, we proceeded to poke and prod the OS while also reading some of the classic literature on the topic.

Within two days, the class had discovered that NT 4 failed spectacularly if you exhausted memory, disk space, or file descriptors. By the end of the semester, everyone had found at least four significant flaws -- significant meaning they crashed the system or gained administrative privileges. We thus reported about 100 security flaws to the Windows support team. At that time, Microsoft was not as concerned about security as it is today, so we were told (eventually) that about 80 of the reports were for "expected but undocumented behavior" that would not be addressed. (Those numbers are not exact, as they are based on the best of my recollection, but the ratio is about right.) That class produced several grads who went to work for Microsoft, as well as at least two who went to work for national agencies. I have not offered the class since that time, as there have always been higher-priority needs for my teaching.
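
To illustrate the kind of stress test involved, here is a minimal, hypothetical sketch in Python (not the class's actual test harness) of a file-descriptor exhaustion probe. A robust system should return an orderly error rather than crash or hang:

    # Hypothetical sketch of a resource-exhaustion probe: open file handles
    # until the per-process limit is hit, then observe the failure mode.
    import os

    handles = []
    try:
        while True:
            # Each open() consumes one file descriptor from the process quota.
            handles.append(open(os.devnull, "r"))
    except OSError as err:
        # A well-behaved system reports an orderly error (e.g., EMFILE) here;
        # the failure mode the class looked for was a crash or hang instead.
        print(f"Descriptor limit reached after {len(handles)} opens: {err}")
    finally:
        for h in handles:
            h.close()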

Over the years, many COAST (and eventually, CERIAS) graduates went to work at Microsoft. David -- and MS -- remained supportive of our efforts until he moved into a new position well into the CERIAS days.

A Test of Time: COAST and an award-winning paper

Today, various awards were announced at the 41st IEEE Symposium on Security & Privacy, including the Test of Time Awards. One of the papers recognized was "Analysis of a Denial of Service Attack on TCP," written by a group of my former students -- Christoph Schuba, Ivan Krsul, Markus Kuhn, Aurobindo Sundaram, Diego Zamboni -- and me. The paper originally appeared at the 1997 S&P conference.

The paper reported results of work done in the COAST Laboratory -- the precursor to CERIAS. In this post, I'll make a few comments about the paper and provide a little history about COAST.

The Paper & Authors

When we received notice of the award, we were all a bit taken aback. 23 years? At the time, we were one of only two or three recognized academic groups working in cybersecurity (although that word had yet to be used). As such, we managed to attract over a dozen very talented students — including the other authors of this paper.

In the second half of 1996, several network denial-of-service attacks took place across the Internet. We discussed these at one of our regular lab meetings. I challenged the students to come up with ways to mitigate the problem, especially to protect our lab infrastructure. The first step involved replicating the attack so it could be studied. That only took the students a few days of effort.

After a week or two of further work, we had another group discussion that included the students presenting a detailed review of how the attack worked, using the TCP state diagram as illustration. There was a discussion of some partial solutions that were disappointing in scale or efficacy. I remember suggesting that if they could model the attack as a state machine, a solution might be developed the same way -- noting good and bad hosts.
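
In rough terms, the idea was to track each source address through a small set of states based on its observed handshake behavior. Here is a highly simplified, hypothetical sketch of such a classifier in Python; the actual prototype's states and actions were different and are described in the paper:

    # Hypothetical, simplified per-source state machine for classifying SYN
    # traffic; illustrative only, not the prototype described in the paper.
    from enum import Enum, auto

    class State(Enum):
        NEW = auto()   # first SYN seen; no evidence yet
        GOOD = auto()  # completed a handshake; likely a legitimate host
        BAD = auto()   # SYNs never followed by an ACK; likely spoofed

    class SynMonitor:
        def __init__(self):
            self.sources = {}  # source address -> State

        def on_syn(self, src):
            # A SYN from an unknown source starts as NEW until it proves itself.
            state = self.sources.setdefault(src, State.NEW)
            return "reset" if state is State.BAD else "allow"

        def on_ack(self, src):
            # Completing the third step of the handshake marks the source good.
            self.sources[src] = State.GOOD

        def on_timeout(self, src):
            # A SYN never followed by an ACK suggests a spoofed source address.
            if self.sources.get(src) is State.NEW:
                self.sources[src] = State.BAD

A monitor in this spirit, watching traffic beside the protected hosts, could send RST segments on behalf of addresses it classifies as bad, freeing the half-open connections that make a SYN flood effective.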

Within a week, the students had coded a working prototype to test against our model attack. There followed some extended tinkering and tuning, and a rush to produce a paper to submit to the conference. Purdue later obtained a patent (U.S. Patent 6725378) on the idea, although it was never licensed for use.

Thereafter, Christoph received his PhD in 1997 with work on firewalls and went on to a career leading to his current position as a Senior Security Architect at Apple. Ivan received his PhD in 1998 with work on security vulnerability classification, and he currently runs Artexacta, a company he founded in Bolivia. Markus finished his MS in 1997 and, after completing his PhD at Cambridge, joined the faculty there. Robin finished his MS in 1997 and is now the Head of Information Assurance and Data Protection at RELX. Diego finished his PhD in 2001 with work on agent-based intrusion detection and is now an Enterprise Security Architect at Swisscom in Switzerland.

The COAST Laboratory

Purdue has a long history of being involved in cybersecurity. Notably, Dorothy E. R. Denning completed her Ph.D. at Purdue in 1975, with a thesis on secure information flow. She then became an assistant professor and offered a graduate course in Data Security, which has been offered continuously to this day as CS 555.

Dorothy was at Purdue until 1983. One of her notable students was Matt Bishop, who completed his M.S. and Ph.D. (1984) in information security, working on take-grant models. Matt has also gone on to be a major force in the field.

Sam Wagstaff joined the CS department in 1983 and took on the teaching of CS 555 after Dorothy left. His primary area of interest was cryptography, and he has had many notable discoveries and publications during his career at Purdue (Sam retired in 2019). He even has a form of prime number named after him: the Wagstaff Prime!

I joined Purdue's CS department in 1987. My primary research focus was software engineering and distributed systems. I was involved with the newly formed Software Engineering Research Center (SERC, an NSF-supported industry-university cooperative research center) at Purdue and the University of Florida. System security was a "hobby" area for me because there was not much interest in it in academia at the time, other than in formal methods and cryptography. (I've discussed this elsewhere.)

In 1988, the Internet Worm incident occurred, as did my involvement in responding to it. Soon after that, I was the lead author of the first English-language technical reference book on computer viruses and co-authored the first edition of Practical Unix Security with Simson Garfinkel. I was also doing some highly visible research, including the work with Dan Farmer on COPS.

My work in the SERC had produced some great results, but I never saw them transition into practice. Meanwhile, my work in security had some immediate impact. Thus, I gradually started moving the focus of my work to security. This change was a bit risky, coming halfway to my tenure decision, but it was what I felt compelled to do. I continued my work in intrusion detection and began research in software forensics (my work established that as a formal field).

The increased visibility of security also meant that some good students were coming to Purdue to work in the field and that some external funding started becoming available. Most of the students wanted to build systems-oriented security tools, but we knew there was potential for a very wide set of topics. So, Sam and I decided to form a laboratory within the CS department. The department head at the time, John Rice, gave us a room for the lab and encouraged us to seek out funding.

The COAST name

We knew that we needed a catchy name for the group, so I threw out the naming as a challenge to a few of my students. Steve Chapin (now at LLNL) -- who was my first Ph.D. student in a security-related topic -- came up with COAST as an acronym for "Computer Operations, Audit, and Security Technologies." It was also a sarcastic reference to how funding agencies thought good computer science only occurred at the coasts. We knew immediately it was the perfect name, and we seldom used anything except the acronym itself.

I, along with a couple of the students, played a bit with the desktop publishing tools of the day (recall, it was 1992) and came up with the COAST logo.

We knew that we needed funding to make the lab viable and keep the space. I approached several of the then-current partners of the SERC, along with some other friends of the CS department, to see if we could get initial funding for equipment purchases and student support. Four stepped forward: Sun Microsystems, Bell-Northern Research (BNR), Schlumberger Laboratories, and Hughes Laboratories.

We were open for business as of spring 1992!

Over the next six years, COAST grew in faculty, students, and research, establishing itself as the largest research group in computing security in the country, reaching a peak research budget of over one million dollars per year (pretty good for its time).

COAST became notable for several innovative and groundbreaking projects, including the Tripwire tool, the IDIOT intrusion detection system, the vulnerability classification work by Aslam and Krsul that influenced the CVE system, the first-ever papers describing software forensics by Krsul, Spafford, and Weeber, the discovery of a serious lurking Kerberos 4 encryption flaw by Dole and Lodin, and the firewall reference model by Schuba -- among others.

Next chapter

As COAST grew and added faculty from across the university, it was clear that it had become more than a Computer Science effort. Some of the CS faculty members were hostile to the work, dismissing it as "merely systems administration." (A few still have that attitude.) The CS Ph.D. qualifying exams of the time included mandatory exams in both theory of computation and numerical analysis (the department had its roots -- from 1962 -- in mathematics). Some of the faculty in those two areas were particularly unbending, and as a result, several very promising security grad students exited Purdue with only an M.S. degree. In retrospect, that worked out okay for all of them, as they went on to stellar careers in government and industry, all paid much better than any of those professors!

Those factors, and others, led to the transformation of COAST into a university-wide institute, CERIAS, in May of 1998. I've discussed this elsewhere and may do a follow-on post with some of that history.

See some of the recollections in COAST, Machine names, Sun, and Microsoft.

Near the Root of Cybersecurity Dysfunction

I’ve been missing from the CERIAS blog for much of the last year+ as I enjoyed a long-overdue sabbatical.

While I was away, I was going through some materials in my account and found slides from a talk I gave many years ago. I referenced those in a post back in February, entitled A Common Theme. I polished the talk up a little, gave it a few times, and then presented it in the CERIAS Security Seminar when I returned to campus this fall.

Basically, I attribute a large portion of our continuing problems in what we call "cybersecurity" to the fact that we don't have a precise, agreed-upon definition of "security." Coupled with that, we don't have agreed-upon characteristics, nor do we have well-defined metrics. The result is that we can't tell if something addresses our needs, we have no idea whether the money we spent has made a difference that corresponds to the outlay, and we can't compare different approaches. And that is simply the start!

If you want to watch the presentation, visit this link. (Note that we have videos of presentations going back 15 years -- over 400 videos -- all available at no charge!)

Challenging Conventional Wisdom

In IT security ("cybersecurity") today, there is a powerful herd mentality. In part, this is because it is driven by an interest in shiny new things. We see this with the massive pile-on to new technologies when they gain buzzword status: e.g., threat intelligence, big data, blockchain/bitcoin, AI, zero trust. The more they are talked about, the more others think they need to be adopted, or at least considered. Startups and some vendors add to the momentum with heavy marketing of their products in that space. Vendor conferences such as the yearly RSA conference are often built around the latest buzzwords. And sadly, too few people with in-depth knowledge of computing and real security are listened to about the associated potential drawbacks. The result is usually additional complexity in the enterprise without significant new benefits — and often with other vulnerabilities, plus expenses to maintain them.

Managers are often particularly victimized by these fads as a result of long-standing deficiencies in the security space: we have no sound definition of security that encompasses desired security properties, and we therefore have no metrics to measure them. If a manager cannot get some numeric value or comparison of how a new technology may make things better versus its cost, the decision is often made on "best practice." Unfortunately, "best practice" is also challenging to define, especially when there is a lot of talk and excitement from the people vending the next new shiny thing. Additionally, enterprise needs are seldom identical, so "best" may not be uniform. If the additional siren call of "See how it will save you money!" is heard, then it is nearly impossible to resist, even if the "savings" are only near-term or downright illusory.

This situation is complicated because so much of what we use is defective, broken, or based on improperly understood principles. Thus, to attempt to secure it (really, to gain greater confidence in it), solutions that sprinkle magic pixie dust on top are preferred, because they don't mean sacrificing the sunk cost inherent in all the machines and software already in use. Magic pixie dust is shiny, too, and usually available at a lower (initial) cost than actually fixing the underlying problems. That is why we have containers on VMs on systems with multiple levels of hypervisor behind firewalls and IPS -- turtles all the way down -- while the sunk costs keep getting larger. This is also why patching and pen testing are seen as central security practices -- they are the flying buttresses of security architecture these days.

The lack of a proper definition and metrics has been known for a while. In part, the old Rainbow Series from the NCSC (NSA) was about this. The authors realized the difficulty of defining "secure" and instead spoke of "trusted." The series established a set of features and levels of trust assurance in products to meet DOD needs. However, that reflected the DOD notion of security at the time, so issues of resilience and availability (among others) weren't really addressed. That is one reason why the Rainbow Series was eventually deprecated: the commercial marketplace found it didn't apply to its needs.

Defining security principles is a hard problem, and it is really in the grand challenge space for security research. It was actually stated as such 16 years ago in the CRA security Grand Challenges report (see #3). Defining accompanying metrics is not likely to be simple either, but we really need to do it or we will continue to run up against the same problems. If the only qualities we can reliably measure for systems are speed and cost, decisions are going to be heavily weighted towards solutions that provide those at the expense of maintainability, security, reliability, and even correctness. Corporations and governments are heavily biased towards solutions that promise financial results in the next year (or next quarter), simply because that is easily measured and understood.

I've written and spoken about this topic before (see here and here for instance). But it has come to the forefront of my thinking over the last year, as I have been on sabbatical. Two recent issues have reinforced that:

  • I was cleaning up my computer storage and came across some old presentations from 10-20 years ago. With minor updating, they could be given today. Actually, I have been giving a slightly updated version of one from 11 years ago, and audiences view it as "fresh." The theme? How we don't define or value security appropriately. (Let me know if you'd like me to present it to your group; you can also view a video of the talk given at Sandia National Laboratories.)
  • I was asked by people associated with a large entity with a significant computing presence to provide some advice on cloud computing. They have been getting a strong push from management to move everything to the cloud, which they know to be a mistake, but their management is countering their concerns about security with "it will cost less." I have heard this before from other places and given informal feedback to the parties involved. This time, I provided more organized feedback, now also available as a CERIAS tech report (here). In summary, moving to the cloud is not always the best idea, nor is it necessarily going to save money in the long term.

I hope to write some more on the issues around defining security and bucking the "conventional wisdom" once I am fully recovered from my sabbatical. There should be no shortage of material. In the meantime, I invite you to look at the cloud paper cited above and provide your comments below.


Cyber Security Hall of Fame 2019 Inductees

[Update May 1: The CSHOF pages have been updated with bios of the 2019 inductees.]

The 2019 inductees into the CSHOF were announced last week.

The Hall of Fame was created as a way to honor and memorialize individuals who have had a particularly notable impact on cyber security as a field.

There are five criteria considered for potential honorees: Technology, Policy, Public Awareness, Education, and Business. Nominated individuals can reside and work anywhere in the United States. Submissions to a public call for nominations are compiled and ranked, and a senior board then reviews them to select each year's honorees.

This year's honorees are:

  • Brian Snow, a former technical director with the National Security Agency, known for his work in cryptography and in seeking to bridge the gap between government and the commercial sector, helping to foster transparency and cooperation.
  • Sheila Brand, the defining person behind the production of the Trusted Computer System Evaluation Criteria, known as the "Orange Book," a standard set of requirements for assessing and effecting cybersecurity controls in a computer system.
  • Corey Schou, a University Professor, Associate Dean, published author, and the director of the National Information Assurance Training and Education Center. He led the development of the college curricula that underpin the Centers of Academic Excellence in Information Systems Security (Cybersecurity) program.
  • Virgil Gligor, a professor at Carnegie Mellon University who has made fundamental contributions in applied cryptography, distributed systems, and cybersecurity. Other subjects he has worked on include covert channel analysis, access control mechanisms, intrusion detection, and DoS protection.
  • Ken Minihan, a former United States Air Force officer known for his service as the Director of the NSA and, prior to that, the Director of the Defense Intelligence Agency under the Clinton administration. He operationalized NSA's Information Systems Security mission, promoting engagement with industry, academia, and U.S. allies.

Inducted in memoriam:

  • Rebecca "Becky" Bace, who was the major force behind building the computer misuse and detection (CMAD) community, starting in the 1990s. She was widely recognized for her many efforts in industry and government to increase participation by women and minorities in cyber security, and to bring useful technology to market. Becky is fondly remembered by many as the "Den Mother of Cyber Security."
  • Howard Schmidt, who, among many other activities, was Vice Chair of the President's Critical Infrastructure Protection Board and the special adviser for cyberspace security for the Bush White House directly following 9/11. While in that position, he was a leader in developing the U.S. National Strategy to Secure Cyberspace. He later served in the Obama administration as the White House Cybersecurity Coordinator. He also served as the CISO and CSO of Microsoft, among other roles.

The formal induction will be held on April 25 at the annual Hall of Fame Dinner at the Hotel at Arundel Preserve.

Congratulations to all the new inductees!