The Center for Education and Research in Information Assurance and Security (CERIAS)


CERIAS Blog


Ch-ch-ch-changes


Tomorrow, July 1, 2025, ushers in two significant changes.

For the first time in over 25 years, our fantastic administrative assistant, Lori Floyd, will not be present to greet us as she has retired. Lori joined the staff of CERIAS in October of 1999 and has done a fantastic job of helping us keep moving forward. Lori was the first person people would meet when visiting us in our original offices in the Recitation Building, and often the first to open the door at our new offices in Convergence. At our symposia, workshops, and events of all kinds, Lori helped ensure we had a proper room, handouts, and (when appropriate) refreshments. She also helped keep all the paperwork and scheduling straight for our visitors and speakers, handled some of our purchasing, and acted as building deputy. We know she quietly and competently did many other things behind the scenes, and we'll undoubtedly learn about them as things begin to fall apart!

We all wish Lori well in her retirement. She plans to spend time with her partner, kids, and grandkids, travel, and garden. She will be missed at CERIAS, but definitely not forgotten.

The second change is in the related INSC Interdisciplinary Information Security graduate program, a spin-off of CERIAS. In 2000, Melissa Dark, Victor Raskin, and Spaf founded the INSC program as the first graduate degree in information/cyber security in the world. The program was explicitly interdisciplinary from the start and supported by faculty across the university. Students were (and still are) required to take technology ethics and policy courses in addition to cybersecurity courses. Starting with MS students supported by one of the very first NSF CyberCorps awards, the program quickly grew and was approved to offer the Ph.D. degree.

INSC was never formally a part of CERIAS, but students and faculty often saw them as related. All INSC students were automatically included in CERIAS events, and they were frequently recruited by CERIAS partners (and still are!). CERIAS faculty volunteer to serve on INSC committees and to advise the students. It is a "win–win" situation that has resulted in some great graduates, many now in some notable positions in industry and government.

The change coming to INSC is in leadership. After 25 years as program head, Spaf is stepping into the role of associate head for a while. Taking on the role of program head is Professor Christopher Yeomans. Chris has been a long-time supporter of the program and brings experience as chair of the Philosophy Department.

(If you're interested in a graduate degree through INSC, visit the website describing the program and how to apply.)

Reflecting on the Internet Worm at 35


Thirty-five years ago today (November 2nd), the Internet Worm program was set loose to propagate on the Internet. Noting that now to the computing public (and cybersecurity professionals, specifically) often generates an "Oh, really?" response akin to stating that November 2nd is the anniversary of the inaugural broadcast of the first BBC TV channel (1936), and the launch of Sputnik 2 with Laika aboard (1957). That is, to many, it is ho-hum, ancient history.

Perhaps that is to be expected after 35 years -- approximately the length of a human generation. (As an aside, I have been teaching at Purdue for 36 years. I have already taught students whose parents had taken one of my classes as a student; in five or so years, I may see students whose grandparents took one of my classes!). In 1988, fewer than 100,000 machines were likely connected to the Internet; thus, only a few thousand people were involved in systems administration and security. For us, the events were more profound, but we are outnumbered by today's user population; many of us have retired from the field...and more than a few have passed on. Thus, events of decades ago have become ancient history for current users.

Nonetheless, the event and its aftermath were profound for those who lived through it. No major security incident had ever occurred on such a scale before. The Worm was the top news story in international media for days. The events retold in Cliff Stoll's Cuckoo's Egg were only a few years earlier but had affected far fewer systems. However, that tale of computer espionage heightened concern by authorities in the days following the Worm's deployment regarding its origin and purpose. It seeded significant changes in law enforcement, defense funding and planning, and how we all looked at interconnectivity. In the following years, malware (and especially non-virus malware) became an increasing problem, from Code Red and Nimda to today's botnets and ransomware. All of that eventually led to a boom in add-on security measures, resulting in what is now a multi-billion dollar cybersecurity industry.

At the time of the Worm, the study of computing security (the term "cybersecurity" had not yet appeared) was primarily based around cryptography, formal verification of program correctness, and limiting covert channels. The Worm illustrated that there was a larger scope needed, although it took additional events (such as the aforementioned worms and malware) to drive the message home. Until the late 1990s, many people still believed cybersecurity was simply a matter of attentive cyber hygiene and not an independent, valid field of study. (I frequently encountered this attitude in academic circles, and was told it was present in the discussion leading to my tenure. That may seem difficult to believe today, but should not be surprising: Purdue has the oldest degree-granting CS department [60 years old this year], and it was initially viewed by some as simply glorified accounting! It is often the case that outsiders dismiss an emerging discipline as trivial or irrelevant.)

The Worm provided us with an object lesson about many issues that, unfortunately, were not heeded in full to this day. That multi-billion dollar cybersecurity industry is still failing to protect far too many of our systems. Among those lessons:

  • Interconnected systems with long-lasting access (e.g., .rhosts files) created a playground for lateral movement across enterprises. We knew then that good security practice involved fully mediated access (now often referred to as "Zero Trust") and had known that for some time. However, convenience was viewed as more important than security...a problem that continues to vex us to this day. We continue to build systems that both enable effortless lateral movement and make it difficult or annoying for users to reauthenticate, thus leading them to bypass the checks.
  • Systems without separation of privilege facilitated the spread of malware. Current attackers who manage to penetrate key services or privileged accounts are able to gain broader access to entire networks, including the ability to shut off monitoring and updates. We have proven methods of limiting access (SELinux is one example) but they are too infrequently used.
  • Sharing information across organizations can result in a more robust, more timely response. Today, we still have organizations that refuse to disclose whether they have been compromised, thus delaying our societal response; information obtained by government agencies has too often been classified, or at least closely held. The information that is shared is frequently incomplete or untimely.
  • The use of type-unsafe languages with minimal security features can lead to flaws that may be exploited. One only needs to survey recent CVE entries and attack reports to see buffer overflows, type mismatches, and other well-known software flaws leading to compromise. Many organizations are still producing or reusing software written in C or C++, languages that are especially prone to such errors. Sadly, higher education is complicit by teaching those languages as primary ones, mainly because their graduates may not be employable without them.
  • Heterogeneity of systems provides some bulwark against common attacks. Since 1988, the number of standard operating systems in use has decreased, as have the underlying machine architectures. There are clearly economic arguments for reduced numbers of platforms, but the homogeneity facilitates common attacks. Consideration of when to reuse and when to build new is sadly infrequent.
  • The Worm incident generated conflicting signals about the propriety of hacking into other people's systems and writing malware. Some people who knew the Worm's author rose to his defense, claiming he was demonstrating security problems and not doing anything wrong. Malware authors and system attackers commonly made that same claim in the decades following, with mixed responses from the community. It still colors the thinking of many in the field, excusing some very dubious behavior as somehow justified by its results. Although there is nuance in some discussions, the grey areas around pen testing, companies selling spyware, and "ethical" hacking still enable plausible explanations for bad behavior.
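
The first lesson above -- fully mediated access versus long-lasting ambient trust -- can be sketched in a few lines of Python (a toy illustration of the idea, not anything from the Worm itself; all names and the token scheme here are hypothetical):

```python
# Toy sketch: rhosts-style ambient trust vs. fully mediated access.
# Hypothetical names throughout; a real system would use signed,
# short-lived credentials, not plain strings.

TRUSTED_HOSTS = {"devbox"}  # rhosts-style allow list: checked once, trusted forever


def ambient_access(host):
    """Once on the list, always allowed -- ideal for lateral movement."""
    return host in TRUSTED_HOSTS


class MediatedAccess:
    """Every request presents a credential that is re-validated each time."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.issued = {}  # token -> expiry timestamp

    def issue(self, user, now):
        token = f"{user}-token"  # stand-in for a real signed token
        self.issued[token] = now + self.ttl
        return token

    def check(self, token, now):
        """Re-check on every access; expired tokens force reauthentication."""
        expiry = self.issued.get(token)
        return expiry is not None and now < expiry


mediator = MediatedAccess(ttl_seconds=300)
tok = mediator.issue("alice", now=0)
print(ambient_access("devbox"))      # stays True forever
print(mediator.check(tok, now=100))  # True: within the credential's lifetime
print(mediator.check(tok, now=1000)) # False: must reauthenticate
```

The inconvenience mentioned above lives in that last line: forcing reauthentication is exactly the friction users bypass when systems make it annoying.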

That last point is important as we debate the dangers and adverse side-effects of machine learning/LLM/AI systems. Those are being refined and deployed by people claiming they are not responsible for the (mis)use of (or errors in) those systems and that their economic potential outweighs any social costs. We have failed to clearly understand and internalize that not everything that can be done should be done, especially on the Internet at large. This issue keeps coming up, and we continue to fail to address it properly.

As a field, cybersecurity is relatively young. We have a history that arguably starts in the 1960s with the Ware Report. We are still discovering what is involved in protecting systems, data privacy, and safety. Heck, we still need a commonly accepted definition of what cybersecurity entails! (Cf. Chapter 1 of the Cybersecurity Myths book, referenced below.) The first cybersecurity degree program wasn't established until 2000 (at Purdue). We still lack useful metrics to know whether we are making significant progress and to titrate investment. And we are still struggling with tools and techniques to create and maintain secure systems. All this while the market (and thus need) is expanding globally.

In that context of growth and need, we should not dismiss the past as "Ho-hum, history." Members of the military study historic battles to avoid future mistakes: mentioning the Punic Wars or The Battle of Thermopylae to such a scholar will not result in dismissal with "Not relevant." If you are interested in cybersecurity, it would be advisable to study some history of the field and think about lessons learned -- and unlearned.


Further Reading

The Ware Report
This can be seen as one of the first descriptions of cybersecurity challenges, needs, and approaches.
The protection of information in computer systems
A paper from 1975 by J.H. Saltzer and M.D. Schroeder. This paper describes basic design principles, in large part inspired by Multics, that include complete mediation (now somewhat captured by "Zero Trust") and least privilege. Today these principles are more often violated than designed in, especially economy of mechanism.
(Versions of this paper may be found outside the paywall via web search engines.)
Historical papers archive
A collection of historical papers presenting the early foundation of cybersecurity. This includes the Ware Report, and its follow-on, the Anderson Report. Some other, hard-to-find items are here.
The Communications of the ACM Worm Issue
An issue of CACM was devoted to papers about the Worm.
The Internet Worm: An Analysis
My full report analyzing what the Worm program did and how it was structured.
The Internet Worm Incident
A report describing the timeline of the Worm release, spread, discovery, and response.
Happy birthday, dear viruses
This is a short article in Science I coauthored with Richard Ford for the 25th anniversary of the Worm, about malware generally.
Cybersecurity Myths and Misconceptions
A new book about things the public and even cybersecurity experts mistakenly believe about cybersecurity. Chapter 1 addresses, in depth, how we do not have an accepted definition of cybersecurity or metrics to measure it. Other items alluded to in this blog post are also addressed in the book.
Cyber security challenges and windmills
One of my blog posts, from 2009, about how we continue to generate studies of what would improve cybersecurity and then completely fail to heed them. The situation has not improved in the years since then.

AI and ML Sturm und Drang

I recently wrote up some thoughts on the current hype around ML and AI. I sent it to the Risks Digest. Peter Neumann (the moderator) published a much-abbreviated version. This is the complete set of comments.


There is a massive miasma of hype and misinformation around topics related to AI, ML, and chat programs and how they might be used…or misused. I remember previous hype cycles around 5th-generation systems, robotics, and automatic language translation (as examples). The enthusiasm each time resulted in some advancements that weren’t as profound as predicted. That enthusiasm faded as limitations became apparent and new bright, shiny technologies appeared to be chased.

The current hype seems even more frantic for several reasons, not least of which is that there are many more potential market opportunities for recent developments. Perhaps the biggest drivers of both enthusiasm and concern are the entities that see new AI systems as a way to reduce expenses by cutting headcount and replacing people with AI (see, for example, this article). That was a driver of the robotics craze some years back, too. The current cycle has already had an impact on some creative media, including being an issue of contention in the media writers' strike in the US. It also is raising serious questions in academia, politics, and the military.

There’s also the usual hype cycle FOMO (fear of missing out) and the urge to be among the early adopters, as well as those speculating about the most severe forms of misuse. That has led to all sorts of predictions of outlandish capabilities and dire doom scenarios — neither of which is likely wholly accurate. AI, generally, is still a developing field and will produce some real benefits over time. The limitations of today's systems may or may not be present in future systems. However, there are many caveats about the systems we have now and those that may be available soon that justify genuine concern.

First, LLMs such as ChatGPT, Bard, et al. are not really "intelligent." They are a form of statistical inference based on a massive ingest of data. That is why LLMs "hallucinate" -- they produce output that matches their statistical model, possibly with some limited policy shaping. They are not applying any form of "reasoning," as we define it. As noted in a footnote in my recent book,
Philosophically, we are not fond of the terms 'artificial intelligence' and 'machine learning,' either. Scholars do not have a good definition of intelligence and do not understand consciousness and learning. The terms have caught on as a shorthand for 'Developing algorithms and systems enhanced by repeated exposure to inputs to operate in a manner suggesting directed selection.' We fully admit that some systems seem brighter than, say, certain current members of Congress, but we would not label either as intelligent.
I recommend reading this and this for some other views on this topic. (And, of course, buy and read at least one copy of Cybermyths and Misconceptions. grin)
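
The "statistical model" point can be seen even in a toy example (my own sketch, not from the book or the Risks post): a bigram model built from a few sentences emits fluent-looking continuations by word frequency alone, with no representation of truth or meaning.

```python
# Toy sketch: a bigram "language model" built from a tiny corpus.
# It emits the most frequent next word given the current word --
# pure frequency statistics, with no model of truth or meaning.
from collections import Counter, defaultdict

corpus = (
    "the worm spread across the internet "
    "the worm exploited trusted hosts "
    "the internet connected trusted hosts"
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1


def generate(start, length):
    """Greedily emit the statistically likeliest continuation."""
    words = [start]
    for _ in range(length - 1):
        options = following.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)


print(generate("the", 4))  # fluent-looking output, but only frequency echoes
```

Scale the corpus up by a few trillion tokens and add policy shaping, and the output becomes far more convincing -- but the underlying mechanism is still prediction, not reasoning.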

Depending on the data used to build their models, LLMs and other ML systems may contain biases and produce outright falsehoods. There are many examples of this issue, which is not new: bias in chatbots (e.g., Microsoft Tay turning racist), bias in court sentencing recommendation systems, and bias in facial recognition systems such as those discussed in the movie Coded Bias. More recently, there have been reports showing racial, religious, and gender biases in versions of ChatGPT (as an example, this story). "Hallucinations" of non-existent facts in chatbot output are well known. Beyond biases and errors in chats, one can also find all sorts of horror stories about autonomous vehicles, including several resulting in deaths and serious injuries, because they aren't comprehensive enough for their uses.

These limitations are based on how the systems are trained. However, it is also possible to "poison" these systems on purpose by feeding them bad information or triggering the recall of biased information. This is an area of burgeoning study, especially within the security community. Given that the internal encodings of these large ML models cannot easily be reversed to understand precisely what causes certain decisions to be made (the goal of what is often called "explainable AI"), there are significant concerns about inserting these systems into critical paths.

Second, these systems are not accountable in current practice and law. If a machine learning system (I'll use that term, but cf. my second paragraph) comes up with an action that results in harm, we do not have a clear path of accountability/responsibility. For instance, who should be held at fault if an autonomous vehicle were to run down a child? It is not an "accident" in the sense that it could not be anticipated. Do we assign responsibility to the owner of the vehicle? The programmers? The testers? The stockholders of the vendor? We cannot say that "no one" is responsible, because that leaves us without recourse to force a fix of any underlying problems, to provide recompense to the victims, or to raise general public awareness. Suppose we use such systems in safety- or correctness-critical applications (and I would put voting, healthcare, law enforcement, and finance in that category). In that case, it will be tempting for parties to say, "The computer did it," rather than assign actual accountability. That is obviously unacceptable: We should not allow that to occur. The price of progress should not be to absolve everyone of poor decisions (or bad faith). So whom do we blame?

Third, the inability of much of the general public to understand the limitations of current systems means that any use may introduce a bias into how people make their own decisions and choices. This could be random, or it could be manipulated; either way, it is dangerous. It could be anything from gentle marketing via recency effects and priming all the way to Newspeak and propaganda. The further towards propaganda we go, the worse the outcome may be. Who draws the line, and where is it drawn?

One argument is, "If you train humans on rampant misinformation, they would be completely biased as well, so how is this different?" Well, yes -- we see that regularly, which is why we have problems with QAnon, vaccine deniers, and sovereign citizens (among other problem groups). They are social hazards that endanger all of us. We should seek ways to reduce misinformation rather than increase it. The propaganda that is out there now is only likely to get worse when chatbots and LLMs are put to work producing biased and false information. This has already been seen (e.g., this story about deepfakes), and there is considerable concern about the harm this can bring. Democracy is intended to work best when the voters have access to accurate information. The rising use of these new generative AI systems is already raising the specter of more propaganda, including deep-fake videos.

Another problem with some generative systems (artwork, generating novels, programming) is that they are trained on information that might have restrictions, such as copyright. This raises some important questions about ownership, creativity, and our whole notion of the rule of law; the problems of correctness and accountability remain as well. There is some merit to the claim that systems trained on (for example) art by human artists may be copying some of that art in an unauthorized manner. That may seem silly to some technologists, but we've seen successful lawsuits against music composers alleged to have heard a copyrighted tune at some point in the past. The point is that the law (and, perhaps more importantly, what is fair) is not yet conclusively decided in this realm.

And what of leakage? We’re already seeing cases where some LLM systems are ingesting the questions and materials people give them to generate output. This has resulted in sensitive and trade secret materials being taken into these databases…and possibly discoverable by others with the right prompting (e.g., this incident at Samsung). What of classified material? Law enforcement sensitive material? Material protected by health privacy laws? What happens for models that are used internationally when the laws are not uniform? Imagine the first “Right to be forgotten” lawsuits against data in LLMs. There are many questions yet to be decided, and it would be folly to assume that computing technologists have thoroughly explored these issues and designed around them.

As I wrote at the beginning, there are potential good uses for some of these systems, and what they are now is different from what they will be in, for example, a decade. However, the underlying problem is what I have been calling "The Trek futurists" -- they see all technology being used wisely to lead us to a future roughly like in Star Trek. However, humanity is full of venal, greedy, and sociopathic individuals who are more likely to use technology to lead us to a "Blade Runner" future ... or worse. And that is not considering the errors, misunderstandings, and limitations surrounding the technology (and known to RISKS readers). If we continue to focus on what the technology might enable instead of the reality of how it will be (mis)used, we are in for some tough times. One of the more recent examples of this general lack of technical foresight is cryptocurrencies. They were touted as leading to a more democratic and decentralized economy. However, some of the highest volumes of uses to date are money laundering, illicit marketplaces (narcotics, weapons, human trafficking, etc.), ransomware payments, financial fraud, and damage to the environment. What valid uses of cryptocurrency there might be (if there are any) seem heavily outweighed by the antisocial uses.

We should not dismiss, out of hand, warnings about new technologies and accuse those advocating caution as “Luddites.” Indeed, there are risks to not developing new technologies. However, the more significant risk may be assuming that only the well-intentioned will use them.

Reflections on the 2023 RSA Conference


I have attended 14 of the last 22 RSA conferences. (I missed the last three because of COVID avoidance; many people I know who went became infected and contributed to making them superspreader events. I saw extremely few masks this year, so I will not be surprised to hear of another surge. I spent all my time on the floor and in crowds with a mask -- I hope that was sufficient.)

I have blogged here about previous iterations of the conference (2007, 2014, 2016, and most recently, 2019). Reading back over those accounts makes me realize that little has really changed. Some of the emphasis has shifted, but most of what is exhibited and presented is not novel, nor does it address the root causes of our problems.

Each year, I treasure meeting with old friends and making some worthwhile new acquaintances with people who actually have a clue (or two). Sadly, the people I stop to chat with who don't have the vaguest idea about the fundamentals of the field or its history continue to constitute the majority. How can the field really progress if the technical people don't really have a clue what is actually known about security (as opposed to what is known about the products in their market segment)?

I was relieved to not see hype about blockchain (ugh!) or threat intelligence. Those were fads a few years ago. Apparently, hype around quantum and LLMs has not yet begun to build in this community. Zero trust and SBOM were also understated themes, thankfully. I did see more hardware-based security, some on OT, and a little more on user privacy. All were under-represented.

My comments on the 2019 RSAC could be used almost word-for-word here. Rather than do that, I strongly suggest you revisit those comments now.

Why did I go if I think it was so uninspiring? As usual, it was for people. Also, this year, I was on a panel for our recent book, Cybersecurity Myths and Misconceptions. Obviously, I have a bias here, but I think the book addresses a lot of the problems I am noting with the conference. We had a good turnout at the panel session, which was good, but almost no one showed up at the book signings. I hope that isn't a sign that the book is being ignored, but considering it isn't hyping disaster or a particular set of products, perhaps that is what is happening. Thankfully, some of the more senior and knowledgeable people in the field did come by for copies or to chat, so there is at least that. (I suggest that after you reread my 2019 comments, you get a copy of the book and think about addressing some of the real problems in the field.)

Will I go to the 2024 RSAC Conference? It depends on my health and whether I can find funds to cover the costs: It is expensive to attend, and academics don't have expense accounts. If I don't go, I will surely miss seeing some of the people I've gotten to know and respect over the years. However, judging by how many made an effort to find me and how the industry seems to be going, I doubt I will be missed if I am not there. That by itself may be enough reason to plan an alternate vacation.

Interview with Spaf at S4x23


If you didn’t get a chance to attend S4x23 to hear the talks, or you simply haven’t heard enough from Spaf yet, here is a recording of the keynote interview with Spaf by Dale Peterson. The interview covered a lot of ground about the nature of defensive security, the new Cybermyths book (got yours yet?), OT security, the scope of security understanding, having too much information, and having a good security mindset.

This and other interviews and talks Spaf has given are on the Professor Spaf YouTube channel.