Last week’s post addressed the myth that LLMs make programmers, security analysts, and incident responders optional. The pundit class continues to predict, with confidence approaching certainty, that there is no longer any reason to study computer science. That confidence is misplaced on technical grounds, for the reasons I gave then: LLMs are good at recombining what has been written and poor at reasoning about what has not. They are not a substitute for the institutional context that experienced people carry. It is also misplaced for a more general reason that deserves a separate look. New technologies do not end fields. They reshape them. Some particular jobs become obsolete; others that nobody saw coming are created.
New York City in 1900 had roughly 100,000 working horses on its streets. Those horses produced about 2.5 million pounds of manure and 60,000 gallons of urine per day. As of 1880, the city was also removing 15,000 dead horses per year. Vacant-lot manure piles reached 40 to 60 feet high. One U.S.-wide estimate put the horse-manure-bred fly population at three billion per day, linked to typhoid and diarrheal disease outbreaks.
The crisis was severe enough that in 1898 the first international urban-planning conference convened in New York City to address it. The delegates abandoned the conference after three days, instead of the scheduled ten, because none of them could see a solution. Fourteen short years later, cars outnumbered horses on New York's streets. By 1917, the last horsecar was retired.
The end of urban horses meant the end of stables, of carriage manufacturing, of buggy-whip making, and of much of the feed-and-livery economy. It also meant the end of an unsanitary, disease-spreading, traffic-snarling system that the experts of the day had explicitly given up trying to solve. What replaced it was more dependable transportation, improved urban sanitation, faster emergency response, longer-distance travel, and whole industries (automotive manufacturing, parts and service, road construction, motels, traffic engineering, automotive insurance, motorsport, etc.) that no expert at the 1898 conference would have predicted.
The pattern is not unique to horses. Some predicted telephones would end face-to-face interaction; instead they created telecom, customer service at scale, mobile applications, and a global communications industry. Affordable electricity was predicted to eliminate domestic labor; it eliminated some drudgery and enabled a far more complex household and industrial economy. Television was predicted to end reading and conversation; it created entire creative industries that did not exist before.
The wheel was, in all likelihood, denounced by the makers of sledges and travois.
None of these transitions ended their fields. Each expanded them. Each eventually produced job categories that no one affected by the change could have imagined.
Computing has had its own buggy-whip moments. Higher-level languages were supposed to make programmers unnecessary; we got more programmers working on more ambitious problems. Portable computers were supposed to mean computing had become a finished consumer product needing no further professional discipline; the opposite happened. Cloud computing was supposed to let enterprises shed their IT staff; most needed different IT staff, often more of them, with skills the previous generation lacked. Open source was supposed to collapse the commercial software industry; it became the foundation on which the next generation of commercial software was built.
Each predicted endpoint was a real change. Each disrupted some careers. None ended computing or reduced the overall demand for computing professionals.
The same pattern applies to cybersecurity and is already visible. AI is first going to disrupt the parts of our field that exist because something else was broken.
The penetration-testing ecosystem is the clearest example. It is a large, mature industry built on a failure mode: vendors, commercial and open source alike, shipping software with predictable defects, and customers willingly adopting it. The parallel to the horse problem is not subtle. That ecosystem exists, in large part, to muck out the stables of an upstream production system that produces far more waste than it should. AI is competent at identifying many flaws that shouldn't make their way to the street and is the technology that makes cleanup more affordable. It is not, by itself, a technology that fixes the source. Until the production process changes, the cleanup never ends.
That disruption is real, and uncomfortable for the people who built careers around it. On the merits, it is also mostly good news. A defect found by an AI tool before deployment is not exploited after deployment, and preventable harm decreases. But those same people are among the best positioned to help define what comes next! Their training and operational instincts are exactly what the emerging areas need.
Disruption of one part of the field opens possibilities elsewhere. Several areas will become more important, not less:
All five have potential for growth, from research through deployment. There are undoubtedly others.
If we recognize that cybersecurity, writ large, is about earning appropriate trust in computing systems, then several of the most interesting frontiers in software engineering are also cybersecurity frontiers. They always were. We have not, until now, had the tooling to address them at scale. A few candidates:
None of these becomes a smaller problem if we fire all the engineers who would have to solve them. All three are about calibrating the trust we place in the systems we already depend on, which is what cybersecurity has always been for.
The history is consistent. Predictions of the end of a field reliably mistake the end of one set of jobs for the end of all of them. The urban horse went away; transportation did not. The carriage-maker found other work, and some of those who studied carriage-making went on to build the industries that replaced it. What replaced the lost jobs was, on average, better.
There is no good reason to expect this transition to be the exception. There is every reason to expect that the field of cybersecurity has more to do twenty years from now than it has today, and that some of the most interesting work has not yet been named.
(A few portions of this text were drafted and structured with the assistance of Anthropic Claude Opus 4.7; the ideas, arguments, and final editorial decisions are the author's.)
More detail on ordure
Twenty years ago this month, my first post in this blog was "Security Myths and Passwords," addressing the folk wisdom that monthly password rotation improves security. That myth had survived for roughly thirty years before I wrote about it, and NIST did not formally retire the periodic-rotation guidance until 2017, in SP 800-63B. Two decades later, I co-authored a book on the broader phenomenon of cybersecurity myths, Cybersecurity Myths and Misconceptions, with Leigh Metcalf and Josiah Dykstra. The next round of mythmaking is now well underway, around AI in general and large language models (LLMs) in particular.
Once a claim is repeated enough times in policy memos, audit checklists, advertising, and vendor decks, it stops being scrutinized. The 2006 password post made that point. So did my 2019 CERIAS tech report on cloud computing, which warned against decisions driven by "fad" technologies such as cloud, blockchain, and AI. So did my 2023 post "AI and ML Sturm und Drang." Pascal Meunier's post in this blog, from the same week as my first one, "What is Secure Software Engineering?," got at the underlying failure mode: practices "based on experience" are inherently brittle against intelligent adversaries who invent new attacks. That description fits an LLM almost word-for-word.
The marketing claim has become explicit: LLMs will replace software developers, security analysts, compliance reviewers, and incident responders. Some vendors hedge the wording, but the direction is the same: the human becomes an optional component. This is the same type of myth as "change passwords every month" — repeated more often than examined. LLMs provide statistical interpolation based on a fixed training set. They recombine what has been written, and they usually recombine it well. However, they are poor at handling novel cases. A new attack pattern, an unfamiliar architecture, a recent regulation, a business context the training data did not cover — for those, the model produces plausible-sounding text without really addressing the full problem.
LLMs also hallucinate with confidence. They will hallucinate about whether they have followed security-by-design practices in the first place. The sycophancy of current LLMs is well known: an agent told to build a system with security by design will report that it has done so, regardless of whether that is true. Asked to verify its own compliance, an LLM will lie without acknowledging it. Domain expertise, the kind that prevents breaches, is difficult to formalize and slow to acquire. A senior engineer who knows that one system can tolerate a particular failure while another cannot is making a judgment that requires context that no LLM has access to. That expert judgment may take years to build, drawing on incident response, post-mortems, and watching the same problems recur in different forms.
News reports describe thousands of experienced personnel laid off across dozens of companies and replaced with AI. Some of those decisions undoubtedly reflect real reorganization in response to shifting demand. Others appear to use "AI replacement" as a cover for opportunistic cost-cutting. An Oxford Economics analysis found that explicitly AI-cited cuts amounted to roughly 4.5% of U.S. layoffs in the first eleven months of 2025. It concluded that firms may be dressing up layoffs as a good-news story rather than admitting to weak demand or past over-hiring. Either way, the assumption is that AI will fill the gap. That is the patch-instead-of-fix mindset extended to staffing.
It is fair to ask what these tools are good for. The answer is not "nothing." The most valuable thing AI tools currently do in security is an awkward fact for the industry: they are competent at finding flaws in software that should not have been shipped in the first place. The "penetrate-and-patch" culture I and others have been complaining about for decades has produced an enormous backlog of technical debt — code written under deadlines, with corners cut, security analysis skipped, and known limitations punted to the next release. Decades of that, accumulated across the industry, are now coming due, and AI tools are quite effective at locating the rot and kludges.
But there are limits. Much of what an LLM flags consists of coding flaws rather than security vulnerabilities, and some flaws that could be vulnerabilities turn out not to be exploitable in their deployed context. Telling real risk from noise takes domain expertise and operational experience. An improper buffer length behind validated input is not the same problem as a flagged buffer length on an externally reachable service, and an LLM does not know which is which. That distinction is what the engineers being laid off have spent careers learning to make. Not every smoke alarm is a fire, and not being able to tell the difference means dispatching the fire department hundreds or thousands of times for burnt toast.
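To make the buffer-length distinction concrete, here is a contrived C sketch. The function names, the scenario, and the 32-byte limit are invented for illustration and are not drawn from any particular codebase; an automated scanner would likely flag the unbounded strcpy() in both of the first two functions, but only one of them is plausibly reachable by an attacker.

```c
#include <stdio.h>
#include <string.h>

#define NAME_LEN 32

/* Case 1: the argument comes from an internal table whose entries were
 * validated and length-checked long before this call.  The unbounded
 * copy is a genuine coding weakness, but no attacker can reach it. */
void label_internal(const char *validated_name) {
    char buf[NAME_LEN];
    strcpy(buf, validated_name);      /* flagged, but input is pre-validated */
    printf("label: %s\n", buf);
}

/* Case 2: the argument arrives from an externally reachable service with
 * no length check anywhere upstream.  The same flagged construct is now a
 * plausible remote overflow. */
void label_external(const char *network_supplied) {
    char buf[NAME_LEN];
    strcpy(buf, network_supplied);    /* flagged, and genuinely exploitable */
    printf("label: %s\n", buf);
}

/* The defensive form in either case bounds the copy explicitly. */
void label_safe(const char *name) {
    char buf[NAME_LEN];
    snprintf(buf, sizeof buf, "%s", name);
    printf("label: %s\n", buf);
}

int main(void) {
    label_internal("fixed-config-entry");
    label_safe("arbitrary input, safely truncated");
    return 0;
}
```

A tool that reports both copies as equivalent findings is not wrong, but deciding which one deserves an emergency fix still requires knowing how the code is deployed.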
There is an irony to all this. Many of the same vendors arguing that AI will replace their security teams are quietly using AI to find the bugs that their previous teams were not given resources or time to fix. That is a real use of the technology. It is not a substitute for the people who would have prevented the bugs to begin with, nor for the people who must understand and triage what the AI surfaces. "Shipped fast, found later, fixed maybe" is the pattern that produced the debt; using AI to keep skating ahead of the consequences without changing the practice is not progress.
Password rotation was mostly wasted effort. The damage was inconvenience, predictable user workarounds, and a generation of policies that pushed people toward weaker, memorable passwords reused across systems. The replacement-by-LLM myth is interacting with a much worse threat landscape, and the recent trends are not encouraging:
Then comes something new in kind. Agentic AI introduces autonomous actors with "write" access inside the perimeter: software that can create, delete, and modify files without a human in the loop. No security architecture I am aware of was designed for that threat model. It certainly isn't zero-trust (whatever that actually means). Treating agents as another category of third-party dependency, given that third-party involvement in breaches has already doubled, is negligent.
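To give one concrete, and entirely hypothetical, picture of what mediating that write access could look like, the C sketch below routes every file modification an agent attempts through a single choke point: writes inside an allowlisted sandbox proceed, and anything else requires explicit human approval. The path and function names are invented; this is a minimal illustration of keeping a human in the verification loop, not a description of any existing product or architecture.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical sandbox prefix: agent writes under this path are allowed
 * without review; everything else needs a human decision. */
static const char *SANDBOX_PREFIX = "/srv/agent-sandbox/";

static int human_approves(const char *path) {
    char answer[8];
    fprintf(stderr, "Agent requests write to %s -- allow? [y/N] ", path);
    if (!fgets(answer, sizeof answer, stdin))
        return 0;
    return answer[0] == 'y' || answer[0] == 'Y';
}

/* Agents call this instead of fopen(path, "w"), so every write is mediated. */
FILE *mediated_open_for_write(const char *path) {
    if (strncmp(path, SANDBOX_PREFIX, strlen(SANDBOX_PREFIX)) == 0)
        return fopen(path, "w");      /* inside the sandbox: allowed */
    if (human_approves(path))
        return fopen(path, "w");      /* human in the loop for everything else */
    fprintf(stderr, "write to %s denied\n", path);
    return NULL;
}

int main(void) {
    /* Demonstration only: if the sandbox directory does not exist,
     * the call simply fails closed and returns NULL. */
    FILE *f = mediated_open_for_write("/srv/agent-sandbox/notes.txt");
    if (f) {
        fputs("agent output\n", f);
        fclose(f);
    }
    return 0;
}
```

Even a sketch this small makes the policy question visible: someone has to decide what belongs in the sandbox and what does not, and that someone is a human with context.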
The pattern across decades is consistent. Skip the analysis, embed the assumption, repeat it until it counts as common knowledge, and defend the practice long after the evidence has turned against it. The corrective is also consistent: keep experienced humans in the verification loop, demand testable evidence of safety-by-design, resist the pressure to fire the people who carry context in their heads, and treat any system that tells you it is secure as the suspect that it is. The ACM Code of Ethics is explicit on the duty to anticipate and avoid harm. That duty does not pause for hype cycles.
There is also a pragmatic argument. Reckless deployment tends to produce backlash. Nuclear power and commercial aviation are useful precedents. In both cases, preventable incidents led to regulations strict enough to permanently shape how those industries operate, at substantial cost to the firms but with clear benefit to public safety. AI is on that trajectory. Companies that build prudently now will be in a better position when any regulatory wave arrives; companies that race to fire their experts and ship agentic systems into production will find themselves explaining their choices to regulators with enforcement authority.
Quantum computing is already queueing up to be the next myth. Vendors are selling "quantum-safe" and "quantum-ready" products well ahead of any clear definition of either term, and well ahead of any consensus on the threat timeline. We will have a version of this conversation again in five years, and probably every ten years thereafter.
Twenty years from now, I expect somebody — possibly me, possibly an LLM trained on my collected works and confidently misattributing them — will be writing the same post about whatever myth replaces this one.
(Portions of this blog were researched and assembled with the assistance of Anthropic Claude Opus 4.7, but the content is my own.)
Purdue University has a history of “firsts” in computing. The computer science department was founded in 1962, making it the oldest degree-granting CS program in the world. Purdue also has a history of research and education in cybersecurity, including the first multidisciplinary research center in the field (1998, CERIAS), and the first regular graduate degree in cybersecurity (2000).
Dorothy Denning completed her Ph.D. in CS at Purdue in 1975. Her dissertation was entitled Secure Information Flow in Computer Systems. After graduation, she joined the computer science faculty. She began offering a regular course in data security, starting in 1981. Matt Bishop was the TA for that course and completed his Ph.D. in security in 1984 with Dorothy as his advisor. Both Dorothy and Matt are well-known in cybersecurity for their many fundamental contributions.
Sam Wagstaff arrived in 1983 and assumed responsibility for teaching the data security course. Gene Spafford joined the faculty in 1987, although he did not teach a core cybersecurity course in his first few years at Purdue; he primarily taught software engineering and distributed systems.
In 1992, Spafford started the COAST Laboratory in the CS department, with initial support from Wagstaff. In 1998, CERIAS was established as a university institute, led by Spafford and supported by faculty in five other university departments. (As of January 2026, there are over 150 affiliated faculty in 20 academic departments. We'll have a more detailed history of CERIAS in a future post.) The first Ph.D. graduate from COAST, advised by Spafford, was Sandeep Kumar in 1995.
In 1997, immediately prior to the founding of CERIAS, Professor Spafford provided testimony before the House Science Committee of the 105th Congress. In that testimony, he described the then-current national production of Ph.D.s in cybersecurity as only 2-3 per year, clearly not sufficient for the growing demand. His testimony inspired formation of both the NSF Scholarship for Service and the NSA/DHS Academic Centers of Excellence to encourage more students to pursue degrees. CERIAS leadership also considered it an initial priority to encourage more such degrees.
In the years since then, a number of universities around the world have developed cybersecurity research and education programs. A few thousand Ph.D.s have been graduated since the mid-1990s.
Rob Morton, a 2024 Ph.D. advised by Spafford, conducted research on degrees produced, augmented by Deep Search in Google Gemini. What follows are results from his research.
1988 was used as a starting point for "modern" academic cybersecurity. Following the Morris Worm (November 1988), the field formalized rapidly: Carnegie Mellon formed the CERT/CC, Purdue formed the COAST Laboratory (precursor to CERIAS), and UC Davis began its dedicated security architecture work.
Since that year, Purdue University and Carnegie Mellon University (CMU) have been the undisputed volume leaders in producing doctoral graduates with security-specific dissertations.
These counts exclude Master's degrees. They represent Doctoral candidates whose dissertations were primarily focused on Information Security, Privacy, or Cryptography. (The CERIAS/COAST numbers have been updated using local Purdue records.)
Tomorrow, July 1, 2025, ushers in two significant changes.
For the first time in over 25 years, our fantastic administrative assistant, Lori Floyd, will not be present to greet us as she has retired. Lori joined the staff of CERIAS in October of 1999 and has done a fantastic job of helping us keep moving forward. Lori was the first person people would meet when visiting us in our original offices in the Recitation Building, and often the first to open the door at our new offices in Convergence. At our symposia, workshops, and events of all kinds, Lori helped ensure we had a proper room, handouts, and (when appropriate) refreshments. She also helped keep all the paperwork and scheduling straight for our visitors and speakers, handled some of our purchasing, and acted as building deputy. We know she quietly and competently did many other things behind the scenes, and we'll undoubtedly learn about them as things begin to fall apart!
We all wish Lori well in her retirement. She plans to spend time with her partner, kids, and grandkids, travel, and garden. She will be missed at CERIAS, but definitely not forgotten.
The second change is in the related INSC Interdisciplinary Information Security graduate program, a spin-off of CERIAS. In 2000, Melissa Dark, Victor Raskin, and Spaf founded the INSC program as the first graduate degree in information/cyber security in the world. The program was explicitly interdisciplinary from the start and supported by faculty across the university. Students were (and still are) required to take technology ethics and policy courses in addition to cybersecurity courses. Starting with MS students supported by one of the very first NSF CyberCorps awards, the program quickly grew and was approved to offer the Ph.D. degree.
INSC was never formally a part of CERIAS, but students and faculty often saw them as related. All INSC students were automatically included in CERIAS events, and they were frequently recruited by CERIAS partners (and still are!). CERIAS faculty volunteer to serve on INSC committees and to advise the students. It is a "win–win" situation that has resulted in some great graduates, many now in some notable positions in industry and government.
The change coming to INSC is in leadership. After 25 years as program head, Spaf is stepping into the role of associate head for a while. Taking on the role of program head is Professor Christopher Yeomans. Chris has been a long-time supporter of the program and brings experience as chair of the Philosophy Department.
(If you're interested in a graduate degree through INSC, visit the website describing the program and how to apply.)
Thirty-five years ago today (November 2nd), the Internet Worm program was set loose to propagate on the Internet. Noting that now to the computing public (and cybersecurity professionals, specifically) often generates an "Oh, really?" response akin to stating that November 2nd is the anniversary of the inaugural broadcast of the first BBC TV channel (1936), and the launch of Sputnik 2 with Laika aboard (1957). That is, to many, it is ho-hum, ancient history.
Perhaps that is to be expected after 35 years -- approximately the length of a human generation. (As an aside, I have been teaching at Purdue for 36 years. I have already taught students whose parents had taken one of my classes as a student; in five or so years, I may see students whose grandparents took one of my classes!). In 1988, fewer than 100,000 machines were likely connected to the Internet; thus, only a few thousand people were involved in systems administration and security. For us, the events were more profound, but we are outnumbered by today's user population; many of us have retired from the field...and more than a few have passed on. Thus, events of decades ago have become ancient history for current users.
Nonetheless, the event and its aftermath were profound for those who lived through it. No major security incident had ever occurred on such a scale before. The Worm was the top news story in international media for days. The events retold in Cliff Stoll's Cuckoo's Egg were only a few years earlier but had affected far fewer systems. However, that tale of computer espionage heightened concern by authorities in the days following the Worm's deployment regarding its origin and purpose. It seeded significant changes in law enforcement, defense funding and planning, and how we all looked at interconnectivity. In the following years, malware (and especially non-virus malware) became an increasing problem, from Code Red and Nimda to today's botnets and ransomware. All of that eventually led to a boom in add-on security measures, resulting in what is now a multi-billion dollar cybersecurity industry.
At the time of the Worm, the study of computing security (the term "cybersecurity" had not yet appeared) was primarily based around cryptography, formal verification of program correctness, and limiting covert channels. The Worm illustrated that there was a larger scope needed, although it took additional events (such as the aforementioned worms and malware) to drive the message home. Until the late 1990s, many people still believed cybersecurity was simply a matter of attentive cyber hygiene and not an independent, valid field of study. (I frequently encountered this attitude in academic circles, and was told it was present in the discussion leading to my tenure. That may seem difficult to believe today, but should not be surprising: Purdue has the oldest degree-granting CS department [60 years old this year], and it was initially viewed by some as simply glorified accounting! It is often the case that outsiders dismiss an emerging discipline as trivial or irrelevant.)
The Worm provided us with an object lesson about many issues that, unfortunately, were not heeded in full to this day. That multi-billion dollar cybersecurity industry is still failing to protect far too many of our systems. Among those lessons:
.rhosts files) created a playground for lateral movement across enterprises. We knew then that good security practice involved fully mediated access (now often referred to as "Zero Trust") and had known that for some time. However, convenience was viewed as more important than security...a problem that continues to vex us to this day. We continue to build systems that both enable effortless lateral movement and make it difficult or annoying for users to reauthenticate, thus leading them to bypass the checks.

That last point is important as we debate the dangers and adverse side-effects of machine learning/LLM/AI systems. Those are being refined and deployed by people claiming they are not responsible for the (mis)use of (or errors in) those systems and that their economic potential outweighs any social costs. We have failed to clearly understand and internalize that not everything that can be done should be done, especially on the Internet at large. This is an issue that keeps coming up, and we continue to fail to address it properly.
As a field, cybersecurity is relatively young. We have a history that arguably starts in the 1960s with the Ware Report. We are still discovering what is involved in protecting systems, data privacy, and safety. Heck, we still need a commonly accepted definition of what cybersecurity entails! (Cf. Chapter 1 of the Cybersecurity Myths book, referenced below.) The first cybersecurity degree program wasn't established until 2000 (at Purdue). We still lack useful metrics to know whether we are making significant progress and to titrate investment. And we are still struggling with tools and techniques to create and maintain secure systems. All this while the market (and thus need) is expanding globally.
In that context of growth and need, we should not dismiss the past as "Ho-hum, history." Members of the military study historic battles to avoid future mistakes: mentioning the Punic Wars or The Battle of Thermopylae to such a scholar will not result in dismissal with "Not relevant." If you are interested in cybersecurity, it would be advisable to study some history of the field and think about lessons learned -- and unlearned.