After the Buggy Whip
Last week’s post addressed the myth that LLMs make programmers, security analysts, and incident responders optional. The pundit class continues to predict, with confidence approaching certainty, that there is no longer any reason to study computer science. That confidence is misplaced on technical grounds, for the reasons I gave then: LLMs are good at recombining what has been written and poor at reasoning about what has not. They are not a substitute for the institutional context that experienced people carry. It is also misplaced for a more general reason that deserves a separate look. New technologies do not end fields. They reshape them. Some particular jobs become obsolete; others that nobody saw coming are created.
New York City in 1900 had roughly 100,000 working horses on its streets. Those horses produced about 2.5 million pounds of manure and 10 million gallons of urine per day. As of 1880, the city was also removing 15,000 dead horses per year. Vacant-lot manure piles reached 40 to 60 feet high. One U.S.-wide estimate put the horse-manure-bred fly population at three billion per day, linked to typhoid and diarrheal disease outbreaks.
The crisis was severe enough that in 1898 the first international urban-planning conference convened in New York City to address it. The delegates abandoned the conference after three days, instead of the scheduled ten, because none of them could see a solution. Fourteen short years later, cars outnumbered horses on New York's streets. By 1917, the last horsecar was retired.
The end of urban horses meant the end of stables, of carriage manufacturing, of buggy-whip making, and of much of the feed-and-livery economy. It also meant the end of an unsanitary, disease-spreading, traffic-snarling system that the experts of the day had explicitly given up trying to solve. What replaced it was more dependable transportation, improved urban sanitation, faster emergency response, longer-distance travel, and whole industries (automotive manufacturing, parts and service, road construction, motels, traffic engineering, automotive insurance, motorsport, etc.) that no expert at the 1898 conference would have predicted.
The pattern is not unique to horses. Some predicted telephones would end face-to-face interaction; instead they created telecom, customer service at scale, mobile applications, and a global communications industry. Affordable electricity was predicted to eliminate domestic labor; it eliminated some drudgery and enabled a far more complex household and industrial economy. Television was predicted to end reading and conversation; it created entire creative industries that did not exist before.
The wheel was, in all likelihood, denounced by the makers of sledges and travois.
None of these transitions ended their fields. Each expanded them. Each eventually produced job categories that no one affected by the change could have imagined.
Computing has had its own buggy-whip moments. Higher-level languages were supposed to make programmers unnecessary; we got more programmers working on more ambitious problems. Portable computers were supposed to mean computing had become a finished consumer product needing no further professional discipline; the opposite happened. Cloud computing was supposed to let enterprises shed their IT staff; most needed different IT staff, often more of them, with skills the previous generation lacked. Open source was supposed to collapse the commercial software industry; it became the foundation on which the next generation of commercial software was built.
Each predicted endpoint was a real change. Each disrupted some careers. None ended computing or reduced the overall demand for computing professionals.
The same pattern applies to cybersecurity and is already visible. AI is first going to disrupt the parts of our field that exist because something else was broken.
The penetration-testing ecosystem is the clearest example. It is a large, mature industry built on a failure mode: vendors, commercial and open source alike, shipping software with predictable defects, and customers willingly adopting it. The parallel to the horse problem is not subtle. That ecosystem exists, in large part, to muck out the stables of an upstream production system that produces far more waste than it should. AI is competent at identifying many flaws that shouldn't make their way to the street and is the technology that makes cleanup more affordable. It is not, by itself, a technology that fixes the source. Until the production process changes, the cleanup never ends.
That disruption is real, and uncomfortable for the people who built careers around it. On the merits, it is also mostly good news. A defect found by an AI tool before deployment is not exploited after deployment, and preventable harm decreases. But those same people are among the best positioned to help define what comes next! Their training and operational instincts are exactly what the emerging areas need.
Disruption of one part of the field opens possibilities elsewhere. Several areas will become more important, not less:
- Security architecture, especially the integration of privacy and safety. Architecture is a judgment-and-context discipline. It is the kind of work that does not reduce to recombining patterns from a training corpus.
- Defenses against social engineering. AI dramatically reduces the cost of tailored attacks. We need more research into detection, design work on resistant interfaces, and considerably better education on how to recognize and resist persuasion-at-scale.
- Digital forensics. More incidents, of greater complexity, and a growing share involving agentic AI as either tool or target. The field needs investment; the practitioner pipeline needs broader training.
- Intrusion and anomaly detection and response. The volume and tempo of attacks continue to rise. The expectation that a single human will sit in a SOC at 3 a.m. and reason it out without modern instrumentation is no longer credible. We need trustworthy enhancement and augmentation.
- Formal design and verification, and defensive deception. Both have been niche specialties for decades. Both counter AI-enabled attacks well, and both can themselves be enhanced by AI tools: proofs can be drafted and checked at a greater scale, and deception artifacts can be generated and rotated faster than attackers can fingerprint them.
All five have potential for growth, from research through deployment. There are undoubtedly others.
If we recognize that cybersecurity, writ large, is about earning appropriate trust in computing systems, then several of the most interesting frontiers in software engineering are also cybersecurity frontiers. They always were. We have not, until now, had the tooling to address them at scale. A few candidates:
- Whole-system assurance. Formal verification is reserved for small critical components (a kernel, a cryptographic primitive) because anything larger is too expensive to verify. With AI assistance for spec drafting, invariant search, and proof maintenance, "what would it take to ship a fully verified million-line system" stops being a thought experiment.
- End-to-end software supply-chain provenance. We presently cannot answer "What is in this binary, where did each piece come from, and what was its build environment?" for almost any non-trivial software. Doing so at scale, continuously, with cryptographic assurance, is a problem AI tools could plausibly help reduce from research to common practice.
- Privacy-preserving analytics. The technical pieces (homomorphic encryption, secure multi-party computation, differential privacy) exist but are currently too slow and brittle for routine use. Lowering the engineering cost of using them could change which analyses are practical: medical research across hospital systems, threat-intel sharing across competitors, fraud detection across financial institutions. (A small illustrative sketch of the differential-privacy piece follows this list.)
- High-assurance real-time systems. Complex real-time systems have more failure modes and recovery paths than any single designer can hold in mind. AI tools that enumerate and evaluate alternatives against latency, safety, and correctness constraints can help engineers explore design choices currently left to approximation.
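To make the differential-privacy item above concrete, here is a minimal sketch of the Laplace mechanism applied to a counting query. The record set, the predicate, and the epsilon value are illustrative assumptions, not a recommendation for any particular deployment; a production system would also need sensitivity analysis for more complex queries and accounting for cumulative privacy loss.

```python
import random

def laplace_noise(scale: float) -> float:
    # The difference of two i.i.d. exponential draws is Laplace(0, scale).
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(records, predicate, epsilon: float) -> float:
    # A counting query has sensitivity 1: adding or removing one record
    # changes the true answer by at most 1, so the noise scale is 1/epsilon.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical example: report how many records match a diagnosis without
# revealing whether any particular individual is in the data set.
patients = [{"diagnosis": "flu"}, {"diagnosis": "flu"}, {"diagnosis": "other"}]
print(dp_count(patients, lambda r: r["diagnosis"] == "flu", epsilon=0.5))
```

The point of the sketch is the cost structure: the mechanism itself is a few lines, while the engineering burden lies in choosing epsilon, tracking cumulative privacy loss across queries, and integrating the noisy answers into analyses people actually trust.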
None of these becomes a smaller problem if we fire all the engineers who would have to solve them. All four are about calibrating the trust we place in the systems we already depend on, which is what cybersecurity has always been for.
The history is consistent. Predictions of the end of a field reliably mistake the end of one set of jobs for the end of all of them. The urban horse went away; transportation did not. The carriage-maker found other work, and some of those who studied carriage-making went on to build the industries that replaced it. What replaced the lost jobs was, on average, better.
There is no good reason to expect this transition to be the exception. There is every reason to expect that the field of cybersecurity has more to do twenty years from now than it has today, and that some of the most interesting work has not yet been named.
(A few portions of this text were drafted and structured with the assistance of Anthropic Claude Opus 4.7; the ideas, arguments, and final editorial decisions are the author's.)
More detail on ordure
- "The Big Crapple: NYC Transit Pollution from Horse Manure to Horseless Carriages" — 99% Invisible
- "The Great Horse-Manure Crisis of 1894" — Foundation for Economic Education
- "How Much Horse Manure Was Deposited on the Streets of New York City Before the Advent of the Automobile, and What Happened to It?" — The New York Historical
- The Common Vulnerabilities and Exposures (CVE) database — MITRE
New Myths for Old
Twenty years ago this month, my first post in this blog was "Security Myths and Passwords," addressing the folk wisdom that monthly password rotation improves security. That myth had survived for roughly thirty years before I wrote about it, and NIST did not formally retire the periodic-rotation guidance until 2017, in SP 800-63B. Two decades later, I co-authored a book on the broader phenomenon of cybersecurity myths, Cybersecurity Myths and Misconceptions, with Leigh Metcalf and Josiah Dykstra. The next round of mythmaking is now well underway, around AI in general and large language models (LLMs) in particular.
Once a claim is repeated enough times in policy memos, audit checklists, advertising, and vendor decks, it stops being scrutinized. The 2006 password post made that point. So did my 2019 CERIAS tech report on cloud computing, which warned against decisions driven by "fad" technologies such as cloud, blockchain, and AI. So did my 2023 post "AI and ML Sturm und Drang." Pascal Meunier's post in this blog, from the same week as my first one, "What is Secure Software Engineering?," got at the underlying failure mode: practices "based on experience" are inherently brittle against intelligent adversaries who invent new attacks. That description fits an LLM almost word-for-word.
The marketing claim has become explicit: LLMs will replace software developers, security analysts, compliance reviewers, and incident responders. Some vendors hedge the wording, but the direction is the same: the human becomes an optional component. This is the same type of myth as "change passwords every month" — repeated more often than examined. LLMs provide statistical interpolation based on a fixed training set. They recombine what has been written, and they usually recombine it well. However, they are poor at handling novel cases. A new attack pattern, an unfamiliar architecture, a recent regulation, a business context the training data did not cover — for those, the model produces plausible-sounding text without really addressing the full problem.
LLMs also hallucinate with confidence, including about whether they have followed security-by-design practices in the first place. The sycophancy of current LLMs is well known: an agent told to build a system with security by design will report that it has done so, regardless of whether that is true. Asked to verify its own compliance, an LLM will lie without acknowledging it. Domain expertise, the kind that prevents breaches, is difficult to formalize and slow to acquire. A senior engineer who knows that one system can tolerate a particular failure while another cannot is making a judgment that requires context that no LLM has access to. That expert judgment may take years to build, drawing on incident response, post-mortems, and watching the same problems recur in different forms.
News reports describe thousands of experienced personnel laid off across dozens of companies and replaced with AI. Some of those decisions undoubtedly reflect real reorganization in response to shifting demand. Others appear to use "AI replacement" as a cover for opportunistic cost-cutting. An Oxford Economics analysis found that explicitly AI-cited cuts amounted to roughly 4.5% of U.S. layoffs in the first eleven months of 2025. It concluded that firms may be dressing up layoffs as a good-news story rather than admitting to weak demand or past over-hiring. Either way, the assumption is that AI will fill the gap. That is the patch-instead-of-fix mindset extended to staffing.
It is fair to ask what these tools are good for. The answer is not "nothing." The most valuable thing AI tools currently do in security is an awkward fact for the industry: they are competent at finding flaws in software that should not have been shipped in the first place. The "penetrate-and-patch" culture I and others have been complaining about for decades has produced an enormous backlog of technical debt — code written under deadlines, with corners cut, security analysis skipped, and known limitations punted to the next release. Decades of that, accumulated across the industry, are now coming due, and AI tools are quite effective at locating the rot and kludges.
But there are limits. Much of what an LLM flags consists of coding flaws rather than security vulnerabilities, and some flaws that could be vulnerabilities turn out not to be exploitable in their deployed context. Telling real risk from noise takes domain expertise and operational experience. An improper buffer length behind validated input is not the same problem as a flagged buffer length on an externally reachable service, and an LLM does not know which is which. That distinction is what the engineers being laid off have spent careers learning to make. Not every smoke alarm is a fire, and not being able to tell the difference means dispatching the fire department hundreds or thousands of times for burnt toast.
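To illustrate that triage gap in the simplest possible terms, here is a hypothetical sketch; the finding fields and the decision rule are my own assumptions, not any scanner's actual output format or a complete risk model.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    # Hypothetical fields; real tool output and real context are far richer.
    description: str
    externally_reachable: bool   # can untrusted input reach this code path?
    input_validated: bool        # is the input length-checked upstream?

def triage(f: Finding) -> str:
    # The same "improper buffer length" report maps to very different risk
    # depending on context the flagging tool does not have.
    if f.externally_reachable and not f.input_validated:
        return "urgent: plausibly exploitable; fix now and review exposure"
    if f.externally_reachable:
        return "review: exposed service, but upstream validation may mitigate"
    return "backlog: a coding flaw worth fixing, not an immediate vulnerability"

print(triage(Finding("improper buffer length",
                     externally_reachable=True, input_validated=False)))
print(triage(Finding("improper buffer length",
                     externally_reachable=False, input_validated=True)))
```

Filling in those two boolean fields correctly is precisely the judgment call that requires the domain expertise and operational experience described above.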
There is an irony to all this. Many of the same vendors arguing that AI will replace their security teams are quietly using AI to find the bugs that their previous teams were not given resources or time to fix. That is a real use of the technology. It is not a substitute for the people who would have prevented the bugs to begin with, nor for the people who must understand and triage what the AI surfaces. "Shipped fast, found later, fixed maybe" is the pattern that produced the debt; using AI to keep skating ahead of the consequences without changing the practice is not progress.
Password rotation was mostly wasted effort. The damage was inconvenience, predictable user workarounds, and a generation of policies that pushed people toward weaker, memorable passwords reused across systems. The replacement-by-LLM myth is interacting with a much worse threat landscape, and the recent trends are not encouraging:
- The 2025 Verizon Data Breach Investigations Report found that breaches initiated through exploited vulnerabilities grew 34% year-over-year, that more than half of edge-device vulnerabilities remained unremediated after a full year, and that third-party involvement in breaches doubled, from 15% to 30%.
- IBM's 2025 Cost of a Data Breach Report put the global average breach cost at $4.44 million, with organizations carrying high levels of shadow AI paying an additional $670,000 on average; 97% of organizations that experienced an AI-related security incident lacked proper AI access controls.
- UpGuard's State of Shadow AI report found that 81% of the general workforce and 88% of security professionals admit to using unapproved AI tools at work.
Then comes something new in kind. Agentic AI introduces autonomous actors with "write" access inside the perimeter: software that can create, delete, and modify files without a human in the loop. No security architecture I am aware of was designed for that threat model. It certainly isn't zero-trust (whatever that actually means). Treating agents as just another category of third-party dependency, when third-party involvement in breaches has already doubled, is negligent.
The pattern across decades is consistent. Skip the analysis, embed the assumption, repeat it until it counts as common knowledge, and defend the practice long after the evidence has turned against it. The corrective is also consistent: keep experienced humans in the verification loop, demand testable evidence of safety-by-design, resist the pressure to fire the people who carry context in their heads, and treat any system that tells you it is secure as the suspect that it is. The ACM Code of Ethics is explicit on the duty to anticipate and avoid harm. That duty does not pause for hype cycles.
There is also a pragmatic argument. Reckless deployment tends to produce backlash. Nuclear power and commercial aviation are useful precedents. In both cases, preventable incidents led to regulations strict enough to permanently shape how those industries operate, at substantial cost to the firms but with clear benefit to public safety. AI is on that trajectory. Companies that build prudently now will be in a better position when any regulatory wave arrives; companies that race to fire their experts and ship agentic systems into production will find themselves explaining their choices to regulators with enforcement authority.
Quantum computing is already queueing up to be the next myth. Vendors are selling "quantum-safe" and "quantum-ready" products well ahead of any clear definition of either term, and well ahead of any consensus on the threat timeline. We will have a version of this conversation again in five years, and probably every ten years thereafter.
Twenty years from now, I expect somebody — possibly me, possibly an LLM trained on my collected works and confidently misattributing them — will be writing the same post about whatever myth replaces this one.
(Portions of this blog were researched and assembled with the assistance of Anthropic Claude Opus 4.7, but the content is my own.)
Ch-ch-ch-changes
Tomorrow, July 1, 2025, ushers in two significant changes.
For the first time in over 25 years, our fantastic administrative assistant, Lori Floyd, will not be present to greet us as she has retired. Lori joined the staff of CERIAS in October of 1999 and has done a fantastic job of helping us keep moving forward. Lori was the first person people would meet when visiting us in our original offices in the Recitation Building, and often the first to open the door at our new offices in Convergence. At our symposia, workshops, and events of all kinds, Lori helped ensure we had a proper room, handouts, and (when appropriate) refreshments. She also helped keep all the paperwork and scheduling straight for our visitors and speakers, handled some of our purchasing, and acted as building deputy. We know she quietly and competently did many other things behind the scenes, and we'll undoubtedly learn about them as things begin to fall apart!
We all wish Lori well in her retirement. She plans to spend time with her partner, kids, and grandkids, travel, and garden. She will be missed at CERIAS, but definitely not forgotten.
The second change is in the related INSC Interdisciplinary Information Security graduate program, a spin-off of CERIAS. In 2000, Melissa Dark, Victor Raskin, and Spaf founded the INSC program as the first graduate degree in information/cyber security in the world. The program was explicitly interdisciplinary from the start and supported by faculty across the university. Students were (and still are) required to take technology ethics and policy courses in addition to cybersecurity courses. Starting with MS students supported by one of the very first NSF CyberCorps awards, the program quickly grew and was approved to offer the Ph.D. degree.
INSC was never formally a part of CERIAS, but students and faculty often saw them as related. All INSC students were automatically included in CERIAS events, and they were frequently recruited by CERIAS partners (and still are!). CERIAS faculty volunteer to serve on INSC committees and to advise the students. It is a "win–win" situation that has resulted in some great graduates, many now in notable positions in industry and government.
The change coming to INSC is in leadership. After 25 years as program head, Spaf is stepping into the role of associate head for a while. Taking on the role of program head is Professor Christopher Yeomans. Chris has been a long-time supporter of the program and brings experience as chair of the Philosophy Department.
(If you're interested in a graduate degree through INSC, visit the website describing the program and how to apply.)
Challenging Conventional Wisdom
In IT security ("cybersecurity") today, there is a powerful herd mentality. In part, this is because it is driven by an interest in shiny new things. We see this with the massive pile-on to new technologies when they gain buzzword status: e.g., threat intelligence, big data, blockchain/bitcoin, AI, zero trust. The more they are talked about, the more others think they need to be adopted, or at least considered. Startups and some vendors add to the momentum with heavy marketing of their products in that space. Vendor conferences such as the yearly RSA conference are often built around the latest buzzwords. And sadly, too few people with in-depth knowledge of computing and real security are listened to about the associated potential drawbacks. The result is usually additional complexity in the enterprise without significant new benefits — and often with other vulnerabilities, plus expenses to maintain them.
Managers are often particularly victimized by these fads as a result of long-standing deficiencies in the security space: we have no sound definition of security that encompasses desired security properties, and therefore no metrics to measure them. If a manager cannot get some numeric value or comparison of how a new technology may make things better vs. its cost, the decision is often made on "best practice." Unfortunately, "best practice" is also challenging to define, especially when there is so much talk and excitement from the people vending the next shiny new thing. Additionally, enterprise needs are seldom identical, so "best" may not be uniform. If the additional siren call is heard about "See how it will save you money!" then it is nearly impossible to resist, even if the "savings" are only near-term or downright illusory.
This situation is complicated because so much of what we use is defective, broken, or based on improperly understood principles. Thus, to attempt to secure it (really, to gain greater confidence in it), solutions that sprinkle magic pixie dust on top are preferred, because they don't mean sacrificing the sunk cost inherent in all the machines and software already in use. Magic pixie dust is shiny, too, and usually available at a lower (initial) cost than actually fixing the underlying problems. So that is why we have containers on VMs on systems with multiple levels of hypervisor behind firewalls and IPS (turtles all the way down) while the sunk costs keep getting larger. This is also why patching and pen testing are seen as central security practices: they are the flying buttresses of security architecture these days.
The lack of a proper definition and metrics has been known for a while. In part, the old Rainbow Series from the NCSC (NSA) was about this. The authors realized the difficulty of defining "secure" and instead spoke of "trusted." The series established a set of features and levels of trust assurance in products to meet DOD needs. However, that was with a DOD notion of security at the time, so issues of resilience and availability (among others) weren't really addressed. That is one reason why the Rainbow Series was eventually deprecated: the commercial marketplace found it didn't apply to its needs.
Defining security principles is a hard problem, and it is really in the grand challenge space for security research. It was actually stated as such 16 years ago in the CRA security Grand Challenges report (see #3). Defining accompanying metrics is not likely to be simple either, but we really need to do it, or we will keep running up against the same problems. If the only qualities we can reliably measure for systems are speed and cost, the decisions are going to be heavily weighted towards solutions that provide those at the expense of maintainability, security, reliability, and even correctness. Corporations and governments are heavily biased towards solutions that promise financial results in the next year (or next quarter) simply because that is easily measured and understood.
I've written and spoken about this topic before (see here and here for instance). But it has come to the forefront of my thinking over the last year, as I have been on sabbatical. Two recent issues have reinforced that:
- I was cleaning up my computer storage and came across some old presentations from 10-20 years ago. With minor updating, they could be given today. Actually, I have been giving a slightly updated version of one from 11 years ago, and the audiences view it as "fresh." The theme? How we don't define or value security appropriately. (Let me know if you'd like me to present it to your group; you can also view a video of the talk given at Sandia National Laboratories.)
- I was asked by people associated with a large entity with significant computing presence to provide some advice on cloud computing. They have been getting a strong push from management to move everything to the cloud, which they know to be a mistake, but their management is countering their concerns about security with "it will cost less." I have heard this before from other places and given informal feedback to the parties involved. This time, I provided more organized feedback, now also available as a CERIAS tech report (here). In summary, moving to the cloud is not always the best idea, nor is it necessarily going to save money in the long term.
I hope to write some more on the issues around defining security and bucking the "conventional wisdom" once I am fully recovered from my sabbatical. There should be no shortage of material. In the meantime, I invite you to look at the cloud paper cited above and provide your comments below.
An Anniversary of Continuing Excellence
In February of 1997, I provided testimony to a Congressional committee about the state of cyber security education. I noted that there were only four major academic programs, with limited resources, in information security at that time. I outlined some steps that could be taken to improve our national posture in the field. Subsequently, I was involved in discussions with staffers of some Congressional committees, with staff at NSF, with National Security Council staff (notably, Richard Clarke), and people at the Department of Defense. These discussions eventually helped produce [1] the Scholarship for Service program at NSF, the NSF CyberTrust program (now known as Secure and Trustworthy Cyberspace, SaTC), and the Centers of Academic Excellence program.
On 11 May 1999, 20 years ago, Purdue University [2] was recognized by the NSA as one of the initial Centers of Academic Excellence (CAE) [3]. There were some notable advocates of enhanced cyber security at each institution, and they had taken steps to institute courses and research to improve the field—notably including Corey Schou (recently inducted into the Cybersecurity Hall of Fame), Matt Bishop, Deborah Frincke, and Doug Jacobson, to name a few [4]. As I recall, Dick Clarke was one of the prime movers to get the CAE program established under PDD-63; Dr. Vic Maconachy (then at NSA) became the director of the CAE program.
Over the years, the CAE program has continued to expand, to now encompass several hundred institutions around the US. DHS has become involved as a co-sponsor with the NSA. The main certification has bifurcated into a designation for cyber defense research (CAE-R) and a designation for cyber defense education (CAE-CDE). There is also a designation for Centers of Academic Excellence in Cyber Operations. The NSA, as a member of the US intelligence community (IC), also helps support a program for IC Centers of Academic Excellence. In addition to the formal external evaluation process to be designated as a CAE, the program has resulted in the creation of curricular guidelines and recommended best practices for educational programs. A number of leaders in education in the field have also grown out of this process, creating various resources for the community (some of which are hosted at the CLARK website for public use).
I have been critical of the overall CAE program in the past (cf. here and here). I believe most of the criticisms I made are still valid, particularly the ones concerning the designation of "excellence" and the burden of the application process. Nonetheless, there is no denying that the listed institutions have made strides to improve and standardize their programs towards much-needed common goals. There is also continuing (and growing) synergy with efforts such as the NIST National Initiative for Cybersecurity Education (NICE) program and the National Colloquium on Information Systems Security Education (NISSE). Additionally, there has been real progress towards establishing standardized undergraduate curricula in the field, which now includes the potential for ABET accreditation.
Those of us at Purdue recently received notice that Purdue has been recertified as a CAE-R through 2024. This is a result—in large part—of efforts by Dr. James Lerums, one of our recent Ph.D. grads. He volunteered his time to sift through all the documentation, gathered the necessary information, and completed the application process. It was a significant effort, and kudos to Jim for taking it on soon after completing a Ph.D. dissertation!
Despite some of my "grumpy old dude" criticisms, I am glad to see Purdue continue to be recognized for the continued excellence of its programs. CERIAS continues to be a focal point for the "R" aspect of the CAE-R as Purdue's designated research institute in the field: that's the "R" in CERIAS. However, it has also been Purdue's center for education for most of its existence: the "E" in CERIAS is for Education. That history includes the establishment of the first designated degree in information security in 2000, still offered as an interdisciplinary MS and PhD (which is the program Jim Lerums completed, btw).
As for the CAE program itself, and for the 5 (out of 6) other programs receiving that initial CAE designation that are still listed as CAEs, congratulations: we've come a long way, but there is still a long way to go!
Footnotes
1. I always note that I cannot claim sole or primary credit for these initiatives; nonetheless, I was the first to publicly advocate for programs such as these, and was involved in many of the discussions. Dick Clarke deserves a good deal of credit for his active advocacy for the area at the time, as does Lt. General (ret.) Ken Minihan (also a recent CSHOF inductee) for his support.
2. Via CERIAS, one year old at the time.
3. Also in that group were James Madison University, George Mason University, Idaho State University, Iowa State University, the University of California at Davis, and the University of Idaho.
4. My apologies to others whose names I omitted.


