The Center for Education and Research in Information Assurance and Security (CERIAS)

CERIAS Blog - May 2026

After the Buggy Whip

Last week’s post addressed the myth that LLMs make programmers, security analysts, and incident responders optional. The pundit class continues to predict, with confidence approaching certainty, that there is no longer any reason to study computer science. That confidence is technically misplaced for the reasons I gave then: LLMs are good at recombining what has been written and poor at reasoning about what has not. They are not a substitute for the institutional context that experienced people carry. It is also misplaced for a more general reason that deserves a separate look. New technologies do not end fields. They reshape them. Some particular jobs become obsolete; others that nobody saw coming are created.

New York City in 1900 had roughly 100,000 working horses on its streets. Those horses produced about 2.5 million pounds of manure and tens of thousands of gallons of urine every day. In 1880 alone, the city removed some 15,000 dead horses. Vacant-lot manure piles reached 40 to 60 feet high. One nationwide estimate put the number of flies bred daily in horse manure at three billion, and those flies were linked to outbreaks of typhoid and other diarrheal diseases.

The crisis was severe enough that in 1898 the first international urban-planning conference convened in New York City to address it. The delegates abandoned the conference after three days, instead of the scheduled ten, because none of them could see a solution. Fourteen short years later, cars outnumbered horses on New York's streets. By 1917, the last horsecar was retired.

The end of urban horses meant the end of stables, of carriage manufacturing, of buggy-whip making, and of much of the feed-and-livery economy. It also meant the end of an unsanitary, disease-spreading, traffic-snarling system that the experts of the day had explicitly given up trying to solve. What replaced it was more dependable transportation, improved urban sanitation, faster emergency response, longer-distance travel, and whole industries (automotive manufacturing, parts and service, road construction, motels, traffic engineering, automotive insurance, motorsport, etc.) that no expert at the 1898 conference would have predicted.

The pattern is not unique to horses. Some predicted telephones would end face-to-face interaction; instead they created telecom, customer service at scale, mobile applications, and a global communications industry. Affordable electricity was predicted to eliminate domestic labor; it eliminated some drudgery and enabled a far more complex household and industrial economy. Television was predicted to end reading and conversation; it created entire creative industries that did not exist before.

The wheel was, in all likelihood, denounced by the makers of sledges and travois.

None of these transitions ended their fields. Each expanded them. Each eventually produced job categories that no one affected by the change could have imagined.

Computing has had its own buggy-whip moments. Higher-level languages were supposed to make programmers unnecessary; we got more programmers working on more ambitious problems. Portable computers were supposed to mean computing had become a finished consumer product needing no further professional discipline; the opposite happened. Cloud computing was supposed to let enterprises shed their IT staff; most needed different IT staff, often more of them, with skills the previous generation lacked. Open source was supposed to collapse the commercial software industry; it became the foundation on which the next generation of commercial software was built.

Each predicted endpoint was a real change. Each disrupted some careers. None ended computing or reduced the overall demand for computing professionals.

The same pattern applies to cybersecurity and is already visible. AI is first going to disrupt the parts of our field that exist because something else was broken.

The penetration-testing ecosystem is the clearest example. It is a large, mature industry built on a failure mode: vendors, commercial and open source alike, ship software with predictable defects, and customers willingly adopt it. The parallel to the horse problem is not subtle. That ecosystem exists, in large part, to muck out the stables of an upstream production system that produces far more waste than it should. AI is competent at identifying many flaws before they make their way to the street, and it makes the cleanup more affordable. It is not, by itself, a technology that fixes the source. Until the production process changes, the cleanup never ends.

That disruption is real, and uncomfortable for the people who built careers around it. On the merits, it is also mostly good news. A defect found by an AI tool before deployment is not exploited after deployment, and preventable harm decreases. But those same people are among the best positioned to help define what comes next! Their training and operational instincts are exactly what the emerging areas need.

Disruption of one part of the field opens possibilities elsewhere. Several areas will become more important, not less:

  • Security architecture, especially the integration of privacy and safety. Architecture is a judgment-and-context discipline. It is the kind of work that does not reduce to recombining patterns from a training corpus.
  • Defenses against social engineering. AI dramatically reduces the cost of tailored attacks. We need more research into detection, design work on resistant interfaces, and considerably better education on how to recognize and resist persuasion-at-scale.
  • Digital forensics. More incidents, of greater complexity, and a growing share involving agentic AI as either tool or target. The field needs investment; the practitioner pipeline needs broader training.
  • Intrusion and anomaly detection and response. The volume and tempo of attacks continue to rise. The expectation that a single human will sit in a SOC at 3 a.m. and reason it out without modern instrumentation is no longer credible. We need trustworthy enhancement and augmentation.
  • Formal design and verification, and defensive deception. Both have been niche specialties for decades. Both counter AI-enabled attacks well, and both can themselves be enhanced by AI tools: proofs can be drafted and checked at a greater scale, and deception artifacts can be generated and rotated faster than attackers can fingerprint them (a minimal sketch of that rotation idea follows this list).
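
To make the deception half of that last item concrete, here is a minimal sketch of the generate-and-rotate idea: mint short-lived honeytokens that resemble real credentials, and retire each batch on a schedule. The token format, rotation interval, and function names are illustrative assumptions, not any product's API.

```python
import secrets
import time

ROTATION_SECONDS = 3600  # assumed rotation interval, not a recommendation

def mint_honeytoken(prefix: str = "AKIA") -> str:
    """Generate a plausible-looking fake credential to plant as bait
    ("AKIA" mimics a common cloud access-key prefix)."""
    return prefix + secrets.token_hex(8).upper()

def rotate_batch(batch_size: int = 5) -> dict[str, float]:
    """Mint a fresh batch of tokens, each tagged with its expiry time."""
    expires_at = time.time() + ROTATION_SECONDS
    return {mint_honeytoken(): expires_at for _ in range(batch_size)}

def is_tripwire(token: str, batch: dict[str, float]) -> bool:
    """Any use of an unexpired honeytoken flags an intruder: no
    legitimate system ever holds one."""
    return token in batch and time.time() < batch[token]

current_batch = rotate_batch()
```

The rotation is the point the item makes: any fingerprint an attacker builds against one batch is stale by the next.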

All five have potential for growth, from research through deployment. There are undoubtedly others.

If we recognize that cybersecurity, writ large, is about earning appropriate trust in computing systems, then several of the most interesting frontiers in software engineering are also cybersecurity frontiers. They always were. We have not, until now, had the tooling to address them at scale. A few candidates:

  • Whole-system assurance. Formal verification is reserved for small critical components (a kernel, a cryptographic primitive) because anything larger is too expensive to verify. With AI assistance for spec drafting, invariant search, and proof maintenance, "what would it take to ship a fully verified million-line system?" stops being a thought experiment.
  • End-to-end software supply-chain provenance. We presently cannot answer "What is in this binary, where did each piece come from, and what was its build environment?" for almost any non-trivial software. Doing so at scale, continuously, with cryptographic assurance, is a problem AI tools could plausibly help reduce from research to common practice (the first sketch after this list shows the core of such a manifest).
  • Privacy-preserving analytics. The technical pieces (homomorphic encryption, secure multi-party computation, differential privacy) exist but are currently too slow and brittle for routine use. Lowering the engineering cost of using them could change which analyses are practical: medical research across hospital systems, threat-intel sharing across competitors, fraud detection across financial institutions (the second sketch after this list shows differential privacy's simplest building block, the Laplace mechanism).
  • High-assurance real-time systems. Complex real-time systems have more failure modes and recovery paths than any single designer can hold in mind. AI tools that enumerate and evaluate alternatives against latency, safety, and correctness constraints can help engineers explore design choices currently left to approximation.
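
Two sketches to ground the provenance and privacy items above. First, provenance: at its core, "what is in this binary and where did each piece come from?" reduces to content digests plus claims about origin. This is a toy illustration; the manifest format and field names are my assumptions, not a real SBOM or attestation standard, and a real system would also sign the manifest and record the build environment.

```python
import hashlib
import json
from pathlib import Path

def digest(path: Path) -> str:
    """SHA-256 of a file's contents: the anchor for a provenance claim."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_manifest(components: dict[str, str]) -> str:
    """Record each component file's digest and its declared source."""
    entries = [
        {"file": name, "sha256": digest(Path(name)), "source": origin}
        for name, origin in components.items()
    ]
    return json.dumps({"components": entries}, indent=2)

def verify(manifest_json: str) -> bool:
    """Re-hash every component; any mismatch means tampering or drift."""
    manifest = json.loads(manifest_json)
    return all(
        digest(Path(entry["file"])) == entry["sha256"]
        for entry in manifest["components"]
    )
```

The hard part the item names is not this loop; it is running it continuously and transitively across an entire dependency tree while keeping the origin claims trustworthy.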
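Second, privacy: the simplest building block among the pieces named in that item is differential privacy's Laplace mechanism, sketched minimally below. The epsilon value and the example count are illustrative assumptions, not recommendations.

```python
import random

def laplace_noise(scale: float) -> float:
    """Laplace(0, scale) noise, sampled as the difference of two
    exponential draws with mean `scale`."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(true_count: int, epsilon: float = 0.5) -> float:
    """Release a count with epsilon-differential privacy. A count has
    sensitivity 1 (one record changes it by at most 1), so Laplace
    noise with scale 1/epsilon masks any individual's contribution."""
    sensitivity = 1.0
    return true_count + laplace_noise(sensitivity / epsilon)

# For example, hospitals could each publish private_count(case_total) and
# pool the noisy values without exposing any single patient record.
print(private_count(1234))
```

The brittleness the item describes appears as soon as queries compose: every release spends privacy budget, and accounting for that budget across an organization is where the engineering cost lives.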

None of these becomes a smaller problem if we fire all the engineers who would have to solve them. All four are about calibrating the trust we place in the systems we already depend on, which is what cybersecurity has always been for.

The history is consistent. Predictions of the end of a field reliably mistake the end of one set of jobs for the end of all of them. The urban horse went away; transportation did not. The carriage-maker found other work, and some of those who studied carriage-making went on to build the industries that replaced it. What replaced the lost jobs was, on average, better.

There is no good reason to expect this transition to be the exception. There is every reason to expect that the field of cybersecurity has more to do twenty years from now than it has today, and that some of the most interesting work has not yet been named.

(A few portions of this text were drafted and structured with the assistance of Anthropic Claude Opus 4.7; the ideas, arguments, and final editorial decisions are the author's.)


More detail on ordure
