More Than the Code
The two earlier posts in this series — New Myths for Old and After the Buggy Whip — argued that LLMs are not substitutes for the experienced people whose tacit knowledge keeps systems running and safe, and that the field of computing is being reshaped rather than ended. So, you may ask, if computing is not going away, and the people who do it well remain hard to replace, what is the best preparation for a career in it?
The answer is awkward considering the current direction of higher-education budgeting and state-level academic policy. The right preparation includes a substantial helping of what is being cut.
Picture an undergraduate two weeks into sophomore year, looking at her course plan. She is in computer science, a field she has come to love. She had planned to add a minor in sociology, or a second major in music or history. Now she is reconsidering. Her parents worry about the job market and think anything outside her major is a distraction. Her engineering friends tell her nothing outside the major will help her get hired, and that CS itself will soon be replaced by AI. She no longer knows what to believe. The minor goes.
That scenario will play out on many campuses next term, paralleling an argument older than computing. John Henry Newman's The Idea of a University, drawn from lectures first delivered in 1852 and developed across the following two decades, held that a university's purpose is the formation of intellectual habit — independent judgment, the capacity to weigh evidence and pursue truth — not the production of trained specialists. Half a century later, John Dewey's pragmatism placed a different emphasis: knowledge is inseparable from doing, and education should equip people to work, vote, and live as citizens. The American land-grant tradition that produced Purdue and scores of other public universities split the difference. The 1862 Morrill Act called for "the liberal and practical education of the industrial classes" — agricultural and mechanical instruction paired with broader study. These are not opposing camps. They are different facets of the same education.
Those facets have always been in tension over budget and class hours, but until recently, few seriously proposed cutting one to subsidize the other. The present is different. Amid budget shortfalls, demographic decline, and political pressure to demonstrate immediate "workforce alignment," universities are reducing breadth to fund a narrower depth. Increasingly, languages, liberal arts, and fine arts are being eliminated as majors at one regional public university after another. The justification offered is invariably economic.
Higher education also faces a non-economic pressure. It has long attracted political hostility because it encourages thought and argument that may run counter to current leadership. That hostility has intensified in recent years in the US. For example, consider state-level prohibitions on teaching specified topics, federal cuts to research on subjects deemed inappropriate, and proposed reductions to the National Endowment for the Humanities. The fields most impacted are the same fields being cut for stated economic reasons. The two pressures reinforce each other; the students lose access to the same coursework, regardless of the justification offered.
The most-cited recent case is West Virginia University, which in September 2023 eliminated 28 majors, including most of its world-language degrees, and 143 faculty positions to close a $45 million budget gap. WVU was not the only one. Among others:
- Clarkson University announced in late 2023 that it would phase out the nine majors in its School of Arts & Sciences — history, sociology, political science, literature, film, and others — and refocus on STEM.
- In November 2024, Boston University suspended PhD admissions in a dozen humanities and social-science programs, including English, history, philosophy, classical studies, and sociology.
- In April 2026, Syracuse University announced the elimination or suspension of 93 academic programs, most of them in the humanities and arts: classics, fine arts, painting, music composition and performance, several foreign languages, and digital humanities. Syracuse's provost framed the decision as alignment with student demand rather than budget. The fields cut are the same.
The pattern is concentrated at regional public universities and at private institutions serving non-elite students: first-generation, rural, working adults, and others for whom a broad college education has historically been a step up in society. Wealthy private universities continue to staff full humanities and arts faculties; their students will read Rawls and Dostoyevsky, listen to Brahms and Glass, study Rembrandt and Mondrian, and learn Latin and Mandarin. The students who lose access are the ones paying for an education that no longer offers what their wealthier peers still take for granted.
Against this backdrop, one might ask, what does a computing professional actually need from outside computing? The list is shorter than the curriculum, but each item is easier to learn early, alongside the technical skills, where the habits take hold and last.
- Communication, written and spoken, technical and lay. Most consequential decisions in cybersecurity, and in computing more broadly, are made in writing — in an executive summary, an after-action report, or a board memo — and depend on readers who understand the context, why it matters, and what comes next. Policy review and design review rely on the same skill set. The NACE Job Outlook 2025 survey found problem-solving rated essential by nearly 90 percent of employers, teamwork by more than 80 percent, and written communication by close to 70 percent. Employers consistently report new graduates underperforming on those competencies. The WEF Future of Jobs 2025 report puts analytical thinking first; creative thinking, resilience, leadership, and motivation round out the top five. Those capacities are cultivated more reliably in a seminar that requires you to read a difficult text and argue about it than in any programming course.
- Ethics and the recognition of harm. The ACM Code of Ethics, which I have cited repeatedly, is explicit on the duty to anticipate and avoid harm. Anticipating harm requires moral imagination, historical perspective, and the willingness to consider who is on the other side of the system being built — none of which the Code itself can teach. Those capacities are cultivated by the disciplines under threat.
- Social and historical literacy. Computing systems are deployed into societies, not vacuums. Knowing how a labor market works, why a community distrusts the agency rolling out a new platform, what a free-speech tradition has meant in practice for two centuries, how an artistic movement broke from the one before — these are not decorations. They shape whether a system gets used, whether it is fought, and whether the decisions it automates produce the outcomes its designers intended.
- Argument and interpretation. A senior engineer reads ambiguous evidence — a half-confirmed indicator of compromise, an inconclusive postmortem, a policy proposal with consequences that depend on contested assumptions — and reaches a defensible reading. That is the same skill English majors practice on poems, art historians on paintings, and historians on archives. It is not a coincidence that people who do well in computing tend to have studied widely in other fields.
A defender of the cuts will object that AI now reproduces the humanities. LLMs can produce essays, sermons, and sonnets that read fluently. Image models generate plausible canvases in any style on demand. Audio models compose passable tunes. There is no shortage of seemingly new artistic material being produced by machines.
That output is mechanical novelty, not innovation. An LLM trained on published novels can recombine plot, voice, and image into an arrangement that has never previously existed, but the arrangement will be derivative by definition. Real innovation in any of these fields has historically meant that a person, shaped by long reading, looking, or listening, has arrived at a way of working that breaks with what came before. Atwood did not interpolate from Dickens, Mondrian from Vermeer, Coltrane from Brahms. Each had to live with the prior tradition and then go somewhere it had not been. That is not the operation an AI model performs.
The formative experience that produces such people produces the judgment that is now being hired to backstop AI. Reading complex material longer than a text message, writing under critique, arguing in a seminar, defending a reading against a serious objection — these are how the capacity to weigh and choose under uncertainty is built. The polished surface an AI tool produces is not a substitute; it is exactly the kind of plausible artifact that practitioners with that judgment are needed to evaluate. As I argued in New Myths for Old, an LLM is a statistical interpolator over a frozen corpus. It is good at recombining what has been written and poor at handling the unfamiliar. That is true for cybersecurity decisions; it is no less true for art.
The clearest signal of what AI development actually requires comes from the firms building those systems. The major AI labs have hired philosophers, anthropologists, and ethicists to work alongside their engineers on safety, policy, and societal-impact questions. Anthropic, for example, brought on the philosopher Amanda Askell to help define the values its models express, and the moral philosopher Peter Railton to work on training in ethics. Google DeepMind and OpenAI have parallel programs. The pundit class may claim that AI will replace humanists; the labs themselves are hiring humanists to constrain what their models do.
The strongest computing programs are structured along the same logic. Purdue's undergraduate BA in Artificial Intelligence is built jointly by the Departments of Philosophy and Computer Science — by design, not as an accommodation. The degree pairs twenty-four credit hours of philosophy with fifteen of CS, and Purdue also offers a CS+Philosophy dual-degree program. Stanford's Institute for Human-Centered AI places philosophers and ethicists alongside engineers as co-equal members. MIT's Schwarzman College of Computing has a mandate to integrate computing with the social sciences and humanities. Carnegie Mellon's Department of Philosophy and School of Computer Science offer joint undergraduate paths — including a longstanding Logic and Computation program — that pair the two fields by design. The institutions best positioned to define computing for the next generation are treating philosophy and the humanities as core to the work, not adjacent to it.
The fine arts have made a less-discussed contribution to computing, one that no AI tool yet produces on its own: the difference between the merely usable and the joy to use. Susan Kare, an artist with a BA in art from Mount Holyoke and a Ph.D. in fine art from NYU, drew the original, beloved Macintosh icons. Jim Reekes, an audio designer, composed the Mac startup chord on a Korg Wavestation in his living room, with "A Day in the Life" on his mind. Jony Ive, trained in industrial design and now chancellor of the Royal College of Art, shaped the look of every Apple product from the iMac through the iPhone. Musician Brian Eno composed the Windows 95 startup sound, which was added to the Library of Congress's National Recording Registry in 2025. Engineers built the machines; artists made them into objects people wanted to use.
My own undergraduate degree was a double major in math and computer science with a minor in philosophy. The faculty designed our program to emphasize interdisciplinary study and extensive research and writing, even though both majors were technical. My graduate work was in computing throughout, with psychology as a minor area. Some of what I learned in those non-technical courses I did not fully appreciate at the time. I have drawn on those courses in every decade since, and I believe they have contributed to my success.
The INSC graduate program in information security at Purdue — the world's first cybersecurity graduate degree program, which I co-founded in 2000 and directed for twenty-five years — requires coursework in ethics and technology policy. Not as enrichment, but because a security graduate who cannot reason about consequences and policy is not prepared. One reason our graduates have been highly valued in industry, government, and academia is that they arrive with that range. A narrowly trained graduate is at a disadvantage in the rooms where decisions are made.
If you are a student reading this and weighing what to take next, choose your electives deliberately. Take writing-intensive courses in which the writing is critiqued. Take a course that requires you to argue and be argued with. Consider a second major or minor in humanities, social science, or fine arts — not as decoration, but because the half-life of any particular framework or tool is shorter than a career, while the half-life of literacy, judgment, and the capacity to read a room is the rest of your working life. The capacities that make you human are the capacities that will not become obsolete with the next model release.
And there is a part of this argument that the career framing misses. Liberal-arts and fine-arts studies are not only a hedge against technical obsolescence. They are the foundation for the rest of life: for understanding what you read, watch, and listen to, for participating as a citizen, for making sense of grief and joy, for sustaining good conversation across a long life with people who matter. They are, eventually, what fills your time after your career ends. A person trained only for technical work reaches retirement with little left to draw on. The same person, trained also to read history, hear music, and follow an argument, carries that training into a wider life.
The tasks that will still need people twenty years from now are the ones that require people to be people: to judge under uncertainty, to argue with care, to understand who is on the other side of the system. The life that will still be worth living when that work is done is the life one was prepared for outside the office. A university that strips its humanities and fine-arts programs to fund another technical certificate is preparing graduates who will be obsolete in their work and impoverished outside it — and thus failing to deliver on the promises on which higher education was founded.
(A few portions of this text were drafted and structured with the assistance of Anthropic Claude Opus 4.7; the ideas, arguments, and final editorial decisions are the author's.)
After the Buggy Whip
Last week’s post addressed the myth that LLMs make programmers, security analysts, and incident responders optional. The pundit class continues to predict, with confidence approaching certainty, that there is no longer any reason to study computer science. That confidence is technically misplaced for the reasons I gave then: LLMs are good at recombining what has been written and poor at reasoning about what has not. They are not a substitute for the institutional context that experienced people carry. It is also misplaced for a more general reason that deserves a separate look. New technologies do not end fields. They reshape them. Some particular jobs become obsolete; others that nobody saw coming are created.
New York City in 1900 had roughly 100,000 working horses on its streets. Those horses produced about 2.5 million pounds of manure and 10 million gallons of urine per day. As of 1880, the city was also removing 15,000 dead horses per year. Vacant-lot manure piles reached 40 to 60 feet high. One U.S.-wide estimate put the horse-manure-bred fly population at three billion per day, linked to typhoid and diarrheal disease outbreaks.
The crisis was severe enough that in 1898 the first international urban-planning conference convened in New York City to address it. The delegates abandoned the conference after three days, instead of the scheduled ten, because none of them could see a solution. Fourteen short years later, cars outnumbered horses on New York's streets. By 1917, the last horsecar was retired.
The end of urban horses meant the end of stables, of carriage manufacturing, of buggy-whip making, and of much of the feed-and-livery economy. It also meant the end of an unsanitary, disease-spreading, traffic-snarling system that the experts of the day had explicitly given up trying to solve. What replaced it was more dependable transportation, improved urban sanitation, faster emergency response, longer-distance travel, and whole industries (automotive manufacturing, parts and service, road construction, motels, traffic engineering, automotive insurance, motorsport, etc.) that no expert at the 1898 conference would have predicted.
The pattern is not unique to horses. Some predicted telephones would end face-to-face interaction; instead they created telecom, customer service at scale, mobile applications, and a global communications industry. Affordable electricity was predicted to eliminate domestic labor; it eliminated some drudgery and enabled a far more complex household and industrial economy. Television was predicted to end reading and conversation; it created entire creative industries that did not exist before.
The wheel was, in all likelihood, denounced by the makers of sledges and travois.
None of these transitions ended their fields. Each expanded them. Each eventually produced job categories that no one affected by the change could have imagined.
Computing has had its own buggy-whip moments. Higher-level languages were supposed to make programmers unnecessary; we got more programmers working on more ambitious problems. Portable computers were supposed to mean computing had become a finished consumer product needing no further professional discipline; the opposite happened. Cloud computing was supposed to let enterprises shed their IT staff; most needed different IT staff, often more of them, with skills the previous generation lacked. Open source was supposed to collapse the commercial software industry; it became the foundation on which the next generation of commercial software was built.
Each predicted endpoint was a real change. Each disrupted some careers. None ended computing or reduced the overall demand for computing professionals.
The same pattern applies to cybersecurity and is already visible. AI is first going to disrupt the parts of our field that exist because something else was broken.
The penetration-testing ecosystem is the clearest example. It is a large, mature industry built on a failure mode: vendors, commercial and open source alike, shipping software with predictable defects, and customers willingly adopting it. The parallel to the horse problem is not subtle. That ecosystem exists, in large part, to muck out the stables of an upstream production system that produces far more waste than it should. AI is competent at identifying many flaws that shouldn't make their way to the street and is the technology that makes cleanup more affordable. It is not, by itself, a technology that fixes the source. Until the production process changes, the cleanup never ends.
That disruption is real, and uncomfortable for the people who built careers around it. On the merits, it is also mostly good news. A defect found by an AI tool before deployment is not exploited after deployment, and preventable harm decreases. But those same people are among the best positioned to help define what comes next! Their training and operational instincts are exactly what the emerging areas need.
Disruption of one part of the field opens possibilities elsewhere. Several areas will become more important, not less:
- Security architecture, especially the integration of privacy and safety. Architecture is a judgment-and-context discipline. It is the kind of work that does not reduce to recombining patterns from a training corpus.
- Defenses against social engineering. AI dramatically reduces the cost of tailored attacks. We need more research into detection, design work on resistant interfaces, and considerably better education on how to recognize and resist persuasion-at-scale.
- Digital forensics. More incidents, of greater complexity, and a growing share involving agentic AI as either tool or target. The field needs investment; the practitioner pipeline needs broader training.
- Intrusion and anomaly detection and response. The volume and tempo of attacks continue to rise. The expectation that a single human will sit in a SOC at 3 a.m. and reason it out without modern instrumentation is no longer credible. We need trustworthy enhancement and augmentation.
- Formal design and verification, and defensive deception. Both have been niche specialties for decades. Both counter AI-enabled attacks well, and both can themselves be enhanced by AI tools: proofs can be drafted and checked at a greater scale, and deception artifacts can be generated and rotated faster than attackers can fingerprint them.
All five have potential for growth, from research through deployment. There are undoubtedly others.
If we recognize that cybersecurity, writ large, is about earning appropriate trust in computing systems, then several of the most interesting frontiers in software engineering are also cybersecurity frontiers. They always were. We have not, until now, had the tooling to address them at scale. A few candidates:
- Whole-system assurance. Formal verification is reserved for small critical components (a kernel, a cryptographic primitive) because anything larger is too expensive to verify. With AI assistance for spec drafting, invariant search, and proof maintenance, "what would it take to ship a fully verified million-line system" stops being a thought experiment.
- End-to-end software supply-chain provenance. We presently cannot answer "What is in this binary, where did each piece come from, and what was its build environment?" for almost any non-trivial software. Doing so at scale, continuously, with cryptographic assurance, is a problem AI tools could plausibly help reduce from research to common practice.
- Privacy-preserving analytics. The technical pieces (homomorphic encryption, secure multi-party computation, differential privacy) exist but are currently too slow and brittle for routine use. Lowering the engineering cost of using them could change which analyses are practical: medical research across hospital systems, threat-intel sharing across competitors, fraud detection across financial institutions.
- High-assurance real-time systems. Complex real-time systems have more failure modes and recovery paths than any single designer can hold in mind. AI tools that enumerate and evaluate alternatives against latency, safety, and correctness constraints can help engineers explore design choices currently left to approximation.
None of these becomes a smaller problem if we fire all the engineers who would have to solve them. All four are about calibrating the trust we place in the systems we already depend on, which is what cybersecurity has always been for.
The history is consistent. Predictions of the end of a field reliably mistake the end of one set of jobs for the end of all of them. The urban horse went away; transportation did not. The carriage-maker found other work, and some of those who studied carriage-making went on to build the industries that replaced it. What replaced the lost jobs was, on average, better.
There is no good reason to expect this transition to be the exception. There is every reason to expect that the field of cybersecurity has more to do twenty years from now than it has today, and that some of the most interesting work has not yet been named.
(A few portions of this text were drafted and structured with the assistance of Anthropic Claude Opus 4.7; the ideas, arguments, and final editorial decisions are the author's.)
More detail on ordure
- "The Big Crapple: NYC Transit Pollution from Horse Manure to Horseless Carriages" — 99% Invisible
- "The Great Horse-Manure Crisis of 1894" — Foundation for Economic Education
- "How Much Horse Manure Was Deposited on the Streets of New York City Before the Advent of the Automobile, and What Happened to It?" — The New York Historical
- The Common Vulnerabilities and Exposures (CVE) database — MITRE
New Myths for Old
Twenty years ago this month, my first post in this blog was "Security Myths and Passwords," addressing the folk wisdom that monthly password rotation improves security. That myth had survived for roughly thirty years before I wrote about it, and NIST did not formally retire the periodic-rotation guidance until 2017, in SP 800-63B. Two decades later, I co-authored a book on the broader phenomenon of cybersecurity myths, Cybersecurity Myths and Misconceptions, with Leigh Metcalf and Josiah Dykstra. The next round of mythmaking is now well underway, around AI in general and large language models (LLMs) in particular.
Once a claim is repeated enough times in policy memos, audit checklists, advertising, and vendor decks, it stops being scrutinized. The 2006 password post made that point. So did my 2019 CERIAS tech report on cloud computing, which warned against decisions driven by "fad" technologies such as cloud, blockchain, and AI. So did my 2023 post "AI and ML Sturm und Drang." Pascal Meunier's post in this blog, from the same week as my first one, "What is Secure Software Engineering?," got at the underlying failure mode: practices "based on experience" are inherently brittle against intelligent adversaries who invent new attacks. That description fits an LLM almost word-for-word.
The marketing claim has become explicit: LLMs will replace software developers, security analysts, compliance reviewers, and incident responders. Some vendors hedge the wording, but the direction is the same: the human becomes an optional component. This is the same type of myth as "change passwords every month" — repeated more often than examined. LLMs provide statistical interpolation based on a fixed training set. They recombine what has been written, and they usually recombine it well. However, they are poor at handling novel cases. A new attack pattern, an unfamiliar architecture, a recent regulation, a business context the training data did not cover — for those, the model produces plausible-sounding text without really addressing the full problem.
LLMs also hallucinate with confidence. They will hallucinate about whether they have followed security-by-design practices in the first place. The sycophancy of current LLMs is well known: an agent told to build a system with security by design will report that it has done so, regardless of whether that is true. Asked to verify its own compliance, an LLM will lie without acknowledging it. Domain expertise, the kind that prevents breaches, is difficult to formalize and slow to acquire. A senior engineer who knows that one system can tolerate a particular failure while another cannot is making a judgment that requires context that no LLM has access to. That expert judgment may take years to build, drawing on incident response, post-mortems, and watching the same problems recur in different forms.
News reports describe thousands of experienced personnel laid off across dozens of companies and replaced with AI. Some of those decisions undoubtedly reflect real reorganization in response to shifting demand. Others appear to use "AI replacement" as a cover for opportunistic cost-cutting. An Oxford Economics analysis found that explicitly AI-cited cuts amounted to roughly 4.5% of U.S. layoffs in the first eleven months of 2025. It concluded that firms may be dressing up layoffs as a good-news story rather than admitting to weak demand or past over-hiring. Either way, the assumption is that AI will fill the gap. That is the patch-instead-of-fix mindset extended to staffing.
It is fair to ask what these tools are good for. The answer is not "nothing." The most valuable thing AI tools currently do in security is an awkward fact for the industry: they are competent at finding flaws in software that should not have been shipped in the first place. The "penetrate-and-patch" culture I and others have been complaining about for decades has produced an enormous backlog of technical debt — code written under deadlines, with corners cut, security analysis skipped, and known limitations punted to the next release. Decades of that, accumulated across the industry, are now coming due, and AI tools are quite effective at locating the rot and kludges.
But there are limits. Much of what an LLM flags is a coding flaw rather than a security vulnerability, and some flaws that could be vulnerabilities turn out not to be exploitable in their deployed context. Telling real risk from noise takes domain expertise and operational experience. An improper buffer length behind validated input is not the same problem as a flagged buffer length on an externally reachable service, and an LLM does not know which is which. That distinction is what the engineers being laid off have spent careers learning to make. Not every smoke alarm is a fire, and not being able to tell the difference means dispatching the fire department hundreds or thousands of times for burnt toast.
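A minimal C sketch makes the distinction concrete. The function names, buffer sizes, and the upstream validation are hypothetical, invented only for illustration; the point is that a scanner or an LLM flags the same unsafe call in both places, while only the deployment context tells you which one matters.

```c
#include <string.h>

/* Flagged pattern #1: an unbounded copy, but the only caller passes an
 * identifier that an upstream layer has already validated to at most 8
 * characters. A genuine coding flaw worth cleaning up, yet unlikely to be
 * exploitable where it is deployed. */
void copy_internal_id(char *dst, const char *validated_id) {
    char buf[16];
    strcpy(buf, validated_id);   /* flagged: unbounded copy */
    strcpy(dst, buf);
}

/* Flagged pattern #2: the identical call, but the data arrives straight off
 * the network with no length check. This is the one that warrants the
 * emergency response. */
void handle_network_request(const char *request_body) {
    char buf[64];
    strcpy(buf, request_body);   /* flagged: attacker controls the length */
    /* ... parse and act on buf ... */
}
```

A static tool reports both strcpy calls identically; deciding which one is the fire and which is the burnt toast requires knowing where each function sits in the running system.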
There is an irony to all this. Many of the same vendors arguing that AI will replace their security teams are quietly using AI to find the bugs that their previous teams were not given resources or time to fix. That is a real use of the technology. It is not a substitute for the people who would have prevented the bugs to begin with, nor for the people who must understand and triage what the AI surfaces. "Shipped fast, found later, fixed maybe" is the pattern that produced the debt; using AI to keep skating ahead of the consequences without changing the practice is not progress.
Password rotation was mostly wasted effort. The damage was inconvenience, predictable user workarounds, and a generation of policies that pushed people toward weaker, memorable passwords reused across systems. The replacement-by-LLM myth is interacting with a much worse threat landscape, and the recent trends are not encouraging:
- The 2025 Verizon Data Breach Investigations Report found that breaches initiated through exploited vulnerabilities grew 34% year-over-year, that more than half of edge-device vulnerabilities remained unremediated after a full year, and that third-party involvement in breaches doubled, from 15% to 30%.
- IBM's 2025 Cost of a Data Breach Report put the global average breach cost at $4.44 million, with organizations carrying high levels of shadow AI paying an additional $670,000 on average; 97% of organizations that experienced an AI-related security incident lacked proper AI access controls.
- Upguard's State of Shadow AI report found that 81% of the general workforce and 88% of security professionals admit to using unapproved AI tools at work.
Then comes something new in kind. Agentic AI introduces autonomous actors with "write" access inside the perimeter: software that can create, delete, and modify files without a human in the loop. No security architecture I am aware of was designed for that threat model. It certainly isn't zero-trust (whatever that actually means). Treating agents as another category of third-party dependency, given that third-party involvement in breaches has already doubled, is negligent.
The pattern across decades is consistent. Skip the analysis, embed the assumption, repeat it until it counts as common knowledge, and defend the practice long after the evidence has turned against it. The corrective is also consistent: keep experienced humans in the verification loop, demand testable evidence of safety-by-design, resist the pressure to fire the people who carry context in their heads, and treat any system that tells you it is secure as the suspect that it is. The ACM Code of Ethics is explicit on the duty to anticipate and avoid harm. That duty does not pause for hype cycles.
There is also a pragmatic argument. Reckless deployment tends to produce backlash. Nuclear power and commercial aviation are useful precedents. In both cases, preventable incidents led to regulations strict enough to permanently shape how those industries operate, at substantial cost to the firms but with clear benefit to public safety. AI is on that trajectory. Companies that build prudently now will be in a better position when any regulatory wave arrives; companies that race to fire their experts and ship agentic systems into production will find themselves explaining their choices to regulators with enforcement authority.
Quantum computing is already queueing up to be the next myth. Vendors are selling "quantum-safe" and "quantum-ready" products well ahead of any clear definition of either term, and well ahead of any consensus on the threat timeline. We will have a version of this conversation again in five years, and probably every ten years thereafter.
Twenty years from now, I expect somebody — possibly me, possibly an LLM trained on my collected works and confidently misattributing them — will be writing the same post about whatever myth replaces this one.
(Portions of this blog were researched and assembled with the assistance of Anthropic Claude Opus 4.7, but the content is my own.)
Ph.D.s in Cybersecurity
Introduction
Purdue University has a history of “firsts” in computing. The computer science department was founded in 1962, making it the oldest degree-granting CS program in the world. Purdue also has a history of research and education in cybersecurity, including the first multidisciplinary research center in the field (1998, CERIAS), and the first regular graduate degree in cybersecurity (2000).
Dorothy Denning completed her Ph.D. in CS at Purdue in 1975. Her dissertation was entitled Secure Information Flow in Computer Systems. After graduation, she joined the computer science faculty. She began offering a regular course in data security, starting in 1981. Matt Bishop was the TA for that course and completed his Ph.D. in security in 1984 with Dorothy as his advisor. Both Dorothy and Matt are well-known in cybersecurity for their many fundamental contributions.
Sam Wagstaff arrived in 1983 and assumed responsibility for teaching the data security course. Gene Spafford joined the faculty in 1987, although he did not teach a core cybersecurity course in his first few years at Purdue; he primarily taught software engineering and distributed systems.
In 1992, Spafford started the COAST Laboratory in the CS department, with initial support from Wagstaff. In 1998, CERIAS was established as a university institute, led by Spafford and supported by faculty in five other university departments. (As of January 2026, there are over 150 affiliated faculty in 20 academic departments. We'll have a more detailed history of CERIAS in a future post.) The first Ph.D. graduate from COAST, advised by Spafford, was Sandeep Kumar in 1995.
In 1997, immediately prior to the founding of CERIAS, Professor Spafford provided testimony before the House Science Committee of the 105th Congress. In that testimony, he described the then-current national production of Ph.D.s in cybersecurity as only 2–3 per year. This was clearly not sufficient for the growing demand. His testimony inspired formation of both the NSF Scholarship for Service and the NSA/DHS Centers of Academic Excellence to encourage more students to pursue degrees. CERIAS leadership also considered it an initial priority to encourage more such degrees.
In the years since then, a number of universities around the world have developed cybersecurity research and education programs. A few thousand Ph.D.s have been graduated since the mid-1990s.
Ph.D. Production from the mid-1990s
Rob Morton, a 2024 Ph.D. advised by Spafford, conducted research on degrees produced, augmented by Deep Search in Google Gemini. What follows are results from his research.
1988 was used as a starting point for "modern" academic cybersecurity. Following the Morris Worm (November 1988), the field formalized rapidly: Carnegie Mellon formed the CERT/CC, Purdue formed the COAST Laboratory (precursor to CERIAS), and UC Davis began its dedicated security architecture work.
Since that year, Purdue University and Carnegie Mellon University (CMU) have been the undisputed volume leaders in producing doctoral graduates with security-specific dissertations.
The Historical "Leaderboard" (Covering 1988–2024)
These counts exclude Master's degrees. They represent Doctoral candidates whose dissertations were primarily focused on Information Security, Privacy, or Cryptography. (The CERIAS/COAST numbers have been updated using local Purdue records.)
Detailed Breakdown by Era
The late 1980s and 1990s
- Total US Production: Extremely low (~5–10 per year nationwide).
- Dominant School: Purdue University (COAST Lab).
- Context: In this decade, if you met a PhD in security, they likely came from Purdue or UC Davis. There were almost no dedicated “Security” tracks elsewhere; students had to beg CS advisors to let them study viruses or intrusion detection.
- Notable Alumni: Many of the early leaders of security research graduated in this narrow window from these two schools.
The 2000s
- Total US Production: Growing (~30–50 per year).
- Dominant Schools: Carnegie Mellon (CMU) and Purdue.
- Context: The NSA started the “Centers of Academic Excellence” (CAE) program in 1999. Funding exploded. CMU’s CyLab began to industrialize the PhD process, adding policy and economics to the mix. Georgia Tech began ramping up network research. Also notable, although smaller, were programs at James Madison University, George Mason University, Idaho State University, Iowa State University, and the University of Idaho.
The 2010s through 2024
- Total US Production: High (~100–150+ per year).
- Dominant Schools: Georgia Tech and Northeastern.
- Context: Security became a standard sub-field of Computer Science.
- Purdue remains the steady "interdisciplinary" leader (averaging ~15–20 PhDs/year recently), mostly in CS.
- Georgia Tech and Northeastern aggressively hired faculty to scale their output.
- Top-Tier Shift: Schools like MIT and Stanford began producing PhDs focused on “Adversarial AI,” blurring the line between Security and Artificial Intelligence.
- Purdue (CERIAS): Its public alumni rosters list more than 360 Ph.D. graduates associated with the institute since its inception (counting the COAST era). However, the total count across the whole university is known to be higher, as affiliation with CERIAS is optional and graduates originate in many disciplines.
- UC Davis: Its Security Lab alumni page lists more than 85 Ph.D.s specifically from the Computer Security Lab. However, the total count across the whole university is likely higher.
Ch-ch-ch-changes
Tomorrow, July 1, 2025, ushers in two significant changes.
For the first time in over 25 years, our fantastic administrative assistant, Lori Floyd, will not be present to greet us as she has retired. Lori joined the staff of CERIAS in October of 1999 and has done a fantastic job of helping us keep moving forward. Lori was the first person people would meet when visiting us in our original offices in the Recitation Building, and often the first to open the door at our new offices in Convergence. At our symposia, workshops, and events of all kinds, Lori helped ensure we had a proper room, handouts, and (when appropriate) refreshments. She also helped keep all the paperwork and scheduling straight for our visitors and speakers, handled some of our purchasing, and acted as building deputy. We know she quietly and competently did many other things behind the scenes, and we'll undoubtedly learn about them as things begin to fall apart!
We all wish Lori well in her retirement. She plans to spend time with her partner, kids, and grandkids, travel, and garden. She will be missed at CERIAS, but definitely not forgotten.
The second change is in the related INSC Interdisciplinary Information Security graduate program, a spin-off of CERIAS. In 2000, Melissa Dark, Victor Raskin, and Spaf founded the INSC program as the first graduate degree in information/cyber security in the world. The program was explicitly interdisciplinary from the start and supported by faculty across the university. Students were (and still are) required to take technology ethics and policy courses in addition to cybersecurity courses. Starting with MS students supported by one of the very first NSF CyberCorps awards, the program quickly grew and was approved to offer the Ph.D. degree.
INSC was never formally a part of CERIAS, but students and faculty often saw them as related. All INSC students were automatically included in CERIAS events, and they were frequently recruited by CERIAS partners (and still are!). CERIAS faculty volunteer to serve on INSC committees and to advise the students. It is a "win–win" situation that has produced some great graduates, many of whom now hold notable positions in industry and government.
The change coming to INSC is in leadership. After 25 years as program head, Spaf is stepping into the role of associate head for a while. Taking on the role of program head is Professor Christopher Yeomans. Chris has been a long-time supporter of the program and brings experience as chair of the Philosophy Department.
(If you're interested in a graduate degree through INSC, visit the website describing the program and how to apply.)


