<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
    xmlns:admin="http://webns.net/mvcb/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:content="http://purl.org/rss/1.0/modules/content/">

    <channel>
    
    <title>Main Feed</title>
    <link>https://www.cerias.purdue.edu/</link>
    <description>{weblog_description}</description>
    <dc:language>{weblog_language}</dc:language>
    <dc:creator>webmaster@cerias.purdue.edu</dc:creator>
    <dc:rights>Copyright 2001</dc:rights>
    <dc:date>2001-08-27T19:15+00:00</dc:date>
    <admin:generatorAgent rdf:resource="CERIAS RSS Generator 2.0" />
    
    <item>
      <title>Contact CERIAS</title>
      <author>webmaster@cerias.purdue.edu (admin)</author>


      
      <description><![CDATA[
	
	
		]]></description>
      <dc:subject></dc:subject>
      <dc:date>2026-05-05T20:03+00:00</dc:date>
    </item>    <item>
      <title>The Security Mistakes Being Repeated with AI</title>
      <author>webmaster@cerias.purdue.edu (admin)</author>


      
      <description><![CDATA[
	
	
		]]></description>
      <dc:subject>General,</dc:subject>
      <dc:date>2026-05-05T13:33+00:00</dc:date>
    </item>    <item>
      <title>After the Buggy Whip</title>
      <author>spaf@cerias.purdue.edu (Prof. Spafford)</author>


      
      <description><![CDATA[
	
	<p><a href="https://www.cerias.purdue.edu/site/blog/post/new-myths-for-old">Last week’s post</a>&nbsp;addressed the myth that LLMs make programmers, security analysts, and incident responders optional. The pundit class continues to predict, with confidence approaching certainty, that there is no longer any reason to study computer science. That confidence is technically misplaced for the reasons I gave then: LLMs are good at recombining what has been written and poor at reasoning about what has not. They are not a substitute for the institutional context that experienced people carry. It is also misplaced for a more general reason that deserves a separate look. New technologies do not end fields. They reshape them. Some particular jobs become obsolete; others that nobody saw coming are created.</p>
<p>New York City in 1900 had roughly 100,000 working horses on its streets. Those horses produced about 2.5 million pounds of manure and 10 million gallons of urine per day. As of 1880, the city was also removing 15,000 dead horses per year. Vacant-lot manure piles reached 40 to 60 feet high. One U.S.-wide estimate put the number of flies breeding in horse manure at three billion hatching per day, linked to typhoid and diarrheal disease outbreaks.</p>
<p>The crisis was severe enough that in 1898 the <a href="https://99percentinvisible.org/article/cities-paved-dung-urban-design-great-horse-manure-crisis-1894/">first international urban-planning conference</a> convened in New York City to address it. The delegates abandoned the conference after three days, instead of the scheduled ten, because none of them could see a solution. Fourteen short years later, cars outnumbered horses on New York's streets. By 1917, the last horsecar was retired.</p>
<p>The end of urban horses meant the end of stables, of carriage manufacturing, of buggy-whip making, and of much of the feed-and-livery economy. It also meant the end of an unsanitary, disease-spreading, traffic-snarling system that the experts of the day had explicitly given up trying to solve. What replaced it was more dependable transportation, improved urban sanitation, faster emergency response, longer-distance travel, and whole industries (automotive manufacturing, parts and service, road construction, motels, traffic engineering, automotive insurance, motorsport, etc.) that no expert at the 1898 conference would have predicted.</p>
<p>The pattern is not unique to horses. Some predicted telephones would end face-to-face interaction; instead they created telecom, customer service at scale, mobile applications, and a global communications industry. Affordable electricity was predicted to eliminate domestic labor; it eliminated some drudgery and enabled a far more complex household and industrial economy. Television was predicted to end reading and conversation; it created entire creative industries that did not exist before.</p>
<p>The wheel was, in all likelihood, denounced by the makers of sledges and travois.</p>
<p>None of these transitions ended their fields. Each expanded them. Each eventually produced job categories that no one affected by the change could have imagined.</p>
<p>Computing has had its own buggy-whip moments. Higher-level languages were supposed to make programmers unnecessary; we got more programmers working on more ambitious problems. Portable computers were supposed to mean computing had become a finished consumer product needing no further professional discipline; the opposite happened. Cloud computing was supposed to let enterprises shed their IT staff; most needed different IT staff, often more of them, with skills the previous generation lacked. Open source was supposed to collapse the commercial software industry; it became the foundation on which the next generation of commercial software was built.</p>
<p>Each predicted endpoint was a real change. Each disrupted some careers. None ended computing or reduced the overall demand for computing professionals.</p>
<p>The same pattern applies to cybersecurity and is already visible. AI is first going to disrupt the parts of our field that exist because something else was broken.</p>
<p>The penetration-testing ecosystem is the clearest example. It is a large, mature industry built on a failure mode: vendors, commercial and open source alike, shipping software with predictable defects, and customers willingly adopting it. The parallel to the horse problem is not subtle. That ecosystem exists, in large part, to muck out the stables of an upstream production system that produces far more waste than it should. AI is competent at identifying many flaws that shouldn't make their way to the street and is the technology that makes cleanup more affordable. It is not, by itself, a technology that fixes the source. Until the production process changes, the cleanup never ends.</p>
<p>That disruption is real, and uncomfortable for the people who built careers around it. On the merits, it is also mostly good news. A defect found by an AI tool before deployment is not exploited after deployment, and preventable harm decreases. But those same people are among the best positioned to help define what comes next! Their training and operational instincts are exactly what the emerging areas need.</p>
<p>Disruption of one part of the field opens possibilities elsewhere. Several areas will become more important, not less:</p>
<ul>
<li><strong>Security architecture</strong>, especially the integration of privacy and safety. Architecture is a judgment-and-context discipline. It is the kind of work that does not reduce to recombining patterns from a training corpus.</li>
<li><strong>Defenses against social engineering.</strong> AI dramatically reduces the cost of tailored attacks. We need more research into detection, design work on resistant interfaces, and considerably better education on how to recognize and resist persuasion-at-scale.</li>
<li><strong>Digital forensics.</strong> More incidents, of greater complexity, and a growing share involving agentic AI as either tool or target. The field needs investment; the practitioner pipeline needs broader training.</li>
<li><strong>Intrusion and anomaly detection and response.</strong> The volume and tempo of attacks continue to rise. The expectation that a single human will sit in a SOC at 3 a.m. and reason it out without modern instrumentation is no longer credible. We need trustworthy enhancement and augmentation.</li>
<li><strong>Formal design and verification, and defensive deception.</strong> Both have been niche specialties for decades. Both counter AI-enabled attacks well, and both can themselves be enhanced by AI tools: proofs can be drafted and checked at a greater scale, and deception artifacts can be generated and rotated faster than attackers can fingerprint them.</li>
</ul>
<p>All five have potential for growth, from research through deployment. There are undoubtedly others.</p>
<p>If we recognize that cybersecurity, writ large, is about earning appropriate trust in computing systems, then several of the most interesting frontiers in software engineering are also cybersecurity frontiers. They always were. We have not, until now, had the tooling to address them at scale. A few candidates:</p>
<ul>
<li><strong>Whole-system assurance.</strong> Formal verification is reserved for small critical components (a kernel, a cryptographic primitive) because anything larger is too expensive to verify. With AI assistance for spec drafting, invariant search, and proof maintenance, "what would it take to ship a fully verified million-line system" stops being a thought experiment.</li>
<li><strong>End-to-end software supply-chain provenance.</strong> We presently cannot answer "What is in this binary, where did each piece come from, and what was its build environment?" for almost any non-trivial software. Doing so at scale, continuously, with cryptographic assurance, is a problem AI tools could plausibly help reduce from research to common practice.</li>
<li><strong>Privacy-preserving analytics.</strong> The technical pieces (homomorphic encryption, secure multi-party computation, differential privacy) exist but are currently too slow and brittle for routine use. Lowering the engineering cost of using them could change which analyses are practical: medical research across hospital systems, threat-intel sharing across competitors, fraud detection across financial institutions.</li>
<li><strong>High-assurance real-time systems.</strong> Complex real-time systems have more failure modes and recovery paths than any single designer can hold in mind. AI tools that enumerate and evaluate alternatives against latency, safety, and correctness constraints can help engineers explore design choices currently left to approximation.</li>
</ul>
<p>None of these becomes a smaller problem if we fire all the engineers who would have to solve them. All four are about calibrating the trust we place in the systems we already depend on, which is what cybersecurity has always been for.</p>
<p>The history is consistent. Predictions of the end of a field reliably mistake the end of one set of jobs for the end of all of them. The urban horse went away; transportation did not. The carriage-maker found other work, and some of those who studied carriage-making went on to build the industries that replaced it. What replaced the lost jobs was, on average, better.</p>
<p>There is no good reason to expect this transition to be the exception. There is every reason to expect that the field of cybersecurity has more to do twenty years from now than it has today, and that some of the most interesting work has not yet been named.</p>
<p><em>(A few portions of this text were drafted and structured with the assistance of Anthropic Claude Opus 4.7; the ideas, arguments, and final editorial decisions are the author's.)</em></p>
<hr>
<p><strong>More detail on ordure</strong></p>
<ul>
<li><a href="https://99percentinvisible.org/article/cities-paved-dung-urban-design-great-horse-manure-crisis-1894/">"The Big Crapple: NYC Transit Pollution from Horse Manure to Horseless Carriages"</a> — 99% Invisible</li>
<li><a href="https://fee.org/articles/the-great-horse-manure-crisis-of-1894/">"The Great Horse-Manure Crisis of 1894"</a> — Foundation for Economic Education</li>
<li><a href="https://www.nyhistory.org/community/horse-manure">"How Much Horse Manure Was Deposited on the Streets of New York City Before the Advent of the Automobile, and What Happened to It?"</a> — The New York Historical</li>
<li><a href="https://www.cve.org/">The Common Vulnerabilities and Exposures (CVE) database</a> — MITRE</li>
</ul>
		]]></description>
      <dc:subject>General, Kudos, Opinions and Rants, Secure IT Practices,</dc:subject>
      <dc:date>2026-05-03T18:12+00:00</dc:date>
    </item>    <item>
      <title>New Myths for Old</title>
      <author>spaf@cerias.purdue.edu (Prof. Spafford)</author>


      
      <description><![CDATA[
	
	<p>
    Twenty years ago this month, my first post in this blog was "<a href="https://www.cerias.purdue.edu/site/blog/post/password-change-myths">Security Myths and Passwords</a>," addressing the folk wisdom that monthly password rotation improves security. That myth had survived for roughly thirty years before I wrote about it, and NIST did not formally retire the periodic-rotation guidance until 2017, in <a href="https://pages.nist.gov/800-63-3/sp800-63b.html">SP 800-63B</a>. Two decades later, I co-authored a book on the broader phenomenon of cybersecurity myths, <em><a href="https://informit.com/cybermyths">Cybersecurity Myths and Misconceptions</a></em>, with Leigh Metcalf and Josiah Dykstra. The next round of mythmaking is now well underway, around AI in general and large language models (LLMs) in particular.
</p>
<p>
    Once a claim is repeated enough times in policy memos, audit checklists, advertising, and vendor decks, it stops being scrutinized. The 2006 password post made that point. So did my <a href="https://www.cerias.purdue.edu/assets/pdf/bibtex_archive/2019-3.pdf">2019 CERIAS tech report on cloud computing</a>, which warned against decisions driven by "fad" technologies such as cloud, blockchain, and AI. So did my 2023 post "<a href="https://www.cerias.purdue.edu/site/blog/post/ai_and_ml_sturm_und_drang">AI and ML Sturm und Drang</a>." Pascal Meunier's post in this blog, from the same week as my first one, "<a href="https://www.cerias.purdue.edu/site/blog/post/what-is-secure-software-engineering">What is Secure Software Engineering?</a>," got at the underlying failure mode: practices "based on experience" are inherently brittle against intelligent adversaries who invent new attacks. That description fits an LLM almost word-for-word.
</p>
<p>
    The marketing claim has become explicit: LLMs will replace software developers, security analysts, compliance reviewers, and incident responders. Some vendors hedge the wording, but the direction is the same: the human becomes an optional component. This is the same type of myth as "change passwords every month" — repeated more often than examined. LLMs provide statistical interpolation based on a fixed training set. They recombine what has been written, and they usually recombine it well. However, they are poor at handling novel cases. A new attack pattern, an unfamiliar architecture, a recent regulation, a business context the training data did not cover — for those, the model produces plausible-sounding text without really addressing the full problem.
</p>
<p>
    LLMs also hallucinate with confidence. They will hallucinate about whether they have followed security-by-design practices in the first place. The sycophancy of current LLMs is well known: an agent told to build a system with security by design will report that it has done so, regardless of whether that is true. Asked to verify its own compliance, an LLM will lie without acknowledging it. Domain expertise, the kind that prevents breaches, is difficult to formalize and slow to acquire. A senior engineer who knows that one system can tolerate a particular failure while another cannot is making a judgment that requires context that no LLM has access to. That expert judgment may take years to build, drawing on incident response, post-mortems, and watching the same problems recur in different forms.
</p>
<p>
    News reports describe <a href="https://www.trueup.io/layoffs">thousands of experienced personnel laid off across dozens of companies</a> and replaced with AI. Some of those decisions undoubtedly reflect real reorganization in response to shifting demand. Others appear to use "AI replacement" as a cover for opportunistic cost-cutting. An <a href="https://www.oxfordeconomics.com/resource/evidence-of-an-ai-driven-shakeup-of-job-markets-is-patchy/">Oxford Economics analysis</a> found that explicitly AI-cited cuts amounted to roughly 4.5% of U.S. layoffs in the first eleven months of 2025. It concluded that firms may be dressing up layoffs as a good-news story rather than admitting to weak demand or past over-hiring. Either way, the assumption is that AI will fill the gap. That is the patch-instead-of-fix mindset extended to staffing.
</p>
<p>
    It is fair to ask what these tools are good for. The answer is not "nothing." The most valuable thing AI tools currently do in security is an awkward fact for the industry: they are competent at finding flaws in software that should not have been shipped in the first place. The "penetrate-and-patch" culture I and others have been complaining about for decades has produced an enormous backlog of technical debt — code written under deadlines, with corners cut, security analysis skipped, and known limitations punted to the next release. Decades of that, accumulated across the industry, are now coming due, and AI tools are quite effective at locating the rot and kludges.
</p>
<p>
    But there are limits. Much of what an LLM flags is coding flaws rather than security vulnerabilities, and some flaws that could be vulnerabilities turn out not to be exploitable in their deployed context. Telling real risk from noise takes domain expertise and operational experience. An improper buffer length behind validated input is not the same problem as a flagged buffer length on an externally reachable service, and an LLM does not know which is which. That distinction is what the engineers being laid off have spent careers learning to make. Not every smoke alarm is a fire, and not being able to tell the difference means dispatching the fire department hundreds or thousands of times for burnt toast.
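<p><em>A minimal, hypothetical sketch of that distinction (the function and buffer names are illustrative, not drawn from any real codebase): the identical library call a scanner would flag is noise in one deployment context and a genuine flaw in another.</em></p>

```c
#include <assert.h>
#include <string.h>

#define LABEL_MAX 16

/* Context A: input is validated upstream, so the flagged copy can
 * never overflow -- in this deployment the finding is noise. */
static int set_label_validated(char dst[LABEL_MAX], const char *src) {
    if (strlen(src) >= LABEL_MAX)   /* validation gate rejects long input */
        return -1;
    strcpy(dst, src);               /* a scanner flags this line anyway */
    return 0;
}

/* Context B: src arrives from an externally reachable service with no
 * length check -- the identical call is a real, exploitable flaw. */
static void set_label_unchecked(char dst[LABEL_MAX], const char *src) {
    strcpy(dst, src);               /* same flag, genuine vulnerability */
}
```

<p><em>Pattern-matching on the <code>strcpy()</code> call alone cannot tell these apart; knowing which one you are looking at is the operational judgment the paragraph above describes.</em></p>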
</p>
<p>
    There is an irony to all this. Many of the same vendors arguing that AI will replace their security teams are quietly using AI to find the bugs that their previous teams were not given resources or time to fix. That is a real use of the technology. It is not a substitute for the people who would have prevented the bugs to begin with, nor for the people who must understand and triage what the AI surfaces. "Shipped fast, found later, fixed maybe" is the pattern that produced the debt; using AI to keep skating ahead of the consequences without changing the practice is not progress.
</p>
<p>
    Password rotation was mostly wasted effort. The damage was inconvenience, predictable user workarounds, and a generation of policies that pushed people toward weaker, memorable passwords reused across systems. The replacement-by-LLM myth is interacting with a much worse threat landscape, and the recent trends are not encouraging:
</p>
<ul>
    <li>The <a href="https://www.verizon.com/business/resources/reports/dbir/">2025 Verizon Data Breach Investigations Report</a> found that breaches initiated through exploited vulnerabilities grew 34% year-over-year, that more than half of edge-device vulnerabilities remained unremediated after a full year, and that third-party involvement in breaches doubled, from 15% to 30%.</li>
    <li>IBM's <a href="https://www.ibm.com/reports/data-breach">2025 Cost of a Data Breach Report</a> put the global average breach cost at $4.44 million, with organizations carrying high levels of shadow AI paying an additional $670,000 on average; 97% of organizations that experienced an AI-related security incident lacked proper AI access controls.</li>
    <li>UpGuard's <a href="https://www.upguard.com/blog/state-of-shadow-ai">State of Shadow AI report</a> found that 81% of the general workforce and 88% of security professionals admit to using unapproved AI tools at work.</li>
</ul>
<p>
    Then comes something new in kind. Agentic AI introduces autonomous actors with "write" access inside the perimeter: software that can create, delete, and modify files without a human in the loop. No security architecture I am aware of was designed for that threat model. It certainly isn't zero-trust (whatever <em>that</em> actually means). Treating agents as another category of third-party dependency, given that third-party involvement in breaches has already doubled, is negligent.
</p>
<p>
    The pattern across decades is consistent. Skip the analysis, embed the assumption, repeat it until it counts as common knowledge, and defend the practice long after the evidence has turned against it. The corrective is also consistent: keep experienced humans in the verification loop, demand testable evidence of safety-by-design, resist the pressure to fire the people who carry context in their heads, and treat any system that tells you it is secure as the suspect that it is. The <a href="https://www.acm.org/code-of-ethics">ACM Code of Ethics</a> is explicit on the duty to anticipate and avoid harm. That duty does not pause for hype cycles.
</p>
<p>
    There is also a pragmatic argument. Reckless deployment tends to produce backlash. Nuclear power and commercial aviation are useful precedents. In both cases, preventable incidents led to regulations strict enough to permanently shape how those industries operate, at substantial cost to the firms but with clear benefit to public safety. AI is on that trajectory. Companies that build prudently now will be in a better position when any regulatory wave arrives; companies that race to fire their experts and ship agentic systems into production will find themselves explaining their choices to regulators with enforcement authority.
</p>
<p>
    Quantum computing is already queueing up to be the next myth. Vendors are selling "quantum-safe" and "quantum-ready" products well ahead of any clear definition of either term, and well ahead of any consensus on the threat timeline. We will have a version of this conversation again in five years, and probably every ten years thereafter.
</p>
<p>
    Twenty years from now, I expect somebody — possibly me, possibly an LLM trained on my collected works and confidently misattributing them — will be writing the same post about whatever myth replaces this one.
</p>
<p>
    <em>(Portions of this blog were researched and assembled with the assistance of Anthropic Claude Opus 4.7, but the content is my own.)</em>
</p>
		]]></description>
      <dc:subject>General, Kudos, Opinions and Rants, Secure IT Practices,</dc:subject>
      <dc:date>2026-04-26T20:34+00:00</dc:date>
    </item>    <item>
      <title>2026 Daniel DeLaurentis</title>
      <author>webmaster@cerias.purdue.edu (admin)</author>


      
      <description><![CDATA[
	
	
		]]></description>
      <dc:subject>Symposium 2026,</dc:subject>
      <dc:date>2026-04-03T20:21+00:00</dc:date>
    </item>    <item>
      <title>2026 Aniket Kate</title>
      <author>webmaster@cerias.purdue.edu (admin)</author>


      
      <description><![CDATA[
	
	
		]]></description>
      <dc:subject>Symposium 2026,</dc:subject>
      <dc:date>2026-04-03T20:18+00:00</dc:date>
    </item>    <item>
      <title>2026 Yiheng Feng</title>
      <author>webmaster@cerias.purdue.edu (admin)</author>


      
      <description><![CDATA[
	
	
		]]></description>
      <dc:subject>Symposium 2026,</dc:subject>
      <dc:date>2026-04-03T20:07+00:00</dc:date>
    </item>    <item>
      <title>2026 Brad Fruth</title>
      <author>webmaster@cerias.purdue.edu (admin)</author>


      
      <description><![CDATA[
	
	
		]]></description>
      <dc:subject>Symposium 2026,</dc:subject>
      <dc:date>2026-04-03T20:00+00:00</dc:date>
    </item>    <item>
      <title>Annual Information Security Symposium Videos</title>
      <author>webmaster@cerias.purdue.edu (admin)</author>


      
      <description><![CDATA[
	
	
		]]></description>
      <dc:subject></dc:subject>
      <dc:date>2026-04-03T19:42+00:00</dc:date>
    </item>    <item>
      <title>2026 Kraig Kiehl</title>
      <author>webmaster@cerias.purdue.edu (admin)</author>


      
      <description><![CDATA[
	
	
		]]></description>
      <dc:subject>Symposium 2026,</dc:subject>
      <dc:date>2026-04-02T15:46+00:00</dc:date>
    </item>    <item>
      <title>2026 Somali Chaterji</title>
      <author>webmaster@cerias.purdue.edu (admin)</author>


      
      <description><![CDATA[
	
	
		]]></description>
      <dc:subject>Symposium 2026,</dc:subject>
      <dc:date>2026-04-02T14:58+00:00</dc:date>
    </item>    <item>
      <title>2026 Jerry Towler</title>
      <author>webmaster@cerias.purdue.edu (admin)</author>


      
      <description><![CDATA[
	
	
		]]></description>
      <dc:subject>Symposium 2026,</dc:subject>
      <dc:date>2026-04-02T13:30+00:00</dc:date>
    </item>    <item>
      <title>2026 Tianyi Zhang</title>
      <author>webmaster@cerias.purdue.edu (admin)</author>


      
      <description><![CDATA[
	
	
		]]></description>
      <dc:subject>Symposium 2026,</dc:subject>
      <dc:date>2026-04-02T13:20+00:00</dc:date>
    </item>    <item>
      <title>2026 Josh Knox</title>
      <author>webmaster@cerias.purdue.edu (admin)</author>


      
      <description><![CDATA[
	
	
		]]></description>
      <dc:subject>Symposium 2026,</dc:subject>
      <dc:date>2026-04-01T18:01+00:00</dc:date>
    </item>    <item>
      <title>2026 Jeremiah Blocki</title>
      <author>webmaster@cerias.purdue.edu (admin)</author>


      
      <description><![CDATA[
	
	
		]]></description>
      <dc:subject>Symposium 2026,</dc:subject>
      <dc:date>2026-04-01T17:49+00:00</dc:date>
    </item>    <item>
      <title>2026 Mustafa Abdallah</title>
      <author>webmaster@cerias.purdue.edu (admin)</author>


      
      <description><![CDATA[
	
	
		]]></description>
      <dc:subject>Symposium 2026,</dc:subject>
      <dc:date>2026-04-01T17:42+00:00</dc:date>
    </item>    <item>
      <title>2026 Dan Goldwasser</title>
      <author>webmaster@cerias.purdue.edu (admin)</author>


      
      <description><![CDATA[
	
	
		]]></description>
      <dc:subject>Symposium 2026,</dc:subject>
      <dc:date>2026-04-01T17:40+00:00</dc:date>
    </item>    <item>
      <title>2026 Courtney Falk</title>
      <author>webmaster@cerias.purdue.edu (admin)</author>


      
      <description><![CDATA[
	
	
		]]></description>
      <dc:subject>Symposium 2026,</dc:subject>
      <dc:date>2026-03-31T18:59+00:00</dc:date>
    </item>    <item>
      <title>2026 Chris Lupini</title>
      <author>webmaster@cerias.purdue.edu (admin)</author>


      
      <description><![CDATA[
	
	
		]]></description>
      <dc:subject>Symposium 2026,</dc:subject>
      <dc:date>2026-03-31T18:53+00:00</dc:date>
    </item>    <item>
      <title>2026 Carolin Frueh</title>
      <author>webmaster@cerias.purdue.edu (admin)</author>


      
      <description><![CDATA[
	
	
		]]></description>
      <dc:subject>Symposium 2026,</dc:subject>
      <dc:date>2026-03-31T14:15+00:00</dc:date>
    </item>    <item>
      <title>2026 Yuehwern Yih</title>
      <author>webmaster@cerias.purdue.edu (admin)</author>


      
      <description><![CDATA[
	
	
		]]></description>
      <dc:subject>Symposium 2026,</dc:subject>
      <dc:date>2026-03-31T01:01+00:00</dc:date>
    </item>    <item>
      <title>2026 Ananth Grama</title>
      <author>webmaster@cerias.purdue.edu (admin)</author>


      
      <description><![CDATA[
	
	
		]]></description>
      <dc:subject>Symposium 2026,</dc:subject>
      <dc:date>2026-03-31T00:40+00:00</dc:date>
    </item>    <item>
      <title>Responsible Data Science Lab</title>
      <author>webmaster@cerias.purdue.edu (admin)</author>


      
      <description><![CDATA[
	
	
		]]></description>
      <dc:subject></dc:subject>
      <dc:date>2026-03-25T13:03+00:00</dc:date>
    </item>    <item>
      <title>Digital Enterprise Center (DEC)</title>
      <author>webmaster@cerias.purdue.edu (admin)</author>


      
      <description><![CDATA[
	
	
		]]></description>
      <dc:subject></dc:subject>
      <dc:date>2026-03-24T19:58+00:00</dc:date>
    </item>    <item>
      <title>Spafford appointed to MITRE ATT&amp;CK Advisory Council</title>
      <author>webmaster@cerias.purdue.edu (admin)</author>


      
      <description><![CDATA[
	
	
		]]></description>
      <dc:subject>General,</dc:subject>
      <dc:date>2026-03-19T12:39+00:00</dc:date>
    </item>
    
    </channel>
</rss>