Artificial intelligence (AI) creates two sets of cybersecurity risks: cyber risks in the AI systems themselves, and cyber risks from AI tools used to exploit non-AI systems. The U.S. government has primarily focused on the former, but it should urgently be preparing for the latter.
Securing AI systems has been a priority for the past two administrations. Under President Biden, the National Institute of Standards and Technology (NIST) created the U.S. AI Safety Institute to support the development of safe and secure AI. Later, under President Trump, NIST restructured the organization as the Center for AI Standards and Innovation (CAISI) and refocused its priorities on securing commercial systems. Late last year, the Cybersecurity and Infrastructure Security Agency (CISA) and the National Security Agency (NSA) issued a joint report on securely integrating AI into government operations, which included principles such as secure‑by‑design integration, continuous monitoring, anomaly detection, human‑in‑the‑loop oversight, and rigorous testing and red‑teaming of AI systems.
But despite policymakers being aware of the issue, the federal government has taken relatively few steps to address risks in existing systems that cyber threat actors may discover and exploit using AI tools. For example, nearly two years ago, the Bipartisan Senate AI Working Group called on lawmakers “to develop legislation bolstering the use of AI in U.S. cyber capabilities.” Similarly, the Bipartisan House Task Force on AI stated that “security teams must use AI defensively to improve cybersecurity resiliency.”
However, the issue has now reached a new level of urgency. Earlier this month, Anthropic announced Claude Mythos Preview, a new frontier AI model—meaning a cutting-edge model that pushes the boundaries of what AI can accomplish—that is “capable of identifying and then exploiting zero-day vulnerabilities in every major operating system and every major web browser.” For example, the model uncovered a 27‑year‑old flaw in OpenBSD—an open‑source, security‑focused operating system—that allowed an attacker to remotely crash any machine simply by connecting to it. These results demonstrate that advanced AI will increasingly be capable of discovering vulnerabilities in existing systems.
In response, prominent cybersecurity researchers released a joint report warning that malicious actors will increasingly use advanced AI to discover and exploit vulnerabilities. Many public and private sector organizations, including critical‑infrastructure entities, will face heightened risk, leaving essential services exposed to disruptions, data breaches, and system failures. Industry is already responding. For example, Anthropic announced Project Glasswing, a cross‑industry initiative to use frontier AI to strengthen cyber defense among organizations that build or maintain some of the nation’s most critical systems.
But the federal government should also step in to boost these efforts, especially to ensure that critical infrastructure operators adopt AI‑enabled defenses faster than new threats emerge. The AI‑driven detection and defense capabilities demonstrated in Project Glasswing should be extended across critical infrastructure sectors such as energy, water, transportation, and healthcare.
The March 2026 White House National AI Policy Framework urged Congress to ensure that federal agencies handling national security build the technical capacity to understand and mitigate risks from frontier AI models, including by collaborating with frontier AI labs. Operationalizing that guidance requires coordinated collaboration among federal agencies, critical‑infrastructure operators, and frontier AI labs. Key steps should include creating joint AI security‑testing environments where agencies and frontier labs evaluate models against realistic cyber‑attack scenarios; establishing shared AI threat‑intelligence pipelines that give agencies early insight into emerging AI‑enabled attack techniques; and maintaining continuously updated AI security frameworks, authored by leading federal agencies such as NIST and CISA, to ensure consistent security baselines.
Realistically, these efforts will also likely require an infusion of additional funding. As Congress continues to debate funding for the Department of Homeland Security, it should create a one-time $500 million matching credit program to fund efforts to use frontier AI models to detect and mitigate cybersecurity vulnerabilities in critical or widely used systems. Given that cyberattacks cost the United States tens of billions of dollars annually, investing in prevention is clearly warranted.
AI tools offer enormous opportunities for cybersecurity defense, including providing early‑warning systems for industrial‑control anomalies, automating detection of ransomware precursors, and identifying cascading failure risks across interconnected systems. These capabilities would be especially valuable for state and local governments, which often lack the resources to counter sophisticated AI‑enabled threats. Integrating AI into the cybersecurity fabric of critical infrastructure would reduce the asymmetry that currently favors attackers and strengthen national resilience.
The rise of AI‑enabled cyber threats demands a coordinated national response. When implemented effectively, AI can act as a force multiplier for defenders, enabling federal, state, and local entities to stay ahead of rapidly evolving threats and strengthen national cybersecurity in an era of AI‑accelerated attacks.
