
California’s AI Safety Law Gets More Wrong Than Right

by Hodan Omaar

California has passed a new AI safety law, and supporters are touting a trifecta of benefits: protecting innovation while advancing safety, filling a regulatory gap left by congressional inaction, and positioning the United States as a global leader on AI safety. On substance, the law has some merits, and had it been enacted at the federal level, it could have marked imperfect but genuine progress. But by adopting those provisions at the state level, California does more harm than good on the very fronts it claims as strengths. The law undermines U.S. innovation by fragmenting the national market, makes bipartisan compromise on a national AI framework more difficult, and blurs America's position on AI governance.

The law focuses on frontier AI developers, defined as companies training AI systems using more than 10²⁶ floating-point operations (FLOPs) of compute. They must notify the state before releasing new or updated models, disclosing release dates, intended uses, and whether access will be open-source or via API. All developers must report catastrophic safety incidents within 15 days, or within 24 hours if lives are at risk. Larger firms with more than $500 million in annual revenue face additional obligations. They are required to publish and update safety frameworks, conduct catastrophic risk assessments and submit summaries to the California Office of Emergency Services, and implement strong cybersecurity to protect unreleased model weights. They must also maintain anonymous whistleblower channels, provide monthly status updates to those who report, and deliver quarterly summaries to senior leadership, with protections against retaliation. The Attorney General can impose fines of up to $1 million per violation.

The law has serious shortcomings: its blunt revenue threshold penalizes firms based on their size rather than their risk profiles, and its compute cut-off misses smaller but still capable models. But there are also important elements worth commending. Incident reporting is vital to post-deployment safety because it allows regulators to learn from real-world failures rather than relying solely on pre-release audits. Whistleblower protections are a simple and effective way to create legally protected channels for reporting imminent AI risks. And unlike its vetoed predecessor, SB 1047, which would have required annual third-party audits against rigid safety mandates, this law takes a more flexible approach to transparency. By requiring firms to publish their own safety frameworks and submit high-level risk summaries to state officials, the law leans on public and market-facing pressure rather than centralizing oversight in government. That helps it avoid the trap in Virginia's proposed approach, where routing everything through a single authority turns accountability into a paperwork exercise.

Had these provisions appeared in a federal law on AI safety, their flaws might not have outweighed their value. One could reasonably argue for replacing the revenue threshold with size-neutral criteria and for replacing crude compute cut-offs with capability-based thresholds that could evolve over time. In that context, a federal statute with such elements could still have offered a net positive step toward more effective oversight of high-risk AI systems.

But this is not a federal law. It is a state statute, and that changes the calculus entirely. No matter how measured or innovation-friendly the regulatory approach may appear in isolation, its merits collapse when applied through a single state because the law guarantees inconsistency across the country. California's policymakers will argue that the state's outsized role, hosting 32 of the world's 50 leading AI companies, creates de facto uniformity, but that assumption confuses where companies are headquartered with where they operate. Even if all 50 top firms were based in California, they would still have to comply with rules in every other state. And those states aren't likely to adopt California's approach wholesale; they'll imitate the idea but not clone the details. Just as with privacy laws, AI safety statutes will diverge in scope, enforcement, and definitions, even while targeting the same problems. The result is fragmentation: a patchwork of conflicting obligations that forces companies into duplicative compliance and drains resources from innovation. There is no version of California's AI safety law that can foster innovation nationally, because state action on this topic creates the very fragmentation that harms innovation.

Proponents say this critique misses the point: of course AI should ideally be regulated federally, but in the absence of congressional action, someone must step in. California’s law explicitly allows for federal preemption, so state officials cast their statute as a bridge, holding the line on safety until Washington acts, while advancing provisions they want to see as a national policy floor. 

But if this truly is the logic, then it is self-defeating, because enacting California's law undermines congressional Democrats' efforts to get these measures adopted nationally. The law doesn't just provide a policy floor on substance; it anchors these provisions as the left-most position in the federal debate, the reference point against which Republicans will define how far they are willing to go. For Republicans like Senator Ted Cruz (R-TX), who is spearheading federal preemption efforts and rallying opposition to any national framework that takes its cues from Sacramento, the California law doesn't pull the federal debate toward its model; it pushes the eventual compromise further away.

Still, some will argue that compromise is a fantasy in today's political climate, and that California's assertiveness is therefore necessary. Even if one sympathizes with that instinct, it fails to reckon with the nature of the issue. Mitigating catastrophic AI risks means not only shaping how frontier systems are developed, but also identifying and responding to misuse when it occurs. State regulators cannot compel developers beyond their borders, nor can they govern how actors in other states use systems built within their own. Federal regulation is therefore necessary, not only to safeguard innovation, but for the authority to manage risks wherever they arise in the United States, the capacity to identify misuse where it occurs, and the credibility to advocate for those safeguards abroad.

And on substance, the partisan gap is not nearly as wide as the politics suggest. If emerging Republican proposals in the House and Senate echo the measures former Trump administration officials are advocating, there is common ground on both sides of the aisle around transparency requirements for high-risk systems and disclosure of mitigation practices. And since the only realistic path to a durable federal AI law that preempts state measures is through Congress's regular order, Republicans will need Democrats at the table.

California’s ideas could strengthen global AI safety, but only if they are carried through a national framework. If Democrats want that to happen, they should resist the lure of short-term state wins that make a federal deal harder to reach. Republicans, for their part, should resist the reflex to dismiss the merits of these ideas, many of which echo their own calls to regulate realized rather than hypothetical harms. The key to success—for both innovation and safety—is lifting good ideas from both sides of the aisle and anchoring them in a bipartisan federal framework.

Image credit: Wikimedia Commons
