New York Governor Kathy Hochul signed a new AI safety law in the final weeks of December, arguing that it aligns with California’s approach and moves the United States closer to a unified framework for regulating advanced AI. An earlier version of the bill was more aggressive, but it was pared back to mirror California’s structure more closely, and the governor is leaning hard on that alignment to cast New York’s move as evidence that states are speaking with one voice and building a unified national AI framework. But alignment is not the same as unity. A law that is almost, but not exactly, the same as California’s invites death by a thousand cuts: developers must navigate overlapping state regimes, duplicate filings, and slightly different reporting and oversight requirements for the same set of risks. The result is more friction, not more coherence, and it pulls the country further away from a clear, pro-innovation national approach.
The case for alignment is easy to make. Like California’s SB 53 (the Transparency in Frontier Artificial Intelligence Act), New York’s Responsible AI Safety and Education (RAISE) Act targets “frontier” AI systems defined as models trained using more than 10²⁶ floating-point operations (FLOPs) of compute. Both laws require developers of such systems to produce formal safety documents that describe how they identify and mitigate “critical” or “catastrophic” harms. Those harms are defined in similar terms, focusing on AI systems that could materially assist in the creation of chemical, biological, radiological, or nuclear (CBRN) weapons, enable mass-casualty events, or cause more than $1 billion in economic damage. Many of the accompanying compliance mechanisms also look similar, including mandatory reporting of serious safety incidents and whistleblower protections for engineers and researchers who raise safety concerns. Judged by these shared elements alone, New York’s law looks like a simple eastward extension of California’s model.
However, even where the substantive obligations differ little, the process is inherently duplicative. Both laws target frontier models and define “large developers” using the same $500 million annual revenue threshold, ensuring that the same firms are captured by both regimes. But meeting one state’s requirements does not satisfy the other’s.
In New York, developers have 72 hours to report a critical safety incident and must file a disclosure statement identifying every entity with a 5 percent or greater interest in the company. They also pay pro rata fees, a regulatory assessment in which the total cost of running the new Office of AI Oversight is divided among the “large developers” based on their size and billed to them directly. In California, by contrast, SB 53 gives developers a 15-day reporting window, imposes no industry-funded agency fees (oversight is funded from the state’s general budget), and does not require them to disclose their private ownership structures to state regulators.
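To make the pro rata mechanism concrete, the sketch below shows one way such an assessment could be computed. The allocation basis, budget figure, and developer names are hypothetical illustrations, not terms drawn from the statute; the actual apportionment formula may differ.

```python
# Hypothetical sketch of a pro rata regulatory assessment.
# Assumption: the oversight office's annual budget is split among
# "large developers" in proportion to annual revenue. The actual
# basis and figures under the RAISE Act may differ.

OVERSIGHT_BUDGET = 30_000_000  # hypothetical annual cost of the office, in dollars

# Hypothetical developers, each above the $500 million revenue threshold
annual_revenue = {
    "Developer A": 2_000_000_000,
    "Developer B": 1_000_000_000,
    "Developer C": 500_000_000,
}

total_revenue = sum(annual_revenue.values())

# Each developer's share of the budget tracks its share of total revenue
for name, revenue in annual_revenue.items():
    fee = OVERSIGHT_BUDGET * revenue / total_revenue
    print(f"{name}: ${fee:,.0f}")
```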
Alignment on shared definitions, thresholds, and language about catastrophic risk therefore does not add up to a single national framework. Instead, it becomes a compliance tax. This redundancy reallocates capital away from actual safety research and toward administrative management. Rather than streamlining the path for innovation, New York has merely ensured that the most successful American firms are the ones hit hardest by this new state-level drag.
Granted, designing workable AI safety standards is inherently difficult, and any effective framework will have to evolve as AI systems, risks, and deployment contexts change. But this is exactly why states cannot effectively regulate AI safety. If New York is right to shorten reporting windows or layer on additional requirements, then California’s approach is, by definition, inadequate. If California is right, then New York’s deviations are unnecessary. Instead of the country moving systematically toward safeguards that improve over time, each state is locking its own version into law at a particular moment, freezing provisional judgments rather than refining them. The result is not just slower innovation, as companies are forced to navigate a patchwork of requirements that never fully line up, but also weaker safety.
Governor Hochul says New York’s law moves the country closer to a unified approach to AI safety, but in practice, it does the opposite. By hard-coding thresholds, triggers, and oversight assumptions that differ from California’s, New York is not refining a shared national framework but fragmenting it further. Common language may create the appearance of alignment, but when states lock competing judgments into law, coherence becomes harder, not easier, to achieve.
Image credit: Metropolitan Transportation Authority/Flickr
Editor’s note: This post has been updated to reflect the final version of New York’s Responsible AI Safety and Education (RAISE) Act, as enacted following December chapter amendments. An earlier version of this post analyzed provisions from earlier bill prints that were subsequently revised prior to enactment. The analysis has been updated to reflect the law that will take effect.


