
California’s Bill to Regulate Frontier AI Models Undercuts More Sensible Federal Efforts

by Hodan Omaar

California State Senator Scott Wiener introduced a sweeping new AI bill this month designed to ensure providers of highly capable AI models mitigate existential threats, either from their systems going rogue or from enabling humans to use them in extremely dangerous ways, such as creating biological or nuclear weapons or launching cyberattacks on critical infrastructure. Unfortunately, the bill undermines more sensible federal efforts and would unnecessarily hamper U.S. AI innovation.

The AI models that would fall under California’s Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act need to meet at least one of the following criteria: being trained using at least 10^26 floating-point operations (a measure of the computation used to train a model) or having “capability below the relevant threshold on a specific benchmark but is of otherwise similar general capability.” The second condition is largely unintelligible because “the relevant threshold,” “specific benchmark,” and “similar general capability” are not defined, but it presumably means that AI models trained with less compute but with performance comparable to state-of-the-art models would be subject to similar scrutiny and safeguards. In essence, the bill seeks to regulate frontier models, a term that made its way into the lexicon in late 2023 to describe highly capable general-purpose AI models that “could possess dangerous capabilities sufficient to pose severe risks to public safety.” If a developer is creating a frontier model, they have two options under the California bill, each fraught with arduous barriers.
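
For a rough sense of what a 10^26 floating-point-operation threshold means in practice, the sketch below applies the widely used rule of thumb that training compute is roughly 6 × parameters × training tokens, and compares two hypothetical training runs against the bill’s threshold. The parameter counts and token counts are illustrative assumptions, not figures from the bill or from any particular developer.

```python
# Back-of-envelope training-compute estimate using the common heuristic
# that training FLOPs are roughly 6 * parameters * training tokens.
# The parameter and token counts below are hypothetical examples only.

THRESHOLD_FLOPS = 1e26  # the bill's compute threshold


def training_flops(parameters: float, tokens: float) -> float:
    """Rough training-compute estimate via the 6*N*D rule of thumb."""
    return 6 * parameters * tokens


examples = {
    "hypothetical 70B-parameter model, 2T tokens": training_flops(70e9, 2e12),
    "hypothetical 1T-parameter model, 20T tokens": training_flops(1e12, 20e12),
}

for name, flops in examples.items():
    verdict = "exceeds" if flops > THRESHOLD_FLOPS else "falls below"
    print(f"{name}: ~{flops:.1e} FLOPs, {verdict} the 1e26 threshold")
```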

Option 1 is to self-certify that their model is not extremely dangerous. They can do this, for instance, if they determine that their model will have lower performance than models already considered safe under the bill, but the catch is that they have to make this determination before they start training the model. That is, the bill would require developers to predict the future performance of a model before they have trained it. But as the leading academic paper on regulating frontier models itself says, “It is very difficult to predict the pace of AI development and the capabilities that could emerge in advance; indeed, we even lack certainty about the capabilities of existing systems.” What is even more daunting is that developers would have to certify the future performance of their models under penalty of perjury, because the bill creates a new enforcement authority, the Frontier Model Division within California’s Department of Technology, to which developers would have to submit their certifications. If other actors down the line use their model in harmful ways that the developers were not able to predict, this new agency could seemingly hold them liable for a felony.

Option 2, if the developer cannot self-certify before training that the model will not have hazardous capabilities, is to submit to a formidable nine-step regulatory regime that would be entirely impractical. One of the steps is to implement a capability to promptly enact a full shutdown of a model, including all copies and derivatives. Another is to “implement all covered guidance.” That would be a tall order: all covered guidance includes any guidance that the National Institute of Standards and Technology (NIST) issues, any state-specific guidance that the Frontier Model Division issues, any applicable safety-enhancing standards set by other standards-setting organizations, and “industry best practices, including relevant safety practices, precautions, or testing procedures undertaken by developers of comparable models, and any safety standards or best practices commonly or generally recognized by relevant experts in academia or the nonprofit sector.” While some may argue this is a pessimistic interpretation of the proposal, it highlights the complexities, challenges, and potential contradictions inherent in complying with the bill’s provisions.

Even if developers can get through these and the other bureaucratic barricades the bill lays out, they would then have to comply with additional complicated rules that make it hard to commercialize the model. The legislation would also impose restrictions on data centers, requiring them to monitor customers who could potentially be training foundation models and to implement measures to ensure they can promptly shut down these models in case of an emergency. In essence, the bill’s sponsors want to create a series of kill switches for dangerous AI.

Collectively, these regulations stand to seriously hinder the development of state-of-the-art systems in California, where much of the frontier AI development in the United States is taking place. And since California often sets the tone for action in other states, this bill risks setting a precedent for a fragmented regulatory landscape for AI safety across the country. Moreover, these measures risk incentivizing American AI companies to relocate out of state or abroad.

A national set of standards that preempts states from creating their own is a much better approach, and California Senator Wiener’s suggestion that federal action is unlikely is wrong. The Commerce Department is already doing commendable work at the national level in response to the Biden administration’s recent Executive Order on AI. It has stood up a national AI Safety Institute, and NIST is actively soliciting input from stakeholders in industry, academia, and civil society on how to develop safety standards and work with international partners. California legislators should not duplicate and convolute this process with their own ham-handed regulations.

Image credit: Scott Wiener
