California's AI safety bill is in the spotlight. Turning it into law is the best way to improve it


On August 29, the California Legislature approved Senate Bill 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, and sent it to Gov. Gavin Newsom for his signature. Newsom's decision, due by Sept. 30, is binary: veto the bill or sign it into law.

Recognizing the potential harm that could result from advanced AI, SB 1047 requires technology developers to build in safeguards as they develop and deploy what the bill calls "covered models." The California attorney general can enforce these requirements by bringing civil actions against parties who fail to take "reasonable care" to ensure that 1) their models do not cause catastrophic harm, or 2) their models can be shut down in an emergency.

Many major AI companies oppose the bill, either individually or through trade associations. Their objections include concerns that the definition of covered models is too inflexible to account for technological progress, that it is unreasonable to hold them liable for harmful apps developed by others, and that the bill in general will stifle innovation and cripple small startups that don't have the resources to devote to compliance.

These objections are not frivolous; they deserve consideration, and some further amendment to the bill will likely be necessary. But the governor should sign it anyway, because a veto would signal that no regulation of AI is acceptable now, and likely none until catastrophic harm occurs. That is not the right position for governments to take on such technology.

The bill's author, Sen. Scott Wiener (D-San Francisco), engaged with the AI industry on several iterations of the bill before its final legislative passage. At least one major AI company, Anthropic, asked for specific and significant changes to the text, many of which were incorporated into the final bill. Since the Legislature passed it, Anthropic's CEO has said that the bill's "benefits probably outweigh its costs," even as some aspects of it still give the company pause. Public evidence to date suggests that most other AI companies simply chose to oppose the bill on principle, rather than undertake specific efforts to amend it.

What are we to make of such opposition, especially given that the leaders of some of these companies have publicly expressed concerns about the potential dangers of advanced AI? In 2023, the CEOs of OpenAI and Google's DeepMind, for example, signed an open letter comparing the risks of AI to those of pandemics and nuclear war.

A reasonable conclusion is that, unlike Anthropic, they oppose any kind of mandatory regulation. They want to reserve for themselves the right to decide when the risks of an activity, a research effort, or any deployed model outweigh its benefits. More important, they want those who develop applications based on their covered models to bear full responsibility for risk mitigation. But recent court cases have suggested that parents who put guns in the hands of their children bear some legal responsibility for the consequences. Why should AI companies be treated any differently?

Artificial intelligence companies want the public to give them free rein despite an obvious conflict of interest: for-profit companies should not be trusted to make decisions that might hamper their own profit prospects.

We’ve been through this before. In November 2023, OpenAI’s board of directors fired its CEO because it determined that under his leadership, the company was taking a dangerous technological path. Within days, several OpenAI stakeholders managed to reverse that decision, reinstating him and expelling the board members who had advocated for his dismissal. Ironically, OpenAI had been specifically structured to allow the board to act as it did: despite the company’s profit-generating potential, the board was supposed to ensure that the public interest came first.

If SB 1047 is vetoed, anti-regulation forces will claim a victory that demonstrates the wisdom of their position, and they will have little incentive to work on alternative legislation. The absence of meaningful regulation works to their advantage, and they will rely on a veto to maintain the status quo.

Alternatively, the governor could sign SB 1047 into law and add an open invitation to its opponents to help fix its specific flaws. Faced with what they see as an imperfect law, the bill's opponents would have a considerable incentive to work, and to work in good faith, to fix it. But the basic approach would be for industry, not government, to make the case for what constitutes reasonable and appropriate care regarding the safety properties of its advanced models. The government's role would be to make sure that industry does what industry itself says it should do.

The consequences of killing SB 1047 and preserving the status quo are substantial: companies could continue to develop their technologies without restrictions. The consequences of accepting an imperfect bill would be a significant step toward a better regulatory environment for all involved. It would be the beginning, rather than the end, of the AI regulatory game. This first step sets the tone for what is to come and establishes the legitimacy of AI regulation. The governor should sign SB 1047.

Herbert Lin is a senior fellow at the Center for International Security and Cooperation at Stanford University and a fellow at the Hoover Institution. He is the author of "Cyber Threats and Nuclear Weapons."