Editorial: Why California should lead AI regulation


OpenAI’s launch of ChatGPT in late 2022 was the starting gun for a race among big tech companies to develop ever more powerful generative AI systems. Giants such as Microsoft, Google, and Meta rushed to roll out new AI tools, while billions of dollars in venture capital poured into AI startups.

At the same time, a growing group of people working and researching in the field of AI began to sound the alarm: the technology was evolving faster than anyone had anticipated. There was a fear that, in their eagerness to dominate the market, companies might launch products before they were safe.

In spring 2023, more than 1,000 researchers and industry leaders called for a six-month pause in the development of the most advanced artificial intelligence systems, warning that AI labs were racing to deploy “digital minds” that even their creators cannot reliably understand, predict or control. The technology poses “profound risks to society and humanity,” they warned. Tech company leaders urged lawmakers to develop regulations to prevent harm.

It was in that environment that state Sen. Scott Wiener (D-San Francisco) began talking with industry experts about developing legislation that would become Senate Bill 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. The bill is an important first step in the responsible development of AI.

While state lawmakers introduced dozens of bills on a range of AI-related issues, including election misinformation and protecting artists’ work, Wiener took a different approach. His bill focuses on trying to prevent catastrophic harm if AI systems are misused.

SB 1047 would require developers of the most powerful AI models to implement testing procedures and safeguards to prevent the technology from being used to shut down the power grid, enable the development of biological weapons, carry out major cyberattacks, or cause other serious harm. If developers fail to take reasonable precautions to prevent catastrophic harm, they could be sued by the state attorney general. The bill would also protect whistleblowers within AI companies and create CalCompute, a public cloud computing cluster that would be available to help startups, researchers, and academics develop AI models.

The bill has the support of leading AI safety groups, including some of the so-called godfathers of AI, who wrote in a letter to Gov. Gavin Newsom: “Relative to the scale of the risks we face, this is a remarkably weak piece of legislation.”

But that hasn’t stopped a groundswell of opposition from tech companies, investors and researchers, who have argued that the bill wrongly places responsibility on model developers for anticipating harm that users might cause. They say that liability would make developers less willing to share their models, stifling innovation in California.

Last week, eight members of Congress from California joined the effort with a letter to Newsom urging him to veto SB 1047 if the Legislature passes it. The bill, they argued, is premature, with a “misguided emphasis on hypothetical risks,” and lawmakers should focus on regulating uses of AI that are currently causing harm, such as the use of deepfakes in election ads and revenge porn.

There are plenty of good bills that address immediate and specific misuses of AI. That doesn’t negate the need to anticipate and try to prevent future harms, especially when experts in the field call for action. SB 1047 raises familiar questions for the tech sector and lawmakers. When is the right time to regulate an emerging technology? What is the right balance to encourage innovation while also protecting the public that has to live with its effects? And can the genie be put back in the bottle after the technology is deployed?

Staying on the sidelines for too long carries risks. Today, lawmakers are still playing catch-up on data privacy and curbing harms on social media platforms. This is not the first time leaders of big tech companies have publicly stated that they welcome regulation of their products, but then lobbied fiercely to block specific proposals.

Ideally, the federal government would take the lead in regulating AI to avoid a patchwork of state policies. But Congress has proven unable, or unwilling, to regulate big tech companies. Proposed legislation to protect data privacy and to reduce online risks for children has stalled for years. In the absence of federal action, California, particularly because it is home to Silicon Valley, has chosen to adopt pioneering regulations on net neutrality, data privacy, and online safety for children. AI is no different. Indeed, House Republicans have already said they will not support any new AI regulation.

By passing SB 1047, California can put pressure on the federal government to establish rules and regulations that could supersede state regulation, and until that happens, the law could serve as an important backstop.
