Scott Wiener's AI bill moves forward with significant changes


A controversial bill that aims to protect Californians from catastrophes caused by artificial intelligence has caused a stir in the tech industry. This week, the legislation was approved by a key committee, but with amendments to make it more palatable to Silicon Valley.

Senate Bill 1047, by state Sen. Scott Wiener (D-San Francisco), is scheduled for a vote in the state Assembly later this month. If it passes the Legislature, Gov. Gavin Newsom will have to decide whether to sign or veto the groundbreaking legislation.

The bill's backers say it will create guardrails to prevent rapidly advancing artificial intelligence models from causing disastrous incidents, such as shutting down the power grid without warning. They worry that the technology is developing faster than its human creators can control.

Lawmakers aim to incentivize developers to manage the technology responsibly and empower the state attorney general to impose penalties in the event of imminent threat or harm. The legislation also requires that developers be able to disable the AI models they directly control if things go wrong.

But some tech companies, including Facebook owner Meta Platforms, and politicians, including influential U.S. Rep. Ro Khanna (D-Fremont), say the bill would stifle innovation. Some critics say it focuses on far-off, apocalyptic scenarios rather than more immediate concerns such as privacy and misinformation, though there are other bills that address those issues.

SB 1047 is one of about 50 AI-related bills that have been introduced in the state Legislature, as concerns mount about the technology’s effects on jobs, misinformation and public safety. As politicians work to create new laws to put limits on the rapidly growing industry, some businesses and talent are suing AI companies in hopes that courts can set ground rules.

Wiener, who represents San Francisco — home to artificial intelligence startups OpenAI and Anthropic — has been at the center of the debate.

On Thursday, he made significant changes to his bill that some believe weaken the legislation while making it more likely to pass in the Assembly.

The amendments removed a penalty for perjury from the bill and changed the legal standard for developers regarding the safety of their advanced AI models.

In addition, a plan to create a new government entity, which would have been called the Frontier Model Division, is no longer in the works. Under the original text, the bill would have required developers to submit their safety measures to the newly created division. In the new version, developers would submit those measures to the attorney general.

“I think some of those changes could increase the chances of it passing,” said Christian Grose, a USC professor of political science and public policy.

The bill has prominent supporters, including the nonprofit Center for AI Safety and Geoffrey Hinton, often called the “godfather of AI.” But some tech companies are concerned it could hurt a booming industry in California.

Eight members of Congress from California — Khanna, Zoe Lofgren (D-San Jose), Anna G. Eshoo (D-Menlo Park), Scott Peters (D-San Diego), Tony Cardenas (D-Pacoima), Ami Bera (D-Elk Grove), Nanette Diaz Barragan (D-San Pedro) and Lou Correa (D-Santa Ana) — wrote a letter to Newsom on Thursday encouraging him to veto the bill if it passes the state Assembly.

“There’s really a cross-pressure for [Wiener] in San Francisco between people who are experts in this area, who have been telling him and others in California that AI can be dangerous if we don’t regulate it, and then those whose paychecks, their cutting-edge research, comes from AI,” Grose said. “This could be a real flashpoint for him, both for and against, for his career.”

Some tech giants say they are open to regulation, but disagree with Wiener's approach.

“We agree with the way (Wiener) describes the bill and the goals it has, but we remain concerned about the bill’s impact on AI innovation, particularly in California, and particularly on open source innovation,” Kevin McKinley, Meta’s state policy manager, said in a meeting with members of the LA Times editorial board last week.

Meta offers an open-source collection of AI models called Llama, which allows developers to build their own products on top of it. Meta launched Llama 3 in April, and it has already had 20 million downloads, the tech giant said.

Meta declined to comment on the new amendments. Last week, McKinley said SB 1047 is “actually a very difficult bill to fix and correct.”

A Newsom spokesman said his office typically does not comment on pending legislation.

“The governor will evaluate this bill on its merits if it reaches his desk,” spokesman Izzy Gardon wrote in an email.

San Francisco artificial intelligence startup Anthropic, known for its AI assistant Claude, said it might support the bill if it were amended. In a July 23 letter to Assemblywoman Buffy Wicks (D-Oakland), Anthropic’s state and local policy officer, Hank Dempsey, proposed changes, including shifting the bill to focus on holding companies accountable for causing disasters rather than putting control measures in place before damage is done.

Wiener said the amendments took Anthropic's concerns into account.

“We can make progress on both innovation and security,” Wiener said in a statement. “The two are not mutually exclusive.”

It's unclear whether the amendments will change Anthropic's position on the bill. On Thursday, Anthropic said in a statement that it would review the new “bill language as it becomes available.”

Russell Wald, deputy director of Stanford HAI, the university’s Institute for Human-Centered Artificial Intelligence, which aims to advance AI research and policy, said he still opposes the bill.

“The recent changes appear to be more about appearance than substance,” Wald said in a statement. “They seem designed to appease a couple of leading AI companies, but they do little to address the real concerns of academic institutions and open-source communities.”

It's a delicate balance for lawmakers trying to weigh concerns about AI while also supporting the state's tech sector.

“What a lot of us are trying to do is come up with a regulatory environment that allows some of those guardrails to exist without stifling the innovation and economic growth that comes with AI,” Wicks said after Thursday’s committee hearing.

Times staff writer Anabel Sosa contributed to this report.
