The California Appropriations Committee on Thursday approved the Safe Innovation for Frontier Artificial Intelligence Models Act (also known as SB-1047), the latest step in the Silicon Valley regulatory saga.
The state Assembly and Senate must still approve the bill before it becomes law.
What is SB-1047?
SB-1047, colloquially known as the California AI Act and closely watched across the country because it could set a precedent for state regulation of generative AI, lays out several rules for AI developers:
- Create security and protection protocols for covered AI models.
- Ensure that such models can be fully shut down.
- Prevent the distribution of models capable of causing what the law defines as “critical harm.”
- Hire an auditor to ensure compliance with the law.
In short, the bill provides a framework that aims to prevent generative AI models from causing large-scale harm to humanity, such as through nuclear war or biological weapons, or causing losses of more than $500 million through a cybersecurity event.
The law defines “covered models” as those trained using more than 10^26 integer or floating-point operations of computing power, at a cost of more than $100 million.
The recent version of the law includes contributions from Anthropic
The version of the bill that passed Thursday included some changes suggested by artificial intelligence maker Anthropic and accepted by the bill's lead author, state Sen. Scott Wiener, D-San Francisco.
Anthropic successfully pushed to remove language from the bill that would have allowed the state attorney general to sue companies that violate the law. The latest version also drops the requirement that companies disclose safety test results under penalty of perjury; instead, developers must submit disclosures, which do not carry the same legal weight.
Other changes include:
- A change in wording: AI companies must exercise “reasonable care” for safety rather than provide “reasonable assurance” of it.
- An exemption: AI researchers who spend less than $10 million fine-tuning an open-source model are not considered developers of that model.
The bill no longer provides for the creation of a Frontier Models Division, an agency to oversee the AI industry. Instead, a Frontier Models Board focused on forward-looking safety guidance and audits will be housed within the existing Government Operations Agency.
Although Anthropic contributed to the bill, other large organizations such as Google and Meta have expressed their disapproval. Andreessen Horowitz, a venture capital firm known as a16z that is behind many AI startups, has strongly opposed SB-1047.
Why is SB-1047 controversial?
Some industry and Congressional representatives say the law will restrict innovation and make it especially difficult to work with open-source AI models. Among the bill’s critics was Hugging Face co-founder and CEO Clement Delangue, as noted by Fast Company.
An April survey by the pro-regulation think tank Artificial Intelligence Policy Institute found that a majority of Californians supported the bill as it then stood, with 70% agreeing that “future powerful AI models could be used for dangerous purposes.”
Researchers Geoffrey Hinton and Yoshua Bengio, known as the “godfathers of AI” for their pioneering work in deep learning, also publicly support the bill. The law “will protect the public,” Bengio wrote in an op-ed in Fortune on August 15.
Eight of California’s 52 members of Congress signed a letter Thursday saying the law would “create unnecessary risks to California’s economy with very little benefit to public safety.” They argue it’s too early to create standardized assessments for AI, as government agencies like NIST are still working on creating those standards.
They also argue that the bill’s definition of critical harm is misdirected: it focuses on large-scale disasters, such as nuclear weapons, while “largely ignoring demonstrable risks of AI, such as misinformation, discrimination, non-consensual deepfakes, environmental impacts, and workforce displacement,” the lawmakers wrote.
SB-1047 includes specific protections for whistleblowers at artificial intelligence companies under the California Whistleblower Protection Act.
Alla Valente, a senior analyst at Forrester, said lawmakers were right to focus on cyberattacks, such as the Change Healthcare incident in May, as these attacks have been shown to cause serious harm. “With the use of generative AI, these attacks can be carried out more effectively and on a much larger scale, making AI regulation something all states will have to consider as part of how they protect and serve their residents,” she said.
The bill highlights the challenge of balancing regulation with innovation.
“We can make progress on both innovation and security; the two are not mutually exclusive,” Wiener wrote in a public statement on August 15. “While the amendments do not reflect 100% of the changes requested by Anthropic, a global leader in both innovation and security, we have agreed to a number of very reasonable proposed amendments and believe we have addressed the key concerns expressed by Anthropic and many others in the industry.”
He noted that Congress is “blocked” on regulating AI, so “California must act to get ahead of the foreseeable risks posed by the rapid advance of AI while simultaneously fostering innovation.”
The bill next requires approval from the full Assembly and Senate. If passed, it will go to Gov. Gavin Newsom for consideration, likely in late August.
“Enterprise organizations already have to deal with the risks of generative AI as they navigate the enactment and enforcement of existing and emerging AI laws,” Valente said. “Meanwhile, the growing AI litigation landscape is forcing organizations to prioritize AI governance in their organizations to ensure they address potential legal liabilities. SB 1047 would create guardrails and standards for AI products that would promote trust in generative AI-enabled products and potentially accelerate adoption.”