California's AI bill: Will it protect consumers or gut the technology?


California is a world leader in artificial intelligence, which means we’re expected to help figure out how to regulate it. The state is considering multiple bills to that end, and none is drawing more attention than Senate Bill 1047. The measure, introduced by Sen. Scott Wiener (D-San Francisco), would require companies that produce the largest AI models to test and modify those models to avoid facilitating serious harm. Is this a necessary step to keep AI accountable, or an overreach? Simon Last, co-founder of an AI company, and Paul Lekas, head of public policy for the Software and Information Industry Association, gave their views.

This bill will help keep technology safe without harming innovation

By Simon Last

As co-founder of an AI-powered company, I have witnessed the astonishing advancement of artificial intelligence. Every day, I design products that use AI, and it’s clear that these systems will only get more powerful in the coming years. We will see huge progress in creativity and productivity, along with advances in science and medicine.

However, as AI systems become more sophisticated, we must be mindful of their risks. Without reasonable precautions, AI could cause serious harm on an unprecedented scale: cyberattacks on critical infrastructure, development of chemical, nuclear or biological weapons, automated crime and more.

California’s SB 1047 strikes a balance between protecting public safety from these harms and supporting innovation, focusing on common-sense safety requirements for the few companies developing the most powerful AI systems. It includes protections for whistleblowers who report safety issues at AI companies, and importantly, the bill is designed to support California’s incredible startup ecosystem.

SB 1047 would affect only companies building the next generation of AI systems that cost more than $100 million to train. Drawing on industry best practices, the bill requires safety testing and the mitigation of foreseeable risks before the launch of these systems, as well as the ability to shut them down in an emergency. In cases where AI causes mass casualties or at least $500 million in damages, the state attorney general can sue to hold companies accountable.

These safety rules would apply to the “base models” that startups build specialized products on top of. Through this approach, we can mitigate risks across the industry without burdening small-scale developers. As a startup founder, I am confident the bill will not hinder our ability to build and grow.

Some critics argue that regulation should focus solely on harmful uses of AI rather than on the underlying technology. But that view misses the point: it is already illegal, for example, to conduct cyberattacks or use biological weapons. SB 1047 provides what those laws lack: a way to prevent harm before it occurs. Product safety testing is standard in many industries, including for manufacturers of cars, airplanes and prescription drugs. Builders of the largest AI systems should be held to a similar standard.

Others claim the legislation would drive companies out of the state, which makes no sense. California’s supply of talent and capital is virtually unmatched, and SB 1047 won’t change the factors that attract companies to operate here. Furthermore, the bill applies to any AI developer doing business in California, regardless of where it is headquartered.

Tech leaders, including Mark Zuckerberg of Meta and Sam Altman of OpenAI, have gone to Congress to discuss AI regulation, warned of the technology’s potentially catastrophic effects and even asked to be regulated, but expectations of congressional action are low.

With 32 of Forbes’ top 50 artificial intelligence companies headquartered in California, our state bears much of the responsibility for helping the industry flourish. SB 1047 provides a framework for younger companies to thrive alongside larger ones while prioritizing public safety. By making smart policy decisions now, state lawmakers and Gov. Gavin Newsom could cement California’s position as a global leader in responsible AI progress.

Simon Last is co-founder of San Francisco-based Notion.

These nearly impossible standards would cause California to lose its AI advantage

By Paul Lekas

California is the birthplace of American innovation. Over the years, many technology and information companies, including those my association represents, have given back to Californians by creating new products for consumers, improving public services and boosting the economy. Unfortunately, legislation moving through the California Legislature threatens to undermine the state’s brightest innovators by targeting those developing cutting-edge artificial intelligence models.

The bill goes far beyond its stated goal of addressing real concerns about the safety of these models while ensuring that California reaps the benefits of the technology. Rather than focusing on foreseeable harms, such as the use of AI for predictive policing based on biased historical data, or holding accountable those who use AI for nefarious purposes, SB 1047 would effectively prohibit developers from releasing AI models that can be tailored to address the needs of California consumers and businesses.

SB 1047 would do this by requiring those at the forefront of new AI technologies to anticipate and mitigate every conceivable misuse of their models. That is simply not possible, particularly because there are no universally accepted technical standards for measuring and mitigating the risks of frontier models.

If SB 1047 were to become law, California consumers would lose access to AI tools they find useful, much as if production of a prescription drug were halted because some people took it illegally or overdosed on it. They would also lose access to AI tools designed to protect Californians from malicious activity enabled by other AI.

To be clear, concerns with SB 1047 do not reflect a belief that AI should proliferate without meaningful oversight. There is bipartisan consensus that we need guardrails around AI to reduce the risk of misuse and address foreseeable harms to public health and safety, civil rights and other areas. States have led the way, enacting laws to discourage malicious uses of AI. Indiana, Minnesota, Texas, Washington and California, for example, have enacted laws banning the creation of deepfake intimate images of identifiable people and restricting the use of AI in election advertising.

Congress is also considering guardrails for elections, privacy, national security and other issues while maintaining America’s technological edge. Indeed, oversight would be best managed in a coordinated manner at the federal level, as is being done through the AI Safety Institute created at the National Institute of Standards and Technology, without the specter of civil and criminal liability. This approach recognizes that frontier-model safety requires massive resources that no state, not even California, can muster.

So while it is essential that elected leaders take action to protect consumers, SB 1047 goes too far. It would force startups and established companies alike to weigh nearly impossible compliance standards against the value of doing business elsewhere. California could lose its edge in AI innovation, and AI developers outside the United States, unbound by the same standards of transparency and accountability, would find their position strengthened, inevitably putting the privacy and security of American consumers at risk.

Paul Lekas is the head of global public policy and government affairs at the Software and Information Industry Association in Washington.
