📣 California Governor Blocks AI Bill – Is Tech at Risk?

posted 30 Sept 2024
California Governor Gavin Newsom has vetoed an artificial intelligence (AI) safety bill that would have imposed strict safety requirements on AI companies and developers, a decision that could shape how the industry grows. 

Newsom pointed to the bill’s overly broad approach, saying it imposed excessive burdens on developers of both high-risk and low-risk AI models. 

Had it been signed into law, SB-1047, introduced by Senator Scott Wiener, would have required AI developers and companies such as Meta and OpenAI to implement strict safety measures, including a “kill switch” for their models, to maintain detailed safety protocols, and to undergo independent third-party audits to ensure compliance. 

The bill, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, additionally included provisions for whistleblower protections and allowed the Attorney General to pursue legal action in cases of non-compliance.

In his official statement, Governor Newsom acknowledged the importance of safety and agreed with concerns over the potential dangers of AI, noting that “we cannot afford to wait for a major catastrophe to occur before taking action to protect the public.” Even so, the governor believes this particular legislation could slow innovation and give the public false confidence in AI regulation.
“California is home to 32 of the world's 50 leading AI companies, pioneers in one of the most significant technological advances in modern history. We lead in this space because of our research and education institutions, our diverse and motivated workforce, and our free-spirited cultivation of intellectual freedom.”

Gavin Newsom

Tech Industry Pushback and Support

The bill faced strong resistance from tech companies, including Google, OpenAI, and Meta, which argued that it could stifle innovation and drastically slow advancements in AI.

Jason Kwon, OpenAI’s Chief Strategy Officer, argued in a letter to Senator Wiener that AI regulation should be handled at the federal level rather than driven by ‘a patchwork of state laws’, as reported by Bloomberg. Despite amendments that removed the creation of a new regulatory agency and reduced the attorney general’s authority, many companies remained wary of the bill’s potential impact on AI development.

However, the bill found support from figures like Elon Musk, as well as Hollywood stars including Jane Fonda and “The Last of Us” star Pedro Pascal, who were among the 125 industry professionals who signed the Artists 4 Safe AI open letter urging the governor to green-light the bill. 

Possible Implications of the Veto

Had SB-1047 been signed into law, California would have set the strictest AI regulations in the U.S., applying to AI models with training costs exceeding $100 million. It would have mandated rigorous testing protocols and established protections for whistleblowers. But Newsom’s veto means there will be no binding AI safety regulations for companies in the state – at least for now.

“Ultimately, any framework for effectively regulating AI needs to keep pace with the technology itself,” Newsom added in his statement. 

Critics of the veto, including Senator Wiener, warned that the absence of oversight leaves the public exposed to potential risks posed by unchecked AI development. While the federal government is also exploring AI regulation, meaningful progress has yet to be made, leaving companies largely free to self-regulate.

As AI continues to shape industries and attract capital, with BlackRock launching an AI Infrastructure Fund and Microsoft building new AI data centers, the debate over how to balance innovation with responsible oversight remains far from resolved. Given the global race for AI dominance, Newsom’s decision adds another layer of complexity to the ongoing conversation around AI regulation, both in the U.S. and around the world.