California Gov. Gavin Newsom killed the “kill switch” on artificial intelligence (AI) Sunday (Sept. 29), vetoing a bill that would have imposed safety testing requirements on AI companies developing models that cost more than $100 million to train or that use substantial computing power.
The bill also would have mandated that AI developers in California establish fail-safe mechanisms — or a “kill switch” — to shut down their models in case of emergencies or unforeseen consequences.
“By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology,” Newsom wrote in a letter to legislators accompanying his decision.
“Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 — at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.”
Newsom argued that while he agrees with the need to protect the public from AI risks, the bill’s approach is too broad and inflexible. He believes effective AI regulation should be based on empirical evidence, account for the specific risks of different AI applications, and remain adaptable to rapidly evolving technology.
He emphasized California’s commitment to addressing AI risks through other initiatives, including executive orders and recently signed legislation, and expressed his willingness to work with various stakeholders to develop more targeted and scientifically informed AI regulations in the future.
“California is home to 32 of the world’s 50 leading AI companies, pioneers in one of the most significant technological advances in modern history,” Newsom wrote. “We lead in this space because of our research and education institutions, our diverse and motivated workforce, and our free-spirited cultivation of intellectual freedom. As stewards and innovators of the future, I take seriously the responsibility to regulate this industry.”
The veto is a win for those AI companies, though more than 100 employees of AI firms had urged Newsom to sign the bill, citing concerns about the potential risks posed by powerful AI models.
Signatories include employees from OpenAI, Google DeepMind, Anthropic, Meta and xAI. Supporters include Turing Award winner Geoffrey Hinton and University of Texas professor Scott Aaronson.
“We believe that the most powerful AI models may soon pose severe risks, such as expanded access to biological weapons and cyberattacks on critical infrastructure,” a Sept. 9 statement from the employees said. “It is feasible and appropriate for frontier AI companies to test whether the most powerful AI models can cause severe harms, and for these companies to implement reasonable safeguards against such risks.”