AI Bill Veto Sparks Debate on Innovation, Safety and Regulation

Gov. Gavin Newsom’s recent veto of California’s artificial intelligence (AI) safety bill has triggered a heated debate about the future of AI regulation, its impact on innovation, and the prospects for federal action.

The decision, framed as a step toward more “workable guardrails,” has drawn mixed reactions from industry leaders, policymakers and experts. The veto of SB 1047, which would have established the nation’s first state-level AI regulatory framework, underscores the complex challenge of governing a rapidly evolving technology.

Former Congressman Patrick Murphy, now CEO of Togal.AI, told PYMNTS that the decision essentially ensures “a largely unregulated market when it comes to AI competition in Silicon Valley.” He added that this means “fewer guardrails; more leeway to experiment.”

Murphy also highlighted the challenge of regulation keeping pace with innovation: “The government almost never keeps pace with technological advancements, and with technology as complex and fast-moving as AI, it stands no realistic chance of catching up.”

The bill’s rejection comes amid growing concerns about AI’s societal impact and mounting calls for oversight. Yet it also reflects the tech industry’s warnings that premature regulation could stifle advancement and drive companies out of California. With federal legislation stalling and other states watching closely, California’s next moves could set a precedent for nationwide AI governance.

Divided Industry Response

The veto elicited contrasting views within the tech industry. Some executives hailed it as a win for innovation, arguing that the proposed regulations could have driven major companies out of California. Others expressed concern about the lack of binding restrictions on powerful AI technologies.

Ryan Ries, chief data science strategist at Mission Cloud, called the veto “a massive win for Silicon Valley and California.” He told PYMNTS that the bill’s passage could have triggered an exodus of major tech companies from the state, stifling innovation in one of the world’s leading tech hubs.

The debate extends beyond a simple pro-regulation versus anti-regulation divide. Phil Libin, co-founder and former CEO of Evernote, offered a more nuanced perspective. “I was conflicted about SB 1047,” Libin told PYMNTS. He acknowledged the need for AI regulation to address immediate concerns like crime, spam, wasted resources, and discrimination. However, he also noted that “this particular bill didn’t do much and may have led to a fractured regulatory landscape.”

Safety vs. Innovation Dilemma

The veto highlighted the ongoing struggle to balance safety concerns with the drive for technological advancement. Industry insiders pointed out the challenges of prioritizing safety in a highly competitive market focused on developing faster, more accurate AI systems.

How to balance safety with innovation in the absence of regulation emerged as a central concern. Murphy cautioned, “We can’t be sure that AI companies will prioritize safety because they have little incentive to do so. Right now, it’s less about safety and more about winning the arms race to develop faster, more accurate, and groundbreaking AI technology.”

Some experts suggested that the veto could spur industry self-regulation. Kjell Carlsson, head of AI strategy at Domino Data Lab, urged enterprise leaders to “proactively address AI risks and protect their AI initiatives now.”

Ries pointed to defining “truth” in AI models as a critical issue, calling it “the biggest issue facing not just model makers but the whole world,” particularly when it comes to creating training datasets. He also raised concerns about the growing share of computer-generated content in those datasets, which could degrade the performance of future models trained on them.

Federal Action Faces Hurdles

The California veto has raised questions about the prospects for federal AI regulation. Experts predict that federal action is unlikely before the 2024 election, and the post-election landscape could shape the regulatory approach.

Hamid Ekbia, director of the Autonomous Systems Policy Institute at Syracuse University, sees California’s struggle as indicative of federal challenges. “If California is like this, what can we expect to happen at the federal level, where compromise is the name of the game?” Ekbia told PYMNTS.

Ekbia outlined two possible scenarios for post-2024 federal regulation: “In the best scenario, we will get a compromise legislation that can put some guardrails in the use of AI in publicly sensitive areas. In the worst scenario, we can see further encroachment of public protections by major corporations.”

The possibility of a patchwork of state regulations also looms. Murphy warned, “Without direction from the biggest state in the country, we may be dealing for a while with a Wild West hodgepodge of different states’ AI laws that are hard to enforce.”

The Path Forward

With federal AI legislation stalled in Congress and the Biden administration advancing its own regulatory proposals, whatever California does next is likely to reverberate nationally. Newsom has pledged to work with the legislature on AI legislation during its next session, acknowledging that a California-specific approach may be necessary in the absence of federal action.

Newsom said, “We cannot afford to wait for a major catastrophe to occur before taking action to protect the public,” but added that he did not agree “we must settle for a solution that is not informed by an empirical trajectory analysis of AI systems and capabilities.”

The governor’s decision to veto the bill came after consulting with leading experts on generative AI. Newsom said he had asked these experts to help California “develop workable guardrails” that focus “on developing an empirical, science-based trajectory analysis.” He also ordered state agencies to expand their assessment of the risks from potential catastrophic events tied to AI use.

As the debate continues, the tech industry finds itself at a crossroads. Murphy emphasized the need for collaboration, stating, “The bigger issue to me is — how do we get the private sector to work alongside the government, sharing knowledge and resources?”

He added, “If we want them to value safety as much as innovation, we need to create incentives that encourage the private sector to collaborate with the government on crafting regulations that are practical and impactful.”
