AI Sector Takes Aim at California Safety Bill

A bill in California would require artificial intelligence companies to conduct tests to prevent “catastrophic harm.”

However, AI firms are trying to curtail the legislation, saying it would damage their industry, The Wall Street Journal (WSJ) reported Wednesday (Aug. 7).

The bill, SB 1047, requires that makers of large AI models conduct safety tests to reduce the risk of catastrophic events, such as cyberattacks that result in mass casualties or cause at least $500 million in damage, per the report. In addition, companies would need to show that humans can shut down AI models that behave dangerously.

The bill covers AI models meeting a certain computing power threshold and costing more than $100 million to train. That includes OpenAI’s GPT-4, although any company doing business in California would need to comply, the report said.

While some AI industry figures have called for regulation, they want it to come from the federal government, according to the report. The sector argues the bill would impose constraints that are too vague.

“If it were to go into effect as written, it would have a chilling effect on innovation in California,” said Luther Lowe, who heads public policy at startup accelerator Y Combinator, per the report.

Meta and OpenAI raised concerns about the bill, the report said. Google, Anthropic and Microsoft all pitched extensive revisions.

The bill, which still needs approval from the full California Assembly, was drafted by state Sen. Scott Wiener, according to the report.

“There are people in the tech sector who are opposed to any and all forms of regulation no matter what it is, even for something reasonable and light-touch,” he said, per the report.

At least 16 companies have signed onto the White House’s voluntary commitment to safe AI development. In doing so, the companies agreed to a range of measures designed to further understand the risks and ethical implications of new technologies while offering greater transparency and restricting the potential for misuse.

Last month, the competition authorities from the United States, the United Kingdom and the European Union issued a rare joint statement outlining their concerns about market concentration and anti-competitive practices in the generative AI field.

