UK Government’s AI Strategy Faces ‘Serious Scaling Back’

The U.K.’s new Labour government is reportedly rethinking its approach to artificial intelligence (AI).

As Reuters reported Thursday (Aug. 29), the government is developing a new AI strategy that prioritizes public sector adoption instead of direct industry investment in the interest of cutting costs.

The government has already jettisoned plans to invest $1.7 billion in related technologies, including £800 million to help develop a supercomputer at the University of Edinburgh.

It is also weighing scrapping a planned San Francisco office for its AI Safety Institute, a source close to the Department for Science, Innovation and Technology (DSIT) told Reuters.

“Labour always needs to somehow look different to the Tories, and reining in the AI safety stuff, the focus on existential risks, is an easy way to do that,” the source said.

The report noted that — in another sign of the changing direction — the tech minister last month removed one of the co-founders of the country’s AI Safety Institute, Nitarshan Rajkumar, from his position as a senior policy adviser, according to three sources close to the DSIT.

According to Reuters, the sources say Peter Kyle, secretary of state for science, innovation and technology, hopes to fuel AI adoption in the public sector as a means of improving efficiency and reducing costs, while reducing the government’s direct investments into industry.

“I think Peter Kyle sees this [AI] as an easy source of savings,” another source told Reuters. “We’re seeing a serious scaling back of ambition.”

Earlier this year, the U.K. debuted what it called a landmark toolset for AI safety testing. Dubbed “Inspect,” the software library lets testers ranging from startups, academics and AI developers to world governments assess specific capabilities of AI models and then generate a score based on the results.

According to an institute news release, Inspect is the first AI safety testing platform overseen by a government-backed body and released for wider use.

“As part of the constant drumbeat of U.K. leadership on AI safety, I have cleared the AI Safety Institute’s testing platform — called Inspect — to be open sourced,” said Michelle Donelan, the U.K.’s then-secretary of state for science, innovation and technology.

“This puts U.K. ingenuity at the heart of the global effort to make AI safe, and cements our position as the world leader in this space.”

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.