America’s top financial regulator says artificial intelligence (AI) will cause a financial crisis if left unchecked.
In an interview with the Financial Times on Sunday (Oct. 15), Securities and Exchange Commission Chairman Gary Gensler said that without quick intervention it was “nearly unavoidable” that AI would lead to a crisis within a decade.
He added that regulating AI could be tough, as the risks to financial markets have their roots in technology created by companies outside the SEC’s purview.
“It’s frankly a hard challenge,” Gensler said. “It’s a hard financial stability issue to address because most of our regulation is about individual institutions, individual banks, individual money market funds, individual brokers; it’s just in the nature of what we do. And this is about a horizontal [matter whereby] many institutions might be relying on the same underlying base model or underlying data aggregator.”
The SEC in July proposed a regulation to deal with potential conflicts of interest in predictive data analytics, but as the FT noted, it was aimed at individual models deployed by broker-dealers and investment advisers.
Even if the present measures were revised, “it still doesn’t get to this horizontal issue … if everybody’s relying on a base model and the base model is sitting not at the broker dealer, but it’s sitting at one of the big tech companies,” Gensler said. “And how many cloud providers [which tend to offer AI as a service] do we have in this country?”
The chairman added that he has discussed the issue with the international Financial Stability Board and the U.S. Treasury’s Financial Stability Oversight Council.
“I think it’s really a cross-regulatory challenge,” Gensler said.
As PYMNTS has written, AI's capabilities are evolving at a rate and speed that has created an increased “urgency for businesses, governments and both inter and intra-national institutions to understand and support the benefits of AI while at the same time working to mitigate its risks.”
Creating a coherent approach to AI was at the center of last month’s Group of 20 (G20) summit.
At that gathering, leaders pledged to work toward “responsible AI development, deployment and use” that would safeguard rights, transparency, privacy and data protection, and agreed to seek a “pro-innovation regulatory/governance approach” that capitalizes on AI’s benefits without losing sight of the technology’s potential risks.