CFPB Begins to ‘Muscle Up’ AI Regulations


America’s consumer watchdog is among the many organizations beefing up its artificial intelligence (AI) safeguards.

In an interview with The Associated Press on Friday (May 26), Consumer Financial Protection Bureau (CFPB) Director Rohit Chopra said his agency has “already started some work to continue to muscle up internally when it comes to bringing on board data scientists, technologists and others to make sure we can confront these challenges.”

Among those challenges: mismanaged automated systems at banks that led to wrongful home foreclosures, car repossessions and lost benefit payments — failures for which the CFPB has fined financial institutions over the past year.

“One of the things we’re trying to make crystal clear is that if companies don’t even understand how their AI is making decisions, they can’t really use it,” Chopra told the AP. “In other cases, we’re looking at how our fair lending laws are being adhered to when it comes to the use of all of this data.”

The CFPB was part of a group of regulatory agencies — along with the Civil Rights Division of the United States Department of Justice, the Federal Trade Commission and the U.S. Equal Employment Opportunity Commission — that issued a statement last month saying that AI-driven decisions still need to follow the law.

“These automated systems are often advertised as providing insights and breakthroughs, increasing efficiencies and cost-savings, and modernizing existing practices,” the joint statement said. “Although many of these tools offer the promise of advancement, their use also has the potential to perpetuate unlawful bias, automate unlawful discrimination and produce other harmful outcomes.”

Ben Winters, senior counsel for the Electronic Privacy Information Center, told the AP that the statement was a good start.

“There’s this narrative that AI is entirely unregulated, which is not really true,” he said. “They’re saying, ‘Just because you use AI to make a decision, that doesn’t mean you’re exempt from responsibility regarding the impacts of that decision.’”

As noted here last week, among the debates over the privacy and copyright problems raised by generative AI, the biggest challenge is likely to be distinguishing work created by artificial intelligence from original creations by humans.

“The implications stretch from fraud to something as basic as the value of human creativity,” PYMNTS wrote. “While AI grows in sophistication, AI detectors struggle to identify content from even early versions of AI text generators.”