
SEC Joins FTC in Voicing Concerns Over AI as Risk of Regulation Looms

Securities and Exchange Commission Chair Gary Gensler delivered remarks on Thursday, March 10, before the Investor Advisory Committee at a meeting devoted to the use of artificial intelligence in robo-advisory services and cybersecurity disclosures.

Gensler, who previously raised concerns about predictive AI in online brokers and robo-advisors at a Practicing Law Institute conference in October 2021, reiterated the key challenges these practices may bring: conflicts of interest, bias and systemic risk.

The Investor Advisory Committee tackled ethical issues and fiduciary responsibilities related to the use of artificial intelligence in robo-advising. Gensler, for his part, focused on "digital engagement practices" and how they intersect with a variety of finance platforms.

These practices shape user experiences, and in certain situations they may raise conflicts of interest. Predictive data analytics, differential marketing and behavioral prompts, for example, are integrated into robo-advising and other financial technologies. Platforms, and the people behind them, have to decide which factors they are optimizing for. That should usually be investors' benefit, but it could also include other factors, such as the platform's revenue and performance.

Gensler reminded his audience that finance platforms must comply with investor protections through specific duties, including fiduciary duty, duty of care, duty of loyalty, best execution and best interest. When a platform is also trying to optimize for its own revenue, that is where its duties to investors come into conflict.

But one area raises red flags and may require regulation to address: behavioral nudges. When broker-dealers use these techniques to influence investors' behavior, "they may create gray areas between what is and isn't a recommendation — gradations that could be worth considering through rulemaking," Gensler said in his remarks.

The SEC is not alone in this area; other regulators around the globe are also worried. The European Union's proposed regulation on artificial intelligence contains very few outright prohibitions, but one of them covers AI systems that use subliminal techniques to influence consumers' behavior. Behavioral nudges may not necessarily fall under this category, but the prohibition shows the degree of scrutiny this type of practice will face.

Read More: AI in Financial Services in 2022: US, EU and UK Regulation 

Another aspect of artificial intelligence that concerns Gensler is bias, a concern shared by other agencies such as the Federal Trade Commission (FTC). Data used in platforms' analytic models could reflect historical biases, which could deny people fair access and fair prices in the financial markets.

Gensler did not suggest that rulemaking is the way forward in this area, but he has instructed his staff to take a closer look.

The FTC, however, is considering a wide range of options, including new rules and guidelines, to tackle algorithmic discrimination. In a letter to Senator Richard Blumenthal (D-CT), FTC Chair Lina Khan outlined her goal to "protect Americans from unfair or deceptive practices online." In particular, Khan said the FTC is considering rulemaking to address "lax security practices, data privacy abuses and algorithmic decision-making that may result in unlawful discrimination."

“Rulemaking may prove a useful tool to address the breadth of challenges that can result from commercial surveillance and other data practices […] and could establish clear market-wide requirements,” Khan wrote. 

Read More: FTC Mulls New Artificial Intelligence Regulation to Protect Consumers 

 
