FTC Issues Proposal to Ban Impersonations as Fraud Rises

The Federal Trade Commission (FTC) is proposing a new set of rules that would prohibit the impersonation of individuals.

Citing growing complaints around impersonation fraud, the FTC said in a news release on Thursday (Feb. 15) that it is “committed to using all of its tools to detect, deter, and halt impersonation fraud.”

The commission noted that new technology, such as artificial intelligence (AI)-generated deepfakes, can expand the scale of such fraud.

“Fraudsters are using AI tools to impersonate individuals with eerie precision and at a much wider scale. With voice cloning and other AI-driven scams on the rise, protecting Americans from impersonator fraud is more critical than ever,” FTC Chair Lina M. Khan said.

“Our proposed expansions to the final impersonation rule would do just that, strengthening the FTC’s toolkit to address AI-enabled scams impersonating individuals.”

The FTC is accepting public comments on whether the revised rule should make it unlawful for a firm, such as a generative AI platform that creates images, video or text, to provide goods or services that it knows or has reason to know are being used to harm consumers through impersonation.

The proposal will be open for public comment for 60 days following the date it is published in the Federal Register.

This comes as consumers lost a record $10 billion to fraud in 2023, PYMNTS reported last week.

That figure is 14% higher than the previous year, according to the report, which cited data from the FTC.

The agency noted that around 2.6 million consumers reported fraud in 2023, with 27% of these consumers reporting a loss.

Digital tools have been making it easier for scammers to find targets. 

“Digital tools are making it easier than ever to target hardworking Americans, and we see the effects of that in the data we’re releasing today,” Samuel Levine, director of the FTC’s Bureau of Consumer Protection, said. “The FTC is working hard to take action against those scams.”