Meta: AI-Generated Political Ads Must Be Labeled As Such


Meta is imposing new controls on AI-generated political ads ahead of the 2024 U.S. presidential election.

Beginning next year, advertisers around the world who run ads about politics, social issues or elections will have to disclose whether that material has been digitally created or altered, including through artificial intelligence (AI), Meta announced Wednesday (Nov. 8).

According to the company blog, advertisers will need to disclose when “a social issue, electoral, or political ad contains a photorealistic image or video, or realistic sounding audio” that was digitally created or altered to depict “a real person as saying or doing something they did not say or do.”

The same rule applies to ads that depict “a realistic-looking person that does not exist or a realistic-looking event that did not happen,” that “alter footage of a real event that happened,” or that show “a realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event.”

Meta said advertisers do not need to disclose digitally created or altered content in cases where it is “inconsequential or immaterial to the claim, assertion, or issue raised in the ad.”

For example, ads where someone has digitally cropped an image or used color correction wouldn’t require disclosure, “unless such changes are consequential or material to the claim, assertion, or issue raised in the ad,” the company said.

AI’s potential to spread misinformation is among the most frequently cited risks of the technology.

Meta’s new policy follows a call last month by U.S. Rep. Yvette Clarke and U.S. Sen. Amy Klobuchar for social media platforms to address AI-generated deepfakes on their platforms, especially as they relate to political ads.

They aren’t alone. As PYMNTS wrote last month, both actor Tom Hanks and YouTube star MrBeast recently took to social media to blast the illicit use of their synthetic likenesses in deepfake scam ads.

So far, at least one other tech company has said it plans to regulate AI-generated election ads. Google announced in September that, starting this month, it will require election advertisers to reveal when their ads have been created or manipulated using AI.

“Given the growing prevalence of tools that produce synthetic content, we’re expanding our policies a step further to require advertisers to disclose when their election ads include material that’s been digitally altered or generated,” Google said in a statement to PYMNTS. 
