Generative artificial intelligence (AI) can digitally create nearly anything.
And that is becoming a problem.
MrBeast, one of the world’s most popular YouTubers, and Tom Hanks, America’s dad, are separately speaking out against the illicit use of their synthetic likenesses in deepfake scam ads spreading across TikTok, X (formerly known as Twitter) and other social sites.
“Are social media platforms ready to handle the rise of AI deepfakes?” MrBeast wrote Monday (Oct. 2) to his more than 24 million followers on X. “This is a serious problem.”
Hanks posted to his 9.5 million Instagram followers this week: “Beware!! There’s a video out there promoting some dental plan with an AI version of me. I have nothing to do with it.”
Meanwhile, the AI-generated likenesses of two BBC presenters, Matthew Amroliwala and Sally Bundock, were also used to promote a known scam.
For scammers and other bad actors, the ability to spin up in seconds a believable, if entirely false, likeness of a well-known and trusted celebrity and use it to manipulate victims across social media at scale must seem almost too good to be true.
After all, with little to no regulation of the technology, companies are left to their own devices when it comes to policing and flagging inappropriate uses.
That’s why U.S. Rep. Yvette Clarke and U.S. Sen. Amy Klobuchar sent Meta CEO Mark Zuckerberg and X CEO Linda Yaccarino a letter Thursday (Oct. 5) expressing “serious concerns” about AI-generated deepfakes on their platforms, particularly in political ads, according to a Thursday report by the Associated Press.
The executives have until Oct. 27 to respond.
Read also: Generative AI Fabrications Are Already Spreading Misinformation
Generative AI programs like OpenAI’s ChatGPT have, in effect, democratized access to sophisticated phishing and other behaviorally driven fraud techniques, making them not only more effective and convincing but also easier to conduct at scale.
“Utilizing generative AI, a fraudster can effectively mimic a voice within three seconds of having recorded data,” Karen Postma, managing vice president of risk analytics and fraud services at PSCU, told PYMNTS in an interview posted Wednesday (Oct. 4).
“Everyone has an equal ability to deploy technology, no matter who they are,” she added.
PYMNTS Intelligence found that, as a result of new AI-driven techniques, phishing attacks alone have increased 150% year over year since 2019.
While misinformation and the fraudulent scams that rely on it are far older than the internet, individuals could once stay reasonably safe simply by exercising common sense.
The ability of generative AI to craft lifelike and believable synthetic content in real time is rapidly changing that paradigm.
Complicating matters, PYMNTS Intelligence found that no truly foolproof method yet exists to detect and expose AI-generated content.
Microsoft Vice Chairman and President Brad Smith called deepfakes the greatest AI-related threat.
Google is so far the only Big Tech company to announce a policy requiring advertisers to disclose when ads for the upcoming U.S. election have been manipulated or created using AI.
See also: Is It Real or Is It AI?
And it isn’t just globally renowned celebrities, political aspirants and their respective audiences who need to worry about being scammed or duped.
Deepfakes can also target enterprises and financial institutions by impersonating C-suite leaders and other high-level staff, wreaking havoc if left unchecked.
“Misinformation and disinformation can be a company killer,” Wasim Khaled, CEO and co-founder of intelligence platform Blackbird.AI, told PYMNTS in June. “Threat intelligence solutions and cybersecurity measures need to account for the growing impact of a new generation of audience manipulation capabilities.”
The rise in deepfake imposter scams has been singled out by experts in financial crime as one of the most significant threats to the banking and financial services sectors.
“As a financial institution, one has to be aware of that accelerated trend and make sure your organization has enough technology on the good side of the equation to fight back,” Tobias Schweiger, CEO and co-founder of Hawk AI, told PYMNTS in September.
“The application of technology isn’t just reserved for the good guys … and bad actors are accelerating what I would call an arms race, using all of those technologies,” he added.