Can one stolen ID, augmented with artificial intelligence, yield $2 million in pilfered government benefits?
It’s a scenario Haywood Talcove, chief executive of LexisNexis Risk Solutions’ government division, laid out for the Financial Times (FT) Wednesday (June 28) in a report on how artificial intelligence (AI) can exacerbate the problem of identity theft.
“I am not a criminal, but I’ve been studying this for a long time — if I had this much information, and it was so pristine, the sky is the limit,” said Talcove.
He noted that the type of information stolen recently by hacking group Clop — photos, names, birthdays and home addresses — could be used to manufacture fake video selfies that many state agencies in the U.S. use for identity verification.
From there, Talcove said, criminals could claim unemployment benefits, apply for college loans and file for food stamps. He estimated that each stolen ID could help scammers steal up to $2 million just in government benefit programs.
“As AI advances, more tools become available to fraudsters … the use of synthetic fraud is rising at an alarming rate,” said Pavel Goldman-Kalaydin, head of AI at identity verification company Sumsub, whose firm is having to find new ways to prevent these high-tech fakes.
The threat posed by deepfakes has led a number of states to pass laws regulating these audio/visual forgeries, with at least four others considering similar bills.
Last month, Microsoft President Brad Smith said deepfakes are his biggest AI-related concern, and called for measures to “protect against the alteration of legitimate content with an intent to deceive or defraud people through the use of AI,” along with licensing of the most critical forms of AI to safeguard physical, cyber and national security.
“We’re going to have to address in particular what we worry about most: foreign cyber influence operations, the kinds of activities that are already taking place by the Russian government, the Chinese, the Iranians,” Smith said.
Meanwhile, AI also has the potential to help prevent and detect fraud in the financial services space, Jeremiah Lotz, managing vice president, digital and data at PSCU, told PYMNTS earlier this week.
And although using AI-powered tools to boost fraud defenses isn’t necessarily a new approach, Lotz said that today’s generative AI solutions can “take things to the next level by looking at deeper, more personalized experiences.”
That means employing AI to support ID verification and transaction authorizations, as well as to better analyze behavioral patterns to spot suspicious behavior.
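The behavioral-pattern analysis described above can be illustrated with a toy example. The sketch below is purely hypothetical and not PSCU's method: it flags a transaction whose amount deviates sharply from a user's historical spending, using a simple z-score test. Real systems use far richer features and models; the threshold and data here are invented for illustration.

```python
# Toy behavioral-anomaly check: flag a transaction amount that is a
# statistical outlier relative to a user's past transactions.
# All values and the z-score threshold are hypothetical.
from statistics import mean, stdev

def flag_suspicious(history, new_amount, z_threshold=3.0):
    """Return True if new_amount deviates from the historical mean
    by more than z_threshold standard deviations."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        # No historical variation: flag anything that differs at all.
        return new_amount != mu
    z = abs(new_amount - mu) / sigma
    return z > z_threshold

past_amounts = [42.0, 55.5, 38.2, 60.0, 47.3]  # prior transaction amounts
print(flag_suspicious(past_amounts, 50.0))      # typical amount -> False
print(flag_suspicious(past_amounts, 2500.0))    # extreme outlier -> True
```

Production fraud models would combine many such signals (device, location, merchant category, timing) and learn thresholds from labeled data rather than fixing them by hand.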