Behavioral fraud and social engineering scams are getting more dangerous for businesses.
Despite the increasingly sophisticated weapons fraudsters have on hand to execute their scams, social engineering tactics remain one of their most effective ploys.
This, as a study published Thursday (Feb. 29) by researchers at the University of Texas at Austin found that at least $75 billion in cryptocurrency alone — far more than had been previously estimated — was stolen between 2020 and 2024 due mostly to a particular kind of behavioral fraud known as “pig butchering.”
Pig butchering scams are named after the practice of farmers fattening up their livestock before slaughtering them. The victim is the pig, while the bad actors are the butchers.
The scams typically work by establishing connections with victims via wrong-number text messages, with the ensuing conversations eventually leading to promises of fake investments into which the victim pours more and more money before discovering the ruse. While the approach may seem far-fetched, even transparent in hindsight, victims of these scams routinely lose hundreds of thousands or even millions of dollars.
The bad actors themselves are becoming savvier in choosing their targets, increasingly going after executives and victims with deeper pockets, including C-suite representatives from financial institutions.
News also broke this February that the July 2023 failure of a regional bank in Kansas was the result of its CEO falling for a $50 million pig butchering scam.
Read more: Criminals Target Big Ticket Transactions in Commercial Banking Fraud Surge
While the Kansas bank’s depositors weren’t harmed by its closure, the failure underscores the growing need for financial enterprises to equip themselves, their executives and their employees with the education and tools needed to protect against the latest financial fraud techniques.
Kate Frankish, chief business development officer and anti-fraud lead at Pay.UK, highlighted the challenge financial institutions face in detecting these scams in an interview with PYMNTS last year: “From a bank’s perspective, it looks like a real payment, because the customer authorized it using all of their credentials. But it’s either going to an account that doesn’t belong to them or it’s going to a real person who has been scammed out of the money.”
After all, bad actors are notoriously good at finding and exploiting vulnerabilities — and the rise of new technologies like artificial intelligence (AI) has given their tactics a shot in the arm, particularly when it comes to scaling behavioral attacks like pig butchering scams or business email compromise (BEC) attacks.
The FBI estimates U.S. businesses lost $2.4 billion to BEC schemes, in which fraudsters pretend to be a supplier or other authorized party and trick employees into diverting funds to them.
And PYMNTS Intelligence data, gathered in collaboration with Hawk AI, found that about a third of Big Tech and FinTech firms have experienced fraud in recent months, while about 43% of FIs in the U.S. experienced an increase in fraud this year relative to 2022, with fraud losses rising by about 65%, from $2.3 million in 2022 to $3.8 million in 2023.
See also: Attack Vectors 2024: Identity Theft and Digital Banking
As Tobias Schweiger, CEO and co-founder of Hawk AI, said in an interview with PYMNTS, “the application of [AI] isn’t just reserved for the good guys … and bad actors are accelerating what I would call an arms race, using all of those technologies. As a financial institution, one has to be aware of that accelerated trend and make sure your organization has enough technology on the good side of the equation to fight back.”
Criminals armed with generative AI tools can easily create realistic videos, fake IDs, false identities and convincing deepfakes of company executives — all dangerous capabilities when applied to social engineering scams at scale.
Karen Postma, managing vice president of risk analytics and fraud services at PSCU, told PYMNTS last October that fraudsters utilizing GenAI “can effectively mimic a voice within three seconds of having recorded data,” indicating that they are “utilizing AI to not just commit attacks, but to become very good at committing these attacks.”
As many of the risk management leaders PYMNTS has spoken to have emphasized, the first line of defense for today’s businesses is increasingly their own employees, making individual education around next-generation attack tactics, and the best practices to combat them, more important than ever.