
How Year 1 of AI Impacted the Financial Fraud Landscape 

The emergence this year of generative artificial intelligence (AI) reshaped the business landscape.  

For the most part, it was a good thing. A great thing, even. 

The world had never seen a technology like AI, and rapid advances in its capabilities over just the past 12 months have helped organizations capture new efficiencies and cut legacy costs while opening up new avenues for growth. 

But in one area, AI on the whole hurt, rather than helped, the global business landscape: the shot in the arm it gave to cybercriminals and bad actors looking to scam consumers and attack businesses.

That’s because the ability to generate human-like text, virtually clone loved ones’ voices and faces, and scale behavior-driven attacks increasingly democratized access to cybercrimes that previously only sophisticated bad actors could hope to attempt.

At the enterprise level, generative AI’s ability to write code for malware, among its other uses, drove a rise in automated phishing attacks; credential stuffing using large sets of username and password combinations obtained from previous data breaches; AI-powered business email compromise (BEC) and automated account takeover (ATO) attacks; and even the use of AI to evade legacy fraud detection systems by learning the patterns of legitimate transactions and mimicking them.

That, however, was 2023. And in 2024, the financial fraud landscape is shaping up so that it will be the enterprises that win — and the fraudsters that lose. 

See also: Combining Old and Newer Technologies Helps Banks Fight Rising Fraud

How AI Will Reduce Fraud

According to a PYMNTS Intelligence study in collaboration with Hawk AI, nearly 43% of financial institutions (FIs) in the U.S. experienced an increase in fraud this year relative to 2022, with fraud losses rising about 65%, from $2.3 million in 2022 to $3.8 million in 2023.

“Technological advances are often slow and complex, but the new types of fraud that come with those technological advances can be the opposite of that — fast and simple,” Elly Aiala, chief compliance officer at Boost Payment Solutions, told PYMNTS.

“Fraud is growing and the recipes are getting slicker,” Gerhard Oosthuizen, CTO at Entersekt, told PYMNTS. “At this stage, the technology has led to more challenges in the fraud space than potential wins.”

To address this rising tide of fraud and sophisticated financial crime, FIs have had to elevate their existing systems and embrace advanced technologies, with 71% of FIs using both AI and machine learning (ML) to boost their fraud-fighting capabilities. 
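
By way of illustration, here is a minimal sketch of what ML-assisted transaction screening can look like, using scikit-learn’s IsolationForest to flag transactions that deviate from learned patterns. The features, toy data and review threshold are assumptions for the example, not details from the PYMNTS study or any FI’s actual system.

```python
# Minimal illustration of ML-based fraud screening: an unsupervised model
# learns what "normal" transactions look like and flags outliers for review.
# Feature choices and the review threshold here are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy historical transactions: [amount_usd, hour_of_day, days_since_last_txn]
history = np.array([
    [42.10, 13, 1],
    [18.75, 9, 2],
    [55.00, 19, 1],
    [23.40, 11, 3],
    [61.20, 20, 1],
])

model = IsolationForest(contamination=0.05, random_state=0).fit(history)

# Score a new transaction; lower (negative) scores are more anomalous.
new_txn = np.array([[4950.00, 3, 0]])  # large amount at an unusual hour
score = model.decision_function(new_txn)[0]

if score < 0:
    print(f"Flag for review (anomaly score {score:.3f})")
else:
    print(f"Consistent with history (score {score:.3f})")
```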

As Tobias Schweiger, CEO and co-founder of Hawk AI, told PYMNTS, “the application of [AI] isn’t just reserved for the good guys … and bad actors are accelerating what I would call an arms race, using all of those technologies. As a financial institution, one has to be aware of that accelerated trend and make sure your organization has enough technology on the good side of the equation to fight back.”

Read also: Unmasking Digital Imposters Is Rising Priority for Industrial Economy

As the use of AI helps fraudsters find and probe more attack vectors than were previously possible, businesses need to enact multitiered cyberdefense strategies that can not only detect and prevent fraud, but that also help authenticate “good” identities in real time with minimal friction. 

The challenge, of course, is that as the world becomes increasingly digitized, bad actors are equally able to exploit the growing volume of available data to create believable synthetic personas.

“Now there’s been a democratization of fraud, where anyone can buy the tools and the tutorials they need to carry out successful attacks,” Michael Jabbara, vice president and global head of fraud services at Visa, told PYMNTS. 

That’s why account validation, identity verification, and multifactor authentication, including device-level and biometric protocols, are all going to be key for financial services businesses and other payments sector players as they look to go on the offensive against fraud in 2024.
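
As a rough sketch of how such layered checks might be combined before approving a login or payment, consider the following; every field, function and threshold here is a hypothetical placeholder rather than any particular provider’s API.

```python
# Illustrative sketch of layering authentication signals before approving a
# transaction. Each field stands in for a real account-validation, identity,
# device or biometric check; the decision thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class AuthSignals:
    account_valid: bool      # account validation / ownership check
    identity_verified: bool  # document or database identity verification
    device_known: bool       # device-level fingerprint matches prior sessions
    biometric_passed: bool   # fingerprint/face match on the user's device

def risk_decision(signals: AuthSignals) -> str:
    """Combine independent signals; no single factor is trusted on its own."""
    passed = sum([
        signals.account_valid,
        signals.identity_verified,
        signals.device_known,
        signals.biometric_passed,
    ])
    if passed == 4:
        return "approve"   # all layers agree: low-friction path
    if passed >= 2:
        return "step_up"   # partial match: request another factor
    return "decline"       # too few signals: block and review

# Example: known identity and biometrics, but an unrecognized device.
print(risk_decision(AuthSignals(True, True, False, True)))  # -> step_up
```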

“Trust, in the digital age,” Doriel Abrahams, head of risk in the U.S. at Forter, told PYMNTS, “is a lot different than it used to be.”

As PYMNTS has previously noted, there is a “greenfield opportunity for providers and platforms to help automate the verification of counterparties’ identities, payment details and accounts.”

Upgrades Can Happen in Stages

The use of older payment rails and lock-step anti-fraud methodologies like keyword and sentiment analysis doesn’t just create more friction and even declines for end users; it also paints organizations as an easy target and opens them up to more fraud.

Jeff Gipson, director of payment product management at Discover® Global Network, told PYMNTS in a recent interview that the intensity of fraudsters’ attacks on all parts of the payments ecosystem is ramping up. “AI is at the forefront of their efforts,” he said, adding that “techniques that were previously limited to just the most sophisticated hackers are now more ubiquitous and are more easily deployed at scale.”

“Any firm — whether it’s an FI, payment firm or other company — that chooses not to invest significantly in security … as it evolves more and more, are doing so at their own peril,” Dean M. Leavitt, founder and CEO at Boost Payment Solutions, told PYMNTS.

“Newer technologies are enabling fraudsters to have a lot more scale and automation when committing fraud, making smart defense critical,” Erika Dietrich, VP, Global Fraud Prevention Risk Services at ACI Worldwide, told PYMNTS. “What has stayed the same is the need for businesses to provide their customers with the payment methods they prefer, while ensuring those channels stay secure.”

By embracing AI as a tool for good, organizations can stay safe and provide a best-in-class experience that drives greater value for their customers in 2024.
