If the pace of innovation today were an athlete, it would be Usain Bolt.
That’s how fast the world is changing as new technologies are successfully introduced.
But for every positive innovation taking place around the world today, there is an equal and opposite reaction, as bad actors and cybercriminals add those same future-fit tools, like artificial intelligence (AI), to their own toolkits.
“The application of technology isn’t just reserved for the good guys … and bad actors are accelerating what I would call an arms race, using all of those technologies,” Tobias Schweiger, CEO and co-founder of Hawk AI, told PYMNTS CEO Karen Webster.
“As a financial institution, one has to be aware of that accelerated trend and make sure your organization has enough technology on the good side of the equation to fight back,” Schweiger added.
New PYMNTS Intelligence research found that more than 40% of U.S. banks reported an increase in fraud year over year, almost double what they reported in 2022. The higher number of fraudulent transactions translates into a higher value of fraud losses and underscores that bad actors are getting better at bypassing legacy lines of fraud defense.
For at least the past two decades, fraud systems have typically been built on simple rules engines, which no longer do the trick, Schweiger said.
“Upgrading solutions and systems to also include machine learning (ML) and AI is really the only way [forward],” he explained.
To lock down emerging attack vectors most effectively, organizations can no longer content themselves with painting in broad defensive strokes. As bad actors get cleverer, fraud programs need to respond in kind by becoming more precise at detecting suspicious behavior.
“I don’t think there is any winner at the end,” Schweiger said. “Firms just need to be as quick as the criminals … the solution is to continuously learn by retuning models, backtesting on things that happened in the past, and calibrating defenses to protect and avoid organization-specific problem behaviors more precisely.”
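To make that continuous-learning loop concrete, here is a minimal sketch of backtesting a fraud model against past, labeled transactions and retuning its alert threshold. The file name, column names and model choice are hypothetical illustrations, not a description of Hawk AI's actual system.

```python
# Illustrative sketch: backtest a fraud model on historical, labeled
# transactions, then sweep alert thresholds to recalibrate defenses.
# The CSV path and column names are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import precision_score, recall_score

df = pd.read_csv("historical_transactions.csv").sort_values("timestamp")
features = ["amount", "merchant_risk", "velocity_1h", "is_new_device"]

# Train on the older 80% of history, replay the newest 20% as "live" traffic.
split = int(len(df) * 0.8)
train, test = df.iloc[:split], df.iloc[split:]

model = GradientBoostingClassifier()
model.fit(train[features], train["is_fraud"])

scores = model.predict_proba(test[features])[:, 1]
for threshold in (0.5, 0.7, 0.9):
    alerts = scores >= threshold
    print(
        f"threshold={threshold:.1f} "
        f"precision={precision_score(test['is_fraud'], alerts, zero_division=0):.2f} "
        f"recall={recall_score(test['is_fraud'], alerts):.2f}"
    )
```

Retuning then becomes a matter of picking the threshold (or retraining cadence) that keeps false positives and missed fraud in an acceptable balance for that specific organization.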
Complicating matters, PYMNTS Intelligence found that at least one-third of financial institutions cited the sophistication of fraudsters as a key challenge in designing their strategies and choosing the tools to fight back.
That’s why, as Schweiger explained, financial institutions need to focus on blocking and tackling and start small, rather than trying to boil the ocean.
“The old-school rules-based processes don’t need to be replaced entirely,” he said. “Rules have their right to be there because they are what organizations know [and are comfortable with] … Just going AI only is sometimes a bit of a too big step for most organizations. We believe much more in a combination of the old world and the somewhat newer world of machine learning and AI, where the rules reflect the patterns that people want to stop or attend to and a second layer of ML and AI on top provides precision and agility.”
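A rough sketch of that layered design might look like the following, with transparent rules as the first pass and an ML score refining the outcome. The rule thresholds, feature names and risk-score field are assumptions for illustration only.

```python
# Illustrative sketch of a rules-plus-ML fraud check: familiar rules flag
# known patterns, and a second ML/AI layer adds precision on top.
# All thresholds and field names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    country: str
    new_payee: bool
    ml_risk_score: float  # assumed output of a separately trained model

def rule_hits(tx: Transaction) -> list[str]:
    """Old-world rules: patterns the organization already knows and trusts."""
    hits = []
    if tx.amount > 10_000:
        hits.append("large_amount")
    if tx.new_payee and tx.country not in {"US", "CA"}:
        hits.append("new_payee_abroad")
    return hits

def decide(tx: Transaction) -> str:
    """Second layer: the ML score refines the rules' output, not replaces it."""
    if not rule_hits(tx):
        return "approve"
    # A high ML score escalates the alert; a low score quiets a noisy rule hit.
    return "review" if tx.ml_risk_score >= 0.6 else "approve_with_log"

print(decide(Transaction(12_500, "DE", new_payee=True, ml_risk_score=0.82)))  # -> review
```

The design choice mirrors Schweiger's point: the rules stay legible to compliance teams, while the model layer trims the false positives that rules alone tend to generate.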
Fraud occurs in real time, and new generative AI capabilities are giving bad actors more avenues to exploit vulnerabilities across a growing spectrum of fraudulent behavior, including transaction fraud, account opening fraud, account takeovers and other behaviorally driven scams.
“Credit card fraud is still a cash cow,” said Schweiger. “ACH payments are also getting a lot of attention. Using generative AI for impersonations, including deepfake voices and even faces, is something also on the rise. Those techniques represent the bigger, future-focused bet bad actors are making.”
As fraud techniques grow more sophisticated, they also create friction: legitimate requests from organizations get ignored by consumers and other end users who are unsure which messages are genuine and which are coming from scammers.
“The application of AI is very relevant in this context because it’s about identifying a deviation from normal behavior,” explained Schweiger. “The strategy is one that always starts with the data, things like transactional data, geolocation information, device fingerprints and more, hopefully in the future, including third-party fraud pool information.”
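In practice, "identifying a deviation from normal behavior" is often framed as unsupervised anomaly detection over exactly the kinds of signals Schweiger lists. The sketch below assumes made-up feature values (transaction amount, distance from the customer's usual location, device familiarity) and is only one generic way to model the idea.

```python
# Illustrative sketch: fit an unsupervised anomaly detector on a customer's
# "normal" activity, then flag new activity that deviates from that profile.
# Feature columns and sample values are invented for the example.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [amount, km_from_usual_location, is_known_device (1/0)]
normal_history = np.array([
    [42.0, 3.0, 1],
    [18.5, 1.2, 1],
    [67.0, 5.5, 1],
    [23.0, 0.8, 1],
    [55.0, 4.1, 1],
])

detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(normal_history)

new_activity = np.array([
    [49.0, 2.5, 1],       # resembles the customer's usual behavior
    [980.0, 720.0, 0],    # large amount, far from home, unknown device
])

# -1 marks an outlier relative to the learned "normal" profile.
print(detector.predict(new_activity))  # e.g. [ 1 -1]
```

Richer inputs, such as device fingerprints or shared fraud-pool data, would simply become additional features feeding the same kind of model.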
The ability to ingest different and even unconventional types of data is going to be “very important” going forward, he said.
Consumers today typically aren’t bothered by the authentication or signup frictions they encounter when accessing their banking platforms and transacting. Still, AI integrations, along with the ability of AI models to continually learn from new data and contexts, can help ensure that the balance between security and convenience doesn’t swing too far in either direction.
Beyond fraud, Schweiger said other types of financial crime are also top of mind for financial institutions.
“A tool like Hawk AI that’s capable of combining things like sanction screening and anti-money laundering (AML) transaction monitoring with fraud protection while enriching the signals of one another and vice-versa is the answer for how banks can win the arms race and how they can do it much more efficiently,” he said.