The 21st century’s digitization has transformed the financial crime landscape, arming cybercriminals with new tactics.
Fortunately, as the world evolves, organizations are increasingly able to fight back by adopting a host of modern tools designed to support regulatory compliance with know-your-customer (KYC) and anti-money laundering (AML) requirements, among other critical controls.
But in today’s fast-evolving environment, failing to prepare often means preparing to fail.
That’s because moving slowly to adopt controls suited to today’s landscape can have disastrous consequences.
Regulators’ concerns around TD Bank’s AML practices reportedly scuttled its $13.4 billion plan to acquire First Horizon Bank on Monday (May 8), underscoring the importance of a strong and auditable compliance program.
An entire ecosystem of artificial intelligence (AI) and machine learning (ML) tools is already enhancing risk-based AML programs and securing organizations’ transactions against fraudulent behavior and bad actors.
Read more: Treasury Department Says DeFi Services Must Improve AML/CFT Controls
Data in the 2023 PYMNTS playbook, “Digital Payments Technology: Investing in Payments Systems for the Digital Economy,” finds that organizations that continue to rely upon manual and reactive anti-fraud tools experience slower growth than those using proactive and automated solutions.
Research in PYMNTS’ 2023 “B2B Payments Fraud Tracker” similarly found that 71% of businesses report needing additional digital fraud solutions.
“It’s time for FinTechs to kind of grow up,” Diameter Pay CEO and Co-Founder David Lighton said in a conversation this year with PYMNTS. “In [an] environment with higher regulatory scrutiny, we need to get really serious about compliance if we want to make it.”
New applications of predictive AI are already helping firms enact cost-effective and increasingly automated decisioning processes around their compliance programs. These modern solutions sacrifice neither security nor convenience, while offering auditable processes that stand up to regulatory scrutiny.
Andrew Gleiser, chief revenue officer at payments provider Aeropay, told PYMNTS this week that the use of AI has been “really big” for fraud prevention.
Cutting-edge AI can screen for patterns, connections and statistical anomalies in transaction activity that conventional, manual or human-led monitoring might miss, and it can strengthen AML processes with highly targeted customer risk categorization during onboarding.
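To make that idea concrete, the snippet below is a minimal, hypothetical sketch of statistical anomaly screening on transaction activity, using an off-the-shelf isolation forest from scikit-learn. The synthetic data, feature choices (amount and 24-hour velocity) and thresholds are illustrative assumptions only, not a description of any vendor’s production monitoring system.

```python
# Minimal sketch: flagging statistical anomalies in transaction activity.
# Synthetic data and feature choices are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" history: [amount in dollars, transactions in past 24h]
normal = np.column_stack([
    rng.lognormal(mean=4.0, sigma=0.5, size=1000),  # typical amounts
    rng.poisson(lam=3, size=1000),                  # typical daily frequency
])

# A few suspicious-looking rows: very large amounts or unusually high velocity
suspicious = np.array([[9500.0, 40], [12000.0, 55], [300.0, 80]])

# Fit the anomaly detector on historical "normal" behavior
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score a mix of ordinary and suspicious new activity
new_activity = np.vstack([normal[:5], suspicious])
flags = model.predict(new_activity)            # -1 = anomaly, 1 = normal
scores = model.decision_function(new_activity)

for row, flag, score in zip(new_activity, flags, scores):
    label = "REVIEW" if flag == -1 else "ok"
    print(f"amount=${row[0]:>9.2f} velocity={int(row[1]):>3} "
          f"score={score:+.3f} -> {label}")
```

In a real program, rows flagged for review would feed a case-management queue for analysts rather than trigger automatic action, keeping the process auditable.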
Underpinning the efficacy of these new solutions is a simple fact: data-hungry AI tools are only as good as the information they act on.
Therefore, the ability to gain crucial compliance benefits from AI implementations depends in many ways on an organization’s own data preparedness and enterprise technical capabilities.
See also: Generative AI Gives Scammers More Tools and Greater Reach
As banking goes increasingly digital, and nonbanks provide more, and more varied, financial services, organizations need to be increasingly vigilant about the companies they trust to handle significant payment volumes.
Highlighting the potentially dangerous nature of today’s landscape, research in PYMNTS’ “2023 Money Mobility Index” shows that, among other money mobility frictions, three-quarters (75%) of FinTech issuers lack the risk management systems to deliver what they promise.
Sheetal Parikh, associate general counsel and vice president of compliance at embedded banking software platform Treasury Prime, told PYMNTS that as firms continue to assess bank-alternative products, it remains hugely important for banks and other regulated financial institutions to keep “adequate oversight” of the many other players whose prevalence has exploded alongside the growth of innovative digital channels.
“You have companies that aren’t regulated in the same way that the banks are regulated, and it will require the tools to evolve and become more innovative to make sure they are addressing the types of financial crimes and fraud that are native to this new space,” Parikh said.
Still, future-fit capabilities are a two-way street, and fraudsters have already made short work of adapting modern technology, particularly AI, to their own needs.
“People are already using ChatGPT and generative AI to write phishing emails, to create fake personas and synthetic IDs,” Gerhard Oosthuizen, chief technology officer of Entersekt, told PYMNTS.
The speed with which bad actors can integrate technical advances to exploit vulnerabilities only highlights the need for firms to boost their own defenses and risk frameworks.
That’s because, as PYMNTS wrote earlier, if fraudsters encounter an obstacle at a business that has been raising its defenses, they may simply move along to a new victim.