Zest AI Launches Fraud Detection Solution

Zest AI

Zest AI unveiled a tool to identify fraudulent activity during the loan decisioning process.

Zest Protect is designed to use artificial intelligence to respond to a 69% increase in fraud cases at community banks and credit unions in 2023, as reported by the Federal Trade Commission, according to a Wednesday (Aug. 7) press release.

“Lenders need to outsmart fraud, including an increasing volume of AI-driven fraud in the industry, with AI,” said Adam Kleinman, head of strategy and client success at Zest AI, in the release. “Our team designed Zest Protect to create an efficient tool that can more accurately detect all types of fraud now and in the future, including AI-created fraud, with the ultimate goal of boosting lending confidence for our bank and credit union customers.”

Zest Protect employs machine learning technology to instantly detect first-party and third-party fraud, while also flagging income inconsistencies within the automated loan decisioning process, per the release.

The tool lets lenders adjust “specific detection thresholds based on their risk tolerances and automation objectives,” the release said. “With access to fraud prevention data and analytics, Zest AI can flag applications swiftly and protect against emerging threats.”

AI is becoming the tool of choice for financial institutions that want to prevent illicit activity such as money laundering or bank fraud.

The PYMNTS Intelligence report “Financial Institutions Revamping Technologies to Fight Financial Crimes” found an uptick in financial crime, with more than 40% of financial institutions surveyed saying incidents of fraud are increasing, and 7 in 10 saying they are now using AI and machine learning to fend off fraudsters.

“Modern payments fraud demands real-time learning and adaptation at scale,” PYMNTS wrote in June. “Generative AI offers the unprecedented advantage of continuous learning. It rapidly refines and adapts its understanding of patterns to distinguish between legitimate and fraudulent payments more accurately.”

In addition, generative AI can create synthetic datasets that mimic real-world financial data, allowing for robust model training without sacrificing privacy or compliance.

However, developing AI and ML tools can be costly, which could explain why just 14% of financial institutions said they build in-house fraud-fighting AI and ML technologies. Almost 30% said they rely entirely on third-party vendors to deliver these tools.
