Accounts Payable Payments as a Service Tracker® Series Report

Companies Enlist AI in Battle Against AI Fraud

August 2023

Artificial intelligence (AI) has been making headlines recently as a means of automating jobs, bolstering revenue and eliminating busywork. Fraudsters, however, are leveraging the same technology to make their schemes more effective, turning AI into an increasingly menacing threat.

01

Fraudsters are engaged in an unending arms race with those developing technologies to stop them, as both sides continually strive to outmaneuver one another. Legacy accounts payable (AP) systems have proven ineffective at countering this latest AI threat, and companies will need to upgrade their systems to mitigate the associated risks.

02

Synthetic identity fraud presents a serious threat, as malicious actors use various methods to create new identities for the purpose of scamming companies. AI tools have augmented this technique, allowing fraudsters to create phony companies and clients and thwart fraud prevention strategies.

03

Governments are working at a frustratingly slow pace to protect their economies from AI attacks, leaving organizations to fend for themselves against bad actors. More than 40% of businesses or their customers have already encountered AI fraud attacks, with deepfake software able to dupe voice authentication systems up to 99% of the time. AP departments will need to deploy their own AI software to beat fraudsters at their own game.


    Digital fraud has been a persistent threat since the advent of the internet, but its pervasiveness and impact have grown significantly over time. Data from the Federal Trade Commission shows that consumers in the United States reported almost $8.8 billion in fraud losses in 2022, a 30% increase from the prior year. Actual monetary losses, however, are estimated to be much greater.

    This dramatic rise in financial losses is attributed to several factors, including the growing prevalence of eCommerce and the heightened sophistication of malicious actors. One of the most troubling developments in recent years has been the use of AI to augment fraud techniques, posing a significant threat to consumers and businesses alike.

    Fraudsters Leverage AI to Create Fake Companies and Clients

    Synthetic identity fraud presents a serious threat, as malicious actors use various methods to create new identities for the purpose of scamming companies. AI tools have augmented this technique, allowing fraudsters to create phony companies and clients and thwart fraud prevention strategies.

    AI fraud complicates vendor onboarding.

    A recent study reveals that the growing prevalence of generative AI is encumbering the onboarding process. Because of this challenge, 61% of businesses find onboarding new vendors moderately or extremely difficult, and 52% report the same difficulty with new clients. Generative AI tools like ChatGPT allow bad actors to create and populate authentic-looking fake websites, significantly complicating businesses’ due diligence checks during onboarding. These new AI techniques are contributing to a growing share of the 5% of business revenues lost to fraud each year.

    61% of businesses say it is difficult to onboard new vendors due to generative AI.

    Deepfakes thwart voice recognition up to 99% of the time.

    Existing prevention measures are proving inadequate for fighting AI fraud, with a recent study finding that deepfakes have the potential to fool voice authentication systems with up to 99% reliability. A deepfake system needs just five minutes of recorded audio before it has enough data to replicate the subject’s voice and generate convincing, albeit fraudulent, content. Anyone with a video presence online is susceptible to this advanced threat. Experts note that companies relying entirely on voice authentication could find themselves critically compromised by deepfake technology and should consider additional authentication measures.

    Legacy AP Fraud Prevention Is Inadequate for Countering AI Fraud

    Fraudsters are engaged in an unending arms race with those developing technologies to stop them, as both sides continually strive to outmaneuver one another. Legacy accounts payable (AP) systems have proven ineffective at countering this latest AI threat, and companies will need to upgrade their systems to mitigate the associated risks.

    AI fraud can easily bypass existing fraud prevention measures, thanks to its sophistication and ability to deploy at scale.

    65% of organizations reported falling victim to payment fraud in 2022.

    Of the victims that experienced payment fraud, 71% were impacted by business email compromise, a method by which fraudsters pretend to be a company’s supplier and trick accounting teams into paying them instead of their actual vendors. AI technologies have enabled criminal enterprises to execute this scheme on a much larger scale, orchestrating hundreds of scams simultaneously, with the goal of deceiving just a fraction of their targets. Nearly half of fraud victims said they were unable to recover stolen funds.

    B2B payment fraudsters are experts at exploiting security loopholes.

    Legacy fraud prevention systems still largely rely on methods that can be easily duped by AI: For example, 70% of companies still confirm changes to their suppliers’ bank account information by phone. Fraudsters’ increasing adeptness at replicating voices could easily render this verification method useless, resulting in massive losses as criminals redirect funds.

    Organizations Find Themselves Alone in Fight Against Deepfakes

    Governments are working at a frustratingly slow pace to protect their economies from AI attacks, leaving organizations to fend for themselves against bad actors. More than 40% of businesses or their customers have already encountered AI fraud attacks, with deepfake software able to dupe voice authentication systems up to 99% of the time. AP departments will need to deploy their own AI software to beat fraudsters at their own game.

    A recent UN report details how the deepfake threat could spell danger worldwide.

    In June, United Nations Secretary-General António Guterres told press outlets that “alarm bells” concerning generative AI like ChatGPT are “deafening” and ring “loudest from the developers who designed it.” To respond to the crisis, the UN is drafting its Code of Conduct for Information Integrity on Digital Platforms. However, this initiative is not expected to be published until September 2024. Further, the UN has minimal legal authority over the regulations of individual nations, making it unlikely that this initiative will effectively counter AI fraud.

    40% of businesses have encountered AI fraud.

    U.K. regulator warns that firms must boost their protection against AI fraud.

    The chief executive of the United Kingdom’s Financial Conduct Authority (FCA), Nikhil Rathi, warns that AI fraud techniques such as identity fraud and cyberattacks will increase in scale and sophistication in the coming years, saying that senior managers of financial firms will ultimately be responsible for damages if these attacks are left unchecked. “As AI is further adopted, the investment in fraud prevention and operational and cyber resilience will have to accelerate simultaneously,” Rathi notes.

    Businesses Must Harden Their Defenses Against AI Fraud

    The surging popularity of AI-assisted fraud poses a massive threat to all businesses, as AI offers fraudsters the ability not only to augment existing types of fraud or develop new ones but also to deploy their tactics on an unprecedented scale. With the push of a button, a fraudster can target thousands of companies all at once, and they need only a handful of successful attacks to harvest large sums of money or terabytes of valuable data that they can then sell on the darknet.

    It is imperative for AP departments to harden their defenses against AI fraud. Certain techniques have achieved notable success against these attacks:

    • Virtual cards have built-in controls that prevent unauthorized transactions, prohibit overspending and improve cash management.
    • Multifactor and knowledge-based authentication can keep fraudsters from impersonating vendors; answering questions generated from public records remains difficult to fake, even with AI.
    • Organizations can deploy AI and machine learning of their own to analyze spending patterns and identify anomalies that could be the result of fraud, as illustrated in the sketch after this list.
    • Working with trusted payment solution partners that specialize in safeguarding payments can offload complex security burdens and processes from your existing team, freeing it to focus on other threat vectors in your business.
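
    To make the anomaly-detection idea above concrete, the following is a minimal, illustrative sketch rather than any vendor’s actual implementation. It assumes historical payments are available as simple (vendor, amount, day-of-month) records and uses an off-the-shelf isolation forest from scikit-learn to flag payments that deviate from established spending patterns; the field names, features and contamination rate are all assumptions made for demonstration.

        # Minimal sketch: flagging anomalous AP payments with an isolation forest.
        # Field names, features and the contamination rate are illustrative assumptions.
        from dataclasses import dataclass

        import numpy as np
        from sklearn.ensemble import IsolationForest

        @dataclass
        class Payment:
            vendor_id: str
            amount: float
            day_of_month: int  # day of the month the payment was issued

        def flag_suspicious(history: list[Payment], candidates: list[Payment]) -> list[Payment]:
            """Fit on historical payments, then return candidates that look anomalous."""
            vendors = sorted({p.vendor_id for p in history})
            index = {v: i for i, v in enumerate(vendors)}

            def features(p: Payment) -> list[float]:
                # Encode vendor, log-scaled amount and payment timing as a small feature vector.
                return [index.get(p.vendor_id, -1), float(np.log1p(p.amount)), p.day_of_month]

            model = IsolationForest(contamination=0.01, random_state=0)
            model.fit(np.array([features(p) for p in history]))

            labels = model.predict(np.array([features(p) for p in candidates]))  # -1 = anomaly
            return [p for p, label in zip(candidates, labels) if label == -1]

    In practice, a flagged payment would be held for manual review before funds are released, and a production system would typically draw on richer signals, such as invoice frequency, payee bank account changes and historical approval patterns.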

    Organizations looking to make a dent in AI fraud will need to balance strengthening security with facilitating a smooth experience for vendors, however. Money lost to fraud could quickly be surpassed by lost revenue if vendors take their business elsewhere due to tedious verification requirements.

    “The rise in digital payments demands a new era of security. With AI, GPT models and advanced technologies, we are proactive, not reactive, in safeguarding B2B AP payments.”

    Chris Wyatt
    Chief Strategy Officer, Finexio

    About

    Finexio is the leading AP Payments as a Service company focused on embedding end-to-end business payment capabilities for mid-market and enterprise organizations into AP software, Procure-to-Pay platforms and financial institutions. Finexio customers benefit from "done-for-you" payments operations services that support 100% of their business payments digitally. CFOs and finance teams seamlessly transition away from manual payment processes to modern, safe and secure electronic payments, realizing significant time savings, reduced payment costs, increased cash flow, fraud prevention and unmatched visibility into their payments data.

    PYMNTS INTELLIGENCE

    PYMNTS Intelligence is a leading global data and analytics platform that uses proprietary data and methods to provide actionable insights on what’s now and what’s next in payments, commerce and the digital economy. Its team of data scientists includes leading economists, econometricians, survey experts, financial analysts and marketing scientists with deep experience in the application of data to the issues that define the future of the digital transformation of the global economy. This multilingual team has conducted original data collection and analysis in more than three dozen global markets for some of the world’s leading publicly traded and privately held firms.


    We are interested in your feedback on this report. If you have questions or comments, or if you would like to subscribe to this report, please email us at feedback@pymnts.com.

    Disclaimer

    The Accounts Payable Payments as a Service Tracker® Series may be updated periodically. While reasonable efforts are made to keep the content accurate and up to date, PYMNTS MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED, REGARDING THE CORRECTNESS, ACCURACY, COMPLETENESS, ADEQUACY, OR RELIABILITY OF OR THE USE OF OR RESULTS THAT MAY BE GENERATED FROM THE USE OF THE INFORMATION OR THAT THE CONTENT WILL SATISFY YOUR REQUIREMENTS OR EXPECTATIONS. THE CONTENT IS PROVIDED “AS IS” AND ON AN “AS AVAILABLE” BASIS. YOU EXPRESSLY AGREE THAT YOUR USE OF THE CONTENT IS AT YOUR SOLE RISK. PYMNTS SHALL HAVE NO LIABILITY FOR ANY INTERRUPTIONS IN THE CONTENT THAT IS PROVIDED AND DISCLAIMS ALL WARRANTIES WITH REGARD TO THE CONTENT, INCLUDING THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, AND NONINFRINGEMENT AND TITLE. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OF CERTAIN WARRANTIES, AND, IN SUCH CASES, THE STATED EXCLUSIONS DO NOT APPLY. PYMNTS RESERVES THE RIGHT AND SHOULD NOT BE LIABLE SHOULD IT EXERCISE ITS RIGHT TO MODIFY, INTERRUPT, OR DISCONTINUE THE AVAILABILITY OF THE CONTENT OR ANY COMPONENT OF IT WITH OR WITHOUT NOTICE.
    PYMNTS SHALL NOT BE LIABLE FOR ANY DAMAGES WHATSOEVER, AND, IN PARTICULAR, SHALL NOT BE LIABLE FOR ANY SPECIAL, INDIRECT, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, OR DAMAGES FOR LOST PROFITS, LOSS OF REVENUE, OR LOSS OF USE, ARISING OUT OF OR RELATED TO THE CONTENT, WHETHER SUCH DAMAGES ARISE IN CONTRACT, NEGLIGENCE, TORT, UNDER STATUTE, IN EQUITY, AT LAW, OR OTHERWISE, EVEN IF PYMNTS HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
    SOME JURISDICTIONS DO NOT ALLOW FOR THE LIMITATION OR EXCLUSION OF LIABILITY FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES, AND IN SUCH CASES SOME OF THE ABOVE LIMITATIONS DO NOT APPLY. THE ABOVE DISCLAIMERS AND LIMITATIONS ARE PROVIDED BY PYMNTS AND ITS PARENTS, AFFILIATED AND RELATED COMPANIES, CONTRACTORS, AND SPONSORS, AND EACH OF ITS RESPECTIVE DIRECTORS, OFFICERS, MEMBERS, EMPLOYEES, AGENTS, CONTENT COMPONENT PROVIDERS, LICENSORS, AND ADVISERS.
    Components of the content original to and the compilation produced by PYMNTS is the property of PYMNTS and cannot be reproduced without its prior written permission.
    The Accounts Payable Payments as a Service Tracker® Series is a registered trademark of What’s Next Media & Analytics, LLC (“PYMNTS”).