The Treasury Department is sounding the alarm on cybersecurity risks posed by the growing use of artificial intelligence (AI) within the financial services sector.
A new report released Wednesday (March 27) highlights potential dangers and calls for urgent collaboration between government and industry to safeguard financial stability. The report, mandated by a Biden administration executive order, focuses on a widening capability gap: large banks and financial firms have the resources to develop custom AI systems, while smaller institutions are increasingly left behind. That gap leaves smaller institutions more exposed, since many must rely on third-party AI solutions.
“Artificial intelligence is redefining cybersecurity and fraud in the financial services sector, and the Biden administration is committed to working with financial institutions to utilize emerging technologies while safeguarding against threats to operational resiliency and financial stability,” said Treasury Under Secretary Nellie Liang in a news release.
“Treasury’s AI report builds on our successful public-private partnership for secure cloud adoption and lays out a clear vision for how financial institutions can safely map out their business lines and disrupt rapidly evolving AI-driven fraud.”
The Treasury study reveals a troubling lack of data sharing on fraud prevention, further disadvantaging smaller financial institutions. Limited data hinders their ability to develop effective AI fraud defenses, while larger institutions leverage massive data troves for model training. The report, produced by Treasury’s Office of Cybersecurity and Critical Infrastructure Protection, is based on interviews with over 40 companies in the financial and technology sectors.
Narayana Pappu, CEO of Zendata, a San Francisco-based provider of data security and privacy compliance solutions, told PYMNTS that the biggest barrier for smaller financial institutions in using AI for fraud detection is not building models but obtaining high-quality, standardized fraud data. He said financial institutions could act as nodes that aggregate that data.
“Data standardization and quality assessment would be a ripe opportunity for a startup to offer as a service,” he added. “Techniques, such as differential privacy, can be used to facilitate information [sharing] between financial institutions without exposing individual customer data, which might be a concern preventing smaller financial institutions from sharing information with other financial institutions.”
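To make that idea concrete, here is a minimal sketch of how differential privacy might be applied in this context: an institution adds calibrated Laplace noise to aggregate fraud counts before sharing them, so pooled statistics stay useful while no single customer's record can be inferred. The function names, categories, and epsilon values below are illustrative assumptions, not drawn from the Treasury report or any vendor's product.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sample from a zero-mean Laplace distribution with the given scale.
    u = random.random() - 0.5          # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def share_fraud_counts(counts_by_category, epsilon=1.0):
    # Each customer is assumed to contribute at most one record per category,
    # so the sensitivity of each count is 1 and the noise scale is 1 / epsilon.
    scale = 1.0 / epsilon
    return {category: count + laplace_noise(scale)
            for category, count in counts_by_category.items()}

if __name__ == "__main__":
    # Hypothetical per-category fraud counts at one institution.
    local_counts = {"card_not_present": 182, "account_takeover": 47, "wire_fraud": 9}
    print(share_fraud_counts(local_counts, epsilon=0.5))
```

The smaller the epsilon, the more noise is added and the stronger the privacy guarantee; in practice, institutions would tune that trade-off against the accuracy their shared fraud defenses require.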
Marcus Fowler, CEO of Darktrace Federal, told PYMNTS in an interview that the tools used by attackers and defenders — and the digital environments that need to be defended — are constantly changing and increasingly complex.
“Specifically, the use of AI among attackers is still in its infancy, and while we don’t know exactly how it will evolve, we know it is already lowering the barrier to entry for attackers to deploy sophisticated techniques faster and at scale,” he added.
“It will take a growing arsenal of defensive AI to effectively protect organizations in the age of offensive AI. Luckily, defensive AI has been protecting against sophisticated threat actors and tools for years.”
Fowler said that because of the nature of their operations, financial services organizations have long been prime targets for cyberthreats. As a result, these organizations typically maintain mature, sophisticated cybersecurity programs.
“AI represents the greatest advancement in truly augmenting our cyber workforce, and these organizations serve as an excellent example of how AI can be effectively applied to security operations to increase agility and harden defenses against novel threats,” he said.
“We encourage these organizations to facilitate open conversations around their successes and failures deploying AI to help other organizations across sectors accelerate their adoption of AI for cybersecurity.”
The report’s recommendations include streamlining regulatory oversight to avoid fragmentation as various financial regulators address the challenges posed by AI. It also suggests expanding standards developed by the National Institute of Standards and Technology (NIST) to be specifically applicable to financial services. The report advocates for best practices in tracking data and developing “nutrition labels” for AI vendors. These labels would clarify the type of data used in AI models, its origin, and its intended use.
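The report does not prescribe a schema for those labels, but a minimal, hypothetical sketch of what one might capture, covering the type of data behind a model, its origin, and its intended use, could look like the following (all field and vendor names are assumptions for illustration):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelNutritionLabel:
    # Hypothetical fields; the Treasury report describes the goal of such
    # labels (data type, origin, intended use) but does not define a schema.
    model_name: str
    vendor: str
    intended_use: str
    data_types: List[str] = field(default_factory=list)
    data_origin: List[str] = field(default_factory=list)
    contains_personal_data: bool = False
    last_training_date: str = ""  # ISO date of the most recent training run

label = ModelNutritionLabel(
    model_name="fraud-score-v2",                 # illustrative model name
    vendor="ExampleVendor",                      # illustrative vendor name
    intended_use="real-time card fraud scoring",
    data_types=["card transactions", "device telemetry"],
    data_origin=["first-party customer data", "shared consortium feeds"],
    contains_personal_data=True,
    last_training_date="2024-01-15",
)
print(label)
```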
Additionally, the report calls for addressing “black box” systems by improving the explainability of complex AI models, especially in the rapidly developing field of generative AI. It highlights the importance of closing human capital gaps by developing training and competency standards for those using AI systems. Other key takeaways emphasize the need for a common AI vocabulary to standardize definitions, for stronger digital identity solutions to improve fraud prevention, and for international collaboration to align AI regulations and risk mitigation strategies.
Research conducted by PYMNTS Intelligence shows that financial institutions (FIs) employ an array of fraud prevention strategies, with institutions of all sizes relying on a combination of in-house fraud prevention systems, external resources, and emerging technologies to protect their operations and customers.
In the 2023 report “The State of Fraud and Financial Crime in the U.S.,” PYMNTS Intelligence found that as of September, 66% of banking leaders reported using AI and machine learning (ML) to combat fraud, a significant rise from 34% the previous year.
However, the report noted, “The development of AI and ML tools involves substantial costs, which might explain why only 14% of FIs undertake the creation of their own AI and ML solutions for fighting fraud.” PYMNTS added, “About 30% of FIs depend entirely on external vendors for these technologies. In a similar vein, a mere 11% of FIs develop their APIs in-house, whereas 22% exclusively use third-party API solutions.”