NYDFS Issues Guidance for Combating AI-Enabled Cybersecurity Risks


The New York State Department of Financial Services (DFS) issued new guidance to help DFS-regulated entities address and combat cybersecurity risks arising from artificial intelligence (AI).

“AI has improved the ability for businesses to enhance threat detection and incident response strategies, while concurrently creating new opportunities for cybercriminals to commit crimes at greater scale and speed,” DFS Superintendent Adrienne A. Harris said in a Wednesday (Oct. 16) press release. “New York will continue to ensure that as AI-enabled tools become more prolific, security standards remain rigorous to safeguard critical data, while allowing the flexibility needed to address diverse risk profiles in an ever-changing digital landscape.”

The guidance does not impose new requirements, according to the release. Instead, it helps DFS-regulated institutions meet their existing obligations under cybersecurity regulations.

Under these regulations, institutions must assess and address their cybersecurity risks, including those arising from AI, and deploy multiple layers of security controls with overlapping protections so that if one control fails, others remain in place to counter an attack, the release said.

Controls and measures that mitigate AI-related threats include risk assessment and risk-based programs, policies, procedures and plans; third-party service provider and vendor management; access controls; cybersecurity training; a monitoring process to detect new security vulnerabilities; and data management, per the guidance.

The guidance notes that AI-specific security risks include social engineering, enhanced cyberattacks, theft of nonpublic information and increased vulnerabilities due to supply chain dependencies, according to the release.

“As AI continues to evolve, so too will AI-related cybersecurity risks,” the guidance said. “Detection of, and response to, AI threats will require equally sophisticated countermeasures, which is why it is vital for Covered Entities to review and reevaluate their cybersecurity programs and controls at regular intervals, as required by Part 500.”

Ninety-three percent of acquirers using AI to detect fraud said that fraud increased during the previous year, according to “AI in Focus: Waging Digital Warfare Against Payments Fraud,” a PYMNTS Intelligence collaboration with Brighterion, a Mastercard company.

The report also found that 60% of acquiring banks said AI systems are their most important fraud detection tools and that 75% of acquirers use AI to detect transaction fraud.

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.