The use of artificial intelligence (AI) and automated decisioning in financial services is widespread, from assessing loan applications to recruitment to combating fraud. A PYMNTS study shows that 60% of acquiring banks say AI systems are their most important fraud detection tools, making the technology a must-have as digital transaction volumes soar.
While companies are using AI to identify and reduce fraud, regulators are putting their efforts into tackling unfair biases and improving transparency in algorithms and data sets.
Regulators have so far taken a permissive approach to the use of AI, but financial services companies may face new regulations in 2022.
United States
While there is currently no federal regulation of AI in the U.S., regulators and lawmakers have sent a few signs that AI regulation may be coming in 2022.
First, the Algorithmic Accountability Act of 2022, introduced in February, aims to bring transparency and oversight to software, algorithms and other systems that are used to make automated decisions.
The bill, if approved, would require companies to conduct impact assessments for bias, effectiveness and other factors when using automated decisioning systems to make critical decisions. The bill would also give the Federal Trade Commission (FTC) the authority to enforce compliance and to create a public repository of these automated systems.
The bill doesn’t impose bans or dictate how companies may use their automated systems, but it does impose reporting and disclosure requirements.
Second, the FTC is considering enacting new regulations that could ban certain AI practices. In a blog post and in a letter sent to Sen. Richard Blumenthal, D-Conn., the FTC outlined the risks of AI for consumers. This included discriminatory outcomes as well as a lack of transparency in the decision-making process and how companies collect and use data. Any rule would likely aim at tackling these issues.
The FTC can resort to its rulemaking authority, but to enact new rules, the practice that the regulator seeks to address needs to be “prevalent” in the country, a requirement that may not be easy to meet. As such, the FTC may instead rely on individual probes against companies that engage in algorithmic discrimination, which could affect financial companies that use AI for loan applications.
Read more: Cost of Proposed US AI Bill May Outweigh Its Benefits
European Union
The European Parliament (EP) will soon vote on the AI Act, which creates a risk-based approach to AI in Europe and will affect the use and development of AI systems, including within the financial sector.
The AI Act contains several features of particular note for financial services companies, including obligations for providers and users of systems classified as high risk.
The proposed AI Act may still be amended in Parliament, but there is enough consensus among policymakers to reach a common position by the end of 2022, which could lead to final approval of the bill in 2023.
See also: EU Parliament Committee Urges Member States to Design a Roadmap for AI
United Kingdom
The U.K. government published its national AI strategy in fall 2021, which sets out the government’s proposed timeline for implementing actions, including the governance and regulation of AI for 2022.
For the time being, there are no AI-specific regulations. However, the government and regulators are hinting that any rules adopted would take a light-touch approach to foster innovation.
The best example comes from the Bank of England and the Financial Conduct Authority. In their report on the AI Public-Private Forum, they didn’t suggest specific public policies, but said that too much regulation in this area would be detrimental to AI development.
Even so, regulators have warned banks using AI systems to approve loan applications that they need to be ready to explain how those decisions are made, to avoid discriminatory outcomes.
Related: UK Banks Get the First Taste of What AI Regulation May Look Like