More than one year after regulators in the United Kingdom vowed to analyze the impact of artificial intelligence (AI), a report details the challenges and recommends governance of the controversial tech tool.
The report, “Artificial Intelligence Public-Private Forum,” summarizes the work of a forum the Bank of England and the Financial Conduct Authority (FCA) launched in 2020 to gather views on potential regulations that could support the safe adoption of AI.
The 47-page document explores barriers to adoption, how to address them and how to mitigate potential risks. But researchers concluded that, given AI’s rapid evolution, there are no clear answers yet.
The forum operated for one year. It gathered a diverse group of experts from financial services, tech, academia, the public and other U.K. regulators.
The findings show governance is crucial to the safe adoption of AI in financial services. A set of policies and controls would ensure accountability for a firm’s use of AI. Effective regulation also supports sound risk management and addresses many of the data- and model-related issues.
“On the other hand, poor governance can increase challenges and produce risks for consumers, firms, and the financial system,” the report stated.
In another initiative launched last month, the U.K. introduced an AI Standards Hub to advance the development of global AI technical standards.
The new hub promises to craft tools for businesses developing AI systems and to help organizations develop and benefit from global criteria.
But the final report acknowledged that implementing appropriate and effective governance can be difficult, in part because AI could remove human judgement and oversight from key decisions. Other concerns include addressing bias and fairness.
The next step, researchers said, is continued engagement with stakeholders. Regular meetings on best practices would also be useful, according to the report.