IMF Lays Out 5-Point AI Regulation Action Plan

2023 saw governments around the world grapple with the commercial emergence of artificial intelligence (AI). 

From the White House’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI, to the European Union’s (EU) AI Act, China’s already-implemented policies, and Japan’s “Hiroshima Process,” the world’s largest economies each took their own distinct approach to balancing oversight of AI’s implications with support for its further innovation.

2024 is already shaping up to be a year where national, and even supranational, policies are sharpened, signed, and implemented. 

But regulation of AI is a complex and evolving topic that involves various considerations — not the least of which is the fact that the technology knows no borders. That puts a spotlight on global cooperation and coordination around industry standardization, similar to the frameworks that govern finance, automobiles and healthcare. 

In the latest discussion around the regulation of the technology, the International Monetary Fund (IMF) has laid out an action plan for AI governance in a report entitled “Building Blocks for AI Governance.” 

Authored by AI pioneer Mustafa Suleyman and risk consultant Ian Bremmer, the report outlined five guiding principles “to govern AI effectively,” noting that: “If the Cold War was punctuated by the nuclear arms race, today’s geopolitical contest will likewise reflect a global competition over AI.” 

After all, AI represents an innovation that can impact nearly every facet of modern life. That means AI governance is not a single, linear problem to be solved, and it cannot simply be modeled on previous technological oversight, because AI is unlike any previous technology. 

Already, the IMF has noted in a separate report that up to 60% of jobs in advanced economies will be impacted by AI. 

Read also: How AI Regulation Could Shape Three Digital Empires

Learning How to Manage and Govern AI

Many western observers believe that an ongoing process of interaction between governments, the private sector, and other relevant organizations is necessary for AI regulation to be effectively implemented. And few believe that effective oversight of AI — meaning a framework that supports innovation while negating AI’s risks — will be possible to achieve with a single piece of legislation.

“Trying to regulate AI is a little bit like trying to regulate air or water,” University of Pennsylvania law professor Cary Coglianese told PYMNTS earlier this month as part of the “TechReg Talks” series. “It’s not one static thing.”

“AI’s unique characteristics, coupled with the geopolitical and economic incentives of the principal actors, call for creativity in governance regimes,” wrote Suleyman and Bremmer for the IMF. 

Given the speed at which the capabilities of AI systems are evolving, there is growing urgency for businesses, governments, and both international and national institutions to understand and support the benefits of AI while working to mitigate its risks.

“Any idea that regulation is going to be globally ubiquitous is a fool’s errand,” Shaunt Sarkissian, CEO and founder of AI-ID, told PYMNTS in November.  

He suggested an approach in which regulations primarily target use cases, with technology and security measures tailored to specific applications — arguing, for example, that within healthcare, existing regulations such as HIPAA provide a strong foundation.

See more: US Eyes AI Regulations that Tempers Rules and Innovation

The five guidelines published by the IMF call for effective AI oversight to be:

  • Precautionary, as in weighted toward AI’s potentially catastrophic downsides;
  • Agile, as in capable of responding in turn to AI’s rapid advances;
  • Inclusive, as in collaborative and not dominated by any one actor, public or private;
  • Impermeable, as in providing no avenue for exit from compliance;
  • Targeted, as in modular and adaptable rather than one-size-fits-all. 

Elsewhere on the AI regulation front, the House Financial Services Committee last week formed a working group to examine the effect of AI on the financial services and housing industries, while Senate Majority Leader Chuck Schumer has gone on the record saying that action on AI needs to come from Congress, not the White House. 

When it comes to compliance with existing laws around AI, Cornell University found that just 18 of 391 companies in New York City had disclosed the impact of AI on their hiring decisions in accordance with an ordinance passed six months ago. 

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.