The Office of Management and Budget (OMB) has released guidance to help federal agencies manage risk and performance when procuring commercially provided artificial intelligence (AI) products.
This guidance — provided in the “Advancing the Responsible Acquisition of Artificial Intelligence in Government” memorandum (M-24-18) — builds on earlier guidance released in March, the OMB said in a Friday (Oct. 4) fact sheet.
While the earlier memorandum (M-24-10) introduced government-wide binding requirements for agencies focused on the use of AI, the new one helps agencies with the acquisition of the technology, according to the fact sheet.
“Agency acquisition of AI is similar in many respects to the purchase of other types of information technology, but it also presents novel challenges,” the fact sheet said. “M-24-18 helps agencies anticipate and address these challenges by issuing requirements and providing recommendations around three strategic goals.”
Those goals include managing AI risk and performance, promoting a competitive AI market and ensuring collaboration across the federal government, according to the fact sheet.
The guidance includes best practices and specific requirements for managing AI risk and performance, with additional requirements for acquiring AI for use cases that impact rights and safety; recommendations for minimizing vendor lock-in and securing good contractor performance; and suggestions for working with other agencies to support effective and responsible acquisition of AI, per the fact sheet.
“As part of the Biden-Harris Administration’s comprehensive strategy for responsible innovation, the guidance is designed to serve as a first step toward helping agencies and vendors grow together as the AI market continues to evolve — charting the course for ensuring that Federal acquisition of AI enables agencies to responsibly optimize the services they deliver for the American people,” the fact sheet said.
When releasing its earlier guidance on the use of AI in March, the OMB required federal agencies to identify and mitigate the potential risks of AI.
Those rules require each federal agency to designate a chief AI officer, who is responsible for coordinating the implementation of the technology and ensuring compliance with the policy, and to create detailed and publicly accessible inventories of its AI systems, highlighting use cases that could impact safety or civil rights.