OpenAI and Anthropic Team With US Government on Safe AI Testing

Two of the most high-profile artificial intelligence firms have inked deals with the United States government.

OpenAI and Anthropic agreed to collaborate with the U.S. Artificial Intelligence Safety Institute on AI safety research, testing and evaluation, according to a Thursday (Aug. 29) press release.

The agreements establish a framework for the institute to receive access to new models from each company before and after their public release, the release said. They also enable collaborative research on how to evaluate capabilities and safety risks, as well as methods to mitigate those risks.

“Safety is essential to fueling breakthrough technological innovation. With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,” U.S. AI Safety Institute Director Elizabeth Kelly said in the release. “These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”

The institute — a division of the Commerce Department’s National Institute of Standards and Technology (NIST) — plans to provide feedback to OpenAI and Anthropic on potential safety improvements to their models, in collaboration with the United Kingdom’s AI Safety Institute, per the release.

The two countries joined forces earlier this year in a landmark agreement to develop safety tests. The agreement is designed to align the two countries’ individual approaches and speed the development of robust evaluation methods for AI models, systems and agents. It’s part of a growing worldwide effort to address concerns about the safety of AI systems.

“This new partnership will mean a lot more responsibility being put on companies to ensure their products are safe, trustworthy, and ethical,” Andrew Pery of global intelligent automation company ABBYY told PYMNTS in April. “The inclination by innovators of disruptive technologies is to release products with a ‘ship first and fix later’ mentality to gain first mover advantage. For example, while OpenAI is somewhat transparent about the potential risks of ChatGPT, they released it for broad commercial use with its harmful impacts notwithstanding.”
