Commerce Department Proposes Mandatory Reporting Requirements for AI Firms

The Commerce Department’s Bureau of Industry and Security (BIS) aims to require the world’s leading artificial intelligence (AI) developers and cloud providers to submit detailed reports to the federal government.

The BIS released a Notice of Proposed Rulemaking Monday (Sept. 9). The new mandatory reporting requirements are intended to ensure that AI is safe and reliable, can withstand cyberattacks, and carries limited risk of misuse by foreign adversaries or non-state actors, according to a press release issued the same day.

“As AI is progressing rapidly, it holds both tremendous promise and risk,” Secretary of Commerce Gina M. Raimondo said in the release. “This proposed rule would help us keep pace with new developments in AI technology to bolster our national defense and safeguard our national security.”

The reporting mandated by the proposed rule would encompass developmental activities, cybersecurity measures and outcomes from red-teaming efforts, according to the release.

The red-teaming efforts would involve testing for dangerous capabilities, including the ability to assist in cyberattacks and the ability to lower the barriers to entry for developing chemical, biological, radiological or nuclear weapons, per the release.

The BIS has long conducted defense industrial base surveys that inform the government about emerging risks in important industries, Under Secretary of Commerce for Industry and Security Alan F. Estevez said in the release.

“This proposed reporting requirement would help us understand the capabilities and security of our most advanced AI systems,” Estevez said.

The Biden Administration issued an executive order aimed at safe AI development in October 2023, saying at the time that more action was required and that the White House would work with Congress in hopes of crafting bipartisan AI legislation.

Biden’s requirements for AI companies included a rule that developers “of the most powerful AI systems” share their safety test results and other key information with the federal government; that AI firms develop “standards, tools and tests” to ensure their systems are secure and trustworthy; and that companies guard against the threat of “using AI to engineer dangerous biological materials” by establishing strong standards for biological synthesis screening.