US Tightens Grip on AI: New Reporting Rules for Developers and Cloud Providers

September 9, 2024

In a move aimed at enhancing safety and cybersecurity within the rapidly evolving artificial intelligence (AI) industry, the U.S. Commerce Department proposed new rules that would require detailed reporting from developers of advanced AI models and from cloud computing providers. The announcement came on Monday, according to Reuters, and marks a significant step toward ensuring that emerging AI technologies can withstand cyberattacks and that the risks of their misuse are mitigated.

The proposal, put forward by the department’s Bureau of Industry and Security (BIS), would establish mandatory federal reporting for activities related to the development of so-called “frontier” AI models and computing clusters. It would also require developers to disclose their cybersecurity measures and the results of red-teaming tests, which are efforts designed to uncover dangerous capabilities, such as enabling cyberattacks or simplifying the creation of chemical, biological, radiological, or nuclear weapons by non-experts, per Reuters.

Red-teaming, a practice with roots in Cold War U.S. military simulations, has long been used in cybersecurity to assess vulnerabilities and identify new risks; the term “red team” historically referred to the simulated enemy forces in these exercises. With the rise of generative AI, technology that can produce text, images, and videos from user prompts, concerns about its potential misuse have intensified. These tools have sparked fears of job displacement, election manipulation, and even catastrophic consequences should AI systems escape human control.

According to the Commerce Department, the information gathered through the proposed rules will be “vital” for ensuring that AI technologies meet high standards for safety and reliability, withstand cyber threats, and have minimal risk of being exploited by foreign adversaries or non-state actors.

This regulatory push comes on the heels of President Joe Biden’s executive order in October 2023, which requires developers of AI systems with national security implications to submit safety test results to the government before these technologies are released to the public. Per Reuters, this latest proposal aligns with the broader goals of that executive order, expanding the focus to include AI models that could pose risks to the economy, public health, and safety.

The push to regulate AI through executive action comes as Congress has struggled to pass laws addressing the technology. Earlier in 2024, BIS conducted a pilot survey of AI developers to gather insights into the industry. This latest step also follows ongoing efforts by the Biden administration to prevent China from accessing U.S. AI technologies, amid growing concerns about security vulnerabilities in the sector.

As the AI industry continues to evolve, this new regulatory framework is designed to ensure that the development and deployment of advanced AI systems occur with appropriate safeguards, particularly as the technology’s capabilities expand.

Source: Reuters