US Tightens Grip on AI: New Reporting Rules for Developers and Cloud Providers
In a move aimed at enhancing safety and cybersecurity within the rapidly evolving artificial intelligence (AI) industry, the U.S. Commerce Department on Monday proposed new rules that would require developers of advanced AI models and providers of cloud computing services to submit detailed reports to the federal government, according to Reuters. The proposal marks a significant step toward ensuring that emerging AI technologies can withstand cyberattacks and that the risks associated with their misuse are mitigated.
The proposal, put forward by the department’s Bureau of Industry and Security (BIS), would establish mandatory federal reporting for activities related to the development of so-called “frontier” AI models and computing clusters. It would also require developers to disclose their cybersecurity measures and the results of red-teaming tests: exercises designed to uncover dangerous capabilities, such as enabling cyberattacks or making it easier for non-experts to create chemical, biological, radiological, or nuclear weapons, per Reuters.
Red-teaming, a practice with roots in Cold War-era U.S. military simulations, has long been used in cybersecurity to assess vulnerabilities and identify new risks; the term “red team” historically referred to the simulated enemy force in those exercises. With the rise of generative AI, technology that can produce text, images, and video from user prompts, concerns about its potential misuse have intensified. These tools have sparked fears of job displacement, election manipulation, and even catastrophic consequences should AI systems escape human control.
According to the Commerce Department, the information gathered through the proposed rules will be “vital” for ensuring that AI technologies meet high standards for safety and reliability, withstand cyber threats, and have minimal risk of being exploited by foreign adversaries or non-state actors.
This regulatory push comes on the heels of President Joe Biden’s executive order in October 2023, which requires developers of AI systems with national security implications to submit safety test results to the government before these technologies are released to the public. Per Reuters, this latest proposal aligns with the broader goals of that executive order, expanding the focus to include AI models that could pose risks to the economy, public health, and safety.
The rulemaking effort comes at a time when Congress has struggled to pass legislation addressing the technology. Earlier in 2024, BIS conducted a pilot survey of AI developers to gather insights into the industry. The proposal also follows ongoing efforts by the Biden administration to prevent China from accessing U.S. AI technologies, amid growing concerns about security vulnerabilities in the sector.
As the AI industry continues to evolve, this new regulatory framework is designed to ensure that the development and deployment of advanced AI systems occur with appropriate safeguards, particularly as the technology’s capabilities expand.
Source: Reuters