Major technology companies are lobbying the European Union (EU) for a more lenient approach to regulating artificial intelligence (AI). According to Reuters, the push comes as firms seek to reduce the risk of hefty fines under the newly adopted AI Act.
A Historic Legislative Framework
The AI Act, finalized by EU lawmakers in May, is the first comprehensive regulatory framework for AI technologies. However, how its provisions will be enforced, especially for “general purpose” AI systems such as OpenAI’s ChatGPT, remains uncertain until the accompanying code of practice is drawn up. Until then, companies are left in the dark about their exposure to copyright lawsuits and substantial financial penalties.
High Demand for Input
In an unprecedented move, the EU has invited a wide range of stakeholders to help shape the code of practice, drawing nearly 1,000 applications, per Reuters. Although the code will not carry legal force when it takes effect late next year, it will serve as an essential guide for businesses seeking to demonstrate compliance, and ignoring its provisions while claiming adherence could expose companies to legal challenges.
Balancing Regulation and Innovation
Boniface de Champris, a senior policy manager at CCIA Europe, underscored the importance of getting the code right. “If it’s too narrow or too specific, that will become very difficult,” he cautioned, warning that overly restrictive rules could stifle innovation.
Copyright Concerns in AI Training
The use of copyrighted materials for training AI models has come under scrutiny, with companies like Stability AI and OpenAI facing questions over their practices. The AI Act requires companies to provide “detailed summaries” of the data used in their training processes, allowing content creators to seek compensation if their works are utilized without permission. However, some industry leaders argue that these summaries should remain minimal to protect trade secrets, while others insist on the need for transparency.
Industry Participation in Drafting Efforts
OpenAI and Google both confirmed to Reuters that they have applied to join the working groups responsible for drafting the code. Amazon also said it is committed to contributing to the effort’s success.
Transparency vs. Corporate Interests
Maximilian Gahntz, AI policy lead at the Mozilla Foundation, expressed concerns about the tech industry’s commitment to transparency. “The AI Act presents the best chance to shine a light on this crucial aspect and illuminate at least part of the black box,” he stated.
Criticism of Regulatory Priorities
Some business leaders have voiced concerns that the EU’s focus on tech regulation may undermine innovation. Those drafting the code are striving to find a middle ground. Recently, former European Central Bank chief Mario Draghi emphasized the need for the EU to enhance its industrial policy coordination and investment to remain competitive with countries like China and the U.S.
Support for Startups
As the regulatory landscape evolves, European startups are advocating for provisions within the AI Act that would lessen the burden on smaller firms. “We’ve insisted these obligations need to be manageable and, if possible, adapted to startups,” said Maxime Ricard, policy manager at Allied for Startups.
Looking Ahead
With the code expected to be published in early 2025, technology companies will have until August 2025 to align their practices with the new regulations. Non-profit organizations, including Access Now and the Future of Life Institute, have also expressed interest in contributing to the drafting process, underscoring the collaborative nature of this critical regulatory undertaking.
Source: Reuters