Major players in the burgeoning generative AI sector, Google and OpenAI, have markedly different views about regulatory oversight of the world-changing technology.
According to widely published reports, Google is diverging from OpenAI and its partner Microsoft on how AI should be regulated. On Tuesday (June 13), The Washington Post reported that in a filing with the Commerce Department, Google asked that AI oversight be divided among existing agencies, led by the National Institute of Standards and Technology (NIST).
Google and Alphabet President of Global Affairs Kent Walker told the Post, “We think that AI is going to affect so many different sectors, we need regulators who understand the unique nuances in each of those areas.”
OpenAI CEO Sam Altman has taken a different direction, saying during a U.S. Senate hearing in May, “We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” suggesting a more centralized and specialized approach.
In an OpenAI blog post published May 22, Altman and co-authors wrote that generative AI requires something akin to an International Atomic Energy Agency (IAEA), but for “superintelligence.”
The blog post reads, in part: “any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc.”
By contrast, Google’s response to the Commerce Department’s request for comment said, “At the national level, we support a hub-and-spoke approach — with a central agency like the National Institute of Standards and Technology (NIST) informing sectoral regulators overseeing AI implementation — rather than a ‘Department of AI.’”
“There is this question of should there be a new agency specifically for AI or not?” Helen Toner, a director at Georgetown’s Center for Security and Emerging Technology, told CNBC, adding, “Should you be handling this with existing regulatory authorities that work in specific sectors, or should there be something centralized for all kinds of AI?”
At this point, the Biden administration is in fact-finding mode. But with OpenAI calling for IAEA-style oversight for superintelligence, many anticipate robust regulatory responses worldwide.