US Senator Michael Bennet, a Democrat from Colorado, recently addressed a letter to major technology and generative AI companies, calling for them to label AI-generated content and limit the spread of fake or misleading material. Bennet cited several recent examples of AI-generated content causing alarm and market turbulence. He also underscored the importance of Americans knowing when AI is being used to shape political content.
“Fabricated images can derail stock markets, suppress voter turnout, and shake Americans’ confidence in the authenticity of campaign material,” Bennet said.
OpenAI CEO Sam Altman testified before the Senate Judiciary Committee, highlighting AI’s impact on the spread of false information. In his letter, Bennet applauded the steps technology companies have taken to identify and label AI-generated content, but acknowledged that these measures are voluntary and can be easily bypassed.
“Americans should know when images or videos are the product of generative AI models, and platforms and developers have a responsibility to label such content properly,” Bennet wrote in the letter.
Another U.S. lawmaker echoed Bennet’s sentiments, arguing that platforms ought to update their policies now that generative AI tools are widely available.
Related: EU Commissioner Says AI-Generated Content Should Be Labelled
“We cannot expect users to dive into the metadata of every image in their feeds, nor should platforms force them to guess the authenticity of content shared by political candidates, parties, and their supporters,” the lawmaker said.
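To make the quote concrete, here is a minimal, hypothetical sketch of what metadata-based labeling can look like at the file level. The `ai_generated` flag embedded as a PNG `tEXt` chunk is an assumption for illustration only; real provenance efforts such as C2PA define far richer, cryptographically signed manifests.

```python
import struct
import zlib

def chunk(ctype: bytes, data: bytes) -> bytes:
    """Build one PNG chunk: 4-byte length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def make_labeled_png(path: str) -> None:
    """Write a 1x1 grayscale PNG carrying a hypothetical ai_generated flag."""
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)  # 1x1, 8-bit gray
    raw = zlib.compress(b"\x00\x00")  # one row: filter byte + one pixel
    png = (b"\x89PNG\r\n\x1a\n"
           + chunk(b"IHDR", ihdr)
           + chunk(b"tEXt", b"ai_generated\x00true")  # assumed, non-standard key
           + chunk(b"IDAT", raw)
           + chunk(b"IEND", b""))
    with open(path, "wb") as f:
        f.write(png)

def is_ai_labeled(path: str) -> bool:
    """Scan the file's PNG chunks for the (hypothetical) ai_generated flag."""
    with open(path, "rb") as f:
        data = f.read()
    pos = 8  # skip the 8-byte PNG signature
    while pos + 8 <= len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8]
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt" and body == b"ai_generated\x00true":
            return True
        pos += 12 + length  # length field + type + data + CRC
    return False

make_labeled_png("labeled.png")
print(is_ai_labeled("labeled.png"))  # True
```

As the quote suggests, a flag buried in a file like this surfaces to users only if the platform actually inspects it and renders a visible label, and it disappears entirely if the image is re-encoded or screenshotted.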
Meanwhile, other lawmakers, including Senate Majority Leader Chuck Schumer, have expressed interest in introducing legislation to regulate AI. Bennet has since introduced a bill requiring political ads to disclose whether AI was used in their production.
“Continued inaction endangers our democracy. Generative AI can support new creative endeavors and produce astonishing content, but these benefits cannot come at the cost of corrupting our shared reality,” Bennet said.
Bennet’s letter asked the executives about the standards and requirements they employ to identify AI content and how those standards were developed and audited. He also inquired about the consequences for users who violate the rules.
Twitter responded to a request for comment with a poop emoji; Microsoft declined to comment; and TikTok, OpenAI, Meta, and Alphabet did not immediately respond.
As AI-generated content grows more prevalent and is more frequently put to nefarious use, U.S. Senator Michael Bennet is pressing major technology and generative AI companies to act responsibly and promptly to protect public discourse and electoral integrity. His letter and subsequent bill demonstrate both a sense of urgency and an awareness of the risks artificial intelligence poses to our democracy.