The Federal Trade Commission (FTC) is reportedly investigating OpenAI for issues around false information and data security.
The regulator has sent a letter to the creator of the artificial intelligence (AI)-powered chatbot ChatGPT asking dozens of detailed questions about these issues, The Wall Street Journal (WSJ) reported Thursday (July 13), citing an unnamed source.
One issue being investigated by the FTC is whether ChatGPT has harmed people by publishing false information about them, according to the WSJ report.
The agency is also looking into OpenAI’s data security practices, including the company’s 2023 disclosure that a bug exposed some users’ chat data and payment-related information, the report said.
The FTC’s civil investigative demand also asks questions about OpenAI’s marketing efforts, AI model training practices and handling of users’ personal information, per the report.
FTC Chair Lina Khan wrote in an op-ed published by The New York Times in May that AI should be regulated, and that the agency is looking at “how we can best achieve our dual mandate to promote fair competition and to protect Americans from unfair or deceptive practices.”
Read more: US Advocacy Group Asks FTC To Stop New OpenAI GPT Releases
“Can [the U.S.] continue to be the home of world-leading technology without accepting race-to-the-bottom business models and monopolistic control that locks out higher quality products or the next big idea? Yes — if we make the right policy choices,” Khan wrote at the time.
OpenAI said in a May blog post that it’s time to start thinking about the governance of future AI systems.
In the post, OpenAI President Greg Brockman, CEO Sam Altman and Chief Scientist Ilya Sutskever suggested that leading AI development efforts coordinate to limit the annual rate of growth in AI capability, that an international authority be formed to monitor development efforts and restrict those above a certain capability threshold, and that the technical capability to make superintelligence safe be developed.
In June, PYMNTS reported that OpenAI and Google, another player in the generative AI sector, have different views about regulatory oversight of the sector.
Google asked for AI oversight to be shared by existing agencies led by the National Institute of Standards and Technology (NIST), while OpenAI favored a more centralized and specialized approach.