A federal lawsuit claims OpenAI trained its ChatGPT tool using millions of people’s stolen data. The suit, filed Wednesday (June 28) in U.S. District Court in San Francisco by the Clarkson law firm, accuses the multibillion-dollar artificial intelligence (AI) company of carrying out a strategy to “secretly harvest massive amounts of personal data from the internet.”
This data, the suit alleges, included private information and conversations, medical data and information about children, all collected without the owners’ knowledge or permission.
“Without this unprecedented theft of private and copyrighted information belonging to real people,” the suit says, OpenAI and ChatGPT “would not be the multi-billion dollar business they are today.”
The lawsuit asks the court for a temporary freeze on commercial use of OpenAI’s products. Also named in the suit is Microsoft, which has invested more than $10 billion in OpenAI. PYMNTS has contacted both companies for comment but has not yet received a reply.
OpenAI released ChatGPT to the public in late 2022, and the tool quickly exploded in popularity thanks to its ability to provide human-sounding responses to prompts. Since then, companies around the world have begun incorporating generative AI into countless products, leading to what the lawsuit calls an “AI arms race.”
As PYMNTS wrote earlier this month, OpenAI’s “original mission was to build safe AI technology for the benefit of humanity.”
The company changed its organizational structure in 2019 to allow it to raise billions of dollars, primarily from Microsoft. The firm generates revenue by charging a subscription for access to ChatGPT and other tools and by licensing its large language models (LLMs) to businesses.
The lawsuit notes this change from nonprofit to tech giant, arguing the company “abandoned its original goals and principles, electing instead to pursue profit at the expense of privacy, security, and ethics.”
The suit also devotes considerable space to the potential dangers of AI, noting a belief among experts that the technology could “act against human interests and values, exploit human beings without regard for their well-being or consent, and/or even decide to eliminate the human species as a threat to its goals.”
That danger has been noted by none other than Sam Altman, CEO of OpenAI.
“I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that,” he testified in a recent Senate hearing. “We want to work with the government to prevent that from happening.”