The Federal Trade Commission (FTC) is reportedly investigating OpenAI for issues around false information and data security.
The regulator has sent a letter to the creator of the artificial intelligence (AI)-powered chatbot ChatGPT asking dozens of detailed questions about these issues, The Wall Street Journal (WSJ) reported Thursday (July 13), citing an unnamed source.
Reached by PYMNTS, an FTC spokesperson declined to comment on the report.
OpenAI CEO Sam Altman said in a Thursday tweet that the company built GPT-4 on top of years of safety research, spent more than six months after initial training making the model safer and more aligned before releasing it, and designs its systems to protect user privacy by learning "about the world, not private individuals."
“It is very disappointing to see the FTC’s request start with a leak, and [it] does not help build trust,” Altman said in another tweet. “That said, it’s super important to us that [our] technology is safe and pro-consumer, and we are confident we follow the law. Of course we will work with the FTC.”
One issue being investigated by the FTC is whether ChatGPT has harmed people by publishing false information about them, according to the WSJ report.
The agency is also looking into OpenAI’s data security practices, including the company’s 2020 disclosure that a bug exposed data about users’ chats as well as payment-related information, the report said.
The FTC’s civil investigative demand also asks questions about OpenAI’s marketing efforts, AI model training practices and handling of users’ personal information, per the report.
FTC Chair Lina Khan wrote in an op-ed published by The New York Times in May that AI should be regulated, and that the agency is looking at “how we can best achieve our dual mandate to promote fair competition and to protect Americans from unfair or deceptive practices.”
“Can [the U.S.] continue to be the home of world-leading technology without accepting race-to-the-bottom business models and monopolistic control that locks out higher quality products or the next big idea? Yes — if we make the right policy choices,” Khan wrote at the time.
OpenAI said in a May blog post that it’s time to start thinking about the governance of future AI systems.
In the post, OpenAI President Greg Brockman, Altman and Chief Scientist Ilya Sutskever suggested that leading AI development efforts be coordinated to limit the annual growth rate of AI capability, that an international authority be formed to monitor development efforts and restrict those above a certain capability threshold, and that the technical capability to make superintelligence safe be developed.
In June, PYMNTS reported that OpenAI and Google, another player in the generative AI sector, have different views about regulatory oversight of the sector.
Google asked for AI oversight to be shared by existing agencies led by the National Institute of Standards and Technology (NIST), while OpenAI favored a more centralized and specialized approach.