OpenAI, the company behind ChatGPT, is making changes. It has formed a new Safety and Security Committee to advise its board on AI safety and security decisions, and it announced a forthcoming, more capable successor to its current chatbot.
The moves come on the heels of key departures from OpenAI’s safety team. It’s a tricky balancing act for the company as it navigates the challenges of responsible AI development in a competitive industry.
OpenAI also announced in a Tuesday (May 28) blog post that it has started training a new AI model to replace the one behind ChatGPT. The company said this new model, which will succeed GPT-4, would bring it closer to achieving artificial general intelligence (AGI), the hypothetical ability of an AI system to understand or learn any intellectual task that a human can.
“While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment,” the post said.
“A first task of the Safety and Security Committee will be to evaluate and further develop OpenAI’s processes and safeguards over the next 90 days,” the post added. “At the conclusion of the 90 days, the Safety and Security Committee will share their recommendations with the full board. Following the full board’s review, OpenAI will publicly share an update on adopted recommendations in a manner that is consistent with safety and security.”
The new committee, led by CEO Sam Altman and board members Bret Taylor and Nicole Seligman, will present its recommendations to the full board and then share a public update on which ones the company adopts.
However, the committee’s formation follows the exits of Jan Leike and Ilya Sutskever, two key figures in OpenAI’s AI safety efforts. Leike resigned while publicly criticizing the company for underinvesting in safety and citing tensions with leadership. Sutskever, who was part of the board effort that briefly ousted Altman as CEO last year, also departed.
The overhaul comes as OpenAI and its competitors race to develop ever more powerful AI systems. The technology promises benefits in areas ranging from healthcare to workplace productivity, but it also raises ethical concerns, including bias and job displacement.
The new committee shows that OpenAI recognizes the importance of responsible AI development, and the 90-day timeline suggests both urgency and commitment. However, the departures raise questions about internal dynamics and the company’s ability to retain AI safety talent.
OpenAI will need to address these challenges and ensure AI safety remains a top priority. This means allocating resources and fostering a culture that values responsibility alongside innovation.
The overhaul also highlights broader challenges facing the AI industry. As AI systems grow more powerful, robust safety measures and ethical guidelines become increasingly crucial. The race to achieve AGI must be balanced against the need to mitigate risks and ensure the technology benefits society.
OpenAI’s new committee will need to navigate these complex issues. It must work to restore confidence in the company’s commitment to AI safety and contribute to the broader conversation about responsible AI development in a fast-paced industry.