
OpenAI Forms Safety Committee as It Trains New Frontier Model


OpenAI has formed a safety and security committee following high-profile concerns about the company's focus on safety.

The artificial intelligence company announced in a Tuesday (May 28) blog post the makeup of its new Safety and Security Committee, led by board directors Bret Taylor, Adam D’Angelo and Nicole Seligman, along with CEO Sam Altman.

The committee will be charged with making recommendations to the full board on crucial safety and security decisions for OpenAI’s operations and projects, per the post. Those projects include artificial general intelligence, or AGI, a so-far unrealized form of AI with thinking and reasoning abilities that match or exceed those of humans.

“OpenAI has recently begun training its next frontier model, and we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI,” the post said. “While we are proud to build and release models that are industry-leading on both capabilities and safety, we welcome a robust debate at this important moment.”

Concerns about AI safety contributed to the departure of Jan Leike, one of the two heads of OpenAI’s “superalignment” team, which focused on the safety of future advanced AI systems.

The team’s other leader, Ilya Sutskever, co-founder and chief scientist of OpenAI, resigned May 15 to pursue outside projects. With the two resignations, the superalignment team was effectively dissolved.

Leike wrote on social platform X earlier this month that he had reached a “breaking point” with OpenAI’s leadership over the company’s central priorities and argued that the firm did not pay enough attention to safety, especially where AGI is concerned.

Altman and OpenAI President Greg Brockman responded with their own message on X, saying they were aware of the risks and potential of AGI and adding that the company had called for international AGI standards and was one of the pioneers in the practice of examining AI systems for catastrophic threats.

Meanwhile, AI companies have reached a landmark agreement to implement a “kill switch” that would halt the development of their most advanced AI models if certain risk thresholds are exceeded.

The decision has sparked a debate about the future of AI, with proponents seeing the kill switch as a necessary safeguard against the potential dangers of unchecked AI development.

Critics, however, question the effectiveness of the solution.
