
Security Experts Wary of OpenAI’s Safety Play

OpenAI is forging ahead with its next-generation artificial intelligence model, but the makeup of the company’s newly formed safety committee has some experts raising eyebrows.

The company’s announcement Tuesday (May 28) that it is training a cutting-edge AI model while standing up a safety committee might seem like a responsible step. Still, the devil is in the details: The committee appears stacked with OpenAI insiders, and outside voices are absent.

The AI arms race is a high-stakes game, with immense potential for profit and peril. As companies jockey to develop the most advanced AI systems, the pressure to prioritize speed over safety is intense. But with AI poised to revolutionize industries from finance to healthcare, the consequences of getting it wrong could be far-reaching.

“The board seems to be entirely OpenAI employees or executives,” John Bambenek, president of cybersecurity company Bambenek Consulting, told PYMNTS. “It’ll be difficult to prevent an echo chamber effect from taking hold that may overlook risks from more advanced models.”

OpenAI did not immediately respond to PYMNTS’ request for comment.

Diversity of Thought

OpenAI has begun training its next-generation AI model, which is expected to surpass the capabilities of its current leading model, GPT-4. The new model, which OpenAI calls its next “frontier model,” is a step toward the company’s goal of achieving artificial general intelligence (AGI), a form of AI that can perform a wide range of tasks at a level comparable to humans. The new model will likely power various generative AI tools, including image generators, virtual assistants, search engines and the company’s chatbot, ChatGPT.

In light of the potential risks associated with its advanced technology, OpenAI formed a new committee to evaluate and address safety concerns. The committee will be co-led by CEO and co-founder Sam Altman, along with board members Bret Taylor, Adam D’Angelo and Nicole Seligman.

Additionally, five of OpenAI’s technical and policy experts will serve on the committee, bringing their expertise in preparedness, safety systems, alignment science and security. The committee will also consult with external safety, security and technical experts to support its work. Over the next 90 days, the committee’s primary focus will be to review and develop the company’s processes and safeguards, after which it will present its recommendations to the full board.

As AI hurtles forward at breakneck speed, experts are sounding the alarm about the need for diverse perspectives in shaping its development.

“Encouraging diversity of thought in AI teams is also crucial to help combat bias and harmful training and/or output,” Nicole Carignan, vice president of strategic cyber AI at cybersecurity company Darktrace, told PYMNTS.

But it’s not just about avoiding pitfalls; it’s about unlocking AI’s full potential.

“Most importantly, AI should be used responsibly, safely and securely,” Carignan emphasized. “The risk AI poses is often in the way it is adopted.”

Data Integrity

Beyond the boardroom, experts also spotlighted the importance of data integrity in ensuring AI’s trustworthiness.

“As AI innovation continues to unfold at a rapid pace, we hope to see similar commitments for data science and data integrity,” Carignan said.

Zendata CEO Narayana Pappu pointed to other industries’ oversight models as potential templates for AI governance.

“Although the field of AI is fairly new, there are parallel institutions in other industries, such as institutional review boards that govern medical research on human subjects, with equal significance,” Pappu told PYMNTS.

As the dust settles on OpenAI’s announcement, one thing is clear: The path to responsible AI innovation is paved with collaboration.

“OpenAI creating a new AI safety committee and starting to train its next major AI model is no surprise, especially after the recent agreement in Seoul where global leaders committed to responsible AI development,” Stephen Kowski, field chief technology officer at SlashNext Email Security+, told PYMNTS.

OpenAI’s safety committee may be a start, but it’s just the opening salvo in a much larger conversation about AI’s role in shaping the future of business and society. A culture of transparency, collaboration and accountability that extends beyond any company’s walls is essential to harnessing AI’s potential while navigating its risks.

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.