OpenAI Vet Sutskever’s Startup Reportedly Raises $1 Billion


Safe Superintelligence, the company co-founded by OpenAI veteran Ilya Sutskever, has reportedly raised $1 billion.

The company plans to use the funds to boost its computing power and hire talent, management told Reuters in an interview published Wednesday (Sept. 4). Safe Superintelligence (SSI) declined to share its valuation, though sources told Reuters the firm is valued at $5 billion.

Investors in the round included high-profile venture capital outfits like Andreessen Horowitz and Sequoia, along with NFDG, an investment partnership run in part by SSI CEO Daniel Gross.

“It’s important for us to be surrounded by investors who understand, respect and support our mission, which is to make a straight shot to safe superintelligence and in particular to spend a couple of years doing R&D on our product before bringing it to market,” Gross told Reuters.

Meanwhile, Sutskever — an OpenAI co-founder who had been the company’s chief scientist — said the new project made sense because he “identified a mountain that’s a bit different from what I was working on.”

Last year, Sutskever was among the OpenAI board members who voted to oust CEO Sam Altman, citing a “breakdown of communications,” though he quickly reversed course and joined an employee-led campaign for Altman’s reinstatement.

However, as Reuters notes, the incident “diminished” Sutskever’s role at OpenAI. He was removed from the board and left the company in May. After his departure, the company dissolved his AI-safety-focused “superalignment” team.

Sutskever announced the launch of SSI in June, saying the company would focus solely on developing — as the name suggests — safe superintelligence without the pressure that comes with commercial interests.

As PYMNTS wrote at the time, the announcement once again sparked debate about whether such a feat is possible. Some experts question the feasibility of creating a superintelligent AI, given the limitations of current AI systems and the obstacles to ensuring its safety.

“Critics of the superintelligence goal point to the current limitations of AI systems, which, despite their impressive capabilities, still struggle with tasks that require common sense reasoning and contextual understanding,” that report said. “They argue that the leap from narrow AI, which excels at specific tasks, to a general intelligence that surpasses human capabilities across all domains is not merely a matter of increasing computational power or data.”
