HMM: OpenAI promised 20% of its computing power to combat the most dangerous kind of AI—but never delivered, sources say.

OpenAI’s Superalignment team had been set up under the leadership of Ilya Sutskever, the OpenAI cofounder and former chief scientist, whose departure from the company was announced last week. Jan Leike, a longtime OpenAI researcher, co-led the team. He announced his own resignation Friday, two days after Sutskever’s departure. The company then told the remaining employees on the team—which numbered about 25 people—that it was being disbanded and that they were being reassigned within the company.

It was a swift downfall for a team whose work OpenAI had positioned less than a year earlier as vital for the company and critical for the future of civilization. The team's focus, superintelligence, is the idea of a hypothetical future AI system that would be smarter than all humans combined. It is a technology that would lie even beyond the company's stated goal of creating artificial general intelligence, or AGI—a single AI system as smart as any person.

Superintelligence, the company said when announcing the team, could pose an existential risk to humanity by seeking to kill or enslave people. “We don’t have a solution for steering and controlling a potentially superintelligent AI, and preventing it from going rogue,” OpenAI said in its announcement. The Superalignment team was supposed to research those solutions.

It was a task so important that the company said in its announcement that it would commit “20% of the compute we’ve secured to date over the next four years” to the effort.

So either they decided the perceived threat will never materialize, or management decided they don't care if it does, or Skynet is already in charge of OpenAI.