OpenAI Disbands Its Existential AI Risk Team
One of the team's former leaders says it struggled to obtain the resources needed to research existential AI risk, and the company has now reportedly disbanded the team.
OpenAI has dissolved its Superalignment team, the group responsible for controlling the existential risks posed by future superhuman AI systems. The news comes just days after the team's founders, Ilya Sutskever and Jan Leike, both left the company.
Since its establishment in July 2023, the Superalignment team had been dedicated to preventing the misalignment of future superhuman AI systems. That team no longer exists. According to reports, its work will be folded into OpenAI's other research efforts, and research on the risks posed by more powerful AI models will now be led by OpenAI co-founder John Schulman.
Sutskever and Leike were among OpenAI's most prominent scientists focused on AI risk. Leike recently published a lengthy post explaining, in broad terms, why he left the company. He said he had been at odds with OpenAI's leadership over its core priorities and that the disagreement reached a breaking point this week. Leike wrote that the Superalignment team had been "swimming against the current," struggling to secure enough compute for critical research, and he argued that OpenAI needs to make safety, security, and alignment a far higher priority.
When asked about the dissolution of the Superalignment team, OpenAI's press team pointed to a post from Sam Altman on Twitter, in which Altman said he would share a longer explanation in the coming days and acknowledged that OpenAI "has a lot of work to do."
An OpenAI spokesperson later clarified that superalignment work would now be integrated more deeply into the company's research, which it said would help it better achieve the Superalignment team's goals. The company said this integration began "weeks ago" and that the team's members and projects would eventually be moved to other teams.
In a blog post published in July, the Superalignment team wrote, "We currently have no solution for guiding or controlling potential superintelligent AI and preventing it from going astray." It added that "humans will be unable to reliably supervise AI systems that are much smarter than us, so our current alignment techniques won't scale to superintelligence. We need new scientific and technological breakthroughs."
It is unclear whether those breakthroughs will now receive the same level of attention. OpenAI certainly still has other teams focused on safety, and reports suggest that Schulman's team, which is absorbing the Superalignment team's responsibilities, is currently in charge of fine-tuning AI models after training.
Earlier this year, the Superalignment team published a notable research paper on using smaller AI models to supervise and control larger ones, regarded as a first step toward controlling superintelligent AI systems. It is not yet known who at OpenAI will carry these projects forward.
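For context, the core idea behind that line of work, often described as weak-to-strong generalization, can be illustrated with a deliberately simplified sketch: a small "weak" model is trained on a limited amount of ground truth, a larger "strong" model is then trained only on the weak model's imperfect labels, and the question is whether the strong model can generalize beyond its supervisor. The example below uses hypothetical scikit-learn stand-ins rather than OpenAI's actual models or code.

```python
# Minimal sketch of weak-to-strong supervision on a toy dataset.
# These are stand-in models, not OpenAI's method or code.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=5000, n_features=20, n_informative=10, random_state=0)
X_weak, X_rest, y_weak, y_rest = train_test_split(X, y, train_size=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# "Weak supervisor": a small model trained on a limited amount of ground truth.
weak = LogisticRegression(max_iter=1000).fit(X_weak, y_weak)

# "Strong student": a larger model trained only on the weak model's labels.
strong = MLPClassifier(hidden_layer_sizes=(128, 128), max_iter=300, random_state=0)
strong.fit(X_train, weak.predict(X_train))

print("weak supervisor accuracy:", accuracy_score(y_test, weak.predict(X_test)))
print("strong student accuracy: ", accuracy_score(y_test, strong.predict(X_test)))
```

In this framing, the interesting outcome is when the strong student outperforms its weak supervisor on held-out data despite having been trained only on the supervisor's labels.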