OpenAI Researcher Resigns, Saying Safety Has Yielded to "Shiny Products"

2024-05-20

Jan Leike, a key safety researcher at OpenAI, resigned in the wake of co-founder Ilya Sutskever's departure. In a post on Friday morning, he said that inside the company, "safety culture and processes have been compromised for the sake of flashy products."
Leike's statement came shortly after Wired reported that OpenAI had dissolved its "Superalignment team," a group dedicated to addressing long-term AI risks. Leike led the team, which was established in July last year with the goal of "solving core technical challenges in implementing safety protocols" as OpenAI develops AI capable of human-like reasoning.

OpenAI originally intended to make its models openly available to the public, hence the name. However, citing the potential for harm if such powerful models were broadly accessible, the company has since shifted to proprietary technology.

In a follow-up post on Friday morning about his resignation, Leike wrote: "We should have started taking the implications of AGI (artificial general intelligence) seriously long ago. We must prepare for it as best we can. Only then can we ensure that AGI benefits all of humanity."
According to reports, John Schulman, another OpenAI co-founder who sided with Altman during last year's boardroom coup, will take over Leike's responsibilities. Sutskever, who played a key role in that coup against Sam Altman, announced his own departure on Tuesday.

"In recent years, safety culture and processes have been compromised for the sake of flashy products," Leike wrote in the post.

Leike's post points to escalating tensions within OpenAI. As researchers race toward artificial general intelligence while also shipping consumer AI products such as ChatGPT and DALL-E, employees like Leike have raised concerns about the dangers of building superintelligent AI models. Leike said his team had been deprioritized and lacked the resources, such as computing power, needed to carry out "crucial" work.

"I joined because I believed OpenAI was the best place in the world to conduct this research," Leike wrote. "However, I have consistently disagreed with the OpenAI leadership on the company's core priorities until we reached a breaking point."