Outgoing OpenAI Leader Claims No Company Is Prepared for AGI

2024-10-25

Miles Brundage, a senior advisor at OpenAI focused on preparing for Artificial General Intelligence (AGI), issued a cautionary statement upon his resignation: neither OpenAI nor any other organization is ready for the advent of AGI.

In his statement, Brundage wrote that "neither OpenAI nor any other frontier lab is ready, and the world is also not ready." He noted that this view is not controversial among OpenAI's leadership, but stressed that whether the company and the world can become ready in time remains an open question.

Brundage's departure is the latest in a series of significant exits from OpenAI's safety teams. Jan Leike left earlier, saying that "safety culture and processes have taken a backseat to shiny products." Co-founder Ilya Sutskever also departed to found a new company focused on developing safe superintelligence.

The disbandment of Brundage's "AGI Readiness" team came shortly after the company dissolved its "Superalignment" team, which was dedicated to mitigating long-term AI risks, underscoring tensions between OpenAI's original mission and its commercial objectives. Reports indicate that OpenAI must complete its transition from a nonprofit to a for-profit public benefit corporation within two years or risk having to return a portion of its recent $6.6 billion in funding. This push toward commercialization had worried Brundage as early as 2019, when OpenAI established its for-profit arm.

In explaining his departure, Brundage pointed to growing restrictions on his freedom to research and publish within the company, and emphasized the importance of independent voices in AI policy discussions. Having already delivered his internal recommendations on preparedness to leadership, he believes he can now influence global AI governance more effectively from the outside.

The resignation may also point to deeper cultural divisions within OpenAI. Many researchers joined to advance AI research and now find themselves in an environment increasingly focused on product development. Internal resource allocation has reportedly become a point of contention: before it was disbanded, Leike's team was denied the computing resources it needed for safety research.

Despite these tensions, Brundage said OpenAI has offered to support his future work with no strings attached, providing funding, API credits, and early model access.