OpenAI Employees Warn: Artificial Intelligence Poses Existential Threat to Humanity

2024-06-06

A group of 13 current and former employees of OpenAI, the developer of ChatGPT, together with colleagues from Anthropic and Google DeepMind, has issued a warning about the existential threats posed by advanced artificial intelligence. In an open letter released on June 4th, the group outlined a series of risks related to artificial intelligence while acknowledging the technology's potential benefits.

The letter states, "We are current and former employees of leading AI companies, and we believe that the potential of AI technology will bring unprecedented benefits to humanity." However, it also emphasizes concerns: "These risks range from exacerbating existing inequalities to manipulation and misinformation, to the loss of control over autonomous AI systems, potentially resulting in human extinction."

Neel Nanda, the Responsible AI Lead at DeepMind and a former employee of Anthropic, is one of the signatories. In a post on X, he wrote, "This is not because I have anything specific to warn about my current or former employers, or any particular criticism of their treatment of whistleblowers." However, he believes that "Artificial General Intelligence (AGI) will have extremely serious consequences, as acknowledged by all labs, and it may pose an existential threat. Any lab seeking to build AGI must demonstrate that it is worthy of public trust, and employees having strong and protected whistleblower rights is a crucial first step."

Lack of accountability and regulation in artificial intelligence

The signatories argue that while both AI companies and governments worldwide recognize these dangers, current corporate and regulatory measures are insufficient to prevent them. "AI companies have strong financial incentives to avoid effective oversight, and we believe that bespoke corporate governance structures are insufficient to change this," they write.

The letter also criticizes the lack of transparency at AI companies, stating that they hold "a significant amount of non-public information about the capabilities and limitations of their systems, the adequacy of their safeguards, and the risk levels of different types of harm." It points out that these companies are under little obligation to disclose such critical information: "They currently have only weak obligations to share this information with governments and none to civil society."

These employees express an urgent need for government oversight and public accountability. "As long as there is no effective government oversight of these companies, current and former employees are among the few who can hold them accountable to the public," the group states. They also highlight the limitations of existing whistleblower protection mechanisms, which focus on illegal activity and therefore do not cover many of the as-yet-unregulated risks posed by AI technology.

OpenAI under scrutiny

This open letter comes at a time of turmoil for major AI companies, particularly OpenAI. The company has recently launched AI assistants with advanced capabilities that can hold real-time voice conversations with humans and respond to visual inputs such as video streams or written mathematical problems.

Scarlett Johansson, who voiced an AI assistant in the movie "Her," accused OpenAI of modeling one of its product voices on her own without her explicit consent. Although OpenAI's CEO had posted the word "her" around the voice assistant's launch, the company later denied using Johansson's voice as a model.

In May of this year, OpenAI disbanded a team dedicated to investigating long-term threats related to AI, less than a year after its formation. In July of last year, OpenAI's head of trust and safety, Dave Willner, also resigned from his position.