OpenAI plagued by safety concerns

2024-07-15

OpenAI is leading the race to develop AI that is as intelligent as humans. Yet employees keep airing deep concerns, in the press and on podcasts, about safety at the $80 billion nonprofit research lab. The latest report comes from The Washington Post, where an anonymous source claimed that OpenAI rushed through safety testing and celebrated its product's launch before ensuring it was safe. "They planned the launch afterparty before knowing whether it was safe to launch," an anonymous employee told The Washington Post. "We basically failed at the process."

Safety problems keep surfacing at OpenAI. After the company's safety team was dissolved following the departure of co-founder Ilya Sutskever, current and former employees signed an open letter demanding better safety and transparency practices from the startup. Key researcher Jan Leike resigned shortly afterward, writing that at OpenAI, "safety culture and processes have taken a backseat to shiny products."

Safety sits at the core of OpenAI's charter, which states that if a competitor reaches artificial general intelligence (AGI), OpenAI will assist other organizations in advancing safety rather than continue to compete. The company says it is committed to solving the safety problems inherent in such a large, complex system; in the name of safety, it even keeps its proprietary models private rather than open (a choice that has drawn mockery and lawsuits). Yet these warnings suggest that safety, however central to the company's culture and structure, is being sidelined.

"We are proud of our track record of providing the most capable and safest AI systems, and we believe in our scientific approach to addressing risk," said OpenAI spokesperson Taya Christianson. "Rigorous debate is crucial given the significance of this technology, and we will continue to engage with governments, civil society, and other communities around the world in service of our mission."

OpenAI, and others studying the emerging technology, point out that the safety stakes are enormous. "Frontier AI development poses urgent and growing risks to national security," said a report commissioned by the US State Department in March. "The rise of advanced AI and artificial general intelligence (AGI) has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons."

The alarm at OpenAI also follows last year's boardroom coup, when CEO Sam Altman was briefly ousted. The board said he was removed for not being "consistently candid in his communications," and the investigation that followed did little to reassure employees.

OpenAI spokesperson Lindsey Held told The Washington Post that the GPT-4o launch did not cut corners on safety, but another anonymous company representative acknowledged that the safety review timeline had been compressed to a single week. "We are reevaluating our entire approach," the anonymous representative told the Post. "This [compressing the timeline] was definitely not best practice."

Facing mounting controversy, OpenAI has tried to calm fears with a string of well-timed announcements.
Last week, it announced a partnership with Los Alamos National Laboratory to explore how advanced AI models such as GPT-4o can safely aid bioscientific research, repeatedly pointing to Los Alamos's own safety record in the same announcement. The next day, an anonymous spokesperson told Bloomberg that OpenAI had created an internal scale to track how close its large language models are getting to artificial general intelligence (AGI).

These safety-focused announcements read like defensive window dressing in the face of growing criticism of the company's safety practices. OpenAI is clearly in the hot seat, but public relations alone will not safeguard society. What really matters is the potential impact on people beyond the Silicon Valley bubble if OpenAI keeps building AI without strict safety protocols: ordinary people have no say in the privatized development of AGI, yet they have no choice in how well they will be protected from OpenAI's creations.

"AI tools can be revolutionary," Federal Trade Commission (FTC) Chair Lina Khan told Bloomberg last November. But "as it stands," she said, she was concerned that "the key inputs of these tools are controlled by a relatively small number of companies."

If the many allegations against OpenAI's safety protocols are accurate, they raise serious questions about its fitness to act as steward of AGI, a role the organization has essentially assigned to itself. Letting a single group in San Francisco control potentially society-altering technology is cause for concern, and the demand for transparency and safety, even from within the company, is more urgent than ever.