Controversy Surrounding ChatGPT Voice Model Raises Questions About Altman
When OpenAI's board of directors fired Sam Altman at the end of 2023, board members said he "was not consistently candid in his communications." The vague statement, which all but labeled Altman a liar, raised more questions than it answered: what exactly had he not been candid about? Six months later, creatives and former employees are once again giving the public reason to question OpenAI's credibility.
This month, OpenAI claimed that "Sky," one of ChatGPT's voices, was never intended to imitate Scarlett Johansson's character in the movie "Her." The claim prompted the award-winning actress to issue a scathing public statement and threaten legal action, and the voice has since been taken down. Also this month, two prominent figures in AI who led OpenAI's internal safety team resigned. One of them, Jan Leike, said on his way out that at OpenAI "safety culture and processes have taken a backseat to shiny products." As Ed Zitron has written, it is becoming increasingly difficult to take OpenAI at its word.
First, the claim that Sky's voice was not meant to resemble Johansson's is hard to believe. OpenAI's executives seemed to wink at the similarity around the release: Altman tweeted "her" that day, and the header image used by OpenAI's Audio AGI Research Director on X was a screenshot from the movie. The intent was plain to see. Second, Johansson says Altman approached her twice, asking her to provide the voice for ChatGPT's audio assistant. OpenAI counters that Sky was voiced by an entirely different actress, an explanation many have found hard to take at face value.
Last week, Altman said he was "embarrassed" not to have known that the company required departing employees to stay silent about any negative experiences at OpenAI or forfeit their equity. These lifetime non-disparagement agreements were exposed in a Vox report that interviewed a former OpenAI employee who refused to sign. Many companies use confidentiality agreements, but terms this extreme are unusual.
In an interview this January, Altman said he did not know whether Ilya Sutskever, OpenAI's Chief Scientist, was still working at the company. Just last week, Sutskever and Leike, his co-lead on the Superalignment team, both left OpenAI. Leike said that for months the team's resources had been diverted to other parts of the company.
In March, Chief Technology Officer Mira Murati said she was not sure whether Sora had been trained on YouTube videos. Chief Operating Officer Brad Lightcap dodged the same question at the Bloomberg Technology Summit in May, adding to the confusion. Yet The New York Times has reported that senior OpenAI staff were involved in transcribing YouTube videos to train the company's AI models.