OpenAI's new artificial intelligence video generation model, Sora, has amazed the world with its lifelike demonstrations.
However, on the afternoon of February 20th, the company's flagship product, ChatGPT, began returning nonsensical outputs to users, prompting a wave of complaints on X.
Some of ChatGPT's outputs were a hard-to-follow mix of Spanish and English, while others consisted of jumbled words or repeated phrases, even though the chatbot had not been instructed to produce anything of the sort.
One user compared these seemingly random, incoherent strings of words to the unsettling alien writing in Jeff VanderMeer's groundbreaking 2014 weird-horror novel "Annihilation".
Some users joked that these strange outputs marked the beginning of a "robot uprising," like those depicted in science fiction films such as "Terminator" and "The Matrix".
OpenAI acknowledged the issue by 3:40 PM on February 20th and posted updates on its public status page. At 3:47 PM Pacific Standard Time, the company said the problem had been identified and was being "fixed," and shortly before 5 PM it added that it was "continuing to monitor the situation".
At 10:30 AM on February 21st, ChatGPT's official verified account on X announced, "There were some errors yesterday, but it should be back to normal now!"
Nevertheless, even if the issue was quickly resolved, the origin of these unexpected errors and the stream of incoherent responses raises doubts about the fundamental reliability and integrity of ChatGPT, and about the suitability of it and other OpenAI products (such as the large language models GPT-4 and GPT-3.5 that power it) for enterprise use, especially for safety-critical tasks in fields like transportation, healthcare and pharmaceuticals, power, and engineering.