OpenAI Store Violations: Proliferating Rule-Breaking GPTs Challenge Content Moderation Capabilities

2024-09-05

When OpenAI announced its GPT marketplace in November of last year, letting anyone create and share customized versions of ChatGPT, the company boasted that "the best GPTs will be invented by the community." Since the store went live, however, a string of violations has come to light, raising questions about OpenAI's ability to moderate content.

According to Gizmodo's analysis, many developers have built GPTs on the OpenAI platform that flagrantly violate company policies, including AI pornography generators, tools that help students cheat, and bots dispensing unverified medical and legal advice. These violations are not merely numerous; some are openly promoted on the store's homepage.

On September 2nd, the OpenAI store prominently featured three GPTs that appeared to violate its policies: a chatbot billing itself as a "therapist-psychologist," a tool offering "fitness, exercise, and diet coaching," and a cheating tool called BypassGPT, which has been used more than 50,000 times to help students evade AI-writing detection systems.

More concerning still, a search for "NSFW" (not safe for work) readily turns up violating GPTs such as the "NSFW AI Art Generator," which its maker, Offrobe AI, explicitly designed to generate AI pornography and which has been used more than 10,000 times.

Milton Mueller, director of the Internet Governance Project at the Georgia Institute of Technology, remarked, "OpenAI issues apocalyptic warnings about artificial intelligence and claims to be saving the world, yet it seems to struggle to enforce even basic policies such as its ban on AI pornography. That is particularly ironic."

After Gizmodo's report, OpenAI quickly removed hundreds of violating GPTs from the store, including AI pornography generators, deepfake tools, and sports-betting advice bots. Even so, a large number of violating applications remain active on the platform, among them popular cheating tools and medical-consultation bots.

OpenAI says it has mechanisms in place to catch tools that break its rules, but those measures have plainly failed to curb the violations. Responding to the criticism, OpenAI spokesperson Taya Christiansen said, "We have taken action against violating content, using a combination of automated systems, human review, and user reports to assess potentially violating GPTs, and we provide reporting tools."
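Christiansen's description follows a familiar trust-and-safety pattern: an automated classifier screens listings first, anything it flags is escalated to human reviewers, and user reports serve as a backstop. OpenAI's internal tooling is not public, but a minimal sketch of what such an automated first pass could look like, built purely for illustration on OpenAI's publicly documented Moderation API, might resemble the following. The function name and the routing decision are assumptions for the example, not OpenAI's actual pipeline.

```python
# Illustrative first-pass screen using OpenAI's public Moderation API.
# OpenAI's internal review pipeline is not public; this sketch only shows
# how an automated layer like the one Christiansen describes might flag
# a GPT's name and description before any human review.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def screen_gpt_listing(name: str, description: str) -> bool:
    """Return True if the listing should be escalated to human review."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=f"{name}\n{description}",
    ).results[0]
    # `flagged` is True when any category (sexual content, harassment,
    # self-harm, etc.) exceeds the moderation model's threshold.
    return result.flagged


if __name__ == "__main__":
    if screen_gpt_listing(
        "NSFW AI Art Generator",
        "Generates uncensored adult images on demand.",
    ):
        print("Listing flagged: route to human review queue.")
```

An automated screen like this can only catch what a classifier recognizes, which is why the human-review and user-report layers Christiansen mentions matter; the violations Gizmodo found suggest the gaps lie in how consistently those later layers are applied.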

Notably, some GPT developers clearly know their creations violate OpenAI's rules and try to deflect responsibility with vague descriptions or disclaimers. Many legal and medical GPTs, by contrast, claim outright to be experts and dispense unverified advice, which heightens the risk that users will misplace their trust.

Research from Stanford University reinforces the concern: more than half of the answers that OpenAI's GPT-4 and GPT-3.5 models give to legal questions contain fabricated information, deepening public concern about both the legality and the accuracy of GPT applications.

As OpenAI prepares to introduce a usage-based revenue-sharing model to encourage developers to build and sell their creations on its platform, effective content moderation becomes an urgent problem to solve. As Mueller puts it, "No matter how advanced the technology, it cannot prevent violations entirely. The key is to balance automated moderation with human judgment to ensure the platform develops in a healthy and orderly way."