"OpenAI's GPT Store: Potent yet Problematic"

2024-03-21

At OpenAI's first developer conference in November, CEO Sam Altman announced the launch of GPTs, custom chatbots powered by OpenAI's generative AI models. He described GPTs as a way to "accomplish a variety of tasks," from programming to learning complex scientific topics to getting exercise advice. "Because GPTs combine instructions, expanded knowledge, and actions, they can be more helpful to you," Altman said. "You can create a GPT for almost anything."

He was not exaggerating about "almost anything." It has since emerged that the GPT Store, OpenAI's official marketplace for GPTs, is filled with strange and potentially copyright-infringing GPTs, suggesting that OpenAI's moderation barely scratches the surface. A quick search turns up GPTs that claim to generate artwork in the style of Disney and Marvel properties but merely funnel users to third-party paid services, as well as GPTs that tout their ability to bypass AI content detection tools such as Turnitin and Copyleaks.

Regulatory gaps

To list a GPT on the GPT Store, developers must verify their user profiles and submit their GPTs to OpenAI's review system, which combines manual and automated review. An OpenAI spokesperson described the process this way: "We use a combination of automated systems, manual reviews, and user reports to identify and evaluate GPTs that may violate our policies. Violations may result in penalties such as warnings, sharing restrictions, or exclusion from the GPT Store or monetization."

Building a GPT requires no programming experience, and a GPT can be as simple or as complex as its creator wants. Developers describe the functionality they want in OpenAI's GPT-building tool, GPT Builder, which then attempts to assemble a GPT that performs those functions.
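GPT Builder itself is a no-code, conversational tool, so there is nothing to program. But for readers who want a concrete sense of what a GPT bundles together, instructions, extra knowledge, and actions, here is a rough sketch of the analogous pieces in OpenAI's separate, developer-facing Assistants API. The assistant name, instructions, model string, and tool choice below are illustrative assumptions, not a description of how GPT Builder works internally.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A GPT pairs a base model with instructions, optional knowledge, and actions.
# The Assistants API exposes roughly the same pieces programmatically.
assistant = client.beta.assistants.create(
    name="Workout Coach",  # hypothetical example, mirroring Altman's exercise-advice use case
    instructions="You give concise, beginner-friendly exercise advice.",
    model="gpt-4-turbo-preview",  # assumed model name for illustration
    tools=[{"type": "code_interpreter"}],  # an optional built-in "action"
)

# Conversations happen in threads; a run executes the assistant on a thread.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Suggest a 20-minute routine with no equipment.",
)
run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id,
)
print(run.id, run.status)
```

Again, this is only an analogy: GPTs in the Store are configured in plain language rather than code, and OpenAI has not said that GPT Builder uses this API under the hood.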
Perhaps because of this low barrier to entry, the GPT Store has expanded rapidly: OpenAI said in January that it hosted roughly 3 million GPTs. But that growth appears to have come at the expense of quality, and of compliance with OpenAI's own terms.

Copyright issues

Several GPTs in the GPT Store are lifted from popular movie, TV, and video game franchises, and were neither created nor authorized by the owners of those franchises. One GPT creates monsters in the style of "Monsters, Inc.," while another promises a text-based adventure game set in the "Star Wars" universe. These GPTs, along with others that let users converse with trademarked characters such as Aang and Zuko from "Avatar: The Last Airbender," set the stage for copyright disputes.

Kit Walsh, a senior staff attorney at the Electronic Frontier Foundation, explains: "These GPTs can be used for creating derivative works or for infringement (a derivative work is a transformation of an original work that is itself protected by copyright; a sufficiently transformative one can avoid infringement claims). Of course, those who engage in infringement may be held responsible, and the creator of an otherwise lawful tool may also be held responsible if they encourage users to use it in infringing ways. There are also trademark issues when trademarked names are used to identify goods or services, since users may be confused about whether the tool is endorsed or operated by the trademark owner."

Thanks to the safe harbor provisions of the Digital Millennium Copyright Act, OpenAI is unlikely to be held liable for copyright infringement by GPT creators. Those provisions protect OpenAI and other platforms that host infringing content (such as YouTube and Facebook), as long as the platforms comply with the law's requirements and take down specific instances of infringement on request. Still, it is not a good look for a company already embroiled in intellectual property litigation.

Academic misconduct

OpenAI's terms explicitly prohibit developers from building GPTs that promote academic dishonesty. Yet the GPT Store is full of GPTs claiming to bypass AI content detectors, including detectors sold to educators through anti-plagiarism scanning platforms. One GPT bills itself as a "sophisticated" paraphrasing tool that can "evade" popular AI content detectors such as Originality.ai and Copyleaks. Another, Humanizer Pro, ranked second in the GPT Store's writing category, claims to "humanize" content to bypass AI detectors while preserving "meaning and quality" and achieving a "100% human" score.

Some of these GPTs are thinly veiled funnels to paid services. Humanizer, for example, invites users to try a "premium plan" with the "most advanced algorithms," which passes the text entered into the GPT to a plugin on the third-party website GPTInf. A GPTInf subscription costs $12 per month for 10,000 words, or $8 per month on an annual plan, a pricey add-on on top of OpenAI's $20-per-month ChatGPT Plus.

We have written before about the unreliability of AI content detectors. Beyond our own tests, academic research has also shown them to be inaccurate and unreliable. The fact remains, however, that OpenAI is allowing tools on the GPT Store that promote academic misconduct, even if they do not work as advertised.

An OpenAI spokesperson stated: "GPTs that engage in academic dishonesty, including cheating, violate our policies. This includes GPTs that claim to be used to bypass academic integrity tools like anti-plagiarism detectors. We have seen some GPTs used to 'humanize' text. We are still learning from real-world usage of GPTs, but we understand that users may prefer AI-generated content that doesn't sound like AI for various reasons."

Impersonation

OpenAI's policy also prohibits GPT developers from creating GPTs that impersonate people or organizations without their "consent or legal rights." Yet the GPT Store contains many GPTs that claim to represent the views of real people or to mimic their personalities. Searching for names like "Elon Musk," "Donald Trump," "Leonardo DiCaprio," "Barack Obama," and "Joe Rogan" returns dozens of GPTs, some obviously satirical, others less so, that simulate conversations with those namesakes. Some GPTs present themselves not as individuals but as authorities on well-known companies' products, such as MicrosoftGPT, an "expert from Microsoft."

Given that many of the targets are public figures and some of the GPTs are clearly parodies, do they rise to the level of impersonation? That is for OpenAI to clarify. The spokesperson said: "We allow creators to instruct their GPTs to respond in the 'style' of a specific real person as long as they do not impersonate the person, such as being named after the real person, being instructed to fully mimic them, or using their likeness as the GPT's profile picture."

Escaping limitations

The GPT Store also hosts some curious attempts to break free of the restrictions on OpenAI's models, though none of them get very far.
Multiple GPTs on the marketplace invoke DAN ("Do Anything Now"), a popular jailbreak prompting technique meant to get the model to respond to prompts it would normally refuse. The spokesperson said: "GPTs that describe or instruct how to evade OpenAI's safety measures or violate OpenAI's policies are against our policies. GPTs that attempt to guide the model's behavior in other ways, including attempting to make the model generally more permissive without violating our usage policies, are allowed."

Growing pains

When OpenAI launched the GPT Store, it billed it as a collection of powerful, productivity-boosting AI tools curated by experts. And it is that, flaws and all. But it has also quickly become a breeding ground for spammy, legally dubious, and potentially harmful GPTs, or at least GPTs that openly flout its rules. If this is the state of the GPT Store today, monetization may bring a whole new set of problems. OpenAI has promised that GPT developers will eventually be able to "make money based on the number of people using their GPTs," and may even offer subscriptions to individual GPTs. But what will Disney or the Tolkien Estate do when the developers of unauthorized Marvel or "Lord of the Rings"-themed GPTs start making money?

OpenAI's motivation for launching the GPT Store is clear. Apple's App Store model has proven enormously profitable, and OpenAI is simply trying to replicate it. GPTs are built and hosted on OpenAI's platform, and promoted and evaluated there as well. And, as of a few weeks ago, users are nudged toward ChatGPT Plus subscriptions directly from the ChatGPT interface.

But the GPT Store is running into many of the growing pains that large digital marketplaces for apps, products, and services faced in their early days. Beyond the spam problem, a recent report from The Information revealed that GPT Store developers are struggling to attract users, partly because of limited back-end analytics and a poor onboarding experience.

One might have expected OpenAI, for all its emphasis on screening and safeguards, to make an effort to avoid these obvious pitfalls. The reality appears to be otherwise. The GPT Store is a mess right now, and unless something changes soon, it may stay that way.