OpenAI, the company behind ChatGPT, has developed a tool that can determine whether a piece of text was generated by a large language model. But out of concern over how customers would react, the company has been hesitant to release it.
The detection system relies on text watermarking, which subtly adjusts the way ChatGPT selects words; a tool that knows the watermark pattern can then spot those adjustments with high accuracy.
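OpenAI has not published the details of its scheme, but the description matches a well-known approach from public research (e.g., the "green list" watermark of Kirchenbauer et al., 2023): hash the preceding token to pick a favored subset of the vocabulary, nudge the model toward those tokens during generation, and later count how often tokens land in their favored subset. The sketch below illustrates that published technique, not OpenAI's actual implementation; every name and parameter here is illustrative.

```python
import hashlib
import math
import random

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    # Seed a PRNG with a hash of the preceding token so the same "green"
    # subset of the vocabulary can be recomputed at detection time.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * fraction)))

def bias_logits(prev_token: str, logits: dict[str, float],
                vocab: list[str], delta: float = 2.0) -> dict[str, float]:
    # Generation side: nudge the scores of green tokens upward by delta,
    # subtly shifting which words the model tends to pick.
    greens = green_list(prev_token, vocab)
    return {tok: score + (delta if tok in greens else 0.0)
            for tok, score in logits.items()}

def detect(tokens: list[str], vocab: list[str], fraction: float = 0.5) -> float:
    # Detection side: count how often each token falls in the green list
    # seeded by its predecessor, and report a z-score against chance.
    # Unwatermarked text scores near 0; watermarked text scores far higher.
    n = len(tokens) - 1
    hits = sum(tok in green_list(prev, vocab, fraction)
               for prev, tok in zip(tokens, tokens[1:]))
    return (hits - fraction * n) / math.sqrt(n * fraction * (1 - fraction))
```

Because the bias only reshuffles choices among plausible words, the watermark is designed to be statistically detectable over a long passage while remaining invisible to a human reader.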
According to The Wall Street Journal, releasing the detection tool "is just a matter of pressing a button." A survey, however, found that if ChatGPT watermarked its output while competing chatbots did not, nearly 30% of ChatGPT users would use the tool less.
Why hasn't OpenAI released the ChatGPT detection tool?
According to an OpenAI spokesperson, "the text watermarking approach we are developing is technically promising, but we are weighing significant risks while we research alternatives. Given the complexities involved and the likely impact on the broader ecosystem beyond OpenAI, we believe the deliberate approach we have taken is necessary."
The company has also updated a page on its website to explain some of its reasoning in more detail. One of its main concerns is that watermarks could disproportionately affect certain groups, especially non-native English speakers who rely on AI as a writing tool.
OpenAI also worries about how easily the watermark can be defeated, stating that "while the watermark is highly accurate and even effective against localized tampering, such as paraphrasing, it is less robust against globalized tampering; for example, using translation systems, rewording with another generative model, or asking the model to insert a special character between every word and then deleting that character, making it trivial for malicious actors to circumvent."
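To see why the character trick works, assume a scheme keyed on adjacent-token pairs like the sketch above (again, an assumption; OpenAI has not disclosed its design). Inserting a marker between words changes every pair the generator biases, and once the user strips the marker, the detector is left scoring pairs the generator never saw:

```python
# Hypothetical illustration of the "insert then delete" evasion. The
# generator's biased choices are keyed on (previous token, token) pairs, so
# stripping the inserted marker leaves the detector scoring a different set
# of pairs, and green-list hits fall back to chance.
marked  = "the * quick * brown * fox".split()
cleaned = [tok for tok in marked if tok != "*"]

generator_pairs = set(zip(marked, marked[1:]))
detector_pairs  = set(zip(cleaned, cleaned[1:]))

print(generator_pairs & detector_pairs)  # set() -- no pair survives the edit
```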
On the same page, OpenAI notes that its current focus is on developing detection tools for audiovisual content, since images, audio, and video are "widely considered to pose higher risks at this stage of our models' capabilities."
Why is effective AI detection so important?
According to The Wall Street Journal, a recent survey by the Center for Democracy and Technology found that 59% of middle and high school teachers believe some students have used AI in their assignments, up from 42% the year before. The debate over AI in education continues.
One of the issues is the lack of effective AI detection tools. Plenty of such tools are on the market, but the most capable ones are often paywalled, and none are free of false positives and other failures. And while there are some ways to judge whether a text was written by AI, detection will only become harder as the technology behind these large language models grows more sophisticated.
Within OpenAI, opposition to withholding the tool is growing as internal stakeholders become convinced that watermarking does not affect the quality of ChatGPT's output. In internal documents seen by The Wall Street Journal, employees involved in testing the tool wrote, "Now that we know watermarking does not degrade output quality, our argument for not watermarking text is weak." A recent summary of a meeting on AI detection put it more bluntly: "Without this, we risk losing credibility as responsible actors."