OpenAI is developing a new AI image detection tool.

2024-05-08

OpenAI has introduced a new tool to detect whether an image was generated by its DALL-E AI image generator, and has implemented a new watermarking method to clearly label its generated content. In a blog post, OpenAI announced that it has begun developing new source attribution methods to track and verify the origin of AI-generated content. These include an image detection classifier that uses AI to determine whether a photo was AI-generated, and a tamper-resistant watermark that embeds invisible signals in content such as audio.

The classifier predicts the likelihood that an image was created by DALL-E 3. OpenAI claims it still works even if the image is cropped, compressed, or has its saturation altered. The tool detects images created with DALL-E 3 with roughly 98% accuracy, but its performance on content from other AI models is far weaker: it identifies only 5% to 10% of images created by other generators such as Midjourney.

OpenAI has previously added content credentials to image metadata under the Coalition for Content Provenance and Authenticity (C2PA) standard; these essentially serve as a watermark carrying information about who owns an image and how it was created. OpenAI is a member of the C2PA alongside companies such as Microsoft and Adobe, and this month it also joined the C2PA's steering committee.

The company has also begun adding watermarks to Voice Engine, its text-to-speech platform, which is currently in limited preview. The image classifier and the audio watermark signals are still being refined, and OpenAI says it needs user feedback to test their effectiveness. Researchers and nonprofit news organizations can test the image detection classifier by applying for access through OpenAI's research access platform. OpenAI has been researching the detection of AI-generated content for years.
However, in 2023 it had to shut down a program that identified AI-written text, because the text classifier's accuracy remained consistently low.
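The accuracy figures reported above (about 98% on DALL-E 3 images, but only 5% to 10% on images from other generators) are per-generator true-positive rates: the fraction of AI-generated images the detector actually flags. As a minimal illustrative sketch, the following uses entirely hypothetical classifier scores, shaped to mirror those reported rates; it is not OpenAI's classifier or data.

```python
def detection_rate(scores, threshold=0.5):
    # Fraction of AI-generated images whose classifier score clears the
    # threshold, i.e. the per-generator true-positive rate.
    return sum(s >= threshold for s in scores) / len(scores)

# Hypothetical scores for 100 images from each generator, constructed
# so the rates match the figures reported in the article (~98% vs ~7%).
dalle3_scores = [0.9] * 98 + [0.1] * 2
midjourney_scores = [0.9] * 7 + [0.1] * 93

print(detection_rate(dalle3_scores))      # → 0.98
print(detection_rate(midjourney_scores))  # → 0.07
```

The gap between the two rates is why a detector trained on one generator's output cannot be read as a general-purpose AI-image detector.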