On Monday, Meta announced that it is renaming the "Made with AI" label in its apps to "AI info". Clicking the new label will surface more relevant information. The change follows complaints from users whose photos were mistakenly tagged "Made with AI", which sparked controversy.
Former White House photographer Pete Souza ran into the problem himself when he uploaded a film photo of a basketball game taken 40 years ago and found it tagged with the label. He speculated that using Adobe's cropping tool and flattening the image may have triggered the misclassification.
Meta spokesperson Kate McLaughlin said, "We have been committed to optimizing our AI products and working closely with industry partners to improve our AI labeling methods." The new label is meant to convey more accurately that content may merely have been modified with AI tools rather than generated entirely by AI.
The root of the problem appears to lie in how image-editing software such as Adobe Photoshop writes image metadata, and in how platforms interpret that metadata. As Meta expanded the scope of its AI-content labeling policy, some real photographs were mislabeled "Made with AI" on Instagram, Facebook, and Threads.
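To illustrate the kind of metadata signal involved, here is a minimal sketch of how a platform might scan an image's XMP text for AI-related provenance markers. The marker strings are drawn from IPTC's digital source type vocabulary, but the matching logic and sample XMP snippets are illustrative assumptions; real detection pipelines (and C2PA in particular) rely on signed manifests, not plain string matching.

```python
# Hypothetical sketch of AI-provenance detection via XMP metadata.
# Real platform logic is undisclosed; this only shows why an edited
# photo's metadata can look different from an untouched one's.

AI_MARKERS = (
    # IPTC digital source type: media generated by an AI model
    "trainedAlgorithmicMedia",
    # IPTC digital source type: composite that includes AI-generated elements
    "compositeWithTrainedAlgorithmicMedia",
)

def looks_ai_touched(xmp_packet: str) -> bool:
    """Return True if the XMP text contains any known AI-related marker."""
    return any(marker in xmp_packet for marker in AI_MARKERS)

# A film scan that was merely cropped may carry only editor metadata,
# with no AI marker at all (sample snippet, not real file output):
plain_xmp = "<xmp:CreatorTool>Adobe Photoshop</xmp:CreatorTool>"

# An image edited with a generative tool might record an AI source type:
ai_xmp = (
    "<Iptc4xmpExt:DigitalSourceType>"
    "http://cv.iptc.org/newscodes/digitalsourcetype/"
    "compositeWithTrainedAlgorithmicMedia"
    "</Iptc4xmpExt:DigitalSourceType>"
)

print(looks_ai_touched(plain_xmp))  # False
print(looks_ai_touched(ai_xmp))     # True
```

The sketch makes the ambiguity concrete: any heuristic keyed to editor-written metadata will flag some images whose edits had nothing generative about them, which is consistent with the mislabeling Souza described.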
According to McLaughlin, the new label will debut in the mobile apps and later expand to the web; the rollout across all of Meta's platforms is already underway.
When users click the new label, they will see explanatory text similar to the old label's: why the label was applied, and that it can cover both images generated entirely by AI and images edited with AI-powered tools such as Generative Fill. Metadata-labeling standards like C2PA should eventually make it easier to distinguish AI-generated images from real ones, but realizing that vision will take time.