Meta is updating the way it labels content edited or manipulated using generative artificial intelligence (AI) on Instagram, Facebook, and Threads. In a recent blog post, Meta announced that its "AI Information" label will appear in a menu at the top right corner of images and videos edited with AI, rather than directly below the user's name.
Users can click this menu to check for available AI information and read about any adjustments that were made. Previously, Meta applied the "AI Information" label to all AI-related content, whether it had been lightly retouched with AI-enabled tools such as Photoshop or generated entirely from a prompt.
The company stated that these changes are meant to "better reflect the extent to which AI is used in images and videos on the platform."
The label was introduced in July, after Meta faced criticism for mistakenly labeling authentic photos taken by creators and photographers with the "Made with AI" label. "For content we detect as being generated by AI tools, we will still show the 'AI Information' label and share whether the content is marked by industry-shared signals or self-disclosed," Meta stated in the update, adding that these changes will be gradually rolled out starting next week.
The "industry-shared signals" mentioned by Meta refer to systems like Adobe's C2PA-backed Content Credentials metadata, which can be attached to any content created or edited with its Firefly generative AI tool. Similar systems exist, such as Google's SynthID digital watermark, which Google says is applied to content generated by its AI tools. However, Meta has not disclosed which of these systems it will check, or how many.
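For readers curious what checking such a signal might look like in practice: the C2PA specification embeds Content Credentials in JPEG files inside APP11 marker segments carrying JUMBF boxes. The sketch below is a simplified, hypothetical detector that only looks for an APP11 segment with the JUMBF common identifier; it is not a full C2PA parser and says nothing about how Meta actually performs these checks.

```python
import struct

def has_app11_jumbf(jpeg_bytes: bytes) -> bool:
    """Scan JPEG marker segments for an APP11 (0xFFEB) segment whose
    payload begins with the JUMBF common identifier b'JP' -- the place
    C2PA Content Credentials are embedded in JPEGs. Simplified sketch:
    it does not validate or parse the manifest itself."""
    i = 2  # skip the SOI marker (0xFFD8)
    n = len(jpeg_bytes)
    while i + 4 <= n:
        if jpeg_bytes[i] != 0xFF:
            break  # entropy-coded data or malformed stream; stop scanning
        marker = jpeg_bytes[i + 1]
        if marker == 0xD9:  # EOI marker: end of image
            break
        # Segment length field counts itself (2 bytes) plus the payload
        seg_len = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        payload = jpeg_bytes[i + 4:i + 2 + seg_len]
        if marker == 0xEB and payload[:2] == b"JP":
            return True
        i += 2 + seg_len
    return False

# Minimal synthetic JPEG: SOI + one APP11 segment tagged b'JP' + EOI
app11_payload = b"JP" + b"\x00" * 6
segment = b"\xff\xeb" + struct.pack(">H", 2 + len(app11_payload)) + app11_payload
fake_jpeg = b"\xff\xd8" + segment + b"\xff\xd9"
print(has_app11_jumbf(fake_jpeg))  # True for this synthetic example
```

A real verifier would go much further, parsing the JUMBF boxes and validating the manifest's cryptographic signatures; watermark-based schemes like SynthID work differently again, embedding the signal in the pixels rather than in metadata.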
Still, removing the visible label from authentic but AI-manipulated images may make it easier for users to be misled, especially as the generative AI editing tools on new smartphones produce increasingly realistic results.