"Meta Mandates 'Made with AI' Label for AI-Generated Content"

2024-04-06

Meta announced that it will update how it handles AI-generated content across its platforms. The decision follows recommendations from its Oversight Board and a comprehensive review of company policies, which included public opinion surveys and expert consultations.

Starting in May, when Meta detects industry-standard signals that content was generated by AI, or when users voluntarily disclose that they are uploading AI-generated content, the company will begin applying a "Made with AI" label to a broader range of video, audio, and image content. The new labels cover far more than the manipulated-media policy established in 2020, which focused primarily on videos that used AI to make someone appear to say something they did not say.

This approach follows the Oversight Board's recommendations, which held that transparency and additional context are a more effective response to manipulated media than removal. According to Monika Bickert, Vice President of Content Policy at Meta, the company agrees with this approach because it avoids the risk of unnecessarily restricting freedom of speech; such content will therefore remain on the platform, with labels and contextual information added.

During its policy review, Meta held in-depth consultations with more than 120 stakeholders in 34 countries. Stakeholders broadly supported labeling AI-generated content and strongly advocated more prominent labels in high-risk situations. The company also surveyed more than 23,000 respondents in 13 countries; 82% supported warning labels on AI-generated content that depicts people saying things they did not say.

Meta's current approach to manipulated media focuses primarily on videos that use AI to make someone appear to say something they did not say. The company recognizes, however, that much has changed since the policy was established in 2020: realistic AI-generated content such as audio and photos has become increasingly common, and the technology is advancing rapidly.

Under the new policy, unless content violates other policies in the company's Community Standards, Meta will keep AI-generated content on the platform and add informative labels and context. Its network of nearly 100 independent fact-checkers will continue to review false and misleading AI-generated content; content rated false or manipulated will be ranked lower in the feed and shown with additional information.

Meta plans to begin labeling AI-generated content in May 2024 and will stop removing content solely on the basis of its manipulated-video policy in July. The timeline gives people time to become familiar with the self-disclosure process before the company stops removing certain manipulated media. As AI technology advances, Meta will continue to collaborate with industry peers and to engage with governments and civil society.