Meta, the parent company of Facebook and Instagram, has announced plans to label a broader range of artificial intelligence (AI)-generated content starting next month. In a shift from its previous policy, which mandated the removal of manipulated media, the tech giant will now remove AI-generated content only when it violates its other guidelines.
The announcement was made in a blog post on Friday, which said that Meta intends to stop removing content solely under its manipulated video policy by July. The timeline is meant to give users time to become familiar with the self-disclosure process before Meta stops removing this "smaller subset of manipulated media." Meta also plans to introduce more prominent labels for digitally created or altered content that poses a significant risk of materially deceiving the public on important issues, giving users access to additional context and information.
The update is part of Meta’s broader initiative, announced in February, to collaborate with industry partners on standards for identifying AI-generated content. This effort was further supported by recommendations from Meta’s Oversight Board, an independent entity funded by Meta. The board stressed the importance of revising the company’s AI content policies ahead of the crucial 2024 elections in the US.
Monika Bickert, Meta’s Vice President of Content Policy, said, “We agree that providing transparency and additional context is now the better way to address this content. The labels will cover a broader range of content in addition to the manipulated content that the Oversight Board recommended labeling.”