Meta will require labels on more AI-generated content


Meta is updating its AI-generated content policy and will add a “Made with AI” label beginning in May, the company announced. The policy will apply to content on Instagram, Facebook, and Threads.

Acknowledging that its current policy is “too narrow,” Meta says it will start labeling more video, audio, and image content as being AI-generated. Labels will be applied either when users disclose the use of AI tools or when Meta detects “industry standard AI image indicators,” though the company didn’t provide more detail about its detection system.

The changes are informed by recommendations and feedback from Meta’s Oversight Board and update the manipulated media policy the company created in 2020. That older policy prohibited videos created or edited with AI tools to make a person appear to say something they didn’t say, but it didn’t cover the much wider range of AI-generated content that has since flooded the web.

“In the last four years, and particularly in the last year, people have developed other kinds of realistic AI-generated content like audio and photos, and this technology is quickly evolving,” Meta wrote in a blog post. “As the Board noted, it’s equally important to address manipulation that shows a person doing something they didn’t do.”

Meta is also changing how it moderates AI-generated material. Beginning in July, it will stop removing such content solely for being AI-generated, as long as it doesn’t violate other community guidelines. This change also stems from the Oversight Board, which recommended adding more context to the content rather than restricting it. Material that breaks other rules, such as policies against bullying, voter interference, and harassment, will still be removed regardless of whether it was created with AI tools.
