YouTube to tighten monetization rules to combat AI-generated content

YouTube is planning new monetization rules to address the proliferation of "inauthentic" and mass-produced content on its platform, a problem largely driven by the rise of generative AI tools.

The company will revise its YouTube Partner Program guidelines starting July 15, emphasizing which types of content qualify for monetization.

Although the specific wording of the policy is still pending, YouTube has made it clear that this update aims to assist creators in understanding what is considered "inauthentic" in today's landscape. According to YouTube’s support resources, videos eligible for monetization must be “original” and “authentic,” criteria that some creators argue have become increasingly nebulous in the age of AI.

This update comes amid rising apprehensions about an influx of poor-quality, AI-generated content—termed “AI slop”—filling YouTube. Typically, these videos feature mechanical voiceovers combined with stock images, reused clips, or entirely fabricated news stories. Some channels that utilize AI-generated music and storytelling have garnered millions of views, raising concerns about the quality and credibility of content on the platform.

While creators have expressed worries that the new rules might jeopardize formats like reaction videos or those that incorporate licensed snippets, YouTube’s Head of Editorial and Creator Liaison, Rene Ritchie, noted that this policy update serves as a clarification rather than an additional restriction. In a video released on Tuesday, Ritchie reassured creators that content types such as reactions or commentary will still be permitted, provided they showcase originality and offer value.

“This is a minor adjustment to our existing policies,” he stated, underlining that mass-produced, repetitive material has traditionally been ineligible for monetization.

Nonetheless, with the capabilities of AI facilitating the rapid production of low-effort content, YouTube appears ready to establish a clearer boundary. Behind the scenes, the platform is likely preparing for stricter enforcement, especially as AI-generated misinformation and scams continue to pose risks to its integrity.
