YouTube is working to stay ahead of the flood of AI-generated content appearing on the platform with a new set of tools to detect when AI-generated people, voices, and even music show up in videos. The recently updated Content ID system is expanding beyond finding copyright infringement to identifying synthetic voices performing songs, and there are also new ways to detect when fake faces appear in videos.
The “synthetic singing” voice identification tool for the Content ID system is fairly straightforward. The system will automatically detect and manage AI-generated imitations of singing voices, alerting users of the tool when they appear. Google plans to launch a pilot version of this system early next year before a broader rollout.
As for visual content, YouTube is testing a way for content creators to detect AI-generated videos that show their faces without their approval. The idea is to give artists and public figures more control over how AI-generated versions of their faces are used, particularly on the video platform. Ideally, this would prevent deepfakes or unauthorized manipulations from spreading.
Both features build on a policy quietly added to YouTube’s terms and conditions in July to address AI-generated mimicry. Affected individuals can request the removal of videos containing deepfaked aspects of themselves through YouTube’s privacy request process, a significant shift from simply labeling such videos as AI-generated or misleading content, and one that updated the removal policy specifically for AI.
“These two new capabilities build on our track record of developing technology-driven approaches to address rights issues at scale,” YouTube's VP of creator products Amjad Hanif wrote in a blog post. “We're committed to bringing this same level of protection and empowerment to the AI era.”
YouTube's AI infusion
The flip side of the AI detection tools concerns creators whose videos have been used to train AI models without their permission. Some YouTube creators have been upset that OpenAI, Apple, Nvidia, and Google itself have used their work for training without asking or compensating them. The exact plan is still in the early stages of development, but it will presumably address at least Google’s own use of creators’ content.
“We will continue to employ measures to ensure that third parties respect [YouTube’s terms and conditions]. This includes continued investments in systems that detect and prevent unauthorized access, to the point of blocking access to scrapers,” Hanif wrote. “That said, as the generative AI landscape continues to evolve, we recognize that creators may want more control over how they collaborate with third-party companies to develop AI tools. That's why we're developing new ways to give YouTube creators the option to choose how third parties can use their content on our platform.”
These announcements are part of YouTube’s effort to make AI a deeply integrated part of the platform that people trust. That’s why these kinds of protection features often appear right before or after plans like YouTube’s Brainstorm with Gemini tool for generating inspiration for a new video, not to mention anticipated features like an AI music generator, which itself pairs well with the new tool to remove copyrighted music from a video without deleting it entirely.