YouTube is developing tools to detect AI-generated voices and faces in videos
YouTube has announced it’s building tools that will be able to detect AI-generated voices and likenesses of people in videos on its platform.
In a blog post on Thursday (September 5), the video platform said it’s working on a “synthetic-singing identification technology” that will enable YouTube partners to automatically detect content that simulates singing voices.
The technology will exist within Content ID, the tool that YouTube developed in 2007 to identify music posted to its platform, enabling music rights holders to be paid for unlicensed uploads of their music. The tool was the breakthrough feature that ended a long-running dispute between YouTube and music rights holders over unauthorized music on the platform.
YouTube also said it’s working on a tool that will allow people from various industries, including musicians, actors, athletes and content creators, to “detect and manage” AI-generated content that shows their faces.
YouTube also stressed that scraping content on its platform without permission is a violation of its terms of service – a clear shot across the bow of people or businesses who would use existing YouTube videos to create AI-generated content without authorization.
“As the generative AI landscape continues to evolve, we recognize creators may want more control over how they collaborate with third-party companies to develop AI tools,” YouTube added in the blog post.
“That’s why we’re developing new ways to give YouTube creators choice over how third parties might use their content on our platform. We’ll have more to share later this year.”
The new tools at YouTube will likely be well received by the music industry, which has been at the forefront of efforts to rein in unauthorized use of people’s likeness and voice in AI-generated content.
The industry has thrown its weight behind a number of legislative efforts to combat the problem, including the No FAKES Act, a bill introduced in the US Senate this past July which would establish, for the first time, a right to one’s own likeness and voice under US federal law.
That bill, and a similar bill working its way through the US House of Representatives, known as the No AI FRAUD Act, would grant individuals the ability to sue when their voice or likeness has been imitated without permission in AI-generated content.
YouTube’s move is part of a growing effort by media platforms to rein in misuse of AI technology. Both YouTube and TikTok previously announced policies that require AI-generated content to be labelled as such on their platforms. In July, YouTube announced a policy allowing people to file takedown requests for AI-generated videos that mimic their likeness.
At the same time, YouTube, like other platforms, is working to develop AI tools for its creators.
In September of last year, YouTube introduced a suite of AI-powered tools for creators, including Dream Screen, a tool that will allow creators of YouTube Shorts to generate video or backgrounds for video by typing an idea into a prompt.
The platform also announced a new mobile app called YouTube Create, similar to TikTok’s CapCut, for editing short-form videos on the go.
YouTube has also been in talks with the major recording companies to license music for the development of AI-powered music-making tools. News reports suggest that earlier efforts to sign up artists for its AI tools yielded limited results.

Music Business Worldwide