It’s time for streaming services to act on AI music
The following MBW Views op/ed comes from Ed Newton-Rex (pictured inset), CEO of the ethical generative AI non-profit, Fairly Trained.
A veteran expert in the world of gen-AI, Newton-Rex is also the former VP Audio at Stability AI, and the founder of JukeDeck (acquired by TikTok/ByteDance in 2019).
In this op/ed, Newton-Rex argues that “music made with AI products that don’t license their training data should either be banned [from DSPs] or should be downweighted in royalty calculations and recommendations…”
Over to Ed…
In April, when I wrote an article highlighting striking similarities between Suno’s output and copyrighted music (and later when I did the same for Udio), I gave them the benefit of the doubt. It was possible they had signed deals that let them train on the major labels’ music. It was even theoretically possible – though unlikely – that they hadn’t trained on copyrighted music at all, and the numerous likenesses were down to an uncanny level of coincidence.
Now, though, there is no room for doubt. The RIAA’s lawsuits against both companies reveal that there were no such deals in place for training. And the companies’ responses to the lawsuits admit – both using identical language – that the recordings they trained on “presumably included recordings whose rights are owned by the [major record labels]”.
Suno’s response goes even further, saying their “training data includes essentially all music files of reasonable quality that are accessible on the open Internet, abiding by paywalls, password protections, and the like”.
There was always going to come a time when streaming services had to make a call on what to allow on their platforms when it came to generative AI. That time is now.
Up until now, Spotify has had no policy explicitly banning AI-generated music. In 2023, Daniel Ek said that tools that mimic artists were not acceptable; these may be forbidden under the company’s Deceptive Content policy (the wording isn’t entirely clear). But, in the same interview, Ek specifically called out AI music that didn’t directly impersonate artists as something they would not ban at this stage.
And there are signs that, as a result, AI music is all over the platform. Chris Stokel-Walker recently wrote for Fast Company about a number of bands with hundreds of thousands of monthly listeners that are suspected to be AI-generated. Users of these AI music platforms disclose that they’re sharing AI music to DSPs.
People have reported being recommended music on Spotify in their Discover Weekly playlists that is clearly AI-generated. And, this month, an AI-generated song reached number 48 in the German pop chart, with more than 4 million Spotify plays to date.
For DSPs to continue to allow this is to actively permit the exploitation of musicians’ copyrighted work without a license to do so.
To quote more than 200 artists who signed an open letter about AI music earlier this year: “Some of the biggest and most powerful companies are, without our permission, using our work to train AI models. These efforts are directly aimed at replacing the work of human artists with massive quantities of ‘sounds’ […] that substantially dilute the royalty pools that are paid out to artists. For many working musicians, artists and songwriters who are just trying to make ends meet, this would be catastrophic.”
Up until now, there was some doubt whether Udio and Suno were doing what these artists were worried about: training on their music. That doubt is now gone.
When DSPs distribute music made with AI models trained on musicians’ work without a license, the royalty dilution these artists warned about is already underway.
Musicians’ royalties are being diluted by products that are built using their work against their wishes. And DSPs are facilitating this.
What can be done?
First up, it’s worth saying that I don’t think DSPs should ban all AI music. There are clearly good use-cases for AI in music creation; if training data is licensed, these use-cases are worth supporting, at least in my book. (I do think a music streaming service will emerge that does explicitly reject all AI music, as Cara has done in the image space. And it will probably do well. But there are good reasons for most DSPs not to take such a blanket approach.)
As table stakes, DSPs should follow the example of other media platforms – Instagram and TikTok, for example – and label content that is generated by AI.
That way, music fans can at least choose what they listen to and, therefore, what they support. Require uploaders to label the AI music they upload, and introduce a post-upload moderation process for tracks that slip through the cracks. This is perfectly feasible. One hopes most uploaders will be honest – people generally prefer to be – and, for those who aren’t, there are a number of third-party systems that can detect AI music with a high degree of accuracy.
Of course, there is the question of how much AI involvement should trigger the application of a label.
Typing a text prompt and distributing the output on Spotify is clearly very different to using a MIDI generator as inspiration.
But this difficulty is not insurmountable and is not enough reason to avoid labeling entirely. DSPs simply need to be clear in their policies and apply them to everyone equally. As a starting point, a label could be applied if any generative AI has been used in the creation of the track at all.
But I think DSPs should go further than labeling. Music made with AI products that don’t license their training data should either be banned or should be downweighted in royalty calculations and recommendations.
Otherwise, it is going head to head with the music it is trained on – and this cannot be fair. (And if at this point you’re at all tempted to say, ‘But humans are allowed to learn from existing music and compete with it’ – please don’t. Training an AI model is nothing like human learning, and its effects on the market are also wildly different.)
A problem here is that we don’t have an exhaustive list of which AI products fall into this category, since there is currently no requirement for AI companies to disclose what they train on. (There should be, but there isn’t.)
Udio and Suno have admitted it in court filings, but it’s possible there are other companies out there taking the same approach. However, again, this is no excuse for total inaction. DSPs should do their own due diligence, and if the balance of probabilities is that an AI model was trained on unlicensed music, I think it’s fair to subject music made using that model to different rules.
There will be those who say the DSPs should wait until these lawsuits work their way through the courts to decide how to act.
But royalties are being diluted now. And there is ample precedent for DSPs implementing content policies on principle, rather than because of specific legal rulings. According to Spotify, for example, it “invests heavily in detecting, preventing, and removing the royalty impact of artificial streaming” (think people leaving tracks playing silently on repeat overnight to up their play count), and takes action to reduce the royalty impact of “bad actors” gaming the system with white noise recordings.
The company believes changes like these “can drive approximately an additional $1 billion in revenue toward emerging and professional artists over the next five years”.
If that’s the aim, why not also take action against music made using AI models trained on those artists’ work without a license? Like white noise, it is being used to game the system and redirect royalties. Unlike white noise, it’s created using the work of the very artists it’s competing with.
I agree with Daniel Ek that there is a contentious middle ground when policing AI music. I would much rather not ban all AI music: when training data is licensed, there are certainly use cases that are a net positive for musicians.
But if a DSP’s mission is “giving a million creative artists the opportunity to live off their art”, I think it’s clear they should draw the line at recommending music made with products that exploit other musicians’ work without a license, diluting the royalty pool in the process.
DSPs will be tempted to defer decisions around how to treat this emerging threat to musicians until they are forced to make them. But if they don’t act soon, I suspect it won’t be long before we see the first artists pulling their music from these platforms in protest.