The Copyright Trap with UMG and Udio
Why the Music Industry’s AI Fight Could Backfire on Artists
There’s a lawsuit happening right now that could reshape the future of music creation. Independent artists, represented by passionate advocates, are suing AI music companies like Suno and Udio for training their models on copyrighted songs without permission. The premise is straightforward: these companies scraped millions of tracks, built billion-dollar valuations, and never asked for licenses or paid a cent to creators.
The outrage is understandable. The legal strategy seems sound. But there’s a problem that almost nobody is talking about: this lawsuit might be building the exact weapon that will destroy the careers it’s meant to protect.
The Question Nobody’s Asking
When Universal Music Group acquired a stake in Udio, one of the very companies being sued for allegedly stealing music, they weren’t joining the fight against AI. They were positioning themselves to own it. That partnership wasn’t an anomaly. It was a preview.
So I have to ask: what happens when the lawsuit succeeds? Who benefits if courts establish that training AI on music requires licensing agreements? The answer should terrify every independent artist paying attention.
How We Got Here
Most musicians understand AI music generation as theft with extra steps. The reasoning goes like this: these companies downloaded our songs without permission, fed them into algorithms, and now they’re selling a product built on our work. It feels like robbery because, well, it looks like robbery.
The lawsuits reflect this understanding. The legal argument is that training AI models on copyrighted music without a license constitutes infringement. Win this argument, the thinking goes, and AI companies will have to pay for the music they use. Justice served.
But this premise rests on a legal foundation that’s already cracking.
The Precedent That Changes Everything
In 2015, courts ruled in Authors Guild v. Google that Google could scan millions of copyrighted books without permission or payment. Not snippets. Entire books. The scale was massive, the copying was complete, and Google won anyway.
The court’s reasoning hinged on something called transformative use. Google wasn’t selling the books or replacing them in the market. They created a searchable database, a fundamentally different product that actually helped readers discover books. The fact that Google copied millions of works didn’t matter. What mattered was that the end product transformed the original material into something new.
Last year, a federal judge applied this same logic to AI. In Bartz v. Anthropic, the court ruled that training AI models on copyrighted books was, in the judge’s words, “quintessentially transformative” and “among the most transformative uses we will see in our lifetimes.” The reasoning was technical but clear: AI models don’t store copies of books or songs like files in a database. They learn statistical patterns, relationships between words or sounds, compressed mathematical representations of creative language.
Think of it this way. A jazz student listens to hundreds of recordings, internalizes the patterns of bebop phrasing, and creates something new using that knowledge. Legally, that’s exactly what these models are doing. The scale is different, but the principle is the same.
The “AI as student” analogy isn’t just convenient marketing from tech companies. Based on current precedent, it’s likely how courts will rule.
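To make the "patterns, not copies" claim concrete, here's a minimal sketch: a toy first-order Markov model in Python. It bears no resemblance to the architectures Suno or Udio actually use, and the melodies are invented, but it illustrates the distinction the courts are reasoning about. After training, what persists is a table of transition statistics, not the source material.

```python
from collections import defaultdict
import random

# Toy illustration of "patterns, not copies": a first-order Markov
# model over note names. Real music models are vastly more complex;
# the point is only what survives training.

melodies = [                      # stand-ins for training recordings
    ["C", "E", "G", "E", "C"],
    ["C", "E", "G", "A", "G"],
    ["E", "G", "A", "G", "E"],
]

# "Training": tally which note tends to follow which. The melodies
# themselves are discarded; only these counts remain.
transitions = defaultdict(lambda: defaultdict(int))
for melody in melodies:
    for a, b in zip(melody, melody[1:]):
        transitions[a][b] += 1

def generate(start, length):
    """Sample a new sequence from the learned statistics."""
    out = [start]
    while len(out) < length:
        nexts = transitions[out[-1]]
        if not nexts:
            break
        out.append(random.choices(list(nexts), list(nexts.values()))[0])
    return out

print({k: dict(v) for k, v in transitions.items()})  # the "model"
print(generate("C", 8))                              # a new sequence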
Why This Matters
If the legal argument against training fails, and there’s good reason to think it will, then what’s the lawsuit actually fighting for? The amended complaints tell the story. The focus has shifted to proving that Suno and Udio didn’t just train on copyrighted works, they scraped pirated content from YouTube and other platforms, violating the Digital Millennium Copyright Act by breaking through technological protections to access the underlying audio files.
This is a narrower, more technical argument. It might work. But even if it does, it only establishes that AI companies can’t use pirated sources. It doesn’t stop them from training on legally accessible music. It doesn’t create a royalty structure. It doesn’t protect working musicians from competition.
And here’s where the trap closes.
The Licensing Endgame
Let’s imagine the best possible outcome for artists. The lawsuit wins. Courts rule that training AI on music requires licensing agreements. Every AI music company now needs permission and must pay for the catalogs they use.
Who owns the largest, most easily licensed music catalogs on earth?
Universal Music Group. Sony Music Entertainment. Warner Music Group.
The moment licensing becomes mandatory, these three corporations control the game. Only companies with access to massive licensed catalogs can build competitive AI music platforms. Independent developers are shut out. Small cooperatives can’t compete. The entire infrastructure consolidates around whoever already owns the rights.
This isn’t speculation. We’ve already seen it happen. UMG didn’t partner with Udio to shut down AI music generation. They partnered to own it. Once the legal landscape clarifies that training requires licenses, the labels become the only entities capable of building or authorizing powerful music AI at scale.
What Artists Think They’re Getting
The promise implicit in these lawsuits is some kind of fair compensation. If AI companies have to license music, surely artists will get royalties, right? Surely there will be payments for every song used in training, for every output that resembles their style, for every stream of AI-generated content.
But think about the math. An AI model trains on millions of songs. Users generate thousands of tracks per day. When that content streams, the royalty pool gets divided into infinitesimal fractions. The payout for any individual artist becomes microscopic.
We’ve seen this movie before. Streaming already pays fractions of a cent per play. Now imagine that same pool divided not among thousands of human artists but among millions of tracks, many of them AI-generated, all competing for the same listener attention. The cost of filing taxes on those royalties will exceed the royalties themselves.
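A back-of-envelope sketch makes the dilution concrete. Every figure below is an assumption chosen for illustration, not a reported number:

```python
# Hypothetical pro-rata split of an AI-training royalty pool.
# All figures are illustrative assumptions, not reported data.

annual_pool = 100_000_000    # assume a generous $100M annual pool
catalog_size = 20_000_000    # assume 20M songs in the training set

per_song = annual_pool / catalog_size
print(f"Per song, per year: ${per_song:.2f}")  # $5.00
```

Even under those generous assumptions, each track earns about five dollars a year, before the labels' share comes off the top.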
This isn’t a revenue stream. It’s a mirage.
The Real Weapon
Here’s what actually happens when major labels control AI music generation.
They don’t use it to create the next Dark Side of the Moon. They use it to fill every niche that currently sustains working musicians. Playlist filler tracks. Streaming radio background music. Hold music. In-store ambiance. Sync licensing for commercials, games, television shows. Stock music libraries.
This is the economic engine for thousands of professional musicians. These aren’t glamorous gigs, but they pay rent. They fund the creative work. They make sustainable careers possible.
And all of it can be replaced by AI-generated content produced at near-zero marginal cost.
The labels won’t release one AI hit per week. They’ll release thousands of tracks daily, algorithmically optimized for playlist placement, genre-tagged for every possible use case, legally cleared because they own the training data and the output. They’ll saturate every market segment that once belonged to human creators.
Most listeners won’t notice. Most won’t care. Background music doesn’t require genius. It requires “good enough.” And AI delivers good enough at infinite scale.
The Strategic Mistake
The lawsuit treats AI companies as the enemy and major labels as allies, or at least as neutral parties with shared interests in protecting artist rights. This is the fundamental error.
When independent artists fight to establish that training requires licensing, they’re not building a system that protects them. They’re building a moat that only benefits whoever already controls the catalogs. The lawsuit’s public face is artists versus tech companies. But the actual outcome is labels versus artists.
The precedent being set won’t empower working musicians. It will hand the tools of music creation to the same corporations that have spent decades extracting value from artist labor while offering the smallest possible share in return.
And this time, the labels won’t even need the artists. They’ll own the entire production chain.
What About Fairness
Someone will object here that this isn’t fair. It’s not right that machines trained on our work should replace us. It’s unjust that corporations can profit from creativity they didn’t produce.
All of that is true. It’s also irrelevant.
Technological disruption has never cared about fairness. It doesn’t matter if taxi drivers think Uber is unfair. It doesn’t matter if journalists think social media destroyed their industry unjustly. It doesn’t matter if factory workers think automation betrayed them.
The market rewards efficiency, not justice. And AI-generated music, whatever its ethical problems, is brutally efficient.
The question isn’t whether this is fair. The question is what artists do now, given the reality in front of us.
The Uncomfortable Truth
Fighting the tool instead of the power structure is a losing strategy. Even if courts rule that AI training requires licenses, even if every company has to pay for the music they use, the economic model that sustained working musicians is still dead. The technology works. The cost structure favors volume over artistry. The gatekeepers who already dominated the industry now have a legal framework to dominate it permanently.
This is the part nobody wants to hear: protecting the old system might be impossible. The lawsuit can set precedents, establish rules, force compliance. But it can’t stop the fundamental shift happening underneath.
We’re demanding protected lanes for horses while the roads are being rebuilt for cars.
Where That Leaves Us
I don’t have clean answers. I’m not going to pretend that some clever legal strategy or grassroots movement will preserve the music industry as it existed five years ago. Maybe human authenticity will matter enough to sustain careers. Maybe direct fan relationships will replace streaming revenue. Maybe entirely new economic models will emerge that we can’t see yet.
Or maybe the profession of working musician, as we’ve known it, is ending for most people. Not because of evil corporations, though they’ll certainly profit. Not because of reckless AI companies, though they built the tools. But because the economics changed and the old model stopped working.
What I can say is this: artists need to stop fighting the wrong battle. The lawsuit isn’t protecting musicians from AI. At best, it’s protecting them from unlicensed AI. And if the only alternative is AI controlled by the same three corporations that have spent decades extracting maximum value from artist labor, then winning this fight means losing the war.
The future won’t be determined by whether Suno used pirated YouTube downloads or licensed catalogs. It will be determined by who controls the infrastructure and who builds the alternatives. Right now, major labels are positioning themselves to own both.
If artists want a different outcome, the conversation has to change. It can’t be about stopping the technology. It has to be about preventing the people who already monopolize the industry from using new legal frameworks to make that monopoly permanent.
The clock is ticking. And the lawsuit, for all its good intentions, might be winding it faster.
