Are AI Voices the New Copyright Villains? Jorja Smith’s Legal Odyssey

Have you ever heard a song on the radio and thought, “I know that voice,” only to find out it’s an artist you’ve never heard of? Now, what if it wasn’t a person at all? We are tumbling headfirst into a new, unnerving reality where a singer’s voice—their unique vocal signature—can be digitally replicated and used by anyone with the right software. This isn’t science fiction; it’s the central issue in a brewing storm in the music world, and the case of British singer Jorja Smith is the flash of lightning that’s illuminating just how dark and complex this storm could get.
The core of the conflict is a track called ‘I Run’ by the dance act Haven. Smith’s record label, FAMM, alleges that the song features an AI-generated clone of her voice. As reported by the BBC, FAMM’s position is uncompromising, stating, “This isn’t just about Jorja. It’s bigger than one artist or one song.” It’s a battle cry for a creative industry grappling with a technological reckoning.

So, what exactly is AI voice copyright? It’s a legal minefield, principally because it doesn’t technically exist yet in a clear, codified way. Traditionally, copyright protects a specific recording of a song—the tangible expression of an idea, not the idea itself. A singer’s voice, much like a painter’s signature style, has historically been considered an attribute, not a piece of intellectual property.
This is where the ground is shifting. AI doesn’t just mimic; it synthesises. Imagine a machine that studies every brushstroke Van Gogh ever made and then paints a brand-new “Van Gogh” that is indistinguishable from the real thing. The AI isn’t copying ‘Starry Night’; it’s copying the essence of Van Gogh. This is what’s happening with voices. AI models are trained on vast datasets of audio, learning the unique timbre, cadence, and emotional inflections of singers like Jorja Smith. The resulting output isn’t a sample; it’s a synthetic performance.
This pushes the conversation beyond copyright and into the domain of personality rights—an individual’s right to control the commercial use of their name, likeness, and other aspects of their identity. Your voice is as much a part of your identity as your face. Using it without permission feels like a profound violation, and artists are asking why the law doesn’t see it that way.


A Viral Track and a Digital Ghost

The case of Jorja Smith and Haven serves as a perfect, and troubling, case study. The dance duo of Harrison Walker and Jacob Donaghue created a track, ‘I Run’, which quickly gained traction. Soon after, allegations of copyright infringement surfaced, leading to the original track being pulled from streaming services.

Haven’s Defence and FAMM’s Rebuttal

Harrison Walker openly admitted to using AI, stating, “It shouldn’t be any secret that I used AI-assisted vocal processing to transform solely my voice for I Run.” He claims he used his own vocals as the base and employed Suno, an AI music generator, to alter them into a female voice. Crucially, he denies any intention of specifically cloning Jorja Smith.
This is where the argument gets incredibly nuanced. Suno, like many AI companies, argues that training its models on copyrighted works is legally protected under ‘fair use’ doctrines. Yet, FAMM counters that even if the intent wasn’t to clone Smith specifically, the tools were trained on copyrighted material, and the final product exploits her vocal identity. The re-released version of ‘I Run’, with a different vocal, still went on to enter the UK Top 40, proving the commercial viability of the underlying track and raising the stakes for this dispute.

The Problem of Plausible Deniability

This situation creates a fog of plausible deniability. An artist can use an AI tool, generate a voice that sounds uncannily like a famous singer, and then claim they simply used a generic “female vocalist” setting. This poses a monumental challenge for proving infringement. How do you legally prove a digital voiceprint was stolen when the thief can claim the AI simply “imagined” it? This is the very essence of the creative industry disruption we’re witnessing.


The Law Catches Up, But Is It Too Late?

The legal frameworks we have are gasping for air in the age of generative AI. Courts and legislators are grappling with questions that were unimaginable a decade ago.

EU AI Act and the Push for Transparency

The implications of the EU AI Act are significant here. While not a copyright law, this landmark regulation prioritises transparency, and it could mandate clear labelling for AI-generated or manipulated content. FAMM is a vocal proponent of this approach, arguing that “AI material should be clearly labelled.” Labelling wouldn’t solve the copyright issue overnight, but it would give consumers and rights holders a clear understanding of what they are listening to.
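To make the idea of mandated labelling concrete, here is a minimal, purely illustrative sketch of what machine-readable transparency metadata for a track might look like. Every field name, function, and value below is invented for this example; no actual standard, streaming-service API, or legal requirement is implied.

```python
# Hypothetical illustration of machine-readable AI-content labelling.
# All field names and functions here are invented for this sketch.

def label_track(title: str, artist: str, ai_generated: bool,
                ai_tool: str = "undisclosed") -> dict:
    """Attach a transparency label to a track's metadata."""
    metadata = {
        "title": title,
        "artist": artist,
        "ai_generated": ai_generated,
    }
    if ai_generated:
        # Under a transparency mandate, the generating tool would be disclosed.
        metadata["ai_tool"] = ai_tool
    return metadata


def requires_disclosure(metadata: dict) -> bool:
    """A streaming service could gate an 'AI-generated' badge on this flag."""
    return metadata.get("ai_generated", False)


track = label_track("I Run", "Haven", ai_generated=True, ai_tool="Suno")
print(requires_disclosure(track))  # True
```

The point of a scheme like this is not technical sophistication but auditability: if the flag is mandatory and machine-readable, platforms and rights holders can check it automatically rather than relying on an artist’s voluntary disclosure.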

What About “Deepfake Royalties”?

This leads to the fascinating and contentious idea of deepfake royalties. If an artist’s voice is used to train a model, or if a synthetic version of their voice performs on a track, should they be compensated? This would require a new system for tracking vocal “likenesses” and distributing payments. It’s a complex but necessary conversation if we want to build a future where artists can coexist with AI, rather than be replaced by it. It could create a new revenue stream, but managing it would be a logistical nightmare.
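To show what a “deepfake royalty” system might involve at its simplest, here is a hypothetical sketch of a ledger that credits an artist a share of revenue each time their vocal likeness is used synthetically. The royalty rate, function names, and ledger structure are all assumptions made up for illustration; a real system would need verified likeness detection, agreed rates, and dispute resolution on top of this.

```python
# Hypothetical "deepfake royalty" ledger: each logged use of a synthetic
# voice likeness credits the original artist a share of the revenue.
# The rate and all structures here are invented for illustration.

from collections import defaultdict

LIKENESS_RATE = 0.05  # assumed 5% royalty on revenue from each synthetic use


def record_use(ledger: dict, artist: str, revenue: float) -> None:
    """Credit the voice owner with a share of revenue from one synthetic use."""
    ledger[artist] += revenue * LIKENESS_RATE


ledger = defaultdict(float)
record_use(ledger, "Jorja Smith", 1000.0)  # e.g. £1,000 of track revenue
record_use(ledger, "Jorja Smith", 500.0)
print(round(ledger["Jorja Smith"], 2))  # 75.0
```

Even this toy version hints at the logistical problem the article raises: the hard part isn’t the arithmetic, it’s reliably detecting and attributing each use of a likeness in the first place.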

The Industry Draws a Line in the Sand

The music industry is not standing by idly. From global entities like Warner Music to independent labels like FAMM, there is a growing consensus that unregulated AI is an existential threat. The fear is palpable. A statement highlighted in the BBC’s coverage captures the mood perfectly: “Artists and creators are becoming collateral damage in the race by governments and corporations towards AI dominance.”
This isn’t an anti-technology stance. Artists have always been early adopters of new tools, from synthesisers to samplers. The difference here is that the tool itself is built upon the very work it threatens to replace. It’s one thing to use a synthesiser to create a new sound; it’s another thing entirely for the synthesiser to have learned how to create that sound by analysing and deconstructing your entire back catalogue without your permission.


The Future of Sound and Synthesis

This case is a bellwether. We are at a crossroads, and the path we choose will define the creative landscape for decades. One path leads to a digital wild west, where distinctive voices are scraped, cloned, and repurposed without consent or compensation, leading to a homogenised and ethically compromised music scene.
The other path involves thoughtful regulation. It means updating copyright law to include vocal and personality rights. It means establishing licensing frameworks for AI training data and creating systems for deepfake royalties. It’s a future where AI is a collaborator, not a replacement—a tool that augments human creativity instead of rendering it obsolete.
The Jorja Smith case isn’t just a legal skirmish; it’s a fight for the soul of artistry. It raises fundamental questions about identity, ownership, and the value of human creativity in an increasingly automated world. The outcome will resonate far beyond the music industry, affecting voice actors, audiobook narrators, and anyone whose identity is tied to their voice.
Where do you think the line should be drawn? Should artists have absolute control over their digital vocal likeness, or is this simply the next evolution in music production? The debate is just getting started, and its resolution is anything but certain.
