The core of the conflict is a track called ‘I Run’ by the dance act Haven. Smith’s record label, FAMM, alleges that the song features an AI-generated clone of her voice. As reported by the BBC, FAMM’s position is uncompromising, stating, “This isn’t just about Jorja. It’s bigger than one artist or one song.” It’s a battle cry for a creative industry grappling with a technological reckoning.
The Sound of Identity: AI Voice Copyright Explained
So, what exactly is AI voice copyright? It’s a legal minefield, principally because no such right yet exists in any clear, codified form. Traditionally, copyright protects a specific recording of a song—the tangible expression of an idea, not the idea itself. A singer’s voice, much like a painter’s signature style, has historically been considered an attribute, not a piece of intellectual property.
This is where the ground is shifting. AI doesn’t just mimic; it synthesises. Imagine a machine that studies every brushstroke Van Gogh ever made and then paints a brand-new “Van Gogh” that is indistinguishable from the real thing. The AI isn’t copying ‘Starry Night’; it’s copying the essence of Van Gogh. This is what’s happening with voices. AI models are trained on vast datasets of audio, learning the unique timbre, cadence, and emotional inflections of singers like Jorja Smith. The resulting output isn’t a sample; it’s a synthetic performance.
This pushes the conversation beyond copyright and into the domain of personality rights—an individual’s right to control the commercial use of their name, likeness, and other aspects of their identity. Your voice is as much a part of your identity as your face. Using it without permission feels like a profound violation, and artists are asking why the law doesn’t see it that way.
A Viral Track and a Digital Ghost
The dispute between Jorja Smith and Haven is a perfect, and troubling, case study. The dance duo, Harrison Walker and Jacob Donaghue, released ‘I Run’, which quickly gained traction. Soon after, allegations of copyright infringement surfaced, and the original track was pulled from streaming services.
Haven’s Defence and FAMM’s Rebuttal
Harrison Walker openly admitted to using AI, stating, “It shouldn’t be any secret that I used AI-assisted vocal processing to transform solely my voice for I Run.” He claims he used his own vocals as the base and employed Suno, an AI music generator, to alter them into a female voice. Crucially, he denies any intention of specifically cloning Jorja Smith.
This is where the argument gets incredibly nuanced. Suno, like many AI companies, argues that training its models on copyrighted works is legally protected under ‘fair use’ doctrines. Yet, FAMM counters that even if the intent wasn’t to clone Smith specifically, the tools were trained on copyrighted material, and the final product exploits her vocal identity. The re-released version of ‘I Run’, with a different vocal, still went on to enter the UK Top 40, proving the commercial viability of the underlying track and raising the stakes for this dispute.
The Problem of Plausible Deniability
This situation creates a fog of plausible deniability. An artist can use an AI tool, generate a voice that sounds uncannily like a famous singer, and then claim they simply used a generic “female vocalist” setting. This poses a monumental challenge for proving infringement. How do you legally prove a digital voiceprint was stolen when the thief can claim the AI simply “imagined” it? This is the very essence of the creative industry disruption we’re witnessing.
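To see why proof is so difficult, it helps to know how vocal similarity is typically measured in practice: a recording is reduced to a fixed-length “speaker embedding” vector, and two voices are compared by the cosine similarity of their embeddings. The sketch below is purely illustrative—the embeddings are random stand-ins rather than outputs of a real speaker-verification model, and no threshold here carries any legal weight:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two speaker embeddings (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 256-dim embeddings; in reality these would come from a
# speaker-verification model run over the two recordings.
rng = np.random.default_rng(seed=42)
claimed_original = rng.normal(size=256)
disputed_track = claimed_original + rng.normal(scale=0.3, size=256)  # a "close" voice
unrelated_voice = rng.normal(size=256)

print(round(cosine_similarity(claimed_original, disputed_track), 2))   # high similarity
print(round(cosine_similarity(claimed_original, unrelated_voice), 2))  # near zero
```

Even a high similarity score only shows that two voices sound alike; it says nothing about whether the resemblance was intended, or what the model was trained on—precisely the evidentiary gap that plausible deniability exploits.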
The Law Catches Up, But Is It Too Late?
The legal frameworks we have are gasping for air in the age of generative AI. Courts and legislators are grappling with questions that were unimaginable a decade ago.
EU AI Act and the Push for Transparency
The EU AI Act implications are significant here. While not a copyright law, this landmark regulation prioritises transparency, and its provisions could mandate clear labelling of AI-generated or manipulated content. FAMM is a vocal proponent of this, arguing, “AI material should be clearly labelled.” This wouldn’t solve the copyright issue overnight, but it would give consumers and rights holders a clear understanding of what they are listening to.
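In practice, a labelling obligation would likely mean attaching machine-readable provenance metadata to each release. The Act does not prescribe a schema, so every field name below is a hypothetical illustration of what such a disclosure label might contain:

```python
import json

# Hypothetical provenance label. The EU AI Act requires disclosure of
# AI-generated content but does not define this exact schema.
label = {
    "title": "I Run",
    "ai_generated_elements": ["lead vocal"],
    "human_source_vocal": True,  # an AI-transformed human performance
    "disclosure": "Vocal was AI-transformed from a human recording.",
}

print(json.dumps(label, indent=2))
```

A label like this wouldn’t resolve who owns a cloned voice, but it would at least make the use of AI auditable by listeners and rights holders.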
What About “Deepfake Royalties”?
This leads to the fascinating and contentious idea of deepfake royalties. If an artist’s voice is used to train a model, or if a synthetic version of their voice performs on a track, should they be compensated? This would require a new system for tracking vocal “likenesses” and distributing payments. It’s a complex but necessary conversation if we want to build a future where artists can coexist with AI, rather than be replaced by it. It could create a new revenue stream, but managing it would be a logistical nightmare.
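At minimum, such a system would need a way to apportion a track’s royalty pool between the recording’s rights holder and the voices whose likenesses it uses. A toy sketch, with every percentage and name purely hypothetical:

```python
def split_royalties(pool_pence: int, likeness_shares: dict[str, float]) -> dict[str, int]:
    """Apportion a royalty pool: each named vocal likeness gets its fractional
    share, and the remainder goes to the track's rights holder."""
    if sum(likeness_shares.values()) > 1.0:
        raise ValueError("likeness shares cannot exceed 100% of the pool")
    payout = {name: round(pool_pence * share) for name, share in likeness_shares.items()}
    payout["rights_holder"] = pool_pence - sum(payout.values())
    return payout

# Hypothetical: a 15% likeness royalty on a £1,000.00 pool (100,000 pence).
print(split_royalties(100_000, {"vocal_likeness:artist_x": 0.15}))
# {'vocal_likeness:artist_x': 15000, 'rights_holder': 85000}
```

The arithmetic is trivial; the nightmare is everything around it—detecting when a likeness was used, agreeing the shares, and tracking payments across millions of tracks.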
The Industry Draws a Line in the Sand
The music industry is not standing by idly. From global entities like Warner Music to independent labels like FAMM, there is a growing consensus that unregulated AI is an existential threat. The fear is palpable. A statement highlighted in the BBC’s coverage captures the mood perfectly: “Artists and creators are becoming collateral damage in the race by governments and corporations towards AI dominance.”
This isn’t an anti-technology stance. Artists have always been early adopters of new tools, from synthesisers to samplers. The difference here is that the tool itself is built upon the very work it threatens to replace. It’s one thing to use a synthesiser to create a new sound; it’s another thing entirely for the synthesiser to have learned how to create that sound by analysing and deconstructing your entire back catalogue without your permission.
The Future of Sound and Synthesis
This case is a bellwether. We are at a crossroads, and the path we choose will define the creative landscape for decades. One path leads to a digital wild west, where distinctive voices are scraped, cloned, and repurposed without consent or compensation, leading to a homogenised and ethically compromised music scene.
The other path involves thoughtful regulation. It means updating copyright law to include vocal and personality rights. It means establishing licensing frameworks for AI training data and creating systems for deepfake royalties. It’s a future where AI is a collaborator, not a replacement—a tool that augments human creativity instead of rendering it obsolete.
The Jorja Smith case isn’t just a legal skirmish; it’s a fight for the soul of artistry. It raises fundamental questions about identity, ownership, and the value of human creativity in an increasingly automated world. The outcome will resonate far beyond the music industry, affecting voice actors, audiobook narrators, and anyone whose identity is tied to their voice.
Where do you think the line should be drawn? Should artists have absolute control over their digital vocal likeness, or is this simply the next evolution in music production? The debate is just getting started, and its resolution is anything but certain.