What happens when your own voice starts talking back to you, without your permission? Not in a metaphorical sense, but literally, from the speakers of a public train, announcing stops you’ve never visited. This isn’t the plot of a Black Mirror episode; it’s a real-world dilemma that unfolded in Scotland, and it throws a harsh spotlight on the burgeoning field of AI voice ethics. We are rushing headlong into a world of synthetic media, but the rulebook for this new game has not been written.
The recent controversy at ScotRail serves as a perfect, grounded case study. It’s a story about trains, timetables, and technology, but look a little closer and you’ll see it’s fundamentally about identity, consent, and what it means to own a part of yourself as personal as the sound of your own voice.
So, What Exactly Is Voice Cloning?
Before we get to the platform drama at Edinburgh Waverley, let’s get our terms straight. AI voice cloning isn’t like a parrot simply mimicking a phrase. Think of it more like a digital master chef. You don’t just give it one of your finished cakes; you give it the ingredients (your voice recordings) and the recipe book (the unique patterns, cadence, and intonation of your speech).
With enough data, the AI doesn’t just replay your words; it learns the system of your voice. It can then use that system to bake entirely new cakes; in this case, to say things you’ve never said, from “The next stop is Milngavie” to potentially anything else. This technology already powers everything from satnavs and accessibility tools for people who have lost their voice to film dubbing and, yes, automated public-transport announcements.
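To make that “recipe book” idea concrete, here is a minimal sketch of few-shot voice cloning using the open-source Coqui TTS library and its XTTS model. The file paths are illustrative, and this is emphatically not the pipeline ReadSpeaker or ScotRail used; it simply shows how little input modern cloning needs.

```python
# A minimal sketch of few-shot voice cloning with the open-source
# Coqui TTS library (pip install TTS). The paths are illustrative.
from TTS.api import TTS

# Load a multilingual model capable of cloning a voice from a short sample.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# The "ingredients": a few seconds of clean audio from the target speaker.
# The model infers timbre, cadence, and intonation from this clip, then
# synthesises text the speaker never actually said.
tts.tts_to_file(
    text="The next stop is Milngavie.",
    speaker_wav="reference_speaker.wav",  # the voice being cloned
    language="en",
    file_path="announcement.wav",
)
```

That a dozen lines of code and a few seconds of audio are enough is precisely why the consent question matters so much.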
The ScotRail Saga: A Very British Voice Cloning Controversy
Enter Gayanne Potter, a Scottish voice artist. A few years ago, she did a job for a Swedish tech firm called ReadSpeaker. As she understood it, her voice recordings were for limited use, mostly in password-protected, text-to-speech applications. Imagine her surprise, then, when she started hearing what she claims was her voice—or a digital ghost of it—making announcements across ScotRail’s fleet of Class 380 trains.
This AI persona was named “Iona.” As Potter told the BBC, “It’s hard enough for people in the creative industry to sustain careers, but to be competing with a robotic version of yourself adds insult to injury.” Her complaint cuts to the very heart of the voice cloning controversy. It’s the unsettling feeling of losing control over a fundamental part of your identity and seeing it repurposed for a commercial context you never agreed to.
From a legal standpoint, Potter was in a tricky position. Jennifer Cass, an expert from the law firm Dentons, pointed out a massive gap in our legal framework: copyright law protects a specific recording, but not the likeness or style of a person’s voice. Your voice, as unique as your fingerprint, currently lacks the same legal protection. There’s no intellectual property right in the sound you make, leaving a wide-open field for this technology to operate in.
An Ethical Course Correction, or Just Good PR?
Legally, ScotRail could have dug its heels in. They could have pointed to their contract with ReadSpeaker and carried on. But they didn’t. Instead, they announced they were phasing out Iona entirely. This is the fascinating part.
Their solution is a masterclass in ethical implementation. ScotRail turned inward and found a new voice from within their own team: Vannessa Sloan, who spent 18 studio sessions meticulously recording hundreds of words and phrases, phonetically spelling out famously tricky Scottish place names (like ‘Milngavie’, pronounced ‘Mill-guy’) along the way.
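For contrast with the cloning sketch above, here is a hypothetical sketch of how a phrase-based announcement system might work: the voice artist records whole words and phrases in the studio, and software simply stitches the clips together. The file layout, lexicon, and function name are assumptions for illustration, not details of ScotRail’s actual system.

```python
# A hypothetical sketch of a phrase-based announcement system assembled
# from studio recordings. File layout, lexicon, and names are assumptions
# for illustration, not details of ScotRail's actual system.
from pathlib import Path

# Phonetic respellings guide the voice artist in the studio; the playback
# system itself is keyed by the canonical station name.
PRONUNCIATION_GUIDE = {
    "Milngavie": "Mill-guy",  # the famously tricky example from the article
}

CLIP_DIR = Path("recordings")

def announcement_clips(station: str) -> list[Path]:
    """Return the ordered audio clips for a 'next stop' announcement."""
    return [
        CLIP_DIR / "the_next_stop_is.wav",
        CLIP_DIR / f"station_{station.lower()}.wav",
    ]

print(announcement_clips("Milngavie"))
```

No model, no training data, no clone: just a person’s actual recorded voice, used with her knowledge and agreement.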
ScotRail’s customer operations director, Phil Campbell, put it perfectly: “Our people are at the heart of everything we do, so it’s fitting that the new voice of ScotRail comes from within our own team.” Is this good PR? Absolutely. But it’s also good practice. By getting explicit, informed consent and creating their new system transparently, ScotRail is demonstrating how public service AI ought to be deployed. They’ve sidestepped the legal quagmire by simply doing the right thing. The new system is, in their own words, “ethically produced,” and it will be fully rolled out by 2026.
Your Voice, Your Self, Your… Property?
The ScotRail case forces us to ask a much bigger question: what are our identity rights in the age of AI? We intuitively feel that our voice is part of us. It conveys emotion, personality, and trust. If a company can create a digital replica of your face and use it in an advert without your permission, there would be public outrage and, likely, swift legal action. Why should the voice be any different?
The challenge is that our legal systems were designed to protect tangible things—a piece of land, a copyrighted book, a patented invention. A voice is ephemeral. Yet, AI makes it concrete, replicable, and scalable. The technology has leaped ahead of the law.
This isn’t just a problem for celebrities and voice artists anymore. With just a few seconds of audio from a social media post, AI can create a convincing clone of anyone’s voice. The potential for misuse in scams, misinformation, and harassment is enormous. Recognising a voice as a piece of protectable intellectual property, or as a fundamental component of an individual’s identity, seems not just logical but urgently necessary.
The Future is Talking, But Who Owns the Conversation?
The ScotRail story is a small-scale skirmish in what will become a much larger conflict. As voice cloning technology becomes more sophisticated and accessible, these ethical dilemmas will pop up everywhere.
– Entertainment: Will actors be asked to sign away the rights to their “vocal likeness” for studios to use in perpetuity?
– Audiobooks: Could a publisher use an AI-clone of a famous narrator’s voice to produce more books without paying them for the new work?
– Personalisation: Could a brand use a cloned voice of a friend or family member to deliver a targeted ad?
The ScotRail incident provides a potential roadmap. It demonstrates that public pressure and a corporate commitment to ethics can drive change even when the law is lagging behind. The ethical implementation they opted for—transparency, explicit consent, and keeping a human in the loop—is a solid starting point.
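One way to make “explicit consent” more than a slogan is to encode it as data that a synthesis pipeline must check before producing a single word. The sketch below is hypothetical; the field names and scopes are my assumptions, not a description of any real contract or of ScotRail’s process.

```python
# A hypothetical consent record that a synthesis pipeline could check
# before producing any audio. All field names and scopes are illustrative
# assumptions, not a description of any real contract.
from dataclasses import dataclass
from datetime import date

@dataclass
class VoiceConsent:
    speaker: str
    permitted_uses: set[str]   # e.g. {"train_announcements"}
    expires: date              # consent should not be perpetual
    revoked: bool = False      # and it should be revocable

    def allows(self, use: str, on: date) -> bool:
        """Explicit, scoped, time-limited consent; nothing is implied."""
        return (not self.revoked
                and use in self.permitted_uses
                and on <= self.expires)

consent = VoiceConsent(
    speaker="Vannessa Sloan",  # illustrative, from the article
    permitted_uses={"train_announcements"},
    expires=date(2030, 1, 1),
)
assert consent.allows("train_announcements", date(2026, 6, 1))
assert not consent.allows("advertising", date(2026, 6, 1))  # out of scope
```

The design choice worth noting is the default: consent here is scoped, dated, and revocable, which is everything Potter says her original recordings turned out not to be.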
Ultimately, technology is a tool. The real test is in how we choose to use it. The debate over AI voice ethics is just getting started, and it’s a conversation we all need to be a part of. We need clear regulations that protect our identity rights without stifling innovation. Because if we don’t set the rules now, we may wake up one day to find our own voices have been taken from us.
What do you think? Should your voice be a legally protected part of your identity, just like your face or your name? Let me know your thoughts in the comments below.


