Have you ever heard an AI assistant and done a double-take, thinking, “I know that voice”? For most of us, it’s a fleeting moment of uncanny valley weirdness. For David Greene, the former host of NPR’s ‘Morning Edition’, that moment has spiralled into a landmark lawsuit against Google that could redefine our digital identities. This isn’t just another tech spat; it’s a battle for the very essence of who we are in an age of artificial intelligence.
We are standing on the precipice of a new frontier in intellectual property: AI voice rights. The rapid ascent of sophisticated AI voice synthesis means that a person’s unique vocal fingerprint—their tone, cadence, and rhythm—can be captured and replicated with frightening accuracy. This case, and others like it, force us to ask a fundamental question: who owns your voice when a machine can wear it like a costume?
What Exactly Are AI Voice Rights?
Let’s be clear: AI voice rights are not some abstract, futuristic concept. They are the frontline in the fight to control our own biometric data. Think of your voice as a digital fingerprint. It’s uniquely yours, shaped by a lifetime of experiences, your accent, and even the filler words you use. The idea of biometric copyright suggests that this unique identifier should be protected much like a creative work or a patented invention.
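To make the “digital fingerprint” analogy concrete, here is a minimal sketch, assuming a speaker-encoder model has already turned two audio clips into fixed-length embedding vectors. The toy vectors and the 0.85 threshold are invented for illustration and do not come from any real system or from the case discussed here.

```python
# Minimal sketch: comparing a real voice to a synthetic one via embeddings.
# Assumes some speaker-encoder has already produced the vectors; here we use
# random stand-ins, and the threshold is a purely hypothetical cut-off.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Return the cosine similarity between two voice embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in embeddings for a real speaker and a close synthetic copy.
real_voice = np.random.default_rng(0).normal(size=256)
synthetic_voice = real_voice + np.random.default_rng(1).normal(scale=0.1, size=256)

MATCH_THRESHOLD = 0.85  # hypothetical bar for "sounds like the same person"
score = cosine_similarity(real_voice, synthetic_voice)

print(f"similarity: {score:.3f}")
if score >= MATCH_THRESHOLD:
    print("The synthetic voice would be flagged as a likely match.")
```

In rough terms, this is the comparison at the heart of these disputes: not whether the same recording was reused, but whether the resulting voiceprint lands close enough to a real person’s.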
When a company creates a synthetic voice, the lines blur. If that voice is trained on or designed to sound indistinguishable from a real person, who should profit? This is where the discussion around synthetic voice royalties begins. It’s a framework where the original ‘voice owner’ would be compensated every time their digital doppelgänger is put to work. This isn’t just about protecting celebrities; it’s about establishing a principle that your identity has value and cannot simply be appropriated for commercial gain.
The Audacity of AI: Greene vs. Google
David Greene’s lawsuit, as reported by TechCrunch, alleges that the male voice in Google’s NotebookLM tool is an uncanny replica of his own. It’s not just the timbre; it’s the distinctive cadence, the speech patterns, the very “sonic DNA” that millions of listeners came to know. In his own words: “My voice is, like, the most important part of who I am.” For a professional voice artist, their voice isn’t just a tool; it’s their entire brand and livelihood.
Google’s response is predictably corporate and legally cautious. A spokesperson stated: “The sound of the male voice in NotebookLM’s Audio Overviews is based on a paid professional actor Google hired.” This defence is fascinatingly strategic. Google isn’t denying the similarity; it’s simply stating the origin of the voice file.
This creates a perplexing legal grey area. Can you be held liable for creating a product that sounds just like someone, as long as you didn’t technically use their voice data to do it? It’s like hiring an artist to paint a perfect replica of the Mona Lisa, then claiming it’s an original because you paid the painter. The argument is clever, but does it pass the common-sense test?
A Roaring Chorus from the Entertainment Industry
This legal drama isn’t happening in a vacuum. It’s the latest expression of deep-seated anxiety within the creative community. Demands from entertainment unions such as SAG-AFTRA have been growing louder, focused squarely on protecting actors from being replaced or replicated by AI without consent or compensation.
We saw a high-profile preview of this conflict when Scarlett Johansson called out OpenAI over the “Sky” voice it shipped with GPT-4o in ChatGPT, which she claimed was “eerily similar” to her own, despite her having previously declined to license her voice to the company. OpenAI promptly pulled the voice, but the incident laid bare the tech industry’s “ask for forgiveness, not permission” playbook.
These cases highlight a glaring collision between celebrity likeness laws, which have historically protected visual appearance, and the new world of audio replication. For voice actors, the threat is existential. If a studio can create a perfect digital copy of their voice from a few hours of recording, what’s to stop them from using it in perpetuity for any project they want, without further payment? This is precisely what unions are fighting to prevent.
The Ethical Tightrope Walk
Beyond the courtroom, there’s a serious ethical minefield to navigate. The central pillar of this debate is consent. Not just a tick-box on a 50-page legal document, but clear, informed, and ongoing consent. Can an actor truly give informed consent to have their voice, their very identity, used in ways they can’t possibly imagine ten years from now?
For tech giants like Google, this is a strategic balancing act. The race to develop the most human-like, engaging AI is ferocious. A voice that is familiar and trusted is a massive competitive advantage. Yet, moving too fast and breaking things, as the old Silicon Valley mantra goes, now comes with the risk of immense reputational damage and precedent-setting lawsuits. Innovation cannot come at the cost of trampling over fundamental individual rights.
The Future of Voice as Property
So, where do we go from here? It seems almost inevitable that we are heading towards a future where voices are treated as licensable assets. The framework for synthetic voice royalties will need to be hammered out, likely through a combination of litigation, union negotiation, and, eventually, legislation.
Imagine a world where your vocal print is registered, and any commercial use of a synthesized version triggers a micro-payment back to you. This could create entirely new economies for creators and public figures. But it also raises questions for the rest of us. What happens when our voices, scraped from years of social media videos and voicemails, are used to train the next generation of AI assistants? Will we see a “Creative Commons” for voices, or will every utterance become a potential piece of intellectual property?
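To give that idea a concrete shape, here is a purely hypothetical sketch of such a registry, in which every logged commercial use of a registered voice print accrues a micro-payment to its owner. All names, identifiers, and rates are invented for illustration; nothing here reflects an existing system or proposal.

```python
# Hypothetical voice-print registry with per-use micro-payments.
# Every value below (IDs, owner name, rate) is an invented example.
from dataclasses import dataclass, field

@dataclass
class VoicePrint:
    owner: str
    rate_per_use: float              # royalty per synthesised utterance, in currency units
    earnings: float = 0.0
    usage_log: list[str] = field(default_factory=list)

class VoiceRegistry:
    def __init__(self) -> None:
        self._prints: dict[str, VoicePrint] = {}

    def register(self, voice_id: str, owner: str, rate_per_use: float) -> None:
        """Register a voice print to its owner with an agreed per-use rate."""
        self._prints[voice_id] = VoicePrint(owner, rate_per_use)

    def log_use(self, voice_id: str, product: str) -> float:
        """Record one commercial use and return the royalty owed for it."""
        vp = self._prints[voice_id]
        vp.usage_log.append(product)
        vp.earnings += vp.rate_per_use
        return vp.rate_per_use

    def earnings_for(self, voice_id: str) -> float:
        return self._prints[voice_id].earnings

registry = VoiceRegistry()
registry.register("vp-001", owner="Example Narrator", rate_per_use=0.002)
registry.log_use("vp-001", product="ai-assistant-daily-briefing")
print(f"accrued: {registry.earnings_for('vp-001'):.3f}")  # micro-payment to the voice owner
```

Even a toy model like this exposes the hard questions: who runs the registry, how a match is verified, and what counts as a single “use”.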
These are no longer hypothetical questions. David Greene’s lawsuit is not just about one man’s voice. It’s a test case for the future of digital identity. The outcome will send a powerful signal about whether our unique human characteristics are ours to control or simply raw material for the next wave of technology. The tech industry, creatives, and regulators need to come to the table now and set clear, fair rules of the road.
What do you think? Should a person’s voice be protected with the same rigour as a copyrighted song or a patented invention? Share your thoughts below.


