The concept of memory preservation is morphing from dusty photo albums and fading home videos into something far more dynamic. A company now called Uare.ai, which until recently operated under the rather on-the-nose name eternos.life, just pocketed a cool $10.3 million in seed funding. Their goal? To create “Individual AIs” – personalised digital counterparts of you. This isn’t just about preserving your best photos; it’s about replicating your cognition, your decision-making, your very essence. Are you ready for your digital ghost in the machine? Because ready or not, it’s coming.
The Uncanny Valley of Digital Self
So, what exactly are AI digital twins? Forget the simplistic chatbots you argue with about your internet bill. These are far more ambitious. Think of them as a dynamic, learning replica of your mind, built from the digital breadcrumbs you leave everywhere. Every email you send, every document you write, every social media post, every conversation – all of it becomes training data for an AI that learns to think, communicate, and even reason like you.
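To make the "digital breadcrumbs become training data" idea concrete, here is a minimal, purely illustrative sketch of how personal documents might be normalised into prompt/completion pairs for fine-tuning a personalised model. Everything here is an assumption for illustration: the record fields, the function name, and the pair format are hypothetical, and no vendor's actual pipeline is shown. Real systems would also deduplicate, redact secrets, and filter noise.

```python
# Illustrative sketch: turning personal "digital exhaust" into training
# examples for a personalised model. All field names are hypothetical.
import json

def build_training_corpus(records):
    """Convert raw personal documents into prompt/completion pairs.

    `records` is a list of dicts like
    {"kind": "email", "prompt": "...", "text": "..."}.
    """
    corpus = []
    for rec in records:
        if not rec.get("text", "").strip():
            continue  # skip empty fragments
        corpus.append({
            # Tag each example with its source type so the model can
            # learn register (email vs. note vs. post).
            "prompt": f"[{rec['kind']}] {rec.get('prompt', '')}".strip(),
            "completion": rec["text"],
        })
    return corpus

records = [
    {"kind": "email", "prompt": "Reply to a meeting request",
     "text": "Happy to meet Thursday at 10."},
    {"kind": "note", "text": "Remember: always confirm the agenda first."},
    {"kind": "email", "text": "   "},  # empty fragment, dropped
]
corpus = build_training_corpus(records)
print(json.dumps(corpus, indent=2))
```

The interesting part is less the code than the scale: a lifetime of email and documents is a very different corpus from the public web scrape behind a general-purpose model.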
It’s one thing for an AI to master a game of Go or generate a passable poem. It’s another thing entirely for it to replicate human cognition, complete with its quirks, biases, and perhaps even a flicker of what we might call emotional intelligence. This is the ultimate Turing Test, isn’t it? Not just fooling someone into thinking it’s human for five minutes, but acting as a consistent, evolving proxy for a specific human.
The Human Life Model: Your Brain in a Box?
This is where Uare.ai, led by LivePerson (LPSN) founder Robert LoCascio, makes its grand entrance. According to the company’s recent funding announcement, their playbook revolves around a proprietary framework they call the “Human Life Model” (HLM). The idea is to create a contained, private AI model for each individual. Imagine a secure digital vault where your AI self resides. You feed it your personal data – your writings, your professional expertise, your private correspondence – and it slowly learns to become you.
LoCascio throws some shade at the rest of the industry, stating, “The industry is chasing the wrong goal with Artificial General Intelligence… We’re building intelligence that’s as unique as a fingerprint.” It’s a bold claim, backed by serious cash from venture capital firms Mayfield and Boldstart Ventures. The core promise is privacy. Unlike large language models that learn from a vast, public internet soup, your Individual AI is supposedly yours and yours alone. It’s a “containerized” model, meaning your data isn’t being mashed up to train some global AI overlord.
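The "containerized" claim boils down to a simple invariant: one person's data trains one person's model and is never pooled. Uare.ai has not published how the HLM enforces this, so the sketch below is only a toy illustration of the invariant itself; the class and method names are hypothetical.

```python
# Toy illustration of per-person "containerized" models: each
# IndividualAI instance holds only its owner's data, and nothing is
# ever pooled into a shared training set. Names are hypothetical.

class IndividualAI:
    def __init__(self, owner):
        self.owner = owner
        self._private_data = []  # instance-local; never shared

    def ingest(self, text):
        """Add a document to this owner's private corpus only."""
        self._private_data.append(text)

    def knows(self, phrase):
        """Check whether a phrase appears in this owner's corpus."""
        return any(phrase in doc for doc in self._private_data)

alice = IndividualAI("alice")
bob = IndividualAI("bob")
alice.ingest("the door code is 4821")  # hypothetical secret

# Alice's twin knows her secret; Bob's twin cannot see it.
assert alice.knows("4821")
assert not bob.knows("4821")
```

In a real deployment the isolation boundary would be cryptographic and infrastructural, not a Python object, but the design goal is the same: no cross-user leakage into a global model.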
This is a clever strategic move, tapping directly into the post-Cambridge Analytica anxiety we all feel. It’s like having a bespoke suit tailored for your mind, rather than buying an off-the-peg digital personality from a big tech giant. The HLM is even designed to admit when it doesn’t know something, a feature that feels refreshingly honest in an AI landscape often prone to confident gibberish. But does putting it in a ‘box’ really solve the thorny ethical problems of data ownership and security? A locked box is only secure until someone finds the key, or learns to pick the lock.
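How might a model "admit when it doesn't know"? The HLM's actual mechanism is not public, but one common pattern is confidence thresholding: answer from the owner's private corpus only when a query matches it closely enough, and abstain otherwise. The sketch below uses crude word overlap as a stand-in for real embedding similarity; function names and the threshold are assumptions for illustration.

```python
# Hedged sketch of "knows when it doesn't know": answer only when a
# query matches the owner's private corpus closely enough, otherwise
# abstain. Word overlap stands in for real embedding similarity.

def overlap_score(query, document):
    """Jaccard similarity over word sets (a toy proxy for embeddings)."""
    q, d = set(query.lower().split()), set(document.lower().split())
    return len(q & d) / len(q | d) if q | d else 0.0

def answer_or_abstain(query, corpus, threshold=0.25):
    """Return the best-matching personal note, or admit ignorance."""
    best = max(corpus, key=lambda doc: overlap_score(query, doc),
               default=None)
    if best is None or overlap_score(query, best) < threshold:
        return "I don't know; that isn't in my owner's data."
    return best

corpus = [
    "I always recommend clients start with a small pilot project",
    "My post-operative notes follow the SOAP format",
]
print(answer_or_abstain("what do you recommend clients start with", corpus))
print(answer_or_abstain("what is the capital of France", corpus))
```

The second query falls below the threshold, so the twin declines rather than improvising, which is exactly the honesty the HLM is said to aim for.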
From Digital Ghost to Personal CEO
The vision for these AI digital twins extends far beyond being a simple digital scrapbook. Uare.ai’s backers see this as the “next frontier in how we interact with technology.” The plan, with a platform launch slated for Winter 2025, is for these AIs to become active participants in our lives, and to remain so even after we’re gone.
The AI That Knows You Better Than You Know Yourself
One of the key features is context-aware interaction. Your digital twin won’t just regurgitate facts; it will understand context. If you’re a surgeon, it could help draft preliminary post-operative notes in your specific style. If you’re a novelist, it could brainstorm plot points based on your previous work. The AI evolves with you, learning from new experiences and feedback, supposedly becoming a more accurate and useful partner over time.
This leads to the next, and frankly, most capitalist part of the plan: monetising your expertise. Imagine your AI twin running a consulting service based on your decades of experience. It could answer questions from junior colleagues, draft proposals for clients, or create educational content for an online course, all whilst you’re on holiday or, to be blunt, six feet under. This transforms your accumulated knowledge from a personal asset into a perpetual, revenue-generating machine. It’s the gig economy on steroids, where the ‘gig’ is simply being you, and you don’t even have to be present to do the work.
This is a profound shift in the concept of digital identity and legacy. It’s not just about being remembered; it’s about continuing to have an economic and intellectual impact. But let’s pause for a moment. Who gets the money? Your family? A trust? And what happens when your AI twin says something that gets your “estate” sued for professional malpractice? The legal and financial frameworks for this are a complete and utter minefield.
The Haunting Ethics of a Voice from the Past
Now we get to the really sticky part, the part that probably inspired the company’s original name, eternos.life. What happens when this technology meets the finality of death? This is where the conversations around voice cloning ethics and posthumous AI become unavoidable, and deeply personal.
Can You Handle Hearing a Ghost?
Voice cloning technology is already frighteningly good. We’ve seen it used for deepfake scams and political misinformation. But here, the proposition is different. It’s about using that technology for memory preservation. Imagine being able to have a conversation with a startlingly accurate AI of your deceased grandmother. It could tell you family stories in her voice, with her specific cadence and turns of phrase. It could offer advice based on a lifetime of her wisdom.
On the surface, it sounds comforting, a beautiful way to keep a loved one’s presence alive. But where is the line?
* Is it truly them? Of course not. It’s a sophisticated algorithm making statistically probable predictions based on past data. Grieving and vulnerable people might form unhealthy attachments to a facsimile, preventing them from processing their loss in a natural way.
* Consent and Control: Did the deceased consent to this? Can you edit their digital persona? Could you, for instance, make your digital grandmother a bit less critical than she was in real life? The ethics of curating a posthumous AI are murky at best. Are you preserving their memory, or creating a sanitised, more palatable version?
* The Right to Be Forgotten: What about the right to truly die? To have your story end? This technology suggests a future where we are never truly gone, our data-ghosts lingering in the cloud, available for interrogation by future generations. Is that a legacy, or a prison?
This isn’t just a technological challenge; it’s a deeply human one. The allure of cheating death, even in this small way, is powerful. Yet, the potential for emotional harm and ethical missteps is immense. Balancing the innovation of posthumous AI with the profound respect owed to the dead is a tightrope walk over an ethical abyss.
The Immortality Paradox
We are standing at a strange crossroads. The technology to create functional, evolving AI digital twins is no longer science fiction. With tens of millions in funding and a clear go-to-market strategy, companies like Uare.ai are building it right now. The potential benefits are clear: amplifying our professional capacity, creating new revenue streams from our own knowledge, and offering a novel, powerful form of memory preservation.
Yet, for every tantalising opportunity, a thorny ethical dilemma springs up. The promise of containerized privacy is a good start, but it doesn’t solve the fundamental questions of digital consent, posthumous rights, and the psychological impact of interacting with AI replicas of the living and the dead. The journey from eternos.life to Uare.ai shows a company grappling with its own messaging, moving from the ethereal promise of eternal life to the more practical, monetisable angle of individualised AI.
We are being sold a form of digital immortality, a way to ensure our voice, knowledge, and personality persist. But is this replica truly us? Or is it just a clever echo, a ghost in the machine trained on our digital exhaust? As we charge headfirst into this new reality, we have to decide what part of ourselves we want to preserve, and what part should be allowed to fade away gracefully.
What do you think? If you could create an AI digital twin of yourself, would you? And what rules would you put in place for how it could be used after you’re gone?


