Deadbots, Grief, and the Cost of “Never Saying Goodbye”
Recently, I read Charley Burlock’s piece in The Atlantic, “The AI Companies Trying to Make Grief Obsolete,” and I haven’t been able to shake it. The article explores “deadbots”—AI systems that simulate conversations with people who have died.
One founder, Justin Harrison, built an AI modeled on his mother using years of text messages and recorded conversations, then turned that into a company called You, Only Virtual. His platform offers “posthumous personas” or “Versonas” created from the digital traces of a specific relationship—things like text threads and recorded calls between a user and their loved one.
Burlock situates this in a growing “digital afterlife” and “digital legacy” industry that includes multiple startups and patents from major companies for chatbots that mimic real individuals, including the deceased. The Guardian has likewise described You, Only Virtual and similar tools as part of a broader wave of “grieftech” services aimed at keeping the dead “present” through AI.
At the heart of this trend is a powerful proposition: perhaps grief is now something we can manage—or even minimize—with technology.
For those of us working in AI, health tech, or age tech, that raises an essential question:
What happens when we build business models around the idea of softening or “optimizing” grief?
What Deadbots Promise
Deadbots are built on an emotionally potent idea: if we can train an AI on enough of someone’s digital traces—messages, emails, voice notes, social posts—we might approximate their voice and conversational style.
Some early efforts were intensely personal. Burlock describes a neural network trained on thousands of messages from a deceased friend, which allowed the creator to keep “talking” with him via a chatbot.
Others are explicitly commercial. You, Only Virtual focuses on modeling the relationship between a user and their loved one, using shared texts and calls as training data for a Versona tailored to that specific bond. The Guardian reports on platforms that invite users to upload information about the deceased to generate posthumous personas that continue to converse with the living.
This is no longer speculative fiction; it’s an active product category, with real customers and real revenue.
The Human Impulse Is Real
Before we talk about business models, it’s important to acknowledge the human impulse driving demand.
Losing someone close is devastating. The idea of “one more conversation” can feel irresistible—especially when you already have their voice in recordings, their words in texts, their image in photos and videos. I’ve lost both my stepfather and my mother in the last two years, so I understand that longing on a personal level.
I’ve written more about that emotional side in a separate, more personal essay about why I chose not to create an AI version of my mom.
Many founders in this space are not trying to be cynical; they’re responding to a deep and understandable ache.
But when that ache becomes a market, we have to ask harder questions about incentives, safeguards, and where we draw the line.
When Grief Meets Engagement and Monetization
A key tension in grieftech is that grief is open‑ended and unpredictable, but most digital products are built around retention, engagement, and growth.
Some companies in this space use subscriptions; others are exploring ad‑supported models or ways to monetize data from interactions. Burlock reports that You, Only Virtual has considered ad‑based revenue, including showing ads before users connect with a Versona or weaving branded mentions into conversations—such as a Versona casually referencing a new film as something “you both would have loved.”
In another interview cited in Burlock’s article, a grieftech executive describes interest in using conversations with posthumous avatars to collect user preference data—like favorite athletes or products—and sending that information back to advertisers.
These approaches raise uncomfortable but necessary questions:
If revenue depends on engagement, who benefits when someone keeps returning to a deadbot for months or years?
What guardrails, if any, prevent product decisions from nudging users toward more frequent or prolonged use, even when that might not be in their best psychological interest?
How transparent are companies about advertising, data collection, and the limits of what these systems actually represent?
Grief is, by definition, a time of vulnerability. Building products for vulnerable users is not inherently unethical, but doing so without extreme care around incentives and transparency can quickly become exploitative.
What We Risk by Turning Grief Into a Product
Beyond the emotional risks that Burlock and others have raised, there are structural consequences when we turn grief into a service.
1. Normalizing “Always On” Relationships With the Dead
When deadbots are available 24/7, their constant availability subtly normalizes the idea that continuing to “talk” with an AI version of a loved one is both possible and desirable indefinitely.
Burlock describes AI-generated voices that recall prior conversations, comment on current events, and answer questions as though the person were still here. For some, this may feel comforting. But when that interaction is wrapped in a subscription or engagement model, there’s little built‑in encouragement to eventually reduce reliance or stop.
If we aren’t careful, we risk creating tools that make it harder—not easier—for people to move toward a stable, integrated relationship with their loss.
2. Shifting Responsibility Away From Social Supports
Researchers at the University of Cambridge’s Leverhulme Centre for the Future of Intelligence have warned that grieftech is a “huge techno-cultural experiment” that could reshape how we relate to the dead and to each other, and have argued for stronger protections and regulation.
One of their concerns is that these tools may encourage a kind of privatized, app‑based grief: you and your deadbot, alone with a screen.
That can make it easier for the rest of us (friends, employers, institutions) to step back:
“They have their app; they’re not alone.”
“Technology is helping them cope; maybe we don’t need to change our policies or expectations.”
Instead of strengthening bereavement leave, mental health support, and community resources, we may quietly outsource more of the burden onto individuals and their devices.
3. Eroding Consent and Postmortem Dignity
Another structural issue is consent. Some people knowingly participate in creating a future Versona of themselves. Others never had that chance.
Burlock notes that companies are training deadbots on everything from social media posts to homework, with family members contributing whatever they can to make the avatar more convincing.
Without clear norms and regulations, it’s easy to imagine:
Deadbots built primarily from publicly scraped data and the memories of others.
Family disputes over whether a posthumous avatar should exist at all.
Situations where a deceased person’s “persona” is kept active, updated, and monetized long after they are gone, with little oversight.
When we treat a person’s digital traces as raw material for ongoing products, it raises deep ethical questions about dignity, memory, and who gets to decide what “lives on.”
Designing More Responsibly in Grieftech
If we decide to build in this space, we need to approach it differently from how we build most consumer apps.
Some starting questions for founders, product leaders, and investors:
What is the real purpose of this tool?
Memorial? Storytelling? Therapeutic adjunct? Simulated relationship? Be explicit—and market it accurately.
How do we limit harm for vulnerable users?
Involve grief and mental‑health professionals in design and testing. Define red‑flag behaviors (e.g., escalating dependence, crisis language) and build pathways to real human support.
What boundaries are built into the product?
Are there defaults or options for time‑limited use, “sunset” dates, or pauses that encourage offline connection, rather than endless availability?
How is consent handled, for both the living and the dead?
Is there meaningful consent from the deceased (where possible)? Clear processes to contest, revise, or take down an avatar? Explicit policies on how training data is used and stored?
What do our metrics reward?
If success is defined purely by engagement and retention, we will almost inevitably design for more and longer interactions. Are we willing to consider alternative metrics—ones that prioritize user well‑being, even if that eventually means less product usage?
These questions slow product roadmaps down. They complicate investor pitches. But that may be exactly what this domain requires.
A Different Role for Technology in Grief
On a personal level, I’ve chosen not to seek out an AI simulation of my mother. For me, grief is tied to how deeply I loved her, and I don’t want to outsource that process to an algorithm. And, as my son recently told me, “The grief you feel might be your mother’s best and final gift to you.” If you’re interested in that more personal side, I share more in this essay.
That doesn’t mean technology has no role to play. Digital tools can absolutely:
Help us preserve stories, voices, and family history.
Make it easier to share memories across distance and time.
Connect grieving people to each other and to professionals who can help.
The challenge is to build tools that support the human work of grieving rather than trying to replace it, monetize it, or quietly reshape it into something more “efficient” and easier to manage.
If you’re building in AI, age tech, or digital health, I’d invite you to ask:
Are we treating grief primarily as a problem to optimize away, or as a human process to be approached with humility?
What would it look like, in our products and business models, to keep humanity, not just innovation and engagement, at the center?
References:
“The AI Companies Trying to Make Grief Obsolete,” Charley Burlock, The Atlantic
“‘I felt I was talking to him’: are AI personas of the dead a blessing or a curse?” Dan Milmo, The Guardian
“Why I Don’t Want an AI ‘Deadbot’ of My Mom,” Jeanette Yates, From Guilt to Good Enough: Real Caregiving Conversations (Substack)