When AI Lets Us Talk to the Dead
I just heard about this app today on a podcast. I was surprised by the goal of the app—for about a minute. Then my reaction was: of course this exists, and I’m sure others are working on similar ideas too.
Shortly after, I saw the promotional video for the app circulating on X. The video is polished, emotionally compelling, and clearly designed to highlight connection and comfort. I’ll include the link here so you can watch it for yourself and form your own impressions.
The app allows you to record a loved one for about three minutes. It captures their image, voice, and mannerisms, then creates an AI-generated avatar you can interact with—even after that person dies.
At first glance, it might sound comforting. What could be wrong with preserving someone you love?
Here are a few thoughts that immediately came to mind.
First, it may interfere with the natural grieving process. Grief requires acknowledging that someone is gone. If it feels like a person never truly dies, the process of mourning, acceptance, and integration can stall. For generations, knowledge, values, and wisdom have been passed down while people are alive. When someone dies, their children and grandchildren pick up where they left off and continue to grow. That’s how it has always worked.
Second, it could undermine independence. If you can consult your parents or grandparents indefinitely, there may be less pressure to become fully autonomous. Part of adulthood is learning to trust your own judgment—especially in moments when the people you once relied on are no longer there to guide you.
Third, the AI version of a loved one can never give the exact advice that person would have given. It may sound like them. It may resemble them. But it is still an interpretation—filtered through data, probability, and algorithms—not lived experience.
Fourth, there’s the issue of influence and agenda. You could be taking advice on topics your parent or grandparent would never have commented on, shaped instead by the app’s design, the content it is fed, current cultural or political climates, or even malicious actors behind the scenes.
Fifth, there’s the risk of replacing human connection with technology. Comfort, guidance, and emotional support may become something we access through an app rather than through living, breathing relationships.
Beyond that, consent becomes complicated. Did the person truly agree to be recreated indefinitely, evolving over time in ways they could never anticipate? Identity drift is inevitable. Language changes. Values shift. Over time, the digital version of someone may begin to say things that directly contradict who they were in life.
There’s also the risk of rewriting memory. People may begin to remember the AI version more vividly than the real person, slowly replacing authentic memories with synthetic interactions.
And grief itself may become transactional—something you access on demand—rather than a difficult but necessary process that shapes resilience and growth.
For children especially, this raises concerns about modeling healthy coping. Kids learn how to grieve by watching adults. Normalizing ongoing digital conversations with the deceased may unintentionally teach avoidance rather than acceptance.
If you want to dig deeper into the technology, controversy, and broader ethical questions surrounding this app, here is a tech-focused article that explores how it works and why it’s generating such strong reactions.
This isn’t really a conversation about whether the technology is impressive. It is.
It’s a conversation about how humans grow, how we separate, how we grieve, and how we become ourselves—without outsourcing those experiences to something that can never truly be human.
Stay connected.
~ Ryan