Grief Bots and the Future of Mourning

AI tools that let people 'talk' to the dead used to belong to the realm of science fiction. Now they're here. Known as grief bots, these programmes use a person's digital remains (texts, emails, voice recordings, and social media history) to simulate conversation after death.

For some, they offer comfort and connection; for others, they raise unsettling questions about consent, authenticity, and emotional health. As a psychologist and author, I've explored these tensions across media outlets including the BBC and CNN, and in my book All the Ghosts in the Machine. Grief bots reveal our deepening entanglement with technology and how we're learning to negotiate loss, memory, and mortality in the digital age.

Key Takeaways:

  • Grief bots are AI programmes that simulate conversations with the deceased using data such as texts, emails, and social media posts.

  • They reflect a growing intersection between AI mourning, psychology, and technology, offering comfort to some but raising deep ethical and emotional questions.

  • These tools can create a sense of connection during digital grief, but also raise questions about the line between remembering the human and interacting with an algorithm.

  • Issues of consent, privacy, and ownership remain unresolved, including who controls a person's data and whether that data should ever be used to recreate them.

What Are Grief Bots?

Grief bots are AI systems designed to mimic the voice, personality, appearance, or communication style of someone who has died. They are built from the vast wealth of digital remains we all leave behind: messages, emails, recordings, writings, images, and social media posts. Virtually any piece of that data can be used to train an AI that simulates the individual who produced it. By analysing this material, these systems can generate conversations that feel eerily familiar, allowing users to engage in what feels like a dialogue with the deceased.

Not all grief bots serve the same purpose. Some function as memorial chatbots, designed to preserve memories or answer questions about a loved one. Others act as conversational 'resurrection' AIs, recreating the tone and rhythm of a person's responses. Still others are framed as therapeutic grief tools, aiming to support users through loss rather than imitate the dead directly.

These bereavement chatbot technologies represent a new frontier in how we mourn, blurring the line between remembrance and simulation. They raise profound questions about whether comfort can come from the artificial presence of the dead, and about what it means to grieve when technology allows us to interact with simulations of the deceased.

Examples of Grief Bots

A number of real-world projects have already brought the idea of grief bots to life. Each one reflects different motivations, different creators, and different purposes, revealing just how varied and idiosyncratic our relationships with these technologies can be.

Project December, for example, uses a language model to simulate text conversations with the dead. Users can upload a loved one's past messages to create a chatbot that 'speaks' in their style. Some describe profound comfort from these exchanges, while others find the experience disorienting or even distressing.

Other platforms, like HereAfter AI, take a more structured approach. Rather than imitating spontaneous conversation, they record and store personal stories whilst the person is alive, later allowing relatives to 'ask' questions and hear responses in that person's voice. This transforms remembrance into a kind of interactive archive rather than a simulation of presence.

More experimental ventures such as Seance AI and You, Only Virtual push the boundary further by promising ongoing dialogue and emotional connection after death. These systems blend memorialisation with imitation, forcing us to confront what it means to 'stay in touch' when the other person no longer exists.

Beyond these dedicated services, grief bots have become remarkably accessible. Anyone with a subscription to ChatGPT or access to other widely available AI platforms can create a grief bot by uploading texts, emails, or other digital traces of a deceased person. This means grief bots are no longer limited to specialised companies or expensive bespoke services. They can be homegrown, created privately, and tailored entirely to the creator's wishes. This democratisation increases both the potential for meaningful personal use and the risk of impulsive decisions made during acute grief.

But grief bots aren't only created by commercial services or by the bereaved themselves. Sometimes strangers create bots or characters based on dead people they never knew, as has happened on platforms like Character.AI. For those who actually knew and loved the person, discovering these stranger-created versions can be deeply distressing, a violation of something intimate and private.

Family members, too, create grief bots for varied and sometimes unexpected purposes. Some design them for advocacy, creating bots that speak about the social issues that led to their loved one's death, such as domestic violence or gun violence. Others use them for ongoing family connection, like programming a deceased grandparent's voice to read bedtime stories to grandchildren via Alexa. Still others create them for public memorials, with digital versions of the dead appearing at events or in tributes.

In one well-known example, Kanye West gifted Kim Kardashian a hologram of her late father, Robert Kardashian. The intention may have been meaningful, but such gestures are inherently risky. What one person finds moving, another might find disturbing or intrusive. In another case, a Korean television programme used virtual reality technology to allow a bereaved mother to interact with a digital recreation of her deceased child, helping her process the loss and say goodbye. For her, it was reportedly therapeutic; for others watching, it raised uncomfortable questions about the boundaries of grief and spectacle.

These varied examples reveal a crucial truth: there is no universal response to grief bots. What brings comfort to one person may feel like a profound violation to another. Grief is deeply personal, and the technologies we use to navigate it are no exception. It's also worth noting that the 'death tech space' is rapidly evolving, with new companies and services emerging regularly whilst others fold or pivot. This volatility adds another layer of uncertainty for anyone considering engaging with these tools.

Why Are Grief Bots Emerging Now?

The rise of grief bots sits at the intersection of technology, culture, and psychology. Recent advances in large language models and generative AI have made it possible to create digital systems that sound convincingly human. They can recall facts, but they are also advanced enough to mirror tone, rhythm, and emotional nuance. Combined with progress in affective computing and voice synthesis, the technical foundation for digital resurrection has never been stronger.

Yet the emergence of grief bots is also a cultural phenomenon shaped by recent history. The pandemic was a turning point. During lockdowns, many of us experienced digital grief firsthand by attending virtual funerals, messaging memorial pages, or revisiting conversations frozen in chat histories. Physical distancing normalised deep, technologically mediated connection in ways that felt unprecedented. The distinctions we once drew between shallow, transactional digital communications and the deep, meaningful conversations reserved for physically co-present interactions began to erode.

We became accustomed to getting our emotional and social needs met through technology, and that normalisation has carried forward. We grew more comfortable with the idea that intimate, emotionally significant interactions could happen through screens, that someone or something disembodied but humanlike could meet real relational needs.

Psychologically, grief bots speak to a timeless human instinct: the desire to keep attachments alive. Whether through letters, photos, voicemail recordings, or online memorials, we've always sought to stay connected to the dead. Grief bots are simply the next step in that lineage: a high-tech extension of the ways we use tools to remember, to hold on, and to say what was left unsaid. 

How Do Grief Bots Affect the Way We Grieve?

Grief bots mark a profound shift in how mourning unfolds, not just from traditional rituals, but from the digital practices that emerged more recently. Over the past couple of decades, many bereaved people have maintained connections with the dead through one-way digital conversations: posting birthday messages on a deceased person's Facebook timeline, texting their old phone number, or sending emails to an account that remains active but is no longer read. These practices allowed for expression and connection, but the dead remained silent.

Grief bots change that fundamentally. Now, the dead can appear to 'talk back'. An AI can respond in what feels like a loved one's voice, creating the illusion of reciprocal conversation. This represents a massive leap from posting a memorial message and not expecting a response. The shift from one-way tribute or communication to two-way dialogue, even simulated dialogue, introduces psychological and emotional dynamics we're only beginning to understand.

At first, this interactivity can feel comforting. Hearing familiar words or a voice we remember can bring a sense of presence and reassurance. Psychologists know this as continuing bonds theory: the idea that our relationships with the dead don't end, but transform. Grief bots take those bonds from passive remembrance to active interaction.

Yet there is no single 'right way' to grieve, and no universal timeline. Each bereavement is unique, with needs and preferences differing wildly between individuals. What works for one person may feel deeply wrong to another. When an AI can respond, when it can seem to know you, some users may find this helpful. Others may find themselves confused about whether they're engaging with memory or simulation, or troubled by the experience.

My concern is not that grief bots exist, but that they risk positioning death and grief as problems requiring technological solutions, when grief is not a problem to be solved, but a profoundly human experience to be lived through.

Are Grief Bots Helpful or Harmful?

Whether grief bots are helpful or harmful isn't a simple question: the answer depends largely on context, intention, and the unique nature of each person's grief. There is no universal grief process, no 'stages' to complete, no single path through loss. Grief is as varied as the people who experience it, which makes sweeping claims about grief bots' effects particularly problematic.

On one hand, they offer solace: the chance to 'speak' again, to continue unfinished conversations, or to hear a voice that's been lost. Some bereaved individuals report feeling seen, heard, or less alone when interacting with a grief bot. On the other hand, we must scrutinise the risks of manufactured intimacy through emotional experiences that feel increasingly real but are ultimately generated by algorithms.

Emotional Authenticity vs. Simulation

One of the greatest psychological tensions with grief bots lies in the gap between how they feel and what they actually are. A grief bot may generate responses that sound exactly like something your loved one would say: the right turns of phrase, the familiar tone, even characteristic humour or warmth. In that moment, the experience can feel emotionally authentic.

But it isn't. The grief bot has no feelings, no consciousness, no actual relationship with you. It's a simulation built from pattern recognition, generating plausible text based on data rather than genuine understanding or care. The interaction may align perfectly with your memories and trigger real emotions, yet it remains fundamentally hollow.

This mismatch can be disorienting. Users may find themselves responding emotionally as though the person is truly present, only to be reminded, sometimes abruptly, that they're engaging with an algorithm. The grief bot might say something slightly off, repeat itself, or fail to grasp context in a way the real person never would. These moments expose the simulation and can deepen distress rather than ease it.

The risk is that the illusion of reciprocity, the sense that someone is 'there' responding to you, can foster confusion about what's real and what's generated. Some users may become emotionally dependent on these exchanges, mistaking algorithmic output for genuine connection and finding it harder to navigate their grief when the simulation becomes their primary way of 'maintaining' the relationship.

Comfort, Distress, and Exploitation in Practice

In real-world usage, users vary widely in how they experience grief bots. Some speak of real emotional relief or feeling like a voice, a memory, or even a relationship has been preserved in a meaningful way. Others report disorientation or distress, especially when the bot's responses feel scripted or out of character, or when they find themselves limiting other areas of their life because they prefer interaction with the grief bot.

The commercial landscape surrounding grief bots raises significant concerns. Bespoke grief bot services, which may offer everything from text exchanges to virtual reality encounters, are typically run by for-profit companies. Their business models depend on emphasising the benefits and necessity of their products, and they often prey upon the specific anxieties that accompany grief: the fear of forgetting or being forgotten, the fear of losing contact with someone we love, the visceral fear of permanent loss.

These anxieties are actively monetised. Marketing messages are carefully framed to suggest that grief bots are not merely optional tools, but essential solutions to the pain of bereavement. In the immediate period after a loss, the 'searching and calling' reflex can be extraordinarily strong. Bereaved people may sign up for services without fully understanding the longer-term implications, both emotional and financial. Subscription models, premium features, and algorithmic reinforcement can keep users engaged and paying, exploiting their vulnerability at a moment when they are least equipped to make clearheaded decisions.

There's also the question of what happens when the relationship with the grief bot ends. Critics warn of 'second deaths': the emotional complexity of eventually 'killing off' something you created that resembles your lost person. This can happen in two ways. Sometimes it's a choice you make, deciding to turn off the bot, which brings its own painful questions and complexities. Other times, the loss is out of your control. A company goes out of business, a service shuts down, or a technical failure erases the bot. Each scenario is complicated in different ways, but both can add additional layers of grief to an already difficult process.

Privacy, Consent, and Identity After Death

Another core ethical fault line is who owns a person's digital traces, and whether consent for their use after death was ever given. When a grief bot is built from emails, texts, or voice recordings, it raises pressing questions of posthumous consent. Did the deceased agree, explicitly or implicitly, to become a chatbot? And did they have any say in how their digital legacy would be deployed?

Our legal and ethical right to take someone's digital remains and alter or use them in particular ways is deeply contested. In many jurisdictions, data rights expire or vanish upon death, leaving families with little recourse. Big tech platforms may own and control access to digital traces. 

Some experts now recommend that people include 'do not bot me' clauses in their wills to express their wishes clearly. However, these clauses are not legally enforceable in most places, which highlights the gap between what people might want to control about their posthumous data and what the law actually protects.

These concerns map directly onto broader debates about digital inheritance. Without clear policies or regulation, we risk creating a digital afterlife in which the dead are remixed, monetised, or reinvented without their prior consent or the informed agreement of their loved ones.

There's another troubling dimension: the line between making a grief bot and impersonating someone's identity is blurry and easily crossed. Whilst many grief bots are created with genuine mourning in mind, the same technology can be weaponised. People might create 'bots' of dead individuals not for grief-related reasons, but for criminal purposes such as financial fraud, harassment, or manipulation. Even a grief bot constructed with the best intentions could theoretically be repurposed or misused for impersonation or other harmful ends. This dual-use problem adds yet another layer of ethical complexity to an already fraught landscape.

What Does the Rise of Grief Bots Mean for the Future of Mourning?

The growing presence of grief bots signals a profound shift in how we mourn, remember, and relate to the dead. For centuries, every generation has adapted its mourning rituals to new technologies, from death masks to photography, from voicemail to social media memorials. The rise of AI introduces a new dimension: interactive remembrance, where the dead can appear to respond, and where continuing bonds can involve a literal ongoing dialogue.

In one sense, this could influence the future of mourning in positive ways. AI-driven tools might support therapy, offering comfort or guided reflection to those coping with loss. Digital archives could help preserve family stories, allowing future generations to 'ask' questions of ancestors whose voices and values might otherwise have faded. Used responsibly, these technologies can deepen remembrance and extend empathy.

But there are also risks, both emotional and ethical. When an algorithm can echo a loved one's words, it can blur the distinction between memory and simulation. We may begin to confuse comfort with continuation, or mistake an imitation for intimacy. The tools marketed as 'solutions' may have unintended consequences.

I believe the challenge lies not in rejecting these technologies outright, but in resisting the normalisation of grief as something requiring a technological solution. As noted above, there is no universal grief process and no single path through loss; grief is as varied as the people who experience it.

We need boundaries, transparency, and consent, both for the living who use grief bots and for the dead whose data fuels them. It's crucial to ask: Why am I engaging with this? What do I hope to feel, or to avoid feeling? And is this technology serving me, or am I being sold a solution to a problem I don't actually have?

We must also confront the environmental cost: every AI we train has a carbon footprint. When we're convinced that mourning requires a technological solution, our grief literally harms the planet—and we don't talk about that nearly enough.

I explore these dilemmas more deeply in All the Ghosts in the Machine, which examines how technology reshapes death, grief, identity, and the digital afterlife, and in the 'Digital Afterlife' chapter of Reset: Rethinking Your Digital World for a Happier Life.

Why Understanding Grief Bots Matters for the Living

Grief bots reflect our enduring unease with mortality and our growing belief that technology can fill the spaces left behind by those we love. Each new innovation, from memorial pages to chatbot bereavement tools, reveals the same impulse: to keep hold of what death tries to take away.

As we step further into this digital era, we must decide what it means to remember ethically, to grieve without being exploited, and to maintain our agency in how we mourn.

———————————————————————————————————————

Dr. Elaine Kasket is a Visiting Professor at the Centre for Death and Society at the University of Bath and one of the leading voices exploring how technology is transforming death, grief, and identity. Her work has been featured in outlets including the BBC, CNN, and many others internationally, and her books All the Ghosts in the Machine and Reset: Rethinking Your Digital World for a Happier Life offer critical insights into our digital lives.

In her keynotes, Dr. Kasket reveals how phenomena like digital remains and grief bots have unexpected implications for organisations—from employee wellbeing and data governance to brand reputation and ethical AI deployment. Her talks challenge audiences to think beyond the obvious about technology's human impact.

If your organisation or corporate event would like to explore these questions further, you can get in touch here to discuss speaking opportunities on digital wellbeing, AI ethics, death and technology, and the psychology of online identity.

———————————————————————————————————————

Frequently Asked Questions About Grief Bots

What is a grief bot?

A grief bot is an AI programme designed to imitate or preserve the voice, language, and personality of someone who has died. By drawing on messages, emails, and social media content, these systems create simulated conversations that allow the living to 'speak' with a digital version of the deceased. Whilst some view them as a form of comfort or remembrance, others see them as a troubling blurring of the boundary between life and death.

Does society approve of grief bots?

Public opinion remains deeply divided. Some people welcome grief bots as compassionate tools that help keep loved ones close, whilst others find them unsettling or even exploitative. Cultural attitudes also play a role. Societies that value emotional openness and technological experimentation may be more accepting, whilst others see such tools as interfering with what they assume are 'natural' or 'normal' grieving processes. In my experience, people's reactions often depend less on technology itself and more on how and why it's used.

Is the development of grief bots ethical?

The ethics of grief bot development are still being debated. Key questions include consent, such as whether the deceased agreed to have their data reused, and the emotional safety of users who interact with these simulations. Without clear regulation, grief bots risk turning bereavement into a commercial or manipulative experience. I believe ethical design must prioritise transparency, respect for digital remains, and the psychological wellbeing of the living.

Can grief bots actually help with bereavement?

Some people find comfort when using grief bots, whilst others experience renewed pain, confusion, or dependency. There is no universal grief process. Each bereavement is unique, and what helps one person may harm another. Used consciously and with clear boundaries, grief bots may offer short-term support for some individuals, but they are no substitute for human relationships or professional help, and grief itself is not a problem requiring a technological fix.

Who owns a person's data after death?

Ownership of digital data after death depends on each platform's terms of service. In most cases, users license rather than own their content, which means families may not have automatic access. This lack of clarity raises major questions about consent and the use of personal information in grief bot creation.

Are grief bots being used in therapy?

Some developers and practitioners are experimenting with grief bots in therapeutic settings, but this use remains controversial. The research base is extremely sparse, and claims about benefits have yet to be validated by rigorous study. Given how new and rapidly evolving these technologies are, we simply don't yet understand their long-term implications, or how they may have different effects for different people. Many therapists remain sceptical that algorithms can provide the genuine human empathy required when supporting bereaved individuals, and it's not yet clear what responsible and ethical use of grief bots in therapy might look like.
