The Platformisation of Grief: Why the Commodification of Loss Matters
The compulsion to remain connected with those we've lost and those who’ve gone before is as old as time. As my colleague Carl Öhman explains, the ancient Natufians kept their dead close, burying their bodies under their dwellings but reserving the skulls to cover with plaster, giving them seashells for eyes, and keeping them in the rooms they inhabited.
In more recent historical memory, we had the technologies of the Industrial Revolution. The table-rapping séances popularised by the hoaxster Fox sisters in the mid-19th century mimicked the hot new long-distance communication device of the time, the telegraph. As soon as photography emerged, enterprising studios offered spirit photography, assuring you that the ghost of your dear departed would materialise behind or alongside you in the image, or your money back. Thomas Edison imagined that his phonograph might be sufficiently sensitive to capture the voices of the Great War dead.
And in modern-day Japan, on a high bluff overlooking the sea where a devastating tsunami once rolled in, the Wind Telephone — a phone box with a rotary phone connected to nothing — has enabled thousands of people from all over the country to hold one-way conversations with those they've lost.
With each evolution, the technology is different, but the fundamental instinct to grieve and remember using available technologies remains the same. Now, though, technology isn’t merely used as a tool to reach across the life/death divide; it’s where the dead ‘live’, as preserved in the data they leave behind.
That changes everything. In this piece, I’m going to tell you why. I want to talk about the platformisation of grief: how the technologies we now use to mourn are not neutral tools, but platforms with their own logics that are extracting data, generating profit, and powerfully shaping how we encounter the dead.
Looking for a keynote speaker on grief technology, AI ethics, and the psychology of loss in the digital age? Explore my speaking offering here.
If you want more of my work on grief technology and the digital afterlife:
All the Ghosts in the Machine — my book on the digital afterlife
You can also hear me discuss these topics on APM's Marketplace Tech and CNN's Terms of Service with Clare Duffy.
Key Takeaways
The Industrial Revolution didn't just give us technologies for ‘speaking' to the dead. It gave us a culture that put us on a trajectory towards pathologising emotion and reifying control and optimisation.
Grief is now being positioned as a problem with a technological solution, something to be 'designed out' through technologies such as AI-powered griefbots.
'Platformisation' involves datafication, commodification, and algorithmic selection. The digital dead, and our emotions, are not exempt from this process.
Unlike DIY grief technologies created by individuals, profit-driven ‘death tech’ often sells the idea that natural human grief responses are suboptimal and require technological intervention.
Observing what we do with the data of the dead should sharpen our awareness of how the living are treated as data subjects, vulnerable to exploitation.
The Industrial Revolution's Emotional Legacy
I need to do this in order. Before I talk about the platformisation of grief, I want to think about why we think about emotion the way we do, and how we conceptualise grief in particular.
The Industrial Revolution gifted us a culture, a mindset. It reified and imposed the values of productivity, efficiency, and optimisation. It introduced, at scale, the fantasy, or nightmare, of machines doing work that only humans had previously been able to do. During that time, our pillaging of the earth increased exponentially as we sought out fuel for factories and materials for mass-produced goods to supply a rapidly expanding population. Wood, ore, coal: the Industrial Revolution was hungry.
And as we changed how we worked and did business and consumed, we changed how we thought, spoke, and felt. I wonder: what were the rules of emotion and emotional expression before the Industrial Revolution? Thomas Dixon, in From Passions to Emotions (2003), extrapolates from the language we use to conclude that modern people probably don’t conceptualise 'emotion' in the same way as previous generations. Older frameworks for talking about feelings were more varied: ‘passions’, ‘affections’, ‘sentiments’. The rise of the now all-encompassing word ‘emotion' was a reductive linguistic shift. I’m always interested in linguistic shifts, as language is the house of meaning.
The Latin root of emotion is emovere: 'e' meaning out, and 'movere' meaning to move. So emotion is something that moves you out of yourself, out of equilibrium, and potentially out of control. Pathologisation is baked into the word: emotion is perturbation, deviation from a norm. In a world that values smoothness and control, being stirred up is a problem to be solved.
Using the word 'emotions' subtly nudges us towards the assumption that we should be classifying and managing them. Countless psychotherapy and coaching clients have come to me lamenting that they cannot control their feelings, as though this signals something is broken within them. And we're not just supposed to control them; we’re meant to know them and be able to name them. Those who cannot identify the words for how they’re feeling may be diagnosed with the neuropsychological condition ‘alexithymia.’
But what taxonomy of emotion are we evaluating alexithymics against? With whose system are people with alexithymia unable to comply? The answers to these questions depend heavily on context: the emotional rules and expectations of the dominant culture.
Norbert Elias, in The Civilising Process (1939), argued that emotional expression became increasingly regulated from the medieval period onward. He thought that growing state centralisation and social interdependence produced internalised self-restraint. Emotional control, he said, intensified with the Industrial Revolution. Peter Stearns' American Cool (1994) also argues that 20th-century American culture developed a distinctive emotional style that emphasised restraint, rooted in Victorian and Industrial-era values. Eva Illouz, in Cold Intimacies: The Making of Emotional Capitalism (2007), describes how therapeutic culture and capitalism intertwined in the 20th century, creating 'emotional capital'.
Max Weber was a German sociologist whose early 20th-century work helped shape how we understand modern capitalism. Weber described the 'disenchantment' of modernity: how it strips mystery and meaning from the world, reducing human experience to something to be explained, measured, and managed. In The Protestant Ethic and the Spirit of Capitalism (1905), he called our modern situation an 'iron cage', arguing that rationalisation, once unleashed, becomes inescapable. We build rational and logical systems to serve us, and then we are forced to serve them.
In Weber's description of modernity, wherein we are compelled to render nature predictable and controllable, the project extends to our inner nature, too. Our thoughts and emotions become targets for optimisation and control.
We become so accustomed to the iron cage that we internalise it. It becomes our very skeleton.
The Market's Grief Messaging
In the West, our vocabulary and understanding of grief is still dominated by the late Swiss psychiatrist Elisabeth Kübler-Ross, author of On Death and Dying (1969) and the inspiration for the ‘stages of grief’ model: denial, anger, bargaining, depression, and acceptance. Kübler-Ross wasn't actually investigating grief, but rather people approaching their own deaths. Still, Kübler-Ross’ stages were swiftly seized upon and applied to grief, and since the publication of On Death and Dying, many of us have taken the stages-of-grief model as gospel.
You can see how a staged model with an end point appeals to people, how it fits with the modern ideal of predictable, controllable emotions. But these stages were never empirically validated; on the contrary, research has consistently failed to support the idea that grief unfolds in a predictable series of emotions. 'Closure' is a 20th-century notion; the perennial human instinct is to remember and to stay connected to our ancestors and loved ones.
With their landmark book, Continuing Bonds: New Understandings of Grief (1996), Dennis Klass and his colleagues reminded us of that. Continuing bonds theory, the idea that our relationships with the dead change but do not end, is a theory with empirical support and ample historical evidence.
For a long time, death scholars like myself have been frustrated by the societal stranglehold of stage models of grief, frameworks that include 'closure' and 'moving on'. More recently, we've been additionally frustrated that the algorithmically driven online environment reinforces the supposed truth of the stages of grief, as well as the idea that cutting emotional ties with the dead is psychologically necessary. In the years since I wrote All the Ghosts in the Machine — about digital afterlives in the social-media era — the number-one question from journalists has been whether posthumously persistent social-media profiles prevent the bereaved from 'moving on'.
Now that technology can create an AI companion from virtually anyone you've lost, the pendulum has swung again, and the change has created a new problem. The market is now selling us a line in stark contrast to 'moving on' and 'closure'. Now we're explicitly encouraged to avoid closure, to keep simulations of the dead with us, to continue those bonds with the help of a burgeoning array of death-tech apps and services. Digital resurrections. Holograms. Interactive virtual reality (VR) representations. 'Griefbots': chatbots of the dead, trained on digital remains and powered by artificial intelligence.
To what end? Well, for one, the ultimate form of emotional control: the complete avoidance of aversive emotional experiences like sadness, loneliness, and grief. Justin Harrison, founder of You, Only Virtual, has stated his goal is to eliminate grief entirely.
Follow the thread from the Industrial Revolution through to now. If we accept the flawed premises that emotions are processes to be managed and that difficult emotions are a problem or pathology, then a technological solution for those hard feelings starts to seem like a good idea. What is the optimal amount of grief? In the eyes of tech founders like Justin Harrison, the answer to that is zero, and he wants to help us hit that number. If you never have to break contact with your loved one, he reckons, if you have indefinite access to a convincing simulation of them, you never have to experience loss.
The current market is helping create a society where people feel so frightened, so unable to cope, so unsupported and self-pathologising in the face of strong emotion, that total elimination of hard emotions seems like a constructive goal.
I was working in a mental-health facility as a graduate psychology student when I first heard the word iatrogenesis: harm caused by the very treatment meant to heal. As patients were hospitalised, then admitted again and again, as they were given diagnoses and seen as sick people, they got worse. Was it the progression of their illnesses, or was their interface with the psychiatric system making them sicker? Sometimes, the cure leads to another kind of disease.
This bears repeating: grief is not a disease. Sadness is not a disease. Emotions that arise unbidden and which seem ungovernable are not diseases. They are signs that everything is working properly, signs that you are not yet a machine. But the inexorable platformisation of grief isn’t designed to help you remember that.
From DIY to AI: The Eras of Online Mourning
Over the history of the Internet, mourning online has gone through a number of stages.
First came the DIY era, when people created chatrooms to support one another, when free sites like the World Wide Cemetery were established for online memorials, and when those with deeper technical know-how constructed chatbots of lost loved ones. Eugenia Kuyda, who went on to found Replika, built a memorial chatbot of her late friend Roman Mazurenko. James Vlahos built a 'Dadbot' of his dying father and documented the process in Wired magazine.
Following and overlapping with the DIY era was the era of social media memorialisation and ‘death tech’ services offering archiving, memory preservation, and posthumous communication with mourners. ‘In memory of’ pages cropped up on Facebook from its earliest days. When Facebook started facilitating memorialisation of in-life profiles in 2007, dead people’s accounts were converted or repurposed into places of mourning, beginning the gradual transformation of social-media platforms into digital cemeteries.
And now, we have digital mourning using AI. An advertisement for the AI death-tech app 2wai portrays a woman recording her mother. The woman doesn't seem to be entirely honest with her mother, who is dancing and chatting and playing up to the camera. 'What is this?' the older woman asks. 'An interview?' The daughter demurs and says nothing about what's going on. Later, after the older woman has died, it becomes clear what the purpose was: she is now a griefbot, an interactive figure whose presence in the lives of her descendants continues down the generations, as though she never left.
'You can always give me a call,' she says to her now-adult grandson, who’s showing off the ultrasound of his future child and asking her for parenting advice.
This particular AI use case nakedly pushes grief as a pathology with a technological solution. Grief is something to be designed out, avoided through creating legions of digital ghosts. Continuing bonds on steroids, platformed and sold.
Grief is Being Sold as a Problem with a Technological Solution
So what's the problem with using grief tech to feel better, including AI resurrections and continuations? If you've read my past writing, you might challenge me. Haven't you always said that individual grievers need what they need, and that that shouldn't be policed or judged? These technologies may help some people. If people want them, enjoy them, or find comfort in them, what's the big deal? Why are you changing your message now?
The difference here is critical. In a short amount of time, we’ve moved from individuals using existing tools idiosyncratically to grieve in their own way, to commercial services with their own logics, incentives, and extractive business models. We’ve undergone the platformisation of grief.
Our current scenario involves persuading people that there is a problem with how they are feeling, that grief is suboptimal.
This is capitalist, for-profit forces once again dictating how we should or should not be experiencing emotion, and forcing a model of increasing rigidity and control that does nothing whatsoever for our psychological flexibility.
This is being hypnotised into the idea that the natural, instinctive processes we go through in our griefs (rendered plural for a reason) have something wrong with them, that we are not capable of walking these paths without technological assistance, that there is something pathological about how we organically are or react, and that we can pay someone to fix us with technology.
This is an absurd use case of AI. If we are going to continue desiccating the planet, my god, let it at least be for defensible reasons.
Max Weber's increasingly imprisoning and narrowing and rigidifying iron cage is a terrible place to be. I hate the messages about emotions and their management that are being promulgated by so many death-tech platforms. We have never needed technology's help to assure our impact and legacy in those who come after us. We have often used technology to remember and to embody the memories and influence of those gone before, but we have never needed it. This is a technological solution to a non-problem.
When we use many of today's grief-tech or griefbot platforms, we're buying into more than the product being sold. What assumptions, teachings, and understandings are you also buying into, or participating in, through your purchase of such a service? What are you afraid of? What are you unwilling to experience? What are you assuming you can't handle? What is the anxiety? What are you trying to solve for? When did you cease to trust yourself? When did you become unable to tolerate this? What has changed about your humanity?
It's not even grief treatment or grief coping that is being sold. It is a removal, an erasure of a little piece of your humanity.
In the futuristic Apple TV series Pluribus, when the protagonist yells at a member of the hive mind surrounding her, they go into uncontrollable seizures, falling to the ground in fits. Some of them die.
I think this neatly expresses what might become of us if we join the hive mind of the optimising, frictionless, discomfort-free AI-era Internet: how afraid we may yet become of difficult feelings, the more painful passions, and the lengths to which we may be tempted to go to avoid them. To my psychologist’s eye, those would be problems.
And those problems would have something to do with encroaching platformisation.
Platform Logic and the Digital Dead
As media scholars José van Dijck, Thomas Poell, and Martijn de Waal describe in The Platform Society (2018), platformisation arises from a set of values or priorities derived from 'platform logic': datafication, commodification, and algorithmic selection.
Datafication renders all human life into data. Commodification turns that data into market value, tradeable assets. And selection means that algorithmic curation controls what you encounter.
Beyond these three aspects, platformisation, particularly by big tech, also involves dependency: we become reliant on platforms, and enframed and entrapped by that reliance.
In this era of platformised AI grieving, the dead are datafied; or rather, the datafication that began in life simply continues. This includes their voices, texts, social-media posts, correspondence, videos, and work products: any and all digital remains left behind, which are increasingly ample.
Then, the dead are commodified. Our attachment to the dead, our pain and loss, becomes a market opportunity. Bereaved people are seen as a distinct market to pitch to, one reached by leveraging the fear of losing loved ones and the deeper existential horror of one's own eventual non-being and non-influence.
Then, algorithmic selection shapes how we encounter the dead: which facets and dimensions of them we meet. The digital remains to which individuals have access can differ wildly, and depending on which subset is used as training data, you can create endless recombinations and versions of the dead person's personality, appearance, and behaviour.
And dependency? As in 'Be Right Back', the famous episode of Charlie Brooker's TV series Black Mirror, bereaved users can become locked into platforms. Perhaps more insidiously, we could become locked into an ideology: that the experience of grief is not to be endured and must be technologised out.
Would Max Weber Have Made a Griefbot?
Max Weber knew something about people dying under a cloud of unresolved tensions. In 1897, he fought with his father over how his mother was being treated. Two months later, his father died, without their having reconciled. Weber had a kind of breakdown, struggling with sleep, depression, and anxiety. His ordeal was so terrible that he had to stop working and spend time in a sanatorium.
I can imagine what Weber would have thought about the rampant platformisation of emotional life. Still, he was in such deep pain. If he could have, if the technology had existed in the early 20th century, would Weber have constructed a 'griefbot' of his father? Would he have tried to have a last conversation? Griefbot apps advertise healing conversations and last goodbyes as a major benefit of their services. Might he have been tempted?
The Vulnerability We Share With the Dead
I’m concerned with living mourners, yes, and the impact upon them of the platformisation of grief. But I’m also concerned about the dead. Some find it peculiar, the idea that we should worry about the data of the deceased. The dead are dead, they say. They don’t know what’s happening with their information. What more harm can come to them?
And yet, the fate of the data of the dead should make us look to ourselves. In every dead-celebrity hologram, artificially intelligent griefbot, or VR simulation of a deceased loved one, we should see our own, living situation reflected. We are more agentic, less helpless, and more in control than the dead, but we still have something in common.
Observing what we do with the data of the deceased and reflecting on the platformisation of grief, we hone our awareness of the existential crisis in which we are entangled. Profit is extracted from previously uncommodifiable realms: the raw materials of our humanity and personhood, our thoughts and emotions and memories, the things that make us us. All of it is strip-mined like precious ore from a colonised and stolen land.
Remember the giant’s threat in Jack and the Beanstalk?
Fee-fi-fo-fum!
I smell the blood of an Englishman.
Be he alive, or be he dead,
I’ll grind his bones to make my bread.
If you're reading this, you live yet, but you have something in common with the dead: you are exquisitely vulnerable to being dehumanised, treated as mere data to be used. Some tech companies have good, albeit sometimes misguided, intentions for their 'death tech' products. Some are more ruthless and uncaring. Every single one has a beady eye trained on the bottom line.
Frequently Asked Questions
What is the platformisation of grief?
The platformisation of grief refers to how mourning and bereavement are being transformed by digital platform logic — datafication (reducing the dead and their mourners to data points), commodification (turning grief into a market opportunity), and algorithmic selection (allowing platforms to mediate how we encounter and interact with the dead).
What are griefbots and how do they work?
Griefbots are AI-powered chatbots designed to simulate conversations with deceased loved ones. They're trained on the digital remains left behind by the dead — texts, social media posts, emails, voice recordings, videos — and use artificial intelligence to generate responses that mimic the deceased person's communication style and personality.
Are griefbots harmful?
The answer depends on context. Individual grievers creating their own memorial technologies as part of their unique grief journey is very different from for-profit companies marketing the message that natural grief responses are suboptimal and require technological intervention. The concern is not with the technology itself, but with the capitalist framing that positions human emotion as a problem to be solved and monetised.
What is 'continuing bonds' theory?
Continuing bonds theory, developed by Dennis Klass and colleagues, challenges the 20th-century assumption that healthy grief requires 'moving on' or achieving 'closure.' Instead, it recognises that maintaining ongoing connections with the deceased — through memory, ritual, conversation, and other means — is a normal, healthy, and historically universal aspect of human mourning.
How does platformisation affect the data of the dead?
The dead have no legal personality and cannot consent. Their digital remains become resources for mining, training AI systems, and generating profit. Unlike living users, who have at least some (limited) control over their data, the deceased are entirely unagentic. The data of the dead is often plundered with even more impunity than that of the living.
What can we learn from how the dead are treated online?
How we treat the data of the dead reveals the broader logic of platform capitalism and its treatment of all humans as data subjects. The vulnerability of the deceased to exploitation mirrors the vulnerability of the living — we too are treated as data to be mined, commodified, and used. The platformisation of grief is a canary in the coal mine for the platformisation of all human experience.
Sources
Öhman, C. (2024) The Afterlife of Data: What Happens to Your Information When You Die and Why You Should Care. Chicago: University of Chicago Press.
Dixon, T. (2003) From Passions to Emotions: The Creation of a Secular Psychological Category. Cambridge: Cambridge University Press.
Elias, N. (1939) The Civilising Process. Basel: Haus zum Falken. [English translation 1969, Oxford: Blackwell]
Stearns, P.N. (1994) American Cool: Constructing a Twentieth-Century Emotional Style. New York: New York University Press.
Illouz, E. (2007) Cold Intimacies: The Making of Emotional Capitalism. Cambridge: Polity Press.
Weber, M. (1905) The Protestant Ethic and the Spirit of Capitalism. [English translation 1930, London: Allen & Unwin]
Kübler-Ross, E. (1969) On Death and Dying. New York: Macmillan.
Klass, D., Silverman, P.R. and Nickman, S. (eds.) (1996) Continuing Bonds: New Understandings of Grief. Washington, DC: Taylor & Francis.
Kasket, E. (2019) All the Ghosts in the Machine: The Digital Afterlife of Your Personal Data. London: Robinson.
Vlahos, J. (2017) 'A Son's Race to Give His Dying Father Artificial Immortality', Wired, July 2017.
van Dijck, J., Poell, T. and de Waal, M. (2018) The Platform Society: Public Values in a Connective World. New York: Oxford University Press.
Looking for a keynote speaker who examines grief technology, AI ethics, and the human cost of platformisation? Explore Elaine’s speaking offering here.
Elaine is available for keynotes, workshops, panel discussions, and leadership events in the UK, Europe, and beyond. Her talks challenge audiences to question the assumptions embedded in death tech, grief technology, and the broader commodification of human emotional experience.