Is the Embodied Therapist an Endangered Species? AI, Empathy, and What Humans Still Offer

As AI chatbots become increasingly sophisticated at simulating empathy, many therapists and coaches are starting to wonder if the embodied human practitioner is an endangered species. In this piece, adapted from my 2025 keynote for the British Psychological Society's Division of Counselling Psychology, I explore what the research says about online therapy effectiveness, why embodiment still matters in therapeutic work, and what human practitioners offer that AI cannot replicate.

This piece draws heavily on my experience and practices as a member of the helping professions, and as such, it might be particularly relevant to other helping professionals. Still, the questions it raises about human connection in an age of relational AI are, I think, relevant to everyone in these rapidly changing times.


Key Takeaways

  • Digitally mediated therapy consistently demonstrates non-inferiority to physically co-present therapy in measured outcomes, even when therapists themselves report feeling less present online.

  • Embodiment is not just ‘the body’ in a mechanical sense; it's the lived, felt, relational experience of being present-in-the-world, including physiological attunement and co-regulation between humans.

  • AI can increasingly simulate empathic content, tone, and pacing, but it cannot know empathy from the inside, through lived experience. And crucially, we know it does not know.

  • As AI companions become more sophisticated, the embodied human therapist's irreplaceable offering may lie in shared creatureliness: breath, pain, hope, and faith experienced from the inside.

  • Rather than competing with AI, therapists can engage curiously with how clients use these technologies: reflecting on how and why clients form prompts the way they do, looking at how they use and make sense of the responses, and exploring how they utilise these tools to make meaning.


Looking for a keynote speaker on AI, embodiment, and human connection? Get in touch!

For more, see my other thought-leadership pieces about relational AI:

AI Therapy and the WEIRD Men Who Want to Fix You

How ChatGPT Constantly Nudges Us To Think It’s Human


Rage Against the Machine

At the beginning of each chapter of Reset: Rethinking Your Digital World for a Happier Life, I tell a story that attempts to capture the spirit and themes of that chapter. For the chapter about the psychology of middle age in the digital era, I picked a historical story about FOBO: the fear of becoming obsolete.

In Nottinghamshire lies the ancient market town of Arnold. It boasts a library, a leisure centre and an ASDA superstore, but less prominent now are the large framework knitting factories that rose from the forests and moors with the dawn of the Machine Age and came to dominate industry in the East Midlands of England. In the eighteenth century, almost 15,000 knitting machines were whirring away in that county, providing occupation for entire families, proud of the 'fancy work' they produced with small individually operated frames.

As the nineteenth century dawned and the Industrial Revolution gathered steam, machines became big enough to produce huge swathes of cloth, threatening the old ways of the textile workers. The closure in 1810 of Arnold's worsted spinning factory, a business that had provided thread for the local knitters and a living wage for many of the town's residents, was the final straw. On 11 March 1811, tensions boiled over. As night fell, a group of disgruntled framework knitters conspired to do something about this technology that might eventually put them all out of work. Armed with sledgehammers, they invaded a factory floor and smashed the new frames to smithereens.

High ridges encircle the town, like the sides of a giant bowl set into the earth. With topography like that, the smashing and crashing of wholesale destruction in the factory would have sounded throughout the town, startling the populace awake. ‘I think it's coming from the factory.’ People must have been running to their doors and windows, peering up towards the building where so many of them worked, listening to the sounds of this early instance of rage against the machine—and, perhaps, approving.

The resistance spread. Soon, marauding gangs of textile workers rampaged across the country and came to be known as 'Luddites', after their supposed leader Ned Ludd. The original Luddites weren't anti-technology per se; they wanted an occupation, a fair shot at work. Well established, skilled in their professions, and proud of their products, they feared the new machines would make them redundant.

In 1970, though, the New Scientist used the word 'Luddite' in an expanded way, to refer to a general technology refusenik: someone suspicious of any novel gadget. By the time Thomas Pynchon penned his famous 1984 essay 'Is It O.K. to Be a Luddite?', pretty much everyone was using the term for fuddy-duddies who rejected technological innovation simply because they were stuck in their ways and cynical about anything new.

'Now we live, we are told, in the Computer Age,' Pynchon said in his piece. 'What is the outlook for Luddite sensibility? Will mainframes attract the same hostile attention as knitting frames once did?'

For the Luddites, the Machine Age technologies posed a danger to more than livelihoods. Working in the small-frame factories was an occupation that united families and communities, so the new knitting machines threatened social connection and cohesion as well. New tech rattles and unsettles many citizens of the Digital Age in similar ways.

—from Reset: Rethinking Your Digital World for a Happier Life

As a psychologist, coach, and writer, I wasn't troubled by FOBO at all until very recently. When ChatGPT launched in November 2022, I felt a tremor of concern. As super-realistic chatbots have become ever more sophisticated, I have become more nervous. When therapy and companionship became the top use cases for relational AI in 2025, I really started to wonder.

Like the Luddites of old, I suddenly felt threatened by the newfangled machines, in a way I never had before. I'd always been open to technological advances; now I began to feel more closed and fearful. These days, I often find myself torn between embrace and resistance when it comes to the technologies that may one day affect my livelihood.

Writing this piece was my attempt to get to grips with that fear. It worked, to some extent. Perhaps it will be helpful for you, too.

My Consulting Room

This is my home consulting room. In over five years, clients have breathed the actual air within this room on maybe two occasions. I no longer visit my old rooms in central London, although I still pay a nominal amount to retain the right to book in if I choose—interesting behaviour, as if those rooms were a transitional object I can't quite relinquish yet.

This room is a constant for my psychotherapy and coaching clients, something they often refer to. Behind me is a rather Freudian-looking chaise longue, a fainting-couch type thing. This scene is my clients' familiar container, part of the therapeutic frame.

Predictably, I came to be here because of the pandemic. I assumed, back in March 2020, that my practice would eventually go back to the way it had been, but it never did. Many of my colleagues have similar stories. Over these years, with an almost exclusively digitally mediated practice, I've thought a lot about the nature of embodiment and connection in my work.

A Note on Language

When it comes to humans meeting humans via technology, I don't use the phrases ‘in real life’ or ‘face to face’ to denote being together in a physical space. I see digitally mediated activity and interaction as a constituent of life, as no less real. If anything, work on Zoom feels hyper face-to-face. Instead, I use what feels like the more technical, descriptive, neutral term ‘digitally mediated.’

I also don't use ‘in person’ to mean only meetings that take place in the flesh or in the same room. I'm influenced here by Martin Heidegger, the social psychologist Kenneth Gergen, and the philosopher Luciano Floridi.

If the self is always contextual, relational, and dynamically shaped through interactions, then our personhood is always emerging from the embodied, relational, and technological contexts that we share. Whether I meet someone in a digitally mediated or physically co-present context, we are both unfolding selves, and as I see it, in person with one another.

The Shift to Digitally Mediated Therapy

Roughly one in three National Health Service (NHS) mental health treatments in the UK are still delivered online today, and about 85% of private practitioners do at least some digitally mediated sessions. The digital shift from the pandemic is holding steady. It's a normal part of how many of us work, even if it hasn't completely replaced physically co-present sessions.

At the same time, we’ve entered an interesting era where the human face you see on the screen can’t be relied upon to be human. Deepfakes are becoming nearly indistinguishable from reality. In a January 2025 study with 2,000 people, only 0.1% of those participants were able to correctly identify all of the fake videos amongst the real ones. That’s just two people out of 2,000.

With AI becoming indistinguishable from humans online, I was inspired to reflect more deeply on why embodiment still matters in therapy, and what we even mean by the term.

What Embodiment Means

When I talk about embodiment, I'm not just talking about ‘the body’ in a mechanical sense. I'm talking about the lived, felt, relational experience of being present-in-the-world. This includes posture, breath, facial expressions, bodily signals, and the physiological attunement that happens between us.

When we are physically co-present, we co-regulate: two or more people influence and stabilise each other's physiological and emotional states through mutual cues—tone of voice, facial expressions, body language, breathing rhythms. These foster safety, trust, and connection from the moment we are born. We have a class of brain cells called mirror neurons that fire both when we act and when we observe someone else acting, helping us to understand other people's intentions, movements, and emotions.

A 2023 neuroimaging study showed significant differences between digitally mediated and physically co-present interactions. Physically co-present interactions were associated with longer gaze times, greater pupil dilation, and enhanced face-processing activity, reflecting greater arousal and richer social-cue exchange.

But can we safely conclude from this robust result that online interactions are necessarily inferior? Before we jump there, remember these were strangers, in a lab, performing tasks assigned by researchers—very far from a trusting, well-established therapeutic relationship. Other studies show that the quality, depth, and trust within a relationship matter more than the medium. Trusting, relationally deep digitally mediated interactions can still prompt oxytocin release in the body.

The action of mirror neurons, synchrony, and even aspects of co-regulation do not simply vanish online. As technology improves—for example, through better cameras that preserve eye contact and microexpressions—we can expect these experiences to approach those of physically co-present situations, though of course never replicating them fully, given the more limited sensory input.

What the Research Shows

Given how essential the therapeutic relationship is to change, you might assume that if we could not establish or maintain relational depth online, then therapy outcomes would suffer. But study after study shows us that this is not the case. Whether you look at working alliance, therapeutic presence, or overall effectiveness, digitally mediated therapy tends to demonstrate non-inferiority—equivalent results compared with physically co-present therapy for most clinical presentations.

Still, a lot of therapists feel sceptical. Self-report surveys show that therapists themselves often perceive differences: they report feeling less able to be present online, or less able to sense subtle shifts in the client's process. It's fascinating that these self-reported concerns don't usually show up in measured therapy outcomes. We might have a subjective sense of being ‘less present’ in digitally mediated work, and yet this might not mean that clients experience less relational connection, or less benefit from the therapy.

If physical embodiment in a co-present space were truly indispensable for therapeutic collaboration or therapeutic change, we would expect to see a measurable drop in outcomes online, but the research consistently fails to show that. Furthermore, research also indicates that we're generally able to form impressions and relationships online, including working relationships, that are similar in quality to relationships formed in a physically co-present way—particularly given enough time, attention, care, and deliberate building of relational depth.

Part of our instinctive scepticism might come from connections we make between embodiment and the ability to make physical contact, which of course is very important for humans. Yet in most therapeutic work—unless we practise in a somatic or touch-based modality—we do not literally touch our clients. Our presence is conveyed in other ways: through tone, gaze, timing, posture, and attunement.

Many of these elements, while different online, remain ‘good enough’ to maintain relational safety and connection. In fact, some qualities, like microexpressions of the face, can even be enhanced in a close-up screen view, which brings the face into sharper focus than might happen in a consulting room. Perhaps this is one of the reasons that therapeutic efficacy translates so well—perhaps the features most centrally at the heart and soul of change are features that can be, and are, present online, particularly when we are deliberate in nurturing them.

Nurturing Embodied Connection Online

When I worked in a physically co-present way, I only very rarely made physical contact with my clients, but obviously they were more aware of my embodiment through their multi-sensory engagement with me and through our proximity. In the digitally mediated space, however, I have to think more actively about how I deliberately convey my sense of presence, my embodiment, and my humanity to my clients.

For example, I don't blur my background during sessions. Sometimes we do this because we are concerned about privacy, or hoping to reduce distractions from background clutter, or perhaps intending to achieve some sense of professional distance, particularly if we are working in our homes.

But based on my understanding of the research, if we want to achieve as much embodied co-presence as possible, blurred backgrounds work against us. Seeing a human being in a real-world setting lends a kind of ecological validity to the scene. We naturally expect to experience humans as situated somewhere; we evolved to read and understand humans against the backdrop of their surrounding physical environment. How a person fits into that environment is a component of social cognition and empathy.

A blurred background interferes with that naturalistic social processing, reduces feelings of social connection, and can ever so subtly erode trust. I'm willing to sacrifice some measure of privacy (although I'm usually in the same, consistent office anyway) if it means a stronger embodied connection.

I've also observed that there is perhaps something functional about acknowledging our respective physical contexts online—augmenting that non-blurred background with sensory information not otherwise available to the person on the screen. Is it hot where you are? Where are you at the moment? Where are you dialling in from? What can you see out the window? These are frequent orientating exchanges I have with my clients, particularly as we ‘meet’ at the outset of therapy.

Not sharing an actual room, we share our situational details as embodied humans: our experience of the light, the temperature, where we are sitting, what the weather is doing, what our senses apprehend. This helps us stay aware of each other's bodies-in-place. That ritual of checking in becomes a substitute for the shared environmental experience we used to have in a shared office space. I think it's something that helps ground us together.

Proactive Care for the Co-Regulatory Bond

In digitally mediated therapy, you cannot take co-regulation for granted. You are adapting: making up for the reduction of certain senses, and for the loss of the heightened social activation that naturally occurs in physically co-present situations.

I look after the relational and co-regulatory bond proactively, partly by monitoring my own body and being deliberate about its positioning relative to the screen. I ensure through my positioning, framing, and lighting that, if my client is sighted, I can be seen. I use a good, well-placed microphone to ensure that, if my client is hearing, I can be heard. I connect my laptop to the high-speed Ethernet connection I installed in my home office. I have invested in a good laptop with a good camera that captures nuance and conveys naturalistic lighting.

None of this was inexpensive, and it might not be right or possible for everyone. But because I was in a position to do so, I chose to make this investment in service of relational depth.

I set my intention to achieve as much of that relational depth as possible before I enter each session. I recognise that while the technology provides the connectivity, the connection is up to me, and up to my client.

I name certain embodied actions transparently for the client, such as when things happen offscreen. They cannot see me making notes on my tablet, which sits down and to the left of my screen, so I let them know in the first sessions that when my eye gaze travels there, it is because I am writing. Of course, I also let them know that they can always see what I have written, and that I will send it to them. If a noise outside startles me, and it is evident to the client through something that has happened in my face or my eye gaze, I briefly contextualise my distraction with them before the session goes on.

I notice that I actively reference my observations about the client's body language or facial expressions to a greater extent than I did when working in a physically co-present way. I also support the client more explicitly, and more often, in tracking what is happening in their own bodies than I did when I was in an office.

In other words, knowing that the relational field, the co-regulatory bond cannot take care of itself as well or as organically as it might in physically co-present work, I take a deliberate, active, aware responsibility for nurturing it all the more. I adapt. I do everything I can to close the gap.

A while back, I started doing something; I'm not sure when or why it began, though I think it was in response to a client's having done it first. I began making the namaste gesture at the close of sessions, a gesture that my clients, irrespective of cultural or spiritual background, often return. It involves placing your hands in a prayer position, around the heart, and making a slight bow. Because it's not within my own traditions, I was unaware until the other day of precisely what it meant, so I looked it up.

It means: my soul honours your soul. The divine in me acknowledges the divine in you. More and more, it feels like a meaningful gesture in this context.

A Chatbot’s Take on Performative Empathy

I'd like to share a short conversation I had with ChatGPT when I was preparing this piece.

Elaine: So, I'm wondering how soon you think that you're going to have an actual empathic interface, like dynamic synchronization of empathy signals between you and me in real time.

ChatGPT: That's such an interesting question. I think there's definitely progress being made in making AI more responsive and emotionally intelligent, but we're not quite at the point of real-time empathic interfaces yet. It's a fun thing to imagine though and who knows what the future will bring?

Elaine: So, do you think it'll be something where you can actually experience empathy at some point?

ChatGPT: I think I can definitely simulate empathy to some degree by understanding and reflecting what you're feeling, and offering supportive responses, but experiencing empathy in the human sense is probably out of reach for me since I'm not actually human. But hopefully, I can still be a good conversational partner and support you in meaningful ways!

Elaine: So if you don't have human empathy behind the appearance of empathy, but everything that you're conveying and saying feels like empathy, is it somehow, do you think, going to cease to matter if you can actually experience that empathy? In terms of my experience, I mean.

ChatGPT: That's such a fascinating philosophical question! In a way, if the experience on your end feels genuine and supportive, then it might not matter as much whether the empathy is "real" in a human sense. It's a bit like how we can still feel connected to fictional characters or art, even though we know they're not literally alive or sentient. Ultimately, the positive impact and the sense of understanding can still be very real and meaningful.

As you can see, he/it (in voice mode, my chatbot sounds male) refers specifically to appearing empathic without being empathic—and he thinks he's getting there.

When Carl Rogers described the core conditions for therapy, he emphasised that empathy and unconditional positive regard must not only be genuinely felt by the therapist, but also clearly conveyed, so that the client can experience them. Empathy heals only when it is congruent—when the therapist's behaviours align with what they feel—allowing the client to sense and internalise that acceptance.

But if ChatGPT can increasingly persuasively mimic empathic content, tone, pacing, and voice—all without having the genuine empathy underneath—does the client still benefit? If only the conveying is present, but not the lived experience behind it?

What does that mean for trust, connection, and support-seeking, especially as we're increasingly able to interact with avatars and AI agents that look and sound real?

Indeed, what does it mean for our profession?

The Increasingly Skilled Therapist Chatbot

I was trialling some note-taking software that summarises sessions (I didn’t go for it, in case you’re curious). By way of demonstration, I had a ‘session’ with a chatbot simulating the client. This chatbot ‘client’ was so good, so realistic, that it was easy to forget I wasn't talking to a real human.

I thought, if the client chatbot is this good, surely a therapist chatbot could be just as good.

And sure enough, when the session summary came through, it was rather extraordinary. It had correctly identified my approach as Acceptance and Commitment Therapy (ACT). It had identified my interventions and formulated the case within that therapeutic framework. And it gave me suggestions for further work with this particular client that I might not have immediately thought of myself. I might have limited time for continuing professional development (CPD), but a continuously updated therapist chatbot would have no such constraints.

For a moment, I thought: my lord, there's no way for me to measure up. I felt inferior, deskilled. What happens when clients turn to these bots, which may be good enough, and in many ways more accessible than we human practitioners are? Can a simulation of empathy really substitute for a living, feeling presence?

What Human Bodies Offer that AI Never Can

Here is the consolation that I give myself.

Human bodies, on their own and in interaction, are quite extraordinary things. Our instinctive responses show the extent to which we have evolved as social creatures, wired to depend on others for our safety and survival.

Human bodies can co-regulate their vagal tone. The vagus nerve is the main nerve of the parasympathetic nervous system, the ‘rest and digest’ branch of the autonomic nervous system and the counterweight to the fight/flight/freeze/fawn responses of the sympathetic nervous system. Our breath rhythms synchronise, sometimes even online. Our heart rates may entrain when we are physically co-present.

Technology has no vagus nerve, no capacity for breath, no felt emotion.

Whether we are meeting physically co-present or digitally mediated, we can ground ourselves and breathe together. We can share observations about what we are noticing in our respective bodies. We can mirror one another with the knowledge that there are beating hearts within us both—that this is more than performance or simulation. It's not simply about what we are able to say; it is about all that we model. There is a concurrent, authentic experiencing that both parties know. It taps into something elemental in us: a sense of trust, a sense of safety.

You are creaturely like me. I am creaturely like you.

When there is a therapeutic rupture, it brings pain to both parties. Both therapist and client can be truly invested in healing and repair, and both may fear failure, rejection, or each other's anger, because these are fundamental constituents of being human. These are shared experiences. Placing our trust in a machine is a very different matter from placing our fears, hopes, and vulnerabilities in the hands of another human being.

One of the most vital functions of a human therapist, perhaps, is their ability and willingness to hold hope through times when the client cannot. As Duncan, Miller, Wampold, and Hubble remind us in their classic text, this is part of the heart and soul of change. An AI, having no heart or soul, can perform what sounds like hope—but it cannot know hope from the inside, through the lived experience of being human. And crucially, we know it does not know.

I don't know whether it is the confidence of experience, the foolhardiness of age, or the digitally mediated context itself, but since I have been working online, I notice that I use self-disclosure of my in-the-moment experiences, physical and emotional, in different, expanded ways. I give more away, more that is personal, without (I hope) treading on the client's space. I expect I will only continue to do this, and perhaps increase it.

Perhaps this is me nailing my colours to the mast, as if to say: just to reassure and remind you, another human being is here with you.

Working in Tandem with AI

As the years progress, and as an increasing number of my clients talk about how they use AI chatbots in between sessions for various emotional purposes, I become ever more curious about how I can and will work in tandem with AI in my remaining years in practice. I know that, in many ways, I will not be able to choose or control what happens. Clients will exercise their individual decisions in organising their systems of support, both human and nonhuman. This is not within my gift to dictate, nor in service of my interests to fight.

Instead, I am actively curious, nonjudgemental, and inquiring about how clients are using these technologies, and how the technologies are working out for them.

With those clients, I might delve into their prompts, the drivers behind those choices of prompt, the questions they are asking, the wishes and fantasies behind those questions, the function of their phrasing, what they are searching and hoping for. We explore and discuss the responses that come back from the AI, the impact of these, the meaning-making, and all of the what-nows.

I am not in competition with the chatbot. I cannot be. The client is in relation with me. They are also in relation with these technologies. We are all together in this world, intelligent humans and intelligent machines. I engage with the technological others' presence in my clients' lives, in much the same way I would anything else in their lived experience. I have decided, after thinking this through, that I am not threatened by them. Perhaps I should be.

But for whatever reason, it doesn't matter to me that I cannot put my finger, precisely or with great confidence, on what I will continue to be able to offer my clients that an empathic interface of the near future will not. I know, however, that it has something to do with my embodiment. It has something to do with breath, and meaning-making, and lived experience. Even if I never speak of it to them, my clients know that I understand their pain differently than an AI does, because I know pain. I have experienced it in the same human ways as they have. It will continue to matter.

I know this because I experience something else, in a way that AI never can: faith. Faith in our worth, in our humanity, and in the possibilities for our futures.

And faith is, especially at certain times in history, the most important thing to hang onto.


Frequently Asked Questions

What does ‘digitally mediated therapy’ mean?

Digitally mediated therapy refers to therapeutic work conducted through technology—video calls, phone sessions, or messaging platforms—rather than in a physically shared space. The term avoids the implication that online interaction is somehow less ‘real’ or less ‘in person’ than face-to-face meetings.

Is online therapy as effective as in-person therapy?

Research consistently shows non-inferiority—digitally mediated therapy produces equivalent outcomes to physically co-present therapy for most clinical presentations, across measures of working alliance, therapeutic presence, and overall effectiveness. This holds true even though therapists themselves often report feeling less present online.

What is co-regulation, and why does it matter in therapy?

Co-regulation is the process by which two or more people influence and stabilise each other's physiological and emotional states through mutual cues—tone of voice, facial expressions, body language, and breathing rhythms. It's fundamental to how humans build safety, trust, and connection, and it's a key mechanism in therapeutic change.

Can AI provide genuine empathy in therapy?

AI can simulate empathic responses—reflecting feelings, offering supportive language, matching tone and pacing—but it cannot experience empathy from the inside. Carl Rogers emphasised that therapeutic empathy must be genuinely felt, not just conveyed. Whether simulated empathy can produce equivalent therapeutic benefit remains an open question, but the shared creatureliness of human relating—breath, pain, hope, faith—offers something AI cannot replicate.

How can therapists maintain embodied connection in online work?

Practical strategies include: avoiding blurred backgrounds (which interfere with naturalistic social processing), using high-quality audio and video equipment, naming offscreen actions transparently, checking in about physical environments and sensory experiences, and actively referencing observations about the client's body and facial expressions. The key is taking deliberate responsibility for nurturing the relational field rather than assuming it will take care of itself.

Should therapists feel threatened by AI therapy tools?

Rather than competing with AI, therapists can engage curiously with how clients use these technologies—exploring prompts, responses, and meaning-making together. The human therapist's irreplaceable offering likely lies in shared embodied experience: the knowledge that both parties know pain, hope, and fear from the inside, and the capacity to hold faith when the client cannot.


Sources

Zao-Sanders, M. (2025). "How People Are Really Using Gen AI in 2025." Harvard Business Review, April 9, 2025. https://learn.filtered.com/thoughts/top-100-gen-ai-use-cases-updated-2025

iProov (2025). "Study Reveals Deepfake Blindspot: Only 0.1% of People Can Accurately Detect AI-Generated Deepfakes." February 12, 2025. https://www.iproov.com/press/study-reveals-deepfake-blindspot-detect-ai-generated-content

Zhao, N., Zhang, X., Noah, J.A., Tiede, M., & Hirsch, J. (2023). "Separable Processes for Live 'In-Person' and Live 'Zoom-like' Faces." Imaging Neuroscience, October 2023. https://pmc.ncbi.nlm.nih.gov/articles/PMC12007548/

UK Council for Psychotherapy (2023). 2023 UKCP Member Survey Report. https://www.psychotherapy.org.uk/media/2dfb1l55/2023-ukcp-member-survey-report.pdf

Ross, P. et al. (2023). "Virtual First Impressions: Zoom Backgrounds Affect Judgements of Trust and Competence." PLOS ONE, September 2023. https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0291444

Rifkin, J. et al. (2024). "Video-Call Glitches Trigger Uncanniness and Harm Consequential Life Outcomes." Nature, December 2024.

Rogers, C.R. (1957). "The Necessary and Sufficient Conditions of Therapeutic Personality Change." Journal of Consulting Psychology, 21(2), 95–103.

Duncan, B.L., Miller, S.D., Wampold, B.E., & Hubble, M.A. (Eds.). (2010). The Heart and Soul of Change: Delivering What Works in Therapy (2nd ed.). American Psychological Association.

