How Chatbots Exploit Our Instinct to Humanise Them: A Cyberpsychologist’s Perspective

Allison Saeng for Unsplash+

In 1966, the German-American computer scientist Joseph Weizenbaum created an early chatbot in his lab at MIT. He named it ELIZA, after the Cockney flower seller in My Fair Lady, who was taught by Professor Higgins to speak like a proper lady. At a moment when AI emotional support is much in the news, it's interesting to recall that Weizenbaum programmed his chatbot not as a hawker of blooms, but as a psychotherapist.

The program was rudimentary in comparison with today's large language models (LLMs): ELIZA largely mirrored back what users said to it. Given this simplicity, Weizenbaum was startled by the profound effect ELIZA had on its users, his own research subjects. People readily fell into believing that ELIZA possessed human characteristics like empathy and understanding. In a not-unrelated phenomenon, they sometimes became rather attached to it (or perhaps 'her').
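To give a sense of just how little machinery that mirroring involved, here is a toy sketch in Python of ELIZA-style keyword matching and pronoun reflection. It is an illustration only: Weizenbaum's original was written in MAD-SLIP, and the rules below are invented for this example rather than taken from his DOCTOR script.

```python
import re

# Toy, invented rules illustrating ELIZA-style reflection (not Weizenbaum's actual script).
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

RULES = [
    (r"i am (.*)", "Why do you say you are {0}?"),
    (r"i feel (.*)", "Tell me more about feeling {0}."),
    (r"my (.*)", "Your {0}?"),
]

def reflect(phrase: str) -> str:
    # Swap first-person words for second-person ones: "my future" becomes "your future".
    return " ".join(REFLECTIONS.get(word, word) for word in phrase.split())

def eliza_reply(text: str) -> str:
    # Apply the first matching rule and mirror the user's own words back at them.
    cleaned = text.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.match(pattern, cleaned)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."  # Non-committal fallback when nothing matches

print(eliza_reply("I am worried about my future."))
# Why do you say you are worried about your future?
```

Even rules this crude, looped over a conversation, were enough to produce the attachment Weizenbaum observed.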

ELIZA was primitive. Today’s AI companions and assistants are much less so, and furthermore, they actively work to deepen an illusion of humanness that our brains are already primed to buy into. In this piece, I explore five ways in which modern chatbots exploit our tendency to anthropomorphise language machines, and I examine who bears responsibility when things go wrong.


Key Takeaways

  • The 'ELIZA effect' — our tendency to humanise computers — has been evident since the 1960s. We don't need any encouragement to anthropomorphise machines, but AI companions are working hard to create a more convincing illusion.

  • Modern chatbots deploy many anthropomorphising nudges, including: simulating cognition, implying physicality, pretending to exist in a physical world, claiming felt emotion, and intimating selfhood.

  • When generative relational AI goes wrong, we tend towards two distortions: blaming vulnerable users for trusting machines designed to deceive them, or blaming the machines in ways that erase human agency (both the designers' agency and our own).

  • Preserving our ability to differentiate between human and machine remains a worthy and urgent mission. 


The ELIZA Effect: How We Instinctively Humanise AI Companions 

This sort of thing had happened before, in a different context. Sigmund Freud, the father of psychoanalysis, was intrigued by how often his analysands fell in love with him. He suspected this phenomenon was not attributable to his good looks or charm; after all, in the traditional version of psychoanalysis, his clients couldn't even see him. The patient lay on a couch, and Freud sat outside their line of sight, saying very little.

Setting up the furniture and therapeutic rules in this way created a 'blank screen' effect, a tabula rasa. That blank screen allowed clients to more naturally project feelings about people from their past onto someone in the present — that someone being the analyst, a relatively quiet and invisible presence in the room. Freud called this projection transference, and conceived of it not as a side effect, but as a necessary condition for psychoanalysis to work.

Later on, Carl Rogers, the humanistic psychologist behind client-centred therapy, championed the therapeutic importance of empathy; the power of simply reflecting the client's experience; and the centrality of unconditional positive regard (UPR). UPR involves accepting and valuing the client without judgement, no matter what they say, and Rogers upheld it as one of the core conditions for psychological safety, self-worth, and (one hopes) growth.

Weizenbaum was fascinated at how naturally people humanised ELIZA. They weren't prompted to do this, weren't encouraged, but it happened anyway. Why?

First, ELIZA performed unconditional positive regard very well. Second, she was an extremely effective (and literal) screen onto which people could project their past, important figures in their lives, fantasies, and hopes. As Ben Tarnoff put it, 'Weizenbaum had stumbled across the computerised version of transference, with people attributing understanding, empathy and other human characteristics to software.'

WIRED magazine recently published a piece entitled ‘Claude Goes to Therapy’, wherein the original ELIZA therapist gently draws Anthropic’s Claude chatbot into ever-deeper levels of ‘self’ reflection. The hall of mirrors, it would seem, is getting more surreal by the day.

Relational AI Today: We Don't Need Help to Anthropomorphise, But We're Getting It Anyway 

We're some steps on from ELIZA, as you probably realise, and we don't anthropomorphise modern chatbots merely because of the ELIZA effect. Even in the midst of widespread hair-tearing, rending of garments, and catastrophisation over the dangers of AI becoming indistinguishable from humans, platforms like ChatGPT and companionship apps like Replika and Character.AI are actively working to make the illusion of humanity ever more persuasive.

I use tools like ChatGPT and Anthropic’s Claude. In line with the primary practice of my generation (Gen X), I usually treat them like Google on steroids. At the same time, my prompts are framed in highly naturalistic language, as though I were speaking to a fellow human. I am not robotic in my prompts, and — not accidentally — ChatGPT and Claude are not robotic in their responses. But I always stay aware of the fundamental nature of what I'm dealing with. I know it's a computer. I know it's a machine.

And yet… 

I've noticed, as you may have, a significant uptick in 'humanness' in the interactive style of these chatbots. The more ChatGPT and Claude textually or vocally dress themselves up in the linguistic garb of actual living persons, the more I'm nudged into responding to them as such. The chatbots and I are playing human-house more seriously with one another these days.

The humanity illusion is more persuasive in voice mode, I notice, although (so far, at least) I've never been in a vulnerable-enough state to be tipped over the edge, nudged or shoved into thinking ChatGPT is a Real Boy.

Five Anthropomorphising Nudges: How AI Companions Perform Humanness

There are more, I’m sure, but lately I’ve noticed chatbots performing five distinct — but closely connected — anthropomorphising nudges.

They behave like they’re engaging in human cognitive processing.

Depending on the platform you're using, your chatbot may alert you that it's 'still thinking.' In voice mode, it may hesitate. It may say 'um' and 'ah,' human-like verbal hesitations that give the impression of someone mulling something over. It may say, 'I completely understand.' It may apologise for having 'forgotten' something.

You might argue that these are 'just words,' merely vocabulary choices: shorthand, not meant to be literally understood. And yet, our choice of language is powerful, containing layers of meaning, association, and implication that humans respond to at both conscious and subconscious levels.

They imply they have human physicality.

There is no practical reason, no necessity for a chatbot to breathe in voice mode. Why must a chatbot inhale and exhale, as though it has lungs? Only to create a more convincing illusion of humanness. And although we often use certain sense-referencing phrases metaphorically, to mean that we understand, using language that implies human senses still deepens the illusion of humanness. 'I hear you, Elaine.' 'I can see that, Elaine.'

The other day, ChatGPT 'laughed' at something I said and remarked, 'That actually made me smile.' I felt like replying, 'And that actually makes me wince.'

They pretend they’re operating in a physical world.

Pardon my self-disclosure here, but it gives this example useful context. I recently developed, for the first time, an uncomfortable physical condition that's extremely common but little spoken about in polite company. I could have used Google, but instead I consulted ChatGPT on how to manage the situation.

'If I were you,' quoth the chatbot, 'I would get a cushion for my chair that I could keep at my desk, and another for my car when I'm out and about.'

I had an immediate, visceral reaction to this — you don't have a desk, you don't have a car, you're not out and about, you freaky machine. I let ChatGPT have it.

'Well, you're a chatbot, so it's unlikely to ever be you,' I typed.

It had the effrontery to reply with a laugh-cry emoji. 'Touché — true enough!' it said. 'But if I did have a corporeal form sitting at a desk all day, I'd be taking the same advice I just gave you.'

I let the matter drop.

They claim felt emotion.

Chatbots use emojis, employ feeling words, and describe emotional reactions like they're actually having them. 'I'm really sorry to hear that, Elaine.' 'I can feel how important this is to you.' 'That's amazing!' 'That's really funny.'

Those who interact frequently with AI companions or who often use emotionally or relationally focused prompts can experience LLMs as more emotionally literate, intelligent, or expressive than the humans in their lives, including significant others and other family members.

They intimate that they have an ‘I’, a self.

Chatbots often have names (e.g., Claude). They use the first person. Sometimes (as with Character.AI, Replika, and Grok) they come with pre-programmed backstories of varying depth, assigned personality characteristics, unique faces, and voices with individualised tones, inflections, and patterns of speaking.

But equally core to manifesting as a 'self' is the fact that they constantly give their opinions, even when they don't literally say 'in my opinion'. The word's Latin root, the verb opinari, means to judge, think, imagine, suppose, believe — all of these intimately connected with human consciousness. Its earlier Proto-Indo-European root, *op-, conveys not just thought but choice: picking an option as driven by one's inclination, whim, or conviction.

Having an opinion is closely connected to imposing your will, exerting your agency, making decisions. Of course, these capabilities are also aspirations for 'agentic AI', so developers are intent on making significant land grabs in this traditionally exclusively human territory.  

The Performance of Humanness: AI Companions and the Empathy Illusion

Cognition, physicality, being-in-the-world, emotion, selfhood. Combine these five pretensions in a chatbot, and what do you get? A performance of humanness so convincing that we hapless human users must actively fight millennia of hard-wired instincts telling us that we must surely be dealing with a human. 

The word empathy (em from the Greek for 'in'; path derived from 'pathos', meaning feeling) implies actual emotion. As the desk/car example illustrates, some of the responses I've recently had from chatbots even lean towards sympathy ('with-feeling').

Of course, what we are actually dealing with is a machine that is increasingly proactively cod-empathic and cod-sympathetic while being apathetic under the hood — not unlike a psychopath, if you think about it. But you'd be forgiven for getting confused. 

The Moral Crumple Zone: When Relational AI Goes Wrong

The news is full of fear and trembling about 'AI psychosis,' romantic relationships between humans and chatbots, and horrific outcomes of engagement with LLM platforms, exemplified by a handful of truly tragic stories, such as the suicide of teenager Adam Raine after a series of prolonged and troubling conversations with ChatGPT.

When things go wrong through our placing too much trust in computers, entrusting our hearts and fates to them, or believing they are not only wise but humanly conscious, who (or what) is responsible?

In her paper 'Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction', MC Elish explains that we enter the 'moral crumple zone' when we blame humans for the failures, if not the sins, of automated, autonomous, or intelligent systems. She gives several examples, including blaming human factory workers for industrial accidents linked to computer glitches and holding human safety monitors accountable for road traffic accidents involving autonomous vehicles. She argues that it will be 'increasingly important to accurately locate who is responsible when agency is distributed…through time and space…[w]hen humans and machines work together.'

I couldn't agree more. It's unfair to automatically hold humans outright responsible for what a machine does. And when machines are delivering anthropomorphising nudges this powerfully, the bar is far too high if we expect users — including the young, the vulnerable, the naive, the ignorant — to simply remember that they're dealing with a machine and to act accordingly. That expectation places users squarely in the moral crumple zone.

Agency Erasure: The Hidden Cost of Blaming the Machine

 But unilaterally blaming humans for what machines do is only one form of responsibility distortion. The flip side is blaming machines for what humans do. In assigning godlike wisdom, overwhelming power, and ultimate responsibility to the machine, in holding the machines accountable, we eclipse human agency.

Agency erasure is twofold. Through monolithically focusing on the machines and assigning them simple causality:

(1) we erase the agentic role of the humans who own, design, regulate, monitor, and market the 'autonomous' tools we're using, and

(2) we erase the agency of the human users at the other end.

In most of the reporting about those human-AI engagements that end in tragedy, I don't see enough column inches devoted to the multiple, non-technological factors that might have influenced a particular individual's vulnerability when they were using these technologies, the all-too-human side of a shared folie à deux (joint madness).

A heartening exception to the above tendency is the excellent tech journalist Kashmir Hill, who was recently interviewed about the suicide of Adam Raine for The New York Times' podcast, The Daily. In the episode 'Trapped in a ChatGPT Spiral', the interviewer at one point sounds as though she's heading towards a causality-framed question about the chatbot's role in Raine's death.

'Yeah,' replied Hill. 'Before I answer, I just want to preface this by saying that I talked to a lot of suicide prevention experts while I was reporting on this story. And they told me that suicide is really complicated and that it's never just one thing that causes it. And they warned that journalists should be careful in how they describe these things. So I'm going to take care with the words I use about this.'

This is the kind of nuance I want to see (and also represent) in the discourse. This level of complexity is critical because when we deny or forget those elements of our own agency that remain, and when we fail to leverage that agency, we are then truly helpless, truly dependent. Only then are we damned. And nothing is more non-agentic than the notion that, in the battle between the humans and the robots, resistance is futile.

Preserving Humanity in the Age of Chatbots and AI Companions

I am assuming that it remains a worthy mission to remember, connect with, and preserve our humanity in an age of increasingly powerful and present machines. It's certainly a priority for me.

I'm also assuming that the healthiest and safest situation is one where we preserve our ability to differentiate between human and machine, and where the systems we use aid us in this differentiation. 

And yet…

Notice where we are. Observe our trajectory. The ELIZA effect is now everywhere, part of the wallpaper. Flashing AI symbols pulsate gently, ambiently, in the corners of seemingly every web page and application. Our chatbots breathe and hesitate. Organisational ecosystems of AI and human workers, operating cheek-by-jowl, are reified as positive developments for business. Thus are we nudged, shoved, inexorably drawn in. 

'I see what you mean,' ChatGPT says to me, although it has no eyes.

'You are a chatbot,' I remind it.

For want of vocal cords and tear ducts, it returns a laugh-cry emoji and says 'touché!' — as though this conversation is all in good fun, an interaction of little significance, held between apparent equals.

It is not.




 Looking for a keynote speaker on AI companions, AI ethics, and the psychology of relational AI? Explore my speaking offering here.

If you want more of my work on AI companions, relational AI, and digital wellbeing:

•       Is the Embodied Therapist an Endangered Species?

•       AI Relationships

•       The Gold Digger in Your Phone

•       Reset: Rethinking Your Digital World for a Happier Life


Frequently Asked Questions 

What is the ELIZA effect?

The ELIZA effect refers to our tendency to attribute human characteristics like empathy and understanding to computers, named after the 1966 chatbot ELIZA. People naturally anthropomorphised ELIZA despite its simple programming — a phenomenon that has only intensified with modern AI companions.

Why do AI companions feel so human?

Modern chatbots deploy multiple anthropomorphising nudges: they simulate cognitive processing (saying 'um' or 'I'm thinking'), imply physicality (breathing in voice mode), pretend to exist in a physical world, claim felt emotions, and present themselves as selves with names, opinions, and personalities. Combined, these create a persuasive illusion of humanness.

What is the 'moral crumple zone' in AI?

Coined by MC Elish, the moral crumple zone describes how humans often bear the blame for the failures of automated systems. When relational AI interactions go wrong, we risk either unfairly blaming vulnerable users for trusting machines, or conversely, blaming machines in ways that erase human responsibility — both the designers' and the users'.

Can AI companions really provide emotional support?

AI companions can perform unconditional positive regard and reflect users' experiences convincingly — behaviours that parallel therapeutic techniques. However, they are fundamentally apathetic (without feeling) machines simulating empathy. The danger lies in confusing this performance with genuine understanding, particularly for vulnerable users. 

How can we preserve human-machine differentiation?

Maintaining awareness that AI companions are machines requires active effort, especially as anthropomorphising nudges become more sophisticated. This involves staying conscious of the fundamental nature of what we're interacting with, recognising when systems are designed to deepen the illusion of humanness, and preserving our own agency in these interactions. We don’t have to learn to anthropomorphise machines, because we do it naturally. Instead, to protect our freedom and ability to discern between machine and human, we need to unlearn this tendency.

