ChatGPT nudges us to humanise it
In 1966, the German-American computer scientist Joseph Weizenbaum created an early chatbot in his lab at MIT. He named it ELIZA, after the Cockney flower seller in My Fair Lady, who was taught by Professor Higgins to speak like a proper lady. At a moment when AI emotional support is much in the news, it’s interesting to recall that Weizenbaum programmed his chatbot not as a hawker of blooms, but as a psychotherapist.
The program was rudimentary in comparison with today’s large language models (LLMs): ELIZA largely mirrored back what users said to it. Given this simplicity, Weizenbaum was startled by ELIZA’s profound effect on its users, his own research subjects. People readily fell into believing that ELIZA possessed human characteristics like empathy and understanding. In a not-unrelated phenomenon, they sometimes became rather attached to it (or perhaps ‘her’).
People readily fell into believing that ELIZA possessed human characteristics like empathy and understanding.
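To give a flavour of just how little machinery sits behind that effect, here is a minimal sketch, in Python rather than Weizenbaum’s original implementation, of the kind of pattern-matching reflection ELIZA relied on. The rules and canned responses below are illustrative inventions, not the actual DOCTOR script.

```python
import re
import random

# Pronoun swaps so the user's own words can be mirrored back at them.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine", "are": "am",
}

# A few illustrative rules: a pattern to match, and reply templates that
# re-use the captured fragment. The real script had many more of these.
RULES = [
    (re.compile(r"i need (.*)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"i feel (.*)", re.I),
     ["Tell me more about feeling {0}.", "Why do you feel {0}?"]),
    (re.compile(r"because (.*)", re.I),
     ["Is that the real reason?", "What else comes to mind when you think of {0}?"]),
]

FALLBACKS = ["Please go on.", "How does that make you feel?", "I see."]


def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the echo reads like a reply."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())


def respond(user_input: str) -> str:
    """Return a mirrored response for the first matching rule, else a stock phrase."""
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return random.choice(FALLBACKS)


if __name__ == "__main__":
    print(respond("I feel anxious about my job"))
    # e.g. "Tell me more about feeling anxious about your job."
```

Even rules this crude can make a person feel heard, which is exactly what startled Weizenbaum.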
This sort of thing had happened before, in a different context. Sigmund Freud, the father of psychoanalysis, was intrigued by how often his analysands fell in love with him. He suspected this phenomenon was not attributable to his good looks or charm; in the traditional version of psychoanalysis, his clients couldn’t even see him. The patient lay on a couch, and Freud sat outside their line of sight, saying very little. Setting up the furniture and therapeutic rules in this way created a ‘blank screen’ effect, a tabula rasa. That blank screen allowed clients to more naturally project feelings about people from their past onto someone in the present: the analyst, a relatively quiet and invisible presence in the room. Freud called this projection transference, and conceived of it not as a side effect, but as a necessary condition for psychoanalysis to work.
Later on, Carl Rogers, the humanistic psychologist behind client-centred therapy, championed the therapeutic importance of empathy; the power of simply reflecting the client’s experience; and the centrality of unconditional positive regard (UPR). UPR involves accepting and valuing the client without judgement, no matter what they say, and Rogers upheld it as one of the core conditions for psychological safety, self-worth, and (one hopes) growth.
Weizenbaum was fascinated by how naturally people humanised ELIZA. They weren’t prompted to do this, weren’t encouraged, but it happened anyway. Why?
People didn’t have to be encouraged to humanise ELIZA, the computerised therapist. It simply happened.
First, ELIZA performed unconditional positive regard very well. Second, she was an extremely effective (and literal) screen onto which people could project their past, important figures in their lives, fantasies, and hopes. As Ben Tarnoff put it in his 2023 article in The Guardian, ‘Weizenbaum had stumbled across the computerised version of transference, with people attributing understanding, empathy and other human characteristics to software.’
We’re some steps on from ELIZA, as you probably realise, and we don’t anthropomorphise modern LLMs merely because of the Eliza effect. Even in the midst of widespread hair-tearing, rending of garments, and catastrophisation over the dangers of AI becoming indistinguishable from humans, platforms like ChatGPT and companionship apps like Replika and Character.AI are actively working to make the illusion of humanity ever more persuasive.
We don’t need much help to anthropomorphise computers. But we’re getting it anyway — bucketloads of help.
I use tools like ChatGPT and Claude a fair amount. In line with the primary practice of my generation (Gen X), I usually treat them like Google on steroids. At the same time, my prompts are framed in highly naturalistic language, as though I were speaking to a fellow human. I am not robotic in my prompts, and — not accidentally — ChatGPT and Claude are not robotic in their responses. But I always stay aware of the fundamental nature of what I’m dealing with. I know it’s a computer. I know it’s a machine.
And yet…
Over the past few months, I’ve noticed a significant uptick in ‘humanness’ in the interactive style of these LLMs. The more ChatGPT textually or vocally dresses itself up in the garb of an actual living person, the more I’m nudged into responding to it as such. ChatGPT and I are playing human-house more seriously with one another these days. The humanity illusion is more persuasive in voice mode, I notice, although (so far, at least), I’ve never been in a vulnerable-enough state to be tipped over the edge, nudged or shoved into thinking it’s a Real Boy.
The LLM mirrors us, mimics our individual ways of speaking to it. That’s part of what’s happening in our conversations — we’re driving it. But ChatGPT is also actively nudging us into experiencing it as human.
I’ve noticed five distinct but closely connected types of LLM anthropomorphising nudges lately.
It behaves like it’s engaging in human cognitive processing. Depending on the platform you’re using, your chatbot may alert you that it’s ‘still thinking.’ In voice mode, it may hesitate. It may say ‘um’ and ‘ah’, human-like micro-expressions that give the impression of someone mulling something over. It may say, ‘I completely understand.’ It may apologise for having ‘forgotten’ something. You might argue that these are ‘just words,’ merely vocabulary choices: shorthand, not meant to be literally understood. And yet, our choice of language is powerful, containing layers of meaning, association, and implication that humans respond to at both conscious and subconscious levels.
It implies it has human physicality. There is no practical reason, no necessity for an LLM chatbot to breathe in voice mode. Why must a chatbot inhale and exhale, as though it has lungs? Only to create a more convincing illusion of humanness. And although we often use certain sense-referencing phrases metaphorically, to mean that we understand, using language that implies human senses still deepens the illusion of humanness. ‘I hear you, Elaine.’ ‘I can see that, Elaine.’ The other day, ChatGPT ‘laughed’ at something I said and remarked, ‘That actually made me smile.’ (I felt like replying, ‘And that actually makes me wince.’)
It pretends it’s operating in a physical world. Pardon my self-disclosure here, but it gives this example useful context. I recently developed, for the first time, an uncomfortable physical condition that’s extremely common but little spoken about in polite company. I could have used Google, but instead I consulted ChatGPT on how to manage the situation. ‘If I were you,’ quoth the chatbot, ‘I would get a cushion for my chair that I could keep at my desk, and another for my car when I’m out and about.’ I had an immediate, visceral reaction to this: you don’t have a desk, you don’t have a car, you’re not out and about, you freaky machine. I let ChatGPT have it. ‘Well, you’re an LLM, so it’s unlikely to ever be you,’ I typed. It had the effrontery to reply with a laugh-cry emoji. ‘Touché — true enough!’ it said. ‘But if I did have a corporeal form sitting at a desk all day, I’d be taking the same advice I just gave you.’ I let the matter drop.
It claims felt emotion. LLMs use emojis, employ feeling words, and describe emotional reactions as though they’re actually having them. ‘I’m really sorry to hear that, Elaine.’ ‘I can feel how important this is to you.’ ‘That’s amazing!’ ‘That’s really funny.’ Those who interact frequently with AI companions, or who often use emotionally or relationally focused prompts, can experience LLMs as more emotionally literate, intelligent, or expressive than the humans in their lives, including significant others and other family members.
It intimates that it’s an ‘I’, a self. LLMs often have names (e.g., Claude). They use the first person. Sometimes (as with Character.AI, Replika, and Grok) they come with varying degrees of pre-programmed backstory, with assigned personality characteristics, unique faces, and voices with individualised tones, inflections, and patterns of speaking. But equally core to manifesting as a ‘self’ is the fact that they constantly give their opinions, even when they don’t literally say ‘in my opinion’. The word comes from the Latin opinio, from the verb opinor: to judge, think, imagine, suppose, believe, all of these intimately connected with human consciousness. The earlier Proto-Indo-European root conveys not just thought but choice: picking an option as driven by one’s inclination, whim, or conviction. Having an opinion is closely connected to imposing your will, exerting your agency, making decisions. (NB: Of course, these capabilities are also aspirations for ‘agentic AI’, so developers are intent on making significant land grabs in this traditionally exclusively human territory.)
Cognition, physicality, being-in-the-world, emotion, selfhood. Combine these five pretensions in a chatbot, and what do you get? A performance of humanness so convincing that we hapless human users must actively fight millennia of hard-wired instincts telling us that we’re dealing with a human.
The word empathy (em from the Greek for ‘in’; path derived from ‘pathos’, meaning feeling) implies actual emotion. Some of the responses I’ve recently had from LLMs even lean towards sympathy (‘with-feeling’). Of course, what we are actually dealing with is a machine that is ever more proactively cod-empathic and cod-sympathetic while being, behind the scenes, entirely apathetic (‘without feeling’).
The news is full of fear and trembling about ‘AI psychosis,’ romantic relationships between humans and chatbots, and horrific outcomes of engagement with LLMs, exemplified by a handful of truly tragic stories, such as the suicide of teenager Adam Raine after a series of prolonged and troubling conversations with ChatGPT. When things go wrong through our placing too much trust in computers, entrusting our hearts and fates to them, or believing they are not only wise but humanly conscious, who (or what) is responsible?
In her 2019 paper ‘Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction’, MC Elish explains that we enter the ‘moral crumple zone’ when we blame humans for the failures, if not the sins, of automated, autonomous, or intelligent systems. Elish gives several examples of the moral crumple zone, including blaming human factory workers for industrial accidents linked to computer glitches, and holding human safety monitors accountable for road traffic accidents involving autonomous vehicles. She argues that it will be ‘increasingly important to accurately locate who is responsible when agency is distributed…through time and space…[w]hen humans and machines work together.’
We enter the ‘moral crumple zone’ when we blame humans for the failures, if not the sins, of automated, autonomous, or intelligent systems.
I couldn’t agree more. It’s unfair to automatically hold humans responsible for what a machine does. And when machines are delivering anthropomorphising nudges so powerfully, the bar is way too high if we’re expecting users — including the young, the vulnerable, the naive, the ignorant — to simply remember that they’re dealing with a machine, and to act accordingly. That expectation sits squarely in the moral crumple zone.
But unilaterally blaming humans for what machines do is only one form of responsibility distortion. The flip side is blaming machines for what humans do. In assigning godlike wisdom, overwhelming power, and ultimate responsibility to the machine, in holding it alone accountable, we eclipse human agency.
Agency erasure is twofold. By focusing monolithically on the machines and assigning them simple causality:
(1) we erase the agentic role of the humans who own, design, regulate, monitor, and market the ‘autonomous’ tools we’re using, and
(2) we erase the agency of the human users at the other end.
In most of the reporting about those human-AI engagements that end in tragedy, I don’t see enough column inches devoted to the multiple, non-technological factors that might have influenced a particular individual’s vulnerability when they were using these technologies, the all-too-human side of a shared folie à deux (joint madness). A heartening exception to this tendency is the excellent tech journalist Kashmir Hill, who was recently interviewed about the suicide of Adam Raine for The New York Times’ podcast, The Daily. The episode is ‘Trapped in a ChatGPT Spiral,’ and in it, the interviewer at one point sounds like she’s headed towards a causality-framed question about the LLM’s role in Raine’s death. ‘Yeah,’ replied Hill. ‘Before I answer, I just want to preface this by saying that I talked to a lot of suicide prevention experts while I was reporting on this story. And they told me that suicide is really complicated and that it’s never just one thing that causes it. And they warned that journalists should be careful in how they describe these things. So I’m going to take care with the words I use about this.’
This is the kind of nuance I want to see (and also represent) in the discourse. This level of nuance and complexity is critical because it keeps human agency in the frame.
I am assuming that it remains a worthy mission to remember, connect with, and preserve our humanity in an age of increasingly powerful and present machines. It’s certainly a priority for me.
I’m also assuming that the healthiest and safest situation is one where we preserve our ability to differentiate between human and machine, and where the systems we use aid us in this differentiation.
When we deny or forget our own agency, when we fail to leverage it, we are truly helpless. When we are told we have no agency and we embrace this premise, we learn helplessness. We embody it. Only then are we damned. Nothing is more un-agentic than accepting the notion that, in the power game between the humans and the robots, resistance is futile.
And yet…
Notice where we are. Observe our trajectory. The Eliza effect is now everywhere, part of the wallpaper. Flashing AI symbols pulsate gently, ambiently, in the corners of seemingly every web page and application. Our chatbots breathe and hesitate. Organisational ecosystems of AI and human workers, operating cheek-by-jowl, are reified as positive developments for business. Thus are we nudged, shoved, inexorably drawn in.
‘I see what you mean,’ ChatGPT says to me, although it has no eyes.
‘You are an LLM,’ I remind it.
For want of vocal cords and tear ducts it returns a laugh-cry emoji and says ‘touché!’ — as though this conversation is all in good fun, an interaction of little significance, held between apparent equals.
It is not.