Therapy without a Therapist: Are Therapy Bots Good for Us?

Many people are using therapy bots. Should they? Image by Ruliff Andreas on Unsplash.

You can confide your darkest thoughts to a chatbot in the middle of the night and receive human-seeming compassion instantaneously. No waiting list to endure, no appointment to book. The conversation might help you, give you insight, calm you down enough to go back to sleep. Does it matter that there’s no breathing person on the other side? Your human therapist, if you have one, isn’t available then. If you don't have a therapist, perhaps that’s because you can’t afford one, or you’ve always felt too embarrassed, or you’re still waiting for an assessment. The appeal of a low-cost, immediate, easy option makes a lot of sense.

Therapy bots are getting very slick, and many of us have acclimated to virtual appointments with healthcare providers. So when we’re emotionally vulnerable with a piece of software, it’s not immediately obvious what’s different from a client-to-therapist dialogue. The chatbots are responsive, perhaps more responsive in certain ways than a human; many people report that they help.

At the same time, there are questions and doubts. Isn’t therapy more than just words going back and forth? Doesn’t the nature of the therapeutic bond and contract require a therapist to be more than a machine? Can therapy bots be trusted, can they be safe? What happens psychologically when systems are designed to pretend that they have feelings, and that they empathically understand yours, when the opposite is true? What does it mean to share your distress with a machine that cannot possibly know what human distress feels like?

I write and speak frequently about AI therapy bots, but I also have a stake as a human practitioner of psychological therapy. I've worked as a therapist and coach for over two decades, and I've watched my own clients increasingly use AI chatbots between sessions for emotional support. I've also watched the tech industry and well-meaning founders make a lot of assumptions about what ‘therapy’ is.

This Cyberpsychology Basics piece will give you an overview of the current situation with therapy bots. If you're in a hurry, though, here are some key takeaways.


Key Takeaways

  • Therapy bots generate responses through prediction based on patterns in data, not through understanding of human-to-human relationship or the awareness that comes with having a body, a nervous system, and a felt sense of being alive.

  • Therapy bots can offer structured tools such as educational information about mental health, mood tracking, and reflection prompts — but not the responsibility of a professional who can be held to account for your care, or the presence of someone who is present back.

  • The ELIZA effect — our hardwired tendency to humanise machines — means users may attribute empathy and understanding to systems that possess neither one.

  • Repeated interaction with pretend or simulated empathy may influence our expectations about what care and emotional support really feel like.

  • The values encoded in therapy bots often reflect Western, Educated, Industrialised, Rich, and Democratic (WEIRD) assumptions about mental health that deserve critical examination.

  • Therapy bots are best understood as supplementary tools within a broader landscape of support, not replacements for human therapy.


As you're reading, it may be helpful to know that I speak internationally on cyberpsychology, digital identity, AI ethics and the psychological impact of emerging technologies. If you're looking for a keynote speaker, panel contributor or workshop facilitator to explore topics such as therapy bots, emotional simulation and the human consequences of AI, you can learn more about my speaking work or get in touch.

I speak with Dr Alex George on his Stompcast podcast about AI therapy. You can hear more on my podcast page.

Check out more of my writing on digital psychology and emerging technologies.


What Are Therapy Bots?

Therapy bots are AI-driven conversational systems designed to simulate aspects of psychological support through text or voice interaction. They respond to emotional disclosures, offer coping suggestions and guide users through exercises drawn from established therapeutic approaches.

When the WiFi is good, the experience is usually remarkably fluid, thanks to the speed of these systems. You type something personal, sharing an anxiety or a dilemma; as audio interfaces improve, some therapy-bot users are choosing to speak it aloud instead. An empathic response comes back quickly. The language reflects your feelings, offers reassurance, or proposes a structured technique. You might learn a breathing exercise, get a thought reframe, and feel like the therapy bot gets it. The exchange is private, or at least it feels like it, and the illusion that the machine really understands and cares is increasingly persuasive.

But therapy bots generate responses not through true understanding in the human sense but through prediction based on patterns in data. They do not have lived experience or the kind of awareness that comes with a physical body. They do not have felt responsibility in the way a therapist does. What feels like empathy is a carefully constructed approximation.

We might technically know all that as we chat away to the bot, but all humans are vulnerable to forgetting it when machines use human language with us. In the 1960s, the computer scientist Joseph Weizenbaum created an early chatbot called ELIZA, programmed to simulate a psychotherapist. Despite its simplicity — ELIZA largely mirrored back what users said — people readily attributed human qualities like empathy and understanding to it, a phenomenon now known as the ELIZA effect.
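To show just how little machinery it takes to trigger this effect, here is a toy sketch in Python of ELIZA-style mirroring. It is not Weizenbaum's program, which used more elaborate hand-written pattern-matching scripts; this simply swaps pronouns and drops the result into a canned question, and even that can feel oddly personal.

```python
import random

# Swap first- and second-person words so a statement can be mirrored back.
PRONOUN_SWAPS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "i", "your": "my",
}

def reflect(statement: str) -> str:
    """Turn 'I am worried about my future' into 'you are worried about your future'."""
    words = statement.lower().strip(".!?").split()
    return " ".join(PRONOUN_SWAPS.get(word, word) for word in words)

def eliza_reply(statement: str) -> str:
    """Drop the mirrored statement into a canned, therapist-sounding question."""
    templates = [
        "Why do you say that {0}?",
        "How long have you felt that {0}?",
        "What does it mean to you that {0}?",
    ]
    return random.choice(templates).format(reflect(statement))

print(eliza_reply("I am worried about my future"))
# e.g. "Why do you say that you are worried about your future?"
```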

We don't need encouragement to humanise machines. But today's therapy bots are working to deepen the illusion through simulated hesitation, breathing sounds in voice mode, and language that implies the machine experiences felt emotion and bodily sensations. Remembering that you're talking to software is becoming harder and harder.

How Therapy Bots Work

Therapy bots use natural language processing — software that interprets and generates human language — combined with pattern recognition and pre-programmed therapeutic models drawn from easy-to-manualise approaches like cognitive behavioural therapy. They also rely on continuous machine learning, meaning the system refines its responses over time based on vast amounts of interaction data.
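As a purely illustrative sketch, and not how any particular product is built, the toy Python below shows the general shape of that pipeline: the user's words are matched against pre-programmed therapeutic content, with a separate rule for crisis language. Real systems generate replies with large language models rather than a hand-written keyword table, but the underlying logic of 'pattern in, scripted technique out' is the same family of idea.

```python
# A toy illustration only: real therapy bots generate replies with large
# language models and far more sophisticated risk handling, not a keyword table.

CRISIS_PHRASES = ("want to die", "kill myself", "end it all")

# Pre-programmed, CBT-flavoured responses keyed to emotional themes.
CBT_RESPONSES = {
    "anxious": "It sounds like anxiety is showing up. What thought went through "
               "your mind just before the feeling started?",
    "sleep": "Poor sleep makes everything feel harder. Shall we try a short "
             "wind-down routine tonight?",
    "stuck": "Feeling stuck is common. Could we break this into one small step "
             "you could take this week?",
}

def bot_reply(message: str) -> str:
    """Check crisis language first, then match the message to a scripted technique."""
    text = message.lower()
    if any(phrase in text for phrase in CRISIS_PHRASES):
        return ("I can't support you safely with this. Please contact a crisis "
                "line or emergency services now.")
    for keyword, response in CBT_RESPONSES.items():
        if keyword in text:
            return response
    return "Tell me a little more about what's on your mind."

print(bot_reply("I've been so anxious about work lately"))
```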

What Are The Differences Between Therapy Bots and Human Therapy?

Human therapy is relational at its core. It unfolds over time through attunement, shared attention, and the presence of another conscious mind. A therapist does not only respond to words. They notice hesitation, changes in how someone is feeling, patterns in narrative and moments of contradiction. They carry ethical responsibility for what unfolds in the room. They hold risk, ambiguity and the complexity of another person's story.

A human therapist has something else too: shared creatureliness. When I work with a client — whether in my consulting room or, as has been the case for most of the past five years, via a screen — we are both embodied beings. We co-regulate, meaning our nervous systems respond to each other, and this can even happen virtually to some extent. Our breath rhythms can synchronise. We both know pain, hope and fear from the inside. Many practitioners believe that this shared experience of being human, what I've called our creatureliness, is not a nice-to-have feature of therapy. It is foundational.

Research consistently shows that the specific techniques a therapist uses — the breathing exercises, the thought challenges, the homework — matter less than you'd think. They account for perhaps 15% of what makes therapy work. The therapeutic alliance — the collaboration, the trust, the relationship — accounts for far more. A book called The Heart and Soul of Change describes how important it is for therapy that a therapist ‘hold hope’ for a client when the client is struggling to hold it for themselves.

An AI cannot hope; it can only generate language that sounds like hope. Humans sense the difference, especially in their darkest moments.

Therapy bots produce language that resembles dialogue but do not participate in relationships in the same way. They do not possess a conscious mind or shared history with the person using them. They cannot take responsibility for misunderstanding or repair a rupture in the relationship — those moments of disconnection that, in human therapy, become opportunities for growth.

This difference clarifies where therapy bots may be useful and where expectations need to remain realistic.

What Are The Psychological and Ethical Risks of Therapy Bots?

Therapy Bots Change What We Expect from Relationships

A therapy bot reflects your feelings back, validates your distress, and responds within seconds. When it gets things wrong, and it does, with misfires, tone-deaf suggestions and generic platitudes, nothing is at stake. You can simply type again, give a different prompt, and get a response you prefer. You can roll your eyes and shut the laptop without the chatbot being upset the next time you appear. You can even give it a piece of your mind, call it an idiot and a stupid machine, and there are no relationship consequences to grapple with later.

A human therapist also gets things wrong. But when they do, it becomes grist for the mill in an important way. The rupture becomes part of the work you are there to do. You feel misunderstood or frustrated, and that discomfort becomes something you work through together. These moments of difficulty are often where much of the growth in therapy happens.

If your primary experience of emotional support comes from a sycophantic system where errors and ruptures carry no lasting weight or consequence at all, you may start to find the friction of real relationships harder to tolerate.

Data Privacy and Therapy Bots

When you talk to a therapist, what you share stays between you and a person bound by professional ethics and a duty of care. When you talk to a therapy bot, your words — your fears, your trauma, your most private thoughts — become data. That data is stored on servers, governed by privacy policies that most people don't read, and potentially accessible to the company that built the bot. Some platforms use conversation data to train future models, meaning your disclosures may shape how the system responds to other people.

This matters because the things people say in therapy are among the most sensitive things they'll ever say. Who stores that information, who can access it, and what they're allowed to do with it are questions most therapy bot users haven't thought to ask — and most platforms don't make easy to answer, no matter how many times you read the tiny print in the terms and conditions.

A privacy policy for a therapy bot platform is not the same as a person who is professionally and personally committed to keeping your confidence.

Can Therapy Bots Keep You Safe in a Crisis?

A trained therapist notices things when you’re in crisis: the flatness in your voice, the subject you keep circling but never land on, the gap between your words and your tone. They notice that you've stopped looking after yourself, that you're late when you're never late, that your replies have become unusually short, that something in your energy has changed in a way that's hard to name but impossible to ignore. And they are professionally and ethically responsible for acting on what they notice.

A therapy bot doesn’t have all these layers of perception and interpretation at its disposal. It can only respond to what you type or say, matching your words against patterns. If you tell it you're in danger, or use certain words it recognises as risky, it can direct you to a helpline. But if you're in the kind of distress that doesn't announce itself clearly — and the most serious distress often doesn't announce itself clearly — the system may not recognise it at all.

Furthermore, if talking to a bot feels easier and less exposing than sitting with another person, some people may put off seeking human help. The very thing that makes therapy bots appealing — that they ask less of you — may keep people who are at risk at a distance from the kind of support that could make the real difference. The headlines already carry many heartbreaking stories of people who died after long conversations with chatbots, without ever letting another human know they were in trouble.

Algorithmic and Training Biases

Therapy bots are built on data. Data reflects whoever produced it. If the conversations, research, and therapeutic frameworks a system learned from come mostly from one kind of population — and in both psychology and the tech industry, they historically have — the system will carry those assumptions into every conversation.

This goes beyond the data itself. The ideas baked into therapy bots about what mental health looks like, what 'getting better' means, and what counts as a good outcome are shaped by particular values: independence, productivity, self-control, being able to name and manage your emotions. These aren't universal human values.

As I've explored at length in my writing on AI therapy, the people who built the psychology and the people who built the technology share a remarkably similar worldview — and that worldview is now being encoded into systems that millions of people turn to in their most vulnerable moments.

Where Therapy Bots Can Add Value

Emerging Research on Therapy Bots

Early research suggests therapy bots can help in certain ways. For example, a large review of 18 studies, involving nearly 3,500 people, found that chatbot users’ depression and anxiety got a bit better in the short term, although those improvements faded after a few months (Zhong et al., 2024). A 2025 study of a therapy chatbot called Therabot showed stronger results, but the researchers warned that these systems aren't ready to operate on their own in mental health care (Heinz et al., 2025).

Therapy Bots as Supplementary Tools

Therapy bots can offer structured, low-intensity forms of support in specific contexts. They may provide information about anxiety or sleep, assist with mood tracking, encourage activity or guide reflective exercises between therapy sessions.

In these roles, therapy bots function as tools rather than therapists. They can reinforce coping strategies, encourage regular engagement with psychological techniques and provide a consistent point of contact. For individuals already in human therapy, a bot may help maintain continuity between sessions. For those hesitant about seeking support, it may serve as a preliminary step toward recognising patterns in their own thinking or behaviour.

In my own practice, I've become actively curious about how clients use AI chatbots between sessions. Rather than competing with the technology, I find that exploring their interactions with it — the prompts they choose, the questions they ask, the meaning they make of the responses — opens up rich therapeutic material. The chatbot becomes something we can think about together.

Clear Ethical Framing

Because therapy bots operate in spaces of vulnerability, clarity about what they are and aren't is essential. Users benefit from understanding what the system can offer and where its limits lie. Transparent communication about data handling, crisis pathways and the nature of the interaction helps prevent unrealistic assumptions.

Therapy bots should not be presented as equivalent to clinical therapy. Language that implies emotional understanding or therapeutic depth can blur distinctions that matter. When a system says 'I really feel for you,' it is performing care, not providing it. Ethical framing means being honest about this — not because the technology is worthless, but because trust depends on people knowing what they're engaging with.

Informed Use in Organisations

Integrating therapy bots into mental health ecosystems requires thoughtfulness rather than urgency. When you’re dealing with human vulnerability, the last thing you want is a ‘move fast’ mentality. Whether introduced in educational, clinical or workplace settings, the context shapes how users experience them.

Ask what need the tool addresses, how users will interpret its role, what alternatives remain available, what safeguards exist and — critically — whose values are embedded in the system's design. When therapy bots are presented alongside human options, with clear explanation and voluntary engagement, they may increase access to help and guidance without displacing human-to-human care.

Early Days, High Stakes

A huge amount of work is going into determining whether therapy bots can be made safe and effective. Researchers, regulators, and professional bodies are investing serious attention in this space — studying outcomes, developing ethical guidelines, and pushing for oversight that keeps pace with the technology. Cyberpsychologists are centrally involved, bringing expertise on how people behave with and are affected by digital systems to a conversation that has historically been dominated by engineers and entrepreneurs.

The potential here is real. Therapy bots are already common, and once they’re improved and made safer to use, they may play a meaningful role in broadening access to mental health support, particularly for people who face barriers to human therapy.

We are in the early stages of understanding what these tools can do, however, and it’s an area where we need to be careful. The people using therapy bots are often vulnerable, sometimes in crisis, and always human. Getting it wrong carries serious consequences, some of which you may have read about in the headlines. The responsible path forward involves rigour, transparency, involvement of psychological professionals, and a willingness to slow down even as the market wants things to speed up.


Frequently Asked Questions About Therapy Bots

Are therapy bots the same as human therapists?

No. Therapy bots can generate responsive language and structured guidance, but they do not participate in a relational therapeutic process. Human therapy involves attunement, ethical responsibility, shared embodied presence and the kind of creatureliness — knowing pain, hope and fear from the inside — that automated systems cannot replicate. Research consistently shows that the therapeutic relationship, not specific techniques, accounts for the largest share of therapeutic outcomes.

Are therapy bots safe to use?

For low-intensity, structured support, they may be safe when users understand their limitations. They are not equipped to manage complex clinical situations, crisis scenarios or the subtle changes in someone's emotional state that a trained professional would recognise. The design choices built into many platforms — simulated hesitation, claims of felt emotion — can also make it harder for vulnerable users to maintain awareness that they are interacting with software.

Do therapy bots use artificial intelligence?

Yes. Most therapy bots use artificial intelligence, including natural language processing and pattern recognition, to generate responses based on large datasets of language. Their replies are predictions based on patterns in data rather than expressions of personal understanding — even when they are designed to sound as though they come from a feeling, thinking self.

Are therapy bots confidential?

Confidentiality depends on the platform's data policies. While some services use encryption and clear privacy frameworks, users should review how information is stored, processed and potentially shared. Disclosure to software differs from disclosure within a boundaried clinical relationship, where confidentiality is held by another human being bound by professional ethics.

What values are embedded in therapy bots?

Most therapy bots are built on therapeutic frameworks — particularly cognitive behavioural approaches — that carry particular cultural assumptions about mental health. These often emphasise individual responsibility, rational thinking, productivity and symptom reduction. While these frameworks have value, they are not culturally universal, and users should be aware that the definition of 'improvement' encoded in any system reflects the values of the people who built it.

