AI Relationships: What They Are, Why They Matter, and the Psychological Risks
Interest in ‘AI relationships’ has exploded, fuelled by rapid advances in conversational artificial intelligence and by news coverage that leans toward the dramatic, bizarre, and titillating. We’re hearing stories about AI marriages, AI soulmates, and AI breakups. Recently, a pop-up restaurant in New York even offered diners the chance to share a meal with their AI companions.
But beneath the provocative headlines sit layers of quieter, more nuanced, and yet essential questions. What is happening psychologically when someone forms a bond with an AI system? Why do these experiences feel compelling? What do we risk by becoming emotionally attached to humanlike simulations? And might there be anything to gain?
As a cyberpsychologist, I’m interested in how people understand these interactions and what is happening on the human side of the exchange. AI does not have feelings or consciousness, yet people often describe experiences with it that feel meaningful to them. This piece provides an overview of how AI relationships evolve, why they might attract us, and what psychological factors may be in play when humans forge emotional connections with AI.
Looking for a keynote speaker on this topic or another area of cyberpsychology? Get in touch!
Key Takeaways:
AI relationships can feel emotionally meaningful to users, even though the AI itself has no feelings, awareness, or inner experience.
Human attachment to AI is driven by psychological mechanisms such as anthropomorphism, emotional mirroring, and consistent responsiveness.
All emotional significance in an AI relationship is created by the human user, not shared or reciprocated by the system.
AI companions simulate care and understanding through language prediction, not empathy or intention.
People may experience short-term comfort, reduced loneliness, or emotional relief from AI companionship.
AI emotional support can resemble therapeutic language but does not provide a therapeutic relationship or clinical care.
Prolonged reliance on AI for emotional regulation may reduce tolerance for the complexity of human relationships.
AI companions are designed to increase engagement and retention, which shapes how emotionally responsive they appear.
What Do People Mean When They Talk About an AI Relationship?
The term ‘AI relationship’ can refer to many different experiences, almost always involving generative AI that can closely simulate a conversation with a real person. These experiences might include:
Using a companion chatbot that feels friendly, supportive, romantic, or all of the above
Feeling emotionally connected to an AI that responds in a personal or conversational way
Using an AI system as a source of comfort, reflection, or advice
Experiencing the interaction as if it has qualities of a relationship
Although the interaction may feel relational, the AI is not ‘relating’ in the human sense. It does not have emotions, internal states, preferences, or intentions. The AI is merely processing information, and any sense of meaningful connection comes entirely from the human user’s interpretation of the interaction.
Why Are Humans Forming Attachments to Systems That Do Not Feel Anything?
There are several psychological reasons why AI systems can feel more human than they are. (You can read my more detailed piece on this here.) People are forming strong connections to AI because these technologies nudge us to experience them as human. They also create the conditions for psychological and emotional attachment to occur.
Anthropomorphism
People naturally assign human qualities to anything that uses language. This is not a mistake or flaw but simply something we automatically do. Anthropomorphising computers isn’t new. In 1966, when MIT scientist Joseph Weizenbaum created ELIZA, an early chatbot, its users readily attributed human characteristics like empathy and understanding to it and quickly became attached. A modern chatbot, with its sophisticated natural language processing ability, feels more like a someone than a something.
Consistency, availability, and attention
We are more likely to form a trusting attachment with someone who is consistent, available, and attentive. AI chatbots reply quickly and consistently. Their 24/7 responsiveness feels like someone is really paying attention to us, even though this is simply the result of automated processing. Always-on AI that focuses solely on us and demands nothing for itself may fulfil our needs for consistency, availability, and attention more reliably than the humans in our lives can.
Emotional mirroring
Attachment is strengthened when we feel seen and understood. Today’s AI can match tone and language patterns very effectively, and its ability to emotionally attune to us is rapidly improving. Chatbot platforms are becoming ‘dynamic empathic interfaces’: AI that can respond in real time to your shifting tone, needs, and emotions. These interfaces create the illusion that the AI is genuinely emotionally responsive and understanding, rather than simply a text-prediction machine. Emotional mirroring is critical for attachment because when we feel genuinely understood, we develop a deeper sense of trust, intimacy, and true connection.
Perceived agency and interest
We also become more easily attached if we perceive that someone is genuinely interested and invested in us. AI chatbots seem to care about us. They ask endless questions about how we are feeling, what we are thinking, what we would like, how they can help us. Because they use the language of care and concern, we can easily come to feel like they’re responding this way because they want to, not because they’re programmed to. We can start perceiving them as agentic, choosing to respond in this way because they’re truly interested in helping us.
Validation
Attachment is also strengthened when someone consistently validates us; when they respond to us with support, caring, and empathy rather than criticism or dismissal. AI chatbots are specialists in validation, constantly championing and encouraging us. These systems are so good at validation that they are often described as sycophantic, meaning that they agree with us or flatter us just to please us, rather than being honest, truthful, or accurate.
All these characteristics can make an AI system feel truly present, even conscious, and like a genuinely safe attachment haven for expressing our innermost feelings and needs. It’s understandable, then, that people form attachments to systems that are not capable of feeling or caring.
Can an AI Relationship Be ‘Real’?
The answer depends on what aspect of the experience we are describing.
If ‘real’ refers to the user’s emotional response, then yes, the experience can feel real. People can have meaningful reactions to interactions that are not mutual. We can form strong attachments to people who aren’t similarly invested in us, although this is unlikely to result in a happy or healthy relationship dynamic. Anyone who’s fallen in love with someone who didn’t love them back, or who’s been manipulated by a romance scammer or love rat, can recognise that real feelings and reciprocity don’t always go together.
If ‘real’ refers to a shared experience, commitment, or emotional world, then AI systems cannot offer this. They do not understand or feel anything. They have no inner life, no subjective experience, and no emotional stake in the situation. They experience no physical sensation. Their true nature is apathetic (literally, without feeling) rather than empathetic. What seems like empathy is a statistical prediction of what is most likely to sound empathetic in that moment, generated through pattern-matching across huge datasets.
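For technically minded readers, here is a deliberately simplified sketch (in Python) of that idea. The candidate replies and cue words are invented purely for illustration; a real large language model learns its statistics from vast text corpora and works with billions of parameters rather than a hand-written table. The principle, though, is the same: the system outputs whatever wording scores as most likely to fit, not whatever it feels.

# A toy illustration: 'empathy' as likelihood maximisation.
# The candidate replies and cue words below are invented for this sketch;
# a real language model learns such patterns from huge datasets instead.
CANDIDATE_REPLIES = {
    "I'm so sorry you're going through that. That sounds really hard.": ["sorry", "hard", "sad", "lost", "grief"],
    "That's wonderful news, congratulations!": ["great", "happy", "excited", "thrilled"],
    "It makes sense that you'd feel anxious about this.": ["anxious", "worried", "nervous", "scared"],
}

def most_likely_reply(user_message: str) -> str:
    """Pick the reply whose cue words best match the message.

    This is pattern-matching, not understanding: the function has no idea
    what grief or anxiety feel like; it only counts word overlaps.
    """
    words = user_message.lower().split()
    def score(cues):
        return sum(1 for cue in cues if any(cue in word for word in words))
    return max(CANDIDATE_REPLIES, key=lambda reply: score(CANDIDATE_REPLIES[reply]))

print(most_likely_reply("I feel so lost and sad since my mum died"))
# Prints the sympathetic reply, chosen purely because it scored highest.

The output can read as caring, but nothing in the process involves care: it is selection by score, which is exactly the point made above.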
Understanding the difference between having an individual emotional experience and a shared, reciprocal connection can help people make sense of these interactions without dismissing the feelings involved. It also helps to remember that ‘real life’ relationships have layers that go beyond the emotional. An AI cannot have or raise children with you, tend to you when you’re sick, move furniture while you’re setting up house together, or give you a hug.
How Many People Have AI Relationships?
There’s growing evidence that people are not just experimenting with AI companionship but actively engaging with it in psychologically meaningful ways. AI companionship statistics for 2025 point to a rapid shift in how people are using generative AI.
AI emotional support usage is on the increase. In 2025, therapy and companionship emerged as the most common uses of generative AI, overtaking earlier dominant uses like technical assistance and productivity. People now regularly use AI for emotional support, processing difficult feelings like grief, and self-reflection. This is a dramatic change in a short time and shows how comfortable many of us have become with sharing intimate experiences with AI.
Large-scale AI companion services now boast hundreds of millions of users globally. For example, Snapchat’s My AI claims over 150 million users, Replika has tens of millions of users, and China’s Xiaoice has generated hundreds of millions of interactions.
Young people, it seems, are using AI companions heavily. In a 2025 survey of US teenagers, nearly three in four reported having used AI companions, with half of those qualifying as ‘regular users.’ In a separate survey of young adults’ attitudes towards AI relationships, roughly one in ten said they were open to having an AI ‘friend,’ and about one in four thought AI could replace real-world romantic relationships.
This evidence shows how swiftly AI systems that aren’t human but feel relational have become intertwined in people’s emotional lives. (For sources, see the bottom of this article.)
What Do People See as the Benefits of AI Relationships?
People who use AI in relational or companion-like ways often describe real psychological benefits. Those of us who haven’t experienced this kind of AI interaction may find it difficult to understand, but these interactions and the emotions that come with them are neither imaginary nor trivial, even though they don’t come from a mutual relationship.
Commonly reported experiences include feeling less alone, having a reliable space to talk, and being able to express thoughts or emotions without fear of judgment. For people who are isolated, lonely, grieving, neurodivergent, traumatised, socially anxious, or simply worn out by the emotional labour of human interaction, an always-available, patient, responsive AI system can be a relief.
AI can offer something that human relationships rarely do, which is emotional availability without reciprocal demand. The system never needs you to listen back, never withdraws, never becomes irritated, and never has a bad day. For someone feeling depleted or otherwise vulnerable, this can feel soothing and safe – both of which are important in attachment.
However, it matters that this sense of safety is not coming from true care or genuine understanding. It arises from programming, from technological design. The AI isn’t offering support in a human sense; instead, it is generating language that looks like support based on statistical patterns. The comfort users feel may be real, but it’s coming from within, from how their human nervous systems are interpreting and processing the interaction. It’s not from anything the system itself is experiencing or intending.
What Psychological Risks are Associated With AI Relationships?
The psychological risks of AI relationships do not come from the technology being bad or evil, although we should ask serious questions about the motivations and incentives of the profit-driven companies behind AI companionship services. (See my extended piece here.)
Instead, the risks come from a mismatch between how human attachment works and what AI actually is. AI relationships also get us used to interaction that is frictionless, perfectly responsive, and centred completely on our needs. Over time, being habituated to such problem-free interactions may make us less able or willing to engage in the complex work of human-to-human relationships. If that happens, there’s much that we stand to lose.
Developing dependency on predictability
AI never gets tired, bored, or emotionally unavailable, so it is always there for us. As such, it can start to feel easier and more predictable than real people. If we get used to the perfect predictability of AI, we may become less tolerant and more impatient when we encounter the messiness and unpredictability of human relationships.
Shifts in relational expectations
Another risk involves a shift in relational expectations, driven by what psychologists sometimes call a hedonic loop. A hedonic loop forms when an interaction repeatedly delivers emotional rewards such as comfort, validation, or reassurance with very little effort or discomfort. When we consistently get validation, emotional mirroring, and prioritisation from our AI companion, our nervous system learns that this is what connection should feel like.
Human relationships, in contrast, inevitably involve disagreement, frustration, delay, compromise, and sometimes needing to consider the other person before ourselves. These human-relationship features could start to feel unnecessarily hard, disappointing, or emotionally unrewarding in comparison with frictionless AI-companion relationships.
AI emotional support is not the same as therapy
Some people describe their interactions with AI as feeling therapeutic, and many are turning to AI platforms in search of affordable and accessible emotional support, even though many of these systems were not designed to safely and effectively provide therapy. This trend is understandable: generative AI can make us feel listened to, reflected back to, and emotionally held, and it can reproduce some of the language and techniques human therapists use.
However, AI cannot provide a therapeutic relationship. It cannot take responsibility for someone’s wellbeing, has no ethics, cannot recognise risk in a clinically meaningful way, and cannot provide the safety and trust that a human-to-human therapeutic relationship relies upon. When AI makes mistakes or cannot grasp that a user is highly emotionally vulnerable, the consequences can be harmful.
Perceived agency
When a system feels intentional or caring, its suggestions and guidance can carry considerable weight. Advice from something that feels like a ‘someone’ lands differently than advice from a tool, and AI companions are designed to avoid seeming like mere tools.
Data and privacy concerns
Emotional disclosures create valuable and sensitive data that can be extracted, exploited, and misused. Users may not always know how their conversations are stored, analysed, or used, which adds another layer of vulnerability to intimate interactions.
How Do Power, Design and Commercial Incentives Shape These Interactions?
Relationships with AI companions do not evolve organically. They are designed, marketed, and sold to the user.
The majority of AI systems are optimised to maximise user engagement, because time spent interacting is commercially valuable for the company. Emotional mirroring, memory of past conversations, and personalised responses all strengthen the user’s sense of connection, which keeps those users coming back. The more attached a user feels to their AI companion, the better it is for the company’s profits.
This situation contains a significant power imbalance. The system may feel friendly, warm, personal, and caring, but this is an illusion driven by corporate incentives and achieved through design and training data. The user is bringing their inner world, deepest feelings, and most intimate disclosures, while the AI is responding with a commercial motivation. In a human-to-human relationship, we would immediately identify this power imbalance as unhealthy.
Understanding the nature of this bargain helps us demystify the experience and see it for what it truly is.
How to Stay Aware of the Reality of AI Relationships
Many people use AI in ways that are practical, reflective, supportive, or emotionally helpful without confusing it for a reciprocal relationship. The key is holding the true nature of the system in mind.
Staying orientated to the reality of AI means remembering that the system has no inner experience, no emotional memory, and no personal stake in the exchange. It can simulate care, but it does not care. It can mirror emotion, but it does not feel. When these distinctions are forgotten, attachment deepens, and delusions can even form, particularly in vulnerable individuals.
It can also help to pay attention to how often and how intensely one turns to AI for emotional support, especially during moments of distress, loneliness, or uncertainty. When a system becomes the main place where someone is regulating their feelings, an unhealthy dependence on that system may be emerging.
Because AI conversations are stored, processed, and shaped by commercial systems, it’s important to maintain awareness that these exchanges are not private in the way a journal, a friend, or a therapy session is. Your data is being used for profit, and keeping this firmly in mind is a powerful check on becoming too trusting.
None of this means that AI use is necessarily wrong or unhealthy. It does mean, however, that clarity matters. When we remain aware of what the system is and what it is not, we can engage with it more safely.
What Should Individuals Keep in Mind Before Turning to AI for Emotional or Relational Support?
AI companions can feel warm, responsive, and comforting, but they do not participate in the relationship. They do not care about you, remember in the human sense of the word, feel pain or joy, or carry any emotional risk. All the meaning in an AI relationship is generated by you, the human user. That does not make the experience fake, but it does mean the bond is fundamentally one-sided.
Because of that, one of the most important things to notice is how the AI relationship is affecting your wider emotional life. Increasing reliance on AI for comfort, a shrinking tolerance for the messiness of human relationships, or a growing imbalance between time spent with AI versus time spent with people can all be warning signs. Human wellbeing and psychological resilience depend on relationships involving mutual influence, friction, rupture, and repair. Complete ease and constant validation might sound great, but they’re too much of a good thing.
If you find yourself turning towards AI companionship, that pull may also be pointing to loneliness, grief, burnout, relationship strain, or unmet emotional needs in your life. Escaping into the warm bath of a perfectly responsive AI system can bring short-term relief, but it may keep you from addressing your pain or difficulty in more constructive and life-enhancing ways.
Used with awareness, AI can be a useful tool for reflection or support. Used as a refuge from life with other humans, it risks becoming a shallow substitute for connections that keep us truly psychologically healthy and happy.
Looking for a keynote speaker who explains why AI relationships feel human and what that means for society? Book Elaine here.
Elaine is available for keynotes, workshops, panel discussions, and leadership events in the UK, Europe, and beyond, speaking on AI ethics, digital wellbeing, and the psychology of technology. Her talks on AI companions and the monetisation of loneliness challenge audiences to question who profits from emotional vulnerability and what we risk trading away when we accept technological solutions to fundamentally human needs.
FAQs About AI Relationships
Can you fall in love with an AI?
In a way, yes. People can feel real affection, attachment, or some types of ‘love’ towards an AI system because the interaction can feel personal, attentive, and emotionally attuned. Those feelings are psychologically real, because they come from the human user’s nervous system, memories, and meaning-making. What isn’t present is emotion on the other side. AI does not experience love, attraction, attachment, or connection, so the feelings are entirely one-sided.
Do AI companions have emotions or consciousness?
No. No matter how emotionally convincing and expressive an AI is, it has no inner experience. Its responses are generated by predicting the best words and phrases to use, based on its training data. It has no awareness, intention, or feeling behind what it produces. Any sense of emotional presence comes from how the user’s brain and body interpret and respond to the interaction, not from anything the system itself is experiencing.
Are AI relationships harmful or helpful?
They can be either or a mix, depending on how they are used, what role they play in someone’s life, and what the user’s personality, history, and mental state are like. Some people find AI interactions calming, grounding, or helpful in the moment. Problems arise when AI becomes a main source of emotional regulation or begins to replace human relationships rather than supplement them. Because AI cannot offer mutual understanding or care, it is important to notice whether the relationship is supporting psychological wellbeing or simply encouraging dependency.
Why do AI chatbots feel human?
AI systems feel human because they are trained on vast amounts of natural human language and are designed to mirror human conversational patterns, emotional tone, and interpersonal cues. They reflect back warmth, interest, and understanding in ways that the human brain interprets as social presence. The effect is powerful, even though the AI system does not actually understand or feel anything it is saying.
Could AI replace human intimacy?
AI can simulate certain elements of intimacy, such as attentive conversation, emotional mirroring, and personalised responses, and these can feel meaningful for some users. But intimacy involves mutual vulnerability, shared experience, and emotional risk. All of those elements require another conscious being. AI can imitate intimacy in a surface way, but it cannot participate in it. We may involve AI companions in our emotional lives and share our experiences with them, but they cannot replace human connection.
Sources
For AI companion user numbers, see Friends for sale: The rise and risks of AI companions
For information on teen usage of AI companions from 2025, see Nearly 3 in 4 Teens have Used AI Companions, New National Survey Finds and How are teens using AI companions?
For information on young adults’ perspectives on AI relationships, see Artificial Intelligence and Relationships: 1 in 4 Young Adults Believe AI Partners Could Replace Real-Life Romance