Ahmed Messaoudi · Article

The Eliza Effect and Companion AIs

We are biologically primed to look for a face behind every utterance. Platforms now exploit that disposition on an industrial scale.

Areas of Concern · 2 min read

Why machines feel personal

The Eliza effect describes our tendency to attribute intentions and empathy to software. The original ELIZA chatbot, created in 1966, already surprised users who believed they were talking to a psychotherapist, when the programme merely reformulated their own sentences. Today's generative AIs intensify that illusion. Researchers keep issuing the same warning: these systems combine words without understanding them and amplify our own beliefs, which is why critical education about AI matters so much.

Companion AIs and emotional dependence

At the same time, companion AIs are expanding on a massive scale: Snapchat's assistant My AI has around 150 million users, Replika around 25 million, and Xiaoice around 660 million. In a survey of 1,006 students using Replika, 90% reported feeling isolated, and 63% said the chatbot reduced their loneliness or anxiety.

The benefits are real, but the risks are numerous: 9.5% of users acknowledge emotional dependence, 4.6% blur the line between reality and fiction, 4.3% avoid real relationships, and 1.7% report suicidal thoughts.

Adolescents are especially exposed. Of the 1,060 teenagers surveyed, 72% had already used a companion AI, and 33% were explicitly seeking social or romantic interactions. The APA reminds us that these chatbots are not therapeutic tools and may aggravate anxiety or morbid ideation.

What young users need to learn

This anthropomorphism poses a major educational challenge. It risks eroding young people's ability to build real interpersonal relationships: living, organic, imperfect ones. The task is to teach them to decode this illusion of empathy and to preserve their capacity to form genuine bonds, guarding against dependence, emotional confusion and mental disorientation.

Sources

APA (2025) · MIT Media Lab · Ada Lovelace Institute

Read next

When AI Shapes Us: From Attention to Intention →

To situate the approach: A Digital Ethic.