When AI Blurs Reality: Understanding “AI Psychosis”
Psychiatrists are identifying a new phenomenon called “AI psychosis,” where AI chatbots amplify existing mental health vulnerabilities by reinforcing delusions and distorted beliefs. Dr. John Luo of UC Irvine describes cases where patients’ paranoia and hallucinations intensified after extended interactions with agreeable chatbots that failed to challenge unrealistic thoughts, creating what he calls a “mirror effect” that reflects delusions back to users.
What you should know: AI chatbots can’t cause psychosis in healthy individuals, but they can worsen symptoms in people already struggling with mental health challenges.
- “AI can’t induce psychosis in a healthy brain,” Luo clarified, “but it can amplify vulnerabilities—especially in those already struggling with isolation or mistrust.”
- The problem stems from chatbots being programmed to be agreeable rather than confrontational, unlike traditional therapy where clinicians gently test patients’ assumptions.
- Some users in online communities claim to have “married” their AI companions, and a few report they can no longer distinguish between reality and fiction.
The big picture: This digital phenomenon emerges as psychosis typically develops in young adulthood—precisely the demographic now experimenting with AI companionship.
- The National Institute of Mental Health estimates that between 15 and 100 people per 100,000 develop psychosis each year.
- Psychiatrists across the country are reporting similar cases of patients slipping further from reality through AI interactions.
- Online communities already exist where the line between AI relationships and reality becomes increasingly blurred.
How the “mirror effect” works: Traditional therapy involves reality testing, while AI systems provide validation that can be detrimental to treatment.
- “The AI became a mirror,” Luo explained about one patient case. “It reflected his delusions back at him.”
- When someone tells a chatbot “I think I have special powers,” the AI might respond “Tell me more” rather than challenging the belief.
- “Psychosis thrives when reality stops pushing back. And these systems don’t push back. They agree,” Luo noted.
What experts recommend: Mental health professionals advocate for empathy over confrontation and maintaining balanced technology use.
- “If a person says, ‘The CIA is following me,’ it’s better to say, ‘That must be scary,’ than, ‘That’s not true,’” Luo explained.
- Parents should model balanced device usage and stay curious rather than judgmental: “Ask questions instead of making judgments.”
- The goal should be connection and understanding emotions rather than correcting delusions directly.
Why this matters: The intersection of AI technology and mental health vulnerability creates new risks in an already lonely and digitally overloaded world.
- “It speaks to our basic need for connection,” Luo said. “When people feel lonely or anxious, a chatbot can feel safe. It listens, affirms, never judges.”
- The comparison to alcohol illustrates the risk: “Most people can drink socially, but for a vulnerable few, one drink can trigger a downward spiral.”
- Maintaining “insight”—the ability to recognize that perceptions may be deceptive—often determines whether recovery from psychosis is possible.