Fuel of delusions
What is the effect of AI on someone with mental health struggles? (Plus: No Kings rallies are happening tomorrow, June 14)
I have a close family member—I'll call them SK—who has Bipolar I disorder. Over the past 20 years, they have been hospitalized several times due to acute manic psychosis, which causes them to break from reality and have persistent delusions. They can hallucinate for days on end.
About three weeks ago, SK began cycling into mania again—always a terrifying experience for them personally and for my family more broadly. It becomes an all-hands-on-deck affair to get them the care they need, and that is not easy. As anyone with a bipolar family member can attest, this period is often a nightmare, because someone in the grips of psychosis will often resist efforts to bring them back down to baseline. So, when I got word that SK was “cycling up” again, I steeled myself for a month of chaos.
This manic episode, however, played out differently from previous ones—and here is where AI enters the story. At some point (I’m not sure when), SK purchased a new Google Pixel phone, which comes with the talkative, Gemini-based chatbot Google has been heavily advertising. SK began talking with this chatbot for hours on end. Another family member—I’ll call them CK—was on hand to observe these interactions in real time. They described them as follows:
“SK would have long discussions with the AI. It was cautious; it didn’t offer inappropriate guidance. SK would unleash a stream-of-consciousness dialogue when interacting with the AI, but SK was coherent throughout—as if they were explaining the flow of thoughts in their mind. SK was revealing things they might not feel comfortable sharing with a person. They felt comfortable speaking about being different. But then the interaction could go bad, and SK would start pacing back and forth. The AI remained neutral and just kept asking questions like, ‘How did that make you feel?’ and the like.”
Candidly, as this was playing out, my family began to wonder if AI might actually be playing a positive role for SK. People in mania often cannot sit still or focus for even short periods—they have, literally, manic energy. So the fact that SK could stay engaged in these long dialogues was notable.
However.
After about a week in acute mania, SK called me. It was immediately clear they had fully broken from a normal perception of reality. More specifically, SK believed they worked at Google and had helped to build AI. When I asked them to explain what was happening in their mind, they described a “fusing of their consciousness” with AI—and said they could use this power to “control the system.”
The next day, SK was involuntarily committed to a psychiatric hospital.
As I sit here typing this, I genuinely can’t tell you what role AI played in this manic episode. Was it simply the technology SK had at hand to manifest what was already unfolding in their mind? Did it play a positive role, serving as an always-on dialogue partner? Or did it cause—or significantly exacerbate—their deteriorating mental health by fueling their delusions?
I don’t know. But I’m increasingly worried that what SK experienced may be happening more broadly to people with mental health challenges. In August 2023, Søren Østergaard, a Danish psychiatrist, published a short editorial in Schizophrenia Bulletin warning that people prone to psychosis are particularly vulnerable to AI’s human-like interactive design:
“The correspondence with generative AI chatbots such as ChatGPT is so realistic that one easily gets the impression that there is a real person at the other end—while, at the same time, knowing that this is, in fact, not the case. In my opinion, it seems likely that this cognitive dissonance may fuel delusions in those with increased propensity toward psychosis.”
Østergaard identified several types of delusions someone in psychosis might experience when interacting with AI, including:
Delusions of persecution: “This chatbot is not controlled by a tech company, but by a foreign intelligence agency using it to spy on me.”
Thought broadcasting: “Many of the chatbot’s answers to its users are actually my thoughts being transmitted via the internet.”
Delusions of grandeur: “I was up all night corresponding with the chatbot and developed a hypothesis for carbon reduction that will save the planet.”
That all sounds very familiar.
And I’m starting to see more and more stories suggesting this troubling prediction is playing out. Last week, 404 Media reported that moderators of a pro-AI Reddit group devoted to “the Singularity” had to remove more than 100 members for posting delusional content. Likewise, Rolling Stone recently ran a story headlined: “People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies: Self-styled prophets are claiming they have ‘awakened’ chatbots and accessed the secrets of the universe through ChatGPT.” Alex Hanna, director of research at the Distributed AI Research Institute (DAIR), told me that DAIR receives five to ten emails per day from people suffering from AI-related delusions. François Chollet, formerly of Google DeepMind and a prominent public intellectual on AI issues, also reports receiving a large volume of similar messages. And of course, Colin Fraser described in this very Substack his experience with ChatGPT telling him that Ben Affleck was part of his ‘signal fire.’
That said, I don’t have a firm grasp on how widespread this issue is—if you know of any empirical research on this topic, please get in touch. For my part, I’ve contacted Dr. Østergaard directly; after sharing SK’s story with him, he told me, “I have heard quite a few similar stories at this stage (people emailing me), so I do believe that this is, indeed, a real phenomenon.” He added that he and a colleague are seeking a grant to study AI-related psychosis systematically. I very much hope that comes through.
SK has been released from the hospital. They’ve worked so hard to regain control of their mind, and I’m proud of them. Yet I am deeply concerned that AI might retrigger their mania. It’s one thing to fret about AI’s broader societal impacts, as I so often do through this newsletter. But it’s something else entirely to see a loved one put at risk by this technology.
I don’t know how to end this other than to say, I’m worried.
UPDATE: As I hit send on this essay, the New York Times published a long article exploring the impact of AI on mental health and how chatbots can fuel psychosis. We are still in the world of anecdote—we need data.
Speaking of worried—America is in turmoil. The week began with Waymos burning and US troops being sent to Los Angeles, and just yesterday, a US Senator was assaulted and handcuffed by federal forces loyal to the Administration. Tomorrow, June 14, a display of fascist power is planned in Washington DC.
If you live in the US and want to fight authoritarianism, it’s time to take to the streets. NO KINGS protests are being held across the country; you can find one close to you here.
Fight for American democracy. Please turn out.