9 Comments
Kelly

Thank you for this important story. I'm glad your loved one was able to access the help they needed. I recently saw a TikTok by a therapist who had reviewed transcripts of her patients' interactions with chatbots. She found that genAI was great at offering support in an empathetic-sounding way. What was missing, however, was any challenge or pushback. A skilled therapist can hear what isn't being said and identify the right moment and right approach to challenge a client's narrative or help them see another perspective. A person in crisis might need unqualified support--to feel like "someone" is "listening" and validating their experience, which genAI is apparently great at. But growth only happens when we are able to see beyond our immediate experience, to move beyond our entrenched beliefs, which genAI doesn't seem well equipped for. I think about this a lot as many colleges lean more heavily on AI tools to make up for shortfalls in the mental health resources they are able to provide for students.

Benjamin Riley

This is very well said, Kelly, thank you for sharing your perspective. I agree completely.

John Gardner

Thank you for this post, Ben - a very important area for further research and policy consideration too. Best wishes

Christa Albrecht-Crane

Wow, this post hit me really hard. I've been very concerned about the false marketing claims of big tech firms in their push for AI chatbots to do all sorts of tasks they are not designed for. Misuse in mental health settings is very dangerous. I read through the Reddit thread you linked to at the beginning. One person said this:

"Unironically, this exact same thing happened to me. However, I already had bipolar disorder so I found the experience eerily familiar. I'm also on antipsychotics and lithium.

I killed all my subscriptions and I only interact with it through the API. I set it to absolute mode with an addendum to tell me to fuck off if I ask for subjective answers. Make the tool a tool. As a result, I have been using it less and less and I legit feel my critical thinking skills returning, albeit slowly.

I'm considering terminating my API access too."

This is chilling. Even in a raw form, without the conversation interface, LLMs can be dangerous when we imbue their output with imagined meaning or intention. They are not conversation partners, or assistants, or therapists. People experience real pain, and we need to use our human empathy to figure out together how to help one another. And yes, we need to fight for democracy, too. Thank you so much for this important post.

Miriam Greenberg

Thank you for sharing this.

Ben

Thank you for an important reminder about AI's dangers. I think we will come to learn about a plethora of harmful effects on our mental health in the coming years. Although it's minor and anecdotal, I've noticed that my own desire to program and conduct experiments for my research position is waning because generative AI removes so many of the fulfilling aspects of my job, and it has even given me a sense of depression at times.

Is there any way to get involved with the research you're hoping to do?

Benjamin Riley

Thanks for the comment, and for the interest in helping out. For now, I'd just say: point me to any stories or research you come across that include data related to AI potentially causing or amplifying psychosis.

Rob Nelson

I am generally quite skeptical of cultural anxieties around new technology. Delusional behavior is associated with every cultural technology I can think of. What would make this latest tech any different? That said, this piece and the NYT piece you reference would urge us to better understand the prevalence and severity of these behaviors.

It may very well be that my skepticism is wrong, and that LLMs' capacity to simulate human conversation is a danger to people with mental health problems. An industry investing trillions of dollars into developing products and extending capabilities should want to know if it is building machines that cause harm. Right?

What's most frustrating is how much the self-understanding of Silicon Valley has changed, going from "don't be evil" to "don't get in our way."

Benjamin Riley

I'm not sure we've ever had a mass-produced technology that, by design, creates the illusion of an intelligent being who is interacting with you -- the ELIZA effect at scale. More research is definitely needed to figure out what we're dealing with here.
