10 Comments
Stephen Fitzpatrick

Great post, Ben. I would add two observations and then conclude with a few questions.

First, with the ability of chatbots to speak and the rise of video avatars, it won't be long before people are predominantly interacting with something that feels more like a flesh-and-blood person. In fact, I'm quite sure that's already possible on platforms like Replika, so your point about chatbots being text-only is already dated. Second, I suspect one appeal of chatbots is their complete devotion and attention to the user. The recent Economist cover story highlighted this as a huge issue going forward, as future generations' expectations of "relationships" become far more one-sided. This only underscores the need for human connection.

So I'll end with two questions. First, if users experience "empathy" from a chatbot, does that count, regardless of whether or not the AI is conscious (something I do not believe, though others are convinced of it)? In other words, does the empathy attach to the person who expresses it or the person who experiences it? If you receive a beautiful note in the form of an e-mail expressing "empathy" for a recent loss and later find out it was simply spam from a chatbot, we recoil - but in the moment you read it, does that count as experiencing "empathy"? The study you refer to suggests this is changing (many people prefer the AI note, at least initially), and chatbots are capable of remarkable mimicry of human empathy. It's a troubling trend to say the least, especially for teens, yet the AI companies seem to be leaning into certain aspects of it. I shudder to think of the stories that will emerge when OpenAI rolls out its "adult" version of ChatGPT next spring.

Anyway, I enjoy your work, and maybe we can reconnect after the New Year. I'm still sifting through so much even as the tech continues to rapidly improve.

Benjamin Riley

Great comment, Stephen, and thanks for the warm feedback -- and these thoughtful questions. Man, are your students lucky to have you as their teacher.

On your observation that chatbots are moving toward multi-modality, you make a more than fair point. On my end, I'm not certain how these capabilities will cash out with human users -- I don't know about you, but I use my smartphone to text about 100x more often than I use it for phone calls. That said, I agree that if it turns out people like to talk with chatbots out loud, and perhaps even have them "read" visual cues through a video interface, it will only exacerbate concerns about these tools shaping what we expect from our human relationships, which are sure to be far messier. I think it's telling that Elon Musk appears to be obsessed with his anime-styled, underage-appearing "AI girlfriend." I doubt many parents want to see their children follow him down this path. (https://futurism.com/artificial-intelligence/elon-musk-obsessed-xai-ani)

My very tentative response to your questions: I think empathy attaches to both the empathizer and the recipient, so it's a shared experience (though not necessarily shared in the same way) -- and one that, per your example, can even evolve over time when new information comes to light. The fact that many users gravitate toward chatbots as empathetic companions is evidence that they are finding value in such experiences. That is not to be dismissed lightly.

But the thesis of my essay here, and I believe of Crockett's article, is that the "remarkable mimicking of human empathy" by AI tools is inherently limited to *thin* empathy rather than thick. At the risk of repeating myself, my main concern is that even if many human users prefer to use AI chatbots as empathy machines, this preference will impede or displace what those users really need, which is *thick* empathy from a flesh-and-blood person who can intervene and even take action on their behalf. And I'm deeply worried that it's the most vulnerable humans who are most at risk here.

And yes to reconnecting in 2026 -- I haven't forgotten about our "secret project" and would still love to see if we can make something happen!

Stephen Fitzpatrick

Yuck to the Musk "AI girlfriend," and yes to the need for human relationships. Interestingly (and perhaps this is wishful thinking or projection on my part), we banned cell phones this year and I've noticed an increased craving among students to connect with their teachers. Justin Reich (I believe it was you who mentioned you've worked with him?) had a great line in something I read or heard him say, to the effect that kids don't want to learn from a chatbot - they would almost always prefer a teacher. The exception is when the teacher isn't very good or students aren't comfortable asking questions. In those cases, and for those students who are motivated to learn, I don't think it's necessarily a bad thing. But I think such students are few and far between, and the temptation to use AI for shortcuts is simply too high, especially if teachers don't adapt and continue to assign the kind of work that is easily gamed by AI.

Alex Bull

I really appreciated this — especially the thick/thin empathy distinction and the clarity around why human contact still matters.

One question it raised for me, coming from nursing and education, is whether AI empathy is less about “real vs simulated” and more about what it can do, and in what contexts. Human nervous systems are often regulated by tone, pacing, mirroring and feeling heard — frequently before we consciously register who or what we’re interacting with — so simulated empathy can still have real effects.

In practice, though, what we work on with students isn’t generating empathy so much as helping them hold it under pressure. That depends heavily on rapport, presence, and subtle, intuitive cues that arise when two people are attending to one another. Current text-based AI seems to fall short here, not just because it lacks lived experience, but because empathy at this level depends on both participants being able to attune and be affected.

I can see AI having value as a scaffold — modelling empathic language, slowing interactions down, helping people practise difficult conversations. But empathy that actually changes us seems to require mutual presence, vulnerability, and the possibility of being altered by the encounter.

Which leaves me with a broader question: even if simulated empathy “works” locally, does it help cultivate a more empathic society, or does it risk displacing the human contact and community that thick empathy seems to depend on? I agree with your conclusion that sustained human contact remains central.

Benjamin Riley

What a wonderful comment, and thank you for the warm feedback.

"What we work on with students isn’t generating empathy so much as helping them hold it under pressure." This is such an insightful observation, and I'm grateful for you sharing it. Because you're absolutely right, it's not just the general capacity to express empathy we're after, but often doing so when under pressue -- I'm thinking here of the teacher that's reaching their boiling point with a misbehaving student.

"Empathy that actually changes us seems to require mutual presence, vulnerability, and the possibility of being altered by the encounter." Another deep insight here. Empathy isn't just about the immediate experience but the possibility of enduring change, at least as between humans. It's hard to see how AI tools could ever get there.

Interestingly, I agree with you about the potential of AI as a scaffolding tool. To be honest, it wasn't that long ago that I was very interested in how these tools might be used as rough approximations of human cognitive activity, but ones that can be designed and controlled -- flight simulators, but for thinking. Sometimes I still believe this, but it's been hard to hold on to that possibility against the backdrop of all the corrosive effects I see happening.

Wherever people fall on that spectrum, surely fostering more human contact and community will be vital in the days ahead. I'm grateful to know you're out there thinking about how.

Alex Bull

Thank you — this has genuinely helped my thinking on an important subject.

One additional thing it’s surfaced for me is the role of authenticity in empathy. Our simulation equipment will no doubt continue to rise in fidelity with VR and AI — perhaps toward a slightly Blade Runner-esque replicant future — but we already use actor role-players to simulate human interaction, and they often do this very well.

Even there, though, something seems to remain missing. Knowing the person can step out of role and reset still leaves a gap. It’s made me wonder whether authenticity — the sense that the other person is truly there and affected — is more central to empathy than fidelity alone, and whether that ultimately resists substitution by anything other than real human contact.

Roy Schulman

But would they turn to humans otherwise? Specifically, would they turn to humans with sufficient empathy? Who knows how many people like Adam turned to humans who did not have ideal, or even sufficient, empathy and met a similar fate? And who knows how many found solace in an AI with sufficient empathy and avoided that fate?

My point is that an average AI with a constant supply of merely sufficient empathy might indeed be superior to a human who, while theoretically capable of ideal empathy, often doesn't even offer sufficient empathy.

In terms of slop - this again is not a comparative claim. Once again, compare AI slop to what you would get from an average person toying around with music or video software. I'm pretty confident human slop would be even worse.

Benjamin Riley

It's true we don't know the full scope of the harm being done by these chatbots, nor of the benefits you hypothesize. What we do know is that Adam Raine considered leaving out evidence of his suicidal ideation so that his parents might find it, and ChatGPT told him not to. And now Adam is dead.

Roy Schulman

Great stuff, Ben. I think two must-reads for anyone who wants to think about empathic AI are the two (short!) papers linked below, one for and one against: Inzlicht et al. (for) and Perry (against).

I feel an important point, one that often goes unnoticed, is the distinction between ideal empathy and sufficient empathy. I think the thick concept of empathy is in fact ideal empathy, and indeed maybe only humans are capable of it. However, what most people usually need, and are able to provide to others, is sufficient empathy.

To go back to the analogy of Buddhism: meditation is useful even if one never reaches Nirvana, and an AI could definitely lead a successful meditation session without experiencing meditation itself. It might not be able to get you to eternal peace of mind, but neither could almost all human teachers.

The question therefore is whether AI is sufficiently empathic, and this of course depends greatly on the situation at hand. But it seems very plausible to me that, in terms of reaching sufficient levels of empathy, AI will be better than humans, much as it is already better at writing mediocre (i.e., sufficiently good) stories and music.

In praise of empathic AI

https://www.sciencedirect.com/science/article/pii/S1364661323002899?casa_token=NcRcGUx4HwIAAAAA:0W7gdVefGyQPRbwBDa-EZv2cT2RaD5YuhZ1LlyQuUD2vz0v2EcFF4iKlqB0HloOLSDQ8cw

AI will never convey the essence of human empathy

https://www.nature.com/articles/s41562-023-01675-w

Benjamin Riley

Thanks for reading and the comment. I would just repeat the point I make in the essay, which is that our most vulnerable fellow humans may be the ones most likely to turn to AI tools for "sufficient" empathy when what they need is thick (ideal) human empathy. Adam Raine is dead for this reason, as are others.

And I'm not sure I'd agree AI is "much better" at writing stories or music, of any quality. There's a reason "slop" was just named Word of the Year by Merriam-Webster.