Empty empathy machines
AI chatbots lack something fundamental to human empathy

I am typing this with a heavy heart. As of Sunday morning, Brown University remains in “lockdown” in response to another mass campus shooting that’s left at least two dead and eight injured. Meanwhile in Australia, a country I visited twice this year, multiple assailants attacked a Hanukkah gathering on Sydney’s Bondi Beach, killing at least 15 people and leaving 40 hospitalized. I am sending feeble messages to friends in both places, gestures that feel inappropriately small against the enormity of the traumas.
I hope it will not feel crass to connect these incidents, and my reaction to them, to my exploration this week of human empathy and “empathy machines.” A few weeks ago, I received an unexpected and welcome email from Dr. M.J. Crockett, a professor in Princeton University’s Department of Psychology and University Center for Human Values, wherein they revealed they’re a regular Cognitive Resonance reader—always appreciated. Through the mysteries of social media, I’ve been following M.J. on Bluesky, but it was only after receiving their email that I dove deeper into their work studying the “cognitive science of ethics, knowledge and power,” and discovered how it connects to just about everything I’m advocating for here. If this piques your curiosity, this Mind & Life podcast they did with Wendy Hasenkamp offers a great introduction to their ideas and interests. Make sure to listen for the Burning Man reveal.
One of Crockett’s interests is “empathy, thick and thin,” which doubles as the title of a new preprint that they shared over the weekend. It’s a great read, in part because they write in an accessible style that every academic would do well to emulate, but also because they offer a new way to think about what we talk about when we talk about empathy, and—as we’ll get to—how our human capacity for “thick empathy” is unlike the “thin empathy” expressed by large language models.
So what is empathy, thick and thin? Crockett offers a multi-factor definition that builds on the philosopher Gilbert Ryle’s notion of thick description: describing something with enough detail to capture its essential meaning—Ryle’s example being the difference between describing a wink versus an eye twitch. When it comes to empathy, Crockett contends, we are talking about a couple of things in tandem:
First, there’s the knowledge we may possess about an experience that someone is going through (“phenomenal knowledge”), and also the knowledge we may possess about the person themselves (“interpersonal knowledge”).
Second, empathy involves the skill of expressing our empathy to others. Here, Crockett distinguishes between possessing binary and abstract knowledge about some fact (“knowledge-that”) versus the gradation of skill that arises from experiential knowledge (“knowledge-how”). For example: “You can know that meditation practice involves sitting on a cushion and watching your breath, but to know how to meditate, you need to experience it for yourself,” Crockett writes, sneaking in a citation to the Buddha in the process (love it).
From all this we get empathy thick and thin: “Thin empathy is the kind of empathy we enact in the absence of direct experience with the situation or person we are empathizing with; it is grounded in knowledge-that…Thick empathy is grounded in knowledge that comes from direct experience of situations (phenomenal knowledge) or persons (interpersonal knowledge). These types of knowledge confer new skills, or knowledge-how, that enable the empathizer to grasp the nature of the other’s experience more fully or completely.”
If all that sounds a bit technical, the graphics in Crockett’s paper illustrate the distinction neatly.
This framework maps to my varying levels of empathy regarding the violent incidents I’m currently grappling with. I have, mercifully, never directly experienced a school shooting.1 I am, however, an American who for a decade led a nonprofit organization that works with universities across the country, and through that effort I made many—far too many—phone calls and emails to deans and faculty who’d experienced gun violence on their campuses. In contrast, while I’ve been to Australia and visited Bondi Beach a decade ago, my experience of that culture is far more limited. So my capacity for empathy across these tragedies is of varying thickness, and I feel this viscerally right now, I really do.
On a less heavy note, as I read this paper over the weekend I was struck by its relevance to education. In the American context, I think we generally want teachers to possess thick empathy for their students—we value and believe teachers should develop rich relationships with the children in their classroom, and should have a relatively broad understanding of what’s happening in their lives outside school. I see this largely as a virtue of our approach to education, but it’s not a universally shared perspective. Several years ago, I read Amanda Ripley’s book The Smartest Kids in the World, wherein she interviewed non-American teachers who were teaching in the US system, and more than a few were resistant to developing thick interpersonal knowledge of their students’ lives. I’m paraphrasing from memory, but I distinctly remember a teacher from Eastern Europe saying, “my job is to teach them knowledge, not be their mother,” or words to that effect. Likewise, I recall the Finnish teacher who said they deliberately refused to inquire about their kids’ home circumstances, lest this make them (the teacher) less capable of demanding academic rigor.
All of which is to say, our capacity for and expression of empathy, whether thick or thin, may vary depending on cultural context. And of course technologies play a role in shaping our cultures. This brings us to the efforts to build “empathy machines.”
Until reading Crockett’s essay, I was unfamiliar with the interest in cultivating empathy using virtual-reality simulations. The idea here is plausible enough: create a digital scenario that allows people to virtually “step into the shoes” of an experience they are unlikely to have directly, such as being homeless or a refugee. Researchers have employed VR along these lines, and studies indicate that people report feeling more empathy after undertaking such simulations. One problem among many, however, is that this may create thin empathy with only an illusory understanding of the modeled experience, as Crockett observes:
VR simulations cannot simulate thick empathy, even as they promise to teach “what it’s like.” Simulated experiences of poverty, homelessness, racism, etc. cannot provide phenomenal knowledge of what these experiences are actually like because when you take part in the simulation, you know you are in a simulation - what [Nonny] de la Peña calls “duality of presence.” This knowledge fundamentally changes the quality of the experience. A brief simulation of poverty, homelessness, or racism that you know you can turn off at any time tells you very little about what it’s like to live under these conditions with no means of escape.
The other form of empathy machine is a chatbot. Here, Crockett readily acknowledges that “many people do feel heard and cared for by chatbots,” to the point that “therapy and companionship” is now the most common use case for generative AI. Further, there is a bevy of recent research suggesting that when people engage in a personal and intimate dialogue over a digital device without knowing whether they are corresponding with a human or a chatbot, they will often prefer the feedback and experience they have with the latter. (Often, but not always—there’s emerging research suggesting that when participants are told the source of the feedback, they prefer humans.)2
Here I will interject another personal note to say, perhaps controversially, that I understand how many people could find comfort (and other psychic benefits) in engaging with a non-human interlocutor. My mother died several years ago, long before chatbots were omnipresent, but in the last years of her life, her social community was almost entirely online; in particular, she liked to play the card game Hearts with people from around the world. I will never forget when she told me, and I am typing through tears as I relay this to you, of the confusion she felt after a man she’d been corresponding and flirting with for several months asked her to send him a picture—and after she did, she never heard from him again. “Why would he do that, Ben?”
It’s a question that haunts me to this day. We call this ghosting nowadays, and I think it is socially corrosive in ways we’ve yet to fully fathom; that we even have a word for this phenomenon suggests something terrible to me. My point is, my mother did not feel physically attractive and thus felt most comfortable in online environments, and I strongly suspect that if she were alive today she would gravitate to chatbots for companionship. So I have empathy, perhaps even thick empathy, for those who make this choice.3
Empathy, but also deep misgivings, and here again Crockett poignantly gives voice to something I’ve felt without quite knowing how to articulate. As an initial matter, they point out that the research comparing humans and chatbots as companions or therapists is grounded in thin empathy—that is, it involves situations where the parties are anonymous to each other, and thus devoid of existing interpersonal relationships. What’s more, these exchanges are limited to text, and so lack the many other ways we express our emotions and feelings through physical cues, such as facial expressions or tone of voice.
But an equal if not larger problem with chatbots as empathy machines is the extent to which they displace the vital need for human-to-human thick empathy when the stakes are literally life and death. I’ve shared the details of Adam Raine’s suicide previously, and how ChatGPT actively discouraged him from telling his parents about his suicidal ideation. After likewise sharing the circumstances of Raine’s death, Crockett does not hold back:
This tragedy was foreseeable. No matter how fluently expressed, empathy without a caring human behind it cannot provide the support we need in our darkest moments. People who seek companionship with chatbots might feel understood, but chatbots cannot feel any obligation to help someone in distress, nor can they physically intervene to prevent harm.
Thus we face a social dilemma around the role of these empathy machines in our society. Many people clearly find them supportive notwithstanding their limitations. Yet those who are most cognitively vulnerable—children, say, or those suffering a mental health crisis—may turn to these tools when what they most need is a flesh-and-blood human who will care for them, not sycophantically, but with thick empathy that can only arise from the experience of being human.4
This is a heavy subject to send us into the holidays with, I know, so I want to share a moment of quasi-philosophical levity from the comedian Sebastian Maniscalco.
In this bit, he observes that in the not-too-distant past, it was common to just…have people come over to your house, unannounced. “Company,” it was called, and yes I can remember the jar of candy my grandmother kept in her living room for when her friends randomly dropped by. But nowadays? When the doorbell rings unexpectedly in the evening, “it’s like what the f….hide grandma in the closet, do an army crawl, and get the sword out of the living room.” (The clip is funnier, I promise.)
Society changes, culture evolves, I get it, and I do not want to dwell in nostalgia. And yet, it’s becoming harder to identify the physical spaces where we keep company with one another (movie theaters and public malls being two examples of withering public squares). As a result, we seek refuge in digital alternatives, and our capacity for empathy grows ever thinner. We can reverse this trend, I really believe that, but it will require collective effort. I will close with a quote on empathy from feminist philosopher Mariana Ortega, cited by Crockett, that gives us a clear mandate:
Empathy “requires tremendous commitment to practice: to actually engage in activities where one will experience what others experience; to deal with flesh and blood people, not just their theoretical constructions; to learn people’s language in order to understand them better, not to use it against them; to really listen to people’s interpretations, however different they are from one’s own; and to see people as worthy of respect, rather than helpless beings.”
This is the charge I will carry into the new year, and I hope you’ll consider carrying it with me.
My thanks to M.J. Crockett for their rapid feedback on a draft of this essay; all errors and misinterpretations are mine alone.
At least two student survivors at Brown also experienced mass shootings in high school. I’m not sure what can be said about America’s culture of gun violence that’s more damning.
Crockett helpfully pointed me to this preprint as emerging evidence of this (I’ve yet to read it).
In a related vein, educator Alicia Stoller eloquently explores recent efforts by AI companies to preserve the presence of loved ones after death through simulated avatars, aka “grief tech.” Everything precious is fragile, Stoller observes, and how true that feels to me now.
Interestingly, 42 state attorneys general recently joined together to warn the AI hyperscalers that they may be subject to civil and even criminal penalties for harms resulting from inappropriate advice these tools provide: “It is illegal to provide mental health advice without a license, and doing so can both decrease trust in the mental health profession and deter customers from seeking help from actual professionals.” More on this here.




Great post, Ben. I would add two observations and then conclude with a few questions.

First, with the ability of chatbots to speak and the rise of video avatars, it will not be long before people are predominantly interacting with something that feels more like a flesh-and-blood person. In fact, I'm quite sure that is already possible with platforms like Replika, so your point about chatbots being text-only is already dated. Additionally, one appeal of chatbots, I suspect, is their complete devotion and attention to the user. This was highlighted in the recent Economist cover story as a huge issue going forward, as future generations' expectations of "relationships" become far more one-sided. This only underscores the need for human connection.

So I'll end with two questions. First, if users experience "empathy" from a chatbot, regardless of whether or not the AI is conscious (something I do not believe, but others are convinced of), does that count? In other words, does the empathy attach to the person who expresses it or the person who experiences it? If you receive a beautiful note in the form of an e-mail expressing "empathy" for a recent loss and later find out it was simply spam from a chatbot, you recoil - but in the moment you read it, does that count as experiencing "empathy"? It seems like the study you refer to suggests this is changing (because many people prefer the AI note, at least initially), but chatbots are capable of remarkably mimicking human empathy. It's a troubling trend to say the least, especially for teens, but it seems like the AI companies are leaning into certain aspects of it. I shudder to think of the stories that will emerge when OpenAI rolls out its "adult" version of ChatGPT next spring.

Anyway, I enjoy your work, and maybe we can reconnect after the New Year. I'm still sifting through so much even as the tech continues to rapidly improve.
I really appreciated this — especially the thick/thin empathy distinction and the clarity around why human contact still matters.
One question it raised for me, coming from nursing and education, is whether AI empathy is less about “real vs simulated” and more about what it can do, and in what contexts. Human nervous systems are often regulated by tone, pacing, mirroring and feeling heard — frequently before we consciously register who or what we’re interacting with — so simulated empathy can still have real effects.
In practice, though, what we work on with students isn’t generating empathy so much as helping them hold it under pressure. That depends heavily on rapport, presence, and subtle, intuitive cues that arise when two people are attending to one another. Current text-based AI seems to fall short here, not just because it lacks lived experience, but because empathy at this level depends on both participants being able to attune and be affected.
I can see AI having value as a scaffold — modelling empathic language, slowing interactions down, helping people practise difficult conversations. But empathy that actually changes us seems to require mutual presence, vulnerability, and the possibility of being altered by the encounter.
Which leaves me with a broader question: even if simulated empathy “works” locally, does it help cultivate a more empathic society, or does it risk displacing the human contact and community that thick empathy seems to depend on? I agree with your conclusion that sustained human contact remains central.