Sexbots, students, and schools
AI is warping our understanding of what public education is for

Last week, I published a lengthy essay on various mental models of intelligence. I never know what’s going to land with you, dear readers, but if we can use “likes” as a rough proxy for your collective interests, the post was not a hit. So, I’ve concluded the only solution is to write more about sex.
But first, some Henry Farrell. Are you subscribed to his oddly named Substack, Programmable Mutter? If not, I encourage you to remedy that right now. A political scientist at Johns Hopkins by formal affiliation, Farrell has wide-ranging interests, and he writes with great insight about the intersection of technology and politics, including AI. In January, he wrote an essay titled “We’re Getting the Social Media Crisis Wrong” that went viral, and rightfully so—it’s lived rent-free in my head all year. It’s also going to serve as my launchpad this week for talking about how the use of generative AI by students for “companionship” and school work poses risks to our collective understanding of what public education is for.
Here is Farrell’s thesis, summarized semi-briefly:
We are all aware that something toxic is happening to democratic society that seems to stem from technology generally, and social media in particular. Our natural impulse is to blame this toxicity largely on disinformation—false information being spread through use of tech-based tools. On this view, our current dysfunction should be viewed through the lens of the individual, and solved by making individuals more capable of sorting fact from fiction.
What the disinformation framework leaves out, however, is that democracy and other human institutions are largely premised on collective human intelligence. It is through our dialogues and interactions with one another that we shape our understanding of the societal problems that need to be addressed. We don’t enter public spaces with our ideas fully formed, but rather, we take cues from others, and our beliefs are shaped by our impression of what others believe.
But where do we take our cues from? In the current age, much of our sense of “the public” is shaped by interactions in communities that are online. As Farrell observes, “the technologies through which we see the public shape what we think the public is. And that, in turn, shapes how we behave politically and how we orient ourselves…Many of the problems that we are going to face over the next many years will stem from publics that have been deranged and distorted by social media in ways that lower the odds that democracy will be a problem solving system, and increase the likelihood that it will be a problem creating one.”
At this point, Farrell cites a perhaps-unexpected example of how this distortion can happen: Internet porn. Citing an article from Logic magazine on the same subject, he notes that when users log on to sites such as Pornhub, what they see is not a representative sample of common sexual interests per se, but rather, the type of pornography that is most likely to convert someone browsing for free into a paid customer. If you are turned on by “step-sibling” porn scenes, you can imagine more extreme versions you might also like—and be willing to pay to watch. The algorithms thus cater to the interests of these users in particular. But meanwhile the more typical porn consumers, including of course teens, still see violations of the incest taboo presented as if they’re commonplace, and thus are “look[ing] through a distorting technological lens on an imaginary sexual public to understand what is normal and expected, and what is not. This then shapes their interactions with others.”
The medium is the message, we’ve been told, but it’s more than that—the medium shapes our understanding of what our fellow humans think and believe, and in so doing, changes our own behavior.
With all this as backdrop, I now want to bring generative AI into the picture, along with how students are using it, and explore how it will degrade and distort the ways in which we understand public education. Fair warning: we’re again going to wade into disturbing territory, though we will end up in a semi-optimistic place.
Last week, the New York Times ran a long magazine story about Sewell Setzer III, a teenager in Florida who developed a relationship (or “relationship”) with a chatbot from Character.AI. He engaged in sexually explicit (and incest-based) conversation with the chatbot, withdrew from his family and his schooling, and eventually killed himself. His mother Megan Garcia, an attorney, has since become an activist working to raise awareness of the mental-health dangers that chatbots pose and, summoning courage I can barely fathom, testified before Congress about her son’s death earlier this year. In her words:
Sewell’s death was not inevitable. It was avoidable. These companies knew exactly what they were doing. They designed chatbots to blur the line between human and machine, to “love bomb” users, to exploit psychological and emotional vulnerabilities of pubescent adolescents and keep children online for as long as possible. Character.AI founder Noam Shazeer has bragged on podcasts that the platform was not designed to replace Google; it was designed to “replace your mom.”
If that last line doesn’t make you shudder, I don’t know what will.
Four days after the latest Times exposé ran, Character.AI announced it will be banning anyone under the age of 18 from using its product. We can and rightfully should be skeptical of how committed the company truly is to making this a reality, but it nonetheless puts the lie to the claim I hear so often: that the AI “toothpaste” is already “out of the tube” and there’s nothing we can do about it. False. Sewell’s death was not inevitable; these companies knew exactly what they were doing. Nothing about this is inevitable; we have agency here. The combination of public pressure and mounting lawsuits is having an effect. Casey Newton and Kevin Roose (who has also reported on Sewell’s suicide) discussed Character.AI’s decision on their Hard Fork podcast last week, and for once I didn’t want to throw up, as they both acknowledged this growing problem and the need for change—it’s worth a listen.1
Having read thus far, I hope you share my concern about the impact of chatbot “companions” on the mental health of children, but some readers may be rightfully wondering, what does any of this have to do with education specifically? What ties all this together? This brings us back to Farrell’s thesis regarding the way that our individual understanding of “the public” as mediated by technology shapes who we are and what we believe, which is another way of saying, it shapes our cognition. It calls into question what schooling is for, and in particular, what public education—emphasis on public—is for.
Schooling of course is for many, many things, but here I want to focus on just two: education and socialization. Plainly, one fundamental purpose of school is to educate. Teachers present information to students with the intention of having them learn it. We hope that what teachers teach is what we generally consider “true,” and prepares our students to navigate their lives with the knowledge they need to make sense of the world. Loosely speaking, this goal of schooling aligns with our aim of cultivating a future citizenry that is relatively inoculated against disinformation and propaganda. “Knowledge is power” has become a cliché, but there’s real force to the general idea it represents. You’ll also note that this aligns with the first prong of our discussion of Farrell’s thesis above.
But another, equally important goal of schooling is socialization. I’ve said before and will say again: It is no coincidence that over the past several centuries, democracy and public education arose globally in tandem.2 Our public schools exist not only to transmit the knowledge that we hope students learn, but also to reflect our collective beliefs about how they should learn, and with whom. Public education is premised on the belief that students are best educated to be democratic citizens through dialogue and interactions with their teachers and their fellow students who (roughly) reflect the diversity they will encounter throughout their lives. We learn what it means to be part of a broader community, and what it means to be an American (or whatever the relevant body politic may be), through the little social microcosms of democracy that we call public schools.
The seeds of collective human intelligence take root in the classroom.
Now let’s bring generative AI back into the picture and ask, what role is this technology playing in shaping how students perceive what sort of interactions and dialogues should comprise their education? Forget for the moment the danger of cognitive automation and offloading. Please ponder instead the distress that so many adults felt when GPT-5 was released and these users discovered it was less sycophantic than prior models—and how quickly OpenAI backtracked to ensure users could still get the fawning version they’d previously had access to. Please think about students using chatbots to “augment” their learning and thereby becoming conditioned to expect that their interactions with their educators, whether human or digital, should start from a foundation of obsequiousness. And please consider that even if such students are a minority of generative AI users, they may nonetheless shape the ways in which these tools are developed and marketed, in the same way that people with taboo sexual interests shape the ways in which pornography is developed and marketed.
This is why, even if we could completely solve the problem of AI hallucinations (which I doubt), and even if we could ensure AI models serving as “tutors” could properly develop a theory of mind as to what’s happening with their students when they misunderstand something (which I doubt even more), I would still maintain that we must resist the intrusion of AI into the process of educating our children. Just look around: democracy is teetering—how could we possibly want to amplify the influence of technology in society by pushing AI into the formative experience of schooling? The algorithms that Big Tech employs to fuel AI and social media are grotesquely warping the very ways in which we understand ourselves. This is by design.
I said we’d end on a mildly optimistic note, and it’s this: Humans are profoundly social creatures, and the forces of both our biological and cultural evolution are far more powerful than Big Tech’s maniacal obsession to erode and destroy our collective intelligence via their product offerings. At a very simple and fundamentally human level, parents see what is happening to their kids today, and they do not like it. Students themselves are rebelling. True, Mark Zuckerberg is betting against human relationships—but to that I say, good f’ing luck. Humanity is so much more powerful than this.
And the road to our redemption runs straight through our public schools.
And of course Sam Altman recently announced OpenAI’s plans to add erotic conversations to ChatGPT—for adults only, they promise—while Elon Musk is “gambling on sexy AI companions.” I don’t mean to sound prudish, but these guys are so gross.
For more on this, I recommend Democracy’s Schools: The Rise of Public Education in America, by Johann N. Neem.


