Last week Casey Newton, who writes about technology for Platformer and co-hosts the “Hard Fork” technology podcast, wrote about the emerging movement of AI Skepticism. Newton’s take: AI Skepticism has two main “camps,” the “AI is real and dangerous” camp versus the “AI is fake and it sucks” camp. The first group, which Newton puts himself in, believes that AI is going to transform life as we know it, bestow great benefits as well as harms, and may soon lead to artificial general intelligence. In contrast, Newton contends, the fake-and-sucks group is so fixated on the current limitations of AI that they fail to appreciate the advances of this technology and its many benefits, benefits he believes will only continue to grow.
Oh, and Newton blames Gary Marcus for inspiring these nattering nabobs of AI negativity.
The basic problem with this essay is that nearly everything about it is, well, wrong. David Karpf skewers it here, Edward Ongweso Jr. whacks it here, and Gary Marcus fires back here. My take: Newton is wrong both on the Who of AI Skepticism, in terms of who is participating in this nascent intellectual movement, and the What, in terms of what people within this movement actually believe. I appreciate that Newton covers technology broadly, not just AI, and thus is unfamiliar with the nuances of AI Skepticism as a movement. But in this instance he didn’t just oversimplify things; he failed to grasp its core essence.
I’m now going to do something that makes me nervous. As someone who’s spent the last two years of my life thinking critically about AI and connecting with other AI Skeptics, I’m going to describe the Who and What of AI Skepticism as I perceive it. Please, please, please note the italics in that previous sentence—I am not trying to create the definitive guide to AI Skepticism generally. But because I consider myself a member of this movement, and because I want to foster AI skepticism more broadly, I want to offer something more detailed than Casey Newton’s drive-by description. This is not meant to be comprehensive, and many of the people here play multiple roles within this movement, including Marcus himself (read to the end for more on that point).
With that said, here goes.
The Who and the What of AI Skepticism
Scientific Skeptics – Cognitive Science Wing
What this group generally believes: That generative AI based on Large Language Models is limited in its capabilities compared to human cognition. LLMs are adept at pattern matching but relatively limited at higher-order cognitive capabilities such as reasoning, forming abstractions, and generalizing to solve novel problems.
One layer of nuance deeper: Most of the scientists in this group think that “artificial general intelligence” will not be achieved simply by feeding LLMs more data or increasing their computing power. But many are relatively bullish on improving AI capabilities by other means.
Who I’d place in this “camp”:
Melanie Mitchell, Santa Fe Institute
Subbarao Kambhampati, Arizona State University
François Chollet, until recently with Google
Murray Shanahan, Imperial College London/Google DeepMind
Iris van Rooij, Radboud University
Do they believe AI is fake or that it sucks?: No. These scientists are deeply curious about AI, and many of them even believe “AGI” is achievable—but it likely won’t happen simply by scaling up LLMs.1
Scientific Skeptics – Neuroscience & Linguistics Wing
What this group generally believes: Large language models have predictable limitations arising from the processes they employ to model human languages. Further, while LLMs are very adept at replicating the surface features of language, this doesn’t mean they “understand” language.
One layer of nuance deeper: The members of this wing look forward to one day dancing upon the grave of Chomsky’s theory of “universal grammar.”
Who I’d place in this “camp”:
Tom McCoy, Yale
Ev Fedorenko, MIT
Kyle Mahowald, University of Texas
Do they believe AI is fake or that it sucks?: No. These folks are very interested in how LLMs do what they do, and they use them in their research to explore human and artificial uses of language.
Skeptics of AI Art and Literature
What this group generally believes: There is something essential to being human, to being alive, that we express through art and writing. Generative AI may mimic what we humans produce, but we should be extraordinarily wary about its impact on this aspect of human culture.
One layer of nuance deeper: Gah, how much time do you have? There are a million deeper layers here to unpack.
Who I’d place in this “camp”:
Eryk Salvaggio, Wearer of Many Hats
Jane Rosenzweig, Harvard Writing Center
Angie Wang, The New Yorker
Ted Chiang, Author
Do they believe AI is fake or it sucks?: No. To be sure, these skeptics are concerned about fake output produced by AI being perceived as real and the many ways in which this may degrade art and writing in general, but that’s a different sort of “fake” from what Newton meant.
AI in Education Skeptics
What this group generally believes: We should be wary of yet another ed-tech phenomenon that overhypes and underdelivers on its promises. AI is particularly dangerous to education as a tool of cognitive automation.
One layer of nuance deeper: I can give you about 30 layers of nuance if you peruse this Substack. These are my peeps.
Who I’d place in this “camp”:
I mean, this is why this post terrifies me, I know so many of you, and I can’t possibly list you all here, I’m so sorry (though I’m glad there’s so many of you). That said, and with pleas for forgiveness, here’s a sample:
Chanea Bond (English)
Fonz Mendoza (ed tech podcaster)
Audrey Watters (“Ed tech’s Cassandra”)
Dan Meyer (Math + ed tech)
Jed Williams (computer science)
Tom Mullaney (social studies/AI in general)
Do they believe AI is fake or it sucks?: No. If it were fake, we wouldn’t have to worry about the temptation students face to use this tool to displace the effortful thinking that school is designed to foster. Although it will suck if we plow ahead into such a future without thinking about the consequences.
Sociocultural Scholarly Skeptics – “the DAIR wing”
What this group generally believes: DAIR is a nonprofit organization founded by Timnit Gebru that believes that “AI is not inevitable, its harms are preventable, and when its production and deployment include diverse perspectives and deliberate processes it can be beneficial. Our research reflects our lived experiences and centers our communities.”
One layer of nuance deeper: There is a long history of technology being used as a tool of oppression, often in conjunction with capitalism and white supremacy. AI is the latest manifestation of this sordid history.
Who I’d place in this “camp”:
Timnit Gebru, founder of DAIR
Emily Bender, DAIR/University of Washington
Abeba Birhane, Trinity College
Do they believe AI is fake or it sucks?: No, they absolutely do not believe that AI is fake; they see its power—and the power of those building and promoting it—as significant and dangerous. They do believe it will “suck” for society if we don’t resist the efforts of Big Tech companies (and others) who are working to advance their private self-interests rather than broader social benefits. (I’ll underscore again that I’m painting with broad brushstrokes here—Gebru, Bender, and Birhane are scientists at heart who have become prominent political commentators. There’s a lot of cross-pollination across these wings.)
Sociocultural Commentator Critics – “the neo-Luddite wing”
What this group generally believes: Silicon Valley sucks. The business model for generative AI sucks. The impact of technology on human labor sucks. The Luddites were on to something.
One layer of nuance deeper: This is not the wing for nuance.
Who I’d place in this “camp”:
Ed Zitron, Journalist
Amy Castor & David Gerard, Pivot to AI
Brian Merchant, Journalist (and author of a book about Luddites)
Do they believe AI is fake or it sucks?: Kind of. Of all the wings of AI skepticism, this one comes closest to matching Newton’s it’s-fake-and-it-sucks camp. These folks do think AI sucks, and they think AI may soon join NFTs and the metaverse on the scrap heap of tech history. But I think the basic mistake Newton made with his piece was conflating this branch of AI Skepticism with the movement more broadly—indeed, when I pressed him on Bluesky to identify AI skeptics besides Gary Marcus, Zitron and Pivot to AI were the only names he offered. I think that’s telling.
Skeptics of AI Doom – “the Princeton School”
What this group generally believes: AI is a powerful and potentially beneficial technology, but also one that can cause harm, particularly when used for predictive purposes (e.g., who should get healthcare, or who’s likely to become a criminal). But it’s not so powerful that it’s going to kill us all. So instead of worrying about AI causing social collapse, we should gather evidence and pass regulations to shape how we use it, like we do with other technologies.
One layer of nuance deeper: This is the moderate wing of AI Skepticism; folks here can be quite optimistic about particular use cases of AI while also warning that there’s a lot of AI-related snake oil being peddled.
Who I’d place in this camp:
Arvind Narayanan & Sayash Kapoor, Princeton (co-authors of AI Snake Oil)
Timothy Lee, Understanding AI (and coiner of “the Princeton school” phrase)
Do they believe AI is fake or it sucks?: No. They like AI! They just think people need to chill out with the doomerism.
Technical AI Skeptics
What this group generally believes: The technical capabilities of AI are worth trying to understand, including their limitations. Also, it’s fun to find their deficiencies and highlight their weird output.
One layer of nuance deeper: Some of those I identify below might resist being called AI Skeptics because they are focused mainly on helping people understand how these tools work. But in my view, their efforts are helpful in fostering AI skepticism precisely because they help to demystify what’s happening “under the hood” without invoking broader political concerns (generally).
Who I’d put in this “camp”:
Sean Trott, UC San Diego
Chomba Bupe, AI entrepreneur
Colin Fraser, Meta (I think?)
Simon Willison, Programmer (I think?)
Do they believe AI is fake or it sucks?: No. Although the range of opinions here is wide, the people in this group would not spend time talking about the technical aspects of generative AI if it were “fake.” They spend time helping educate people on how these tools work.
Gary Marcus Wing
Breaking out of my template here because Gary Marcus is a real person, not a “wing.” He’s also someone I consider a friend and intellectual compatriot—I reached out to him years ago to help me understand what was then called “deep learning,” and we’ve stayed friendly since.
What makes Marcus a complex figure within AI Skepticism is that he fits within multiple wings of the movement—his critiques of AI span from scientific to economic to sociocultural. He’s definitely part of the cognitive science wing (he’s written books about this); he could easily be affiliated with the anti-doom wing (he’s explicitly played down that particular risk, while underscoring the real and present harms of AI); and you could even say he’s part of the AI Art and Literature wing (e.g., he’s done work to highlight how AI tools misappropriate intellectual property). In contrast, he’s definitely not a neo-Luddite—if you listen to his excellent podcast series Humans Versus Machines you’ll hear him describe his genuine love for AI and for what it might become. Emphasis on might.
Like I said—as a figure in this movement, Marcus is complex! Given his prominence, I have no problem with Casey Newton offering critiques of Marcus’s positions—that goes with the territory of being a public intellectual, after all. But Newton failed to capture the nuance of Marcus’s beliefs and then, based on that misunderstanding, imputed Marcus’s views to a movement that is far more nuanced than Newton ever acknowledges.
Other AI Skepticism wings that I am less familiar with and/or might quadruple the size of this already hefty taxonomy
There are many, many other AI Skeptical concerns that are not included here because drafting this has already exhausted me. These include concerns related to Economics, the Environment, Ethics, Intellectual Property, Privacy, Military, Law enforcement, Transportation, and Medicine. To name only a few.
Final caveat
I say once more: There are so many involved in the AI Skepticism movement beyond what I’ve shared here. Frankly, this list is incomplete even as to influences on my thinking, much less the movement more broadly. So please forgive me if I failed to include you here, because I deeply value the contributions that everyone is making to this effort.
Conclusion
I’m an AI Skeptic. Here’s what I believe:
AI isn’t fake.
Sometimes it sucks. Sometimes it doesn’t.
Right now, the power of AI is heavily tilted in favor of those who claim it doesn’t suck. So, within the domains where my ideas are relevant, such as education, I’ll be trying to balance out that power.
As far as describing the AI Skepticism movement goes, Newton’s essay definitely sucked.
That’s because the AI Skepticism movement is diverse, composed of far more than just two crudely sketched-out camps. Instead:
Our movement is very real, and it’s spectacular.
1. Yann LeCun and Yoshua Bengio are two scientists, Turing Award winners, and AI pioneers who I see as adjacent to this group. Both have acknowledged that LLMs are limited in their current capabilities (with LeCun going so far as to say that they “suck”). They also think AGI may be achieved if we have scientific breakthroughs on other AI frontiers.