Another analogy that acknowledges the utility but highlights the risks of pushing it everywhere for commercial gain before its effects are fully understood. This one was used in an exchange over the proposed adoption of AI in a school district north of Boston. I'm not sure if its use originated there. I have since come across its use elsewhere.
"As one Massachusetts school administrator recently said: this moment with AI is remarkably like the moment when we were introduced to asbestos. Yes, it had some remarkably promising characteristics – fireproofing! – and had some real utility in science, research, and industrial applications. But a profit-driven industry bullied us into inserting it everywhere: into our homes and schools and public spaces, before we really understood the risks. This resulted in decades, if not centuries, of illness, injuries, deaths, and the astronomical financial burden of trying to remove the stuff."
Stop AI in Malden Schools. Open Letter. October 21, 2025. https://openletter.earth/stop-ai-in-malden-schools-d7de618d
Lorna Garey. Digital future or risk to critical thinking skills? 5 takeaways as Malden crafts AI strategy for its schools. Neighborhood View. November 13, 2025. https://neighborhoodview.org/2025/11/13/digital-future-or-risk-to-critical-thinking-skills-5-takeaways-as-malden-drafts-ai-strategy-for-schools/
Wonderful comment as per usual Alan. I think the asbestos metaphor is a really good one. There are many -- check out that AI metaphor observatory I linked to, it's great.
Great essay! Loved this. Helpful technical discussion and some fine-grained distinctions/clarifications, all well justified.
Thank you Mark! I will continue to defend the intellectual beauty of the stochastic parrots metaphor, even after two of its creators attacked me because they believed my defense to be flawed.
Yeah, keep at it, I think you have super balanced takes that are well presented. Do you know what’s going on with those two? I wasn’t even aware of them until I listened to the Robert Wright podcast where he had them on recently. I found most of their technical discussion good, but Bender’s tone and some of her responses seemed needlessly antagonistic, and they seemed to push back very strongly on the idea that AI is useful. I’ve still reserved their book just to ensure I’m reading widely.
First of all thanks, that means a lot.
Second, oh man, even to respond to your question feels dangerous. I haven't listened to the Wright podcast, but several months ago I did listen to The Big Technology podcast that Bender appeared on, along with the co-author of her book, Alex Hanna -- who, I want to say, seems really wonderful, as I've had thoughtful conversations across a range of topics with them on BlueSky.
In the Big Tech podcast, I remember the host Alex Kantrowitz describing to them how he'd recently used generative AI to plan a trip to a foreign country, and how he'd found it incredibly useful to that end. My recollection is that Bender responded by describing what she thought Alex K. *should* have done instead, which was -- again I'm going off memory -- consult locally produced newspapers and magazines, the equivalent of the local Village Voice (RIP). Not only did that sound incredibly cumbersome to me, I remember wondering, what happens when you're visiting someplace where they speak a different language? What then?
The argument that AI can never be useful appears intellectually incoherent and rhetorically disastrous to me. If these tools weren't useful, we wouldn't have to worry, because no one would bother to use them. Instead, and forgive me for reiterating my final point in the above essay, the tools are dangerous *precisely because* of their utility. What's more, if you try to argue that they're *never* useful to someone who's used them and feels the very opposite way, you've destroyed your credibility before the dialogue has even begun. There is no path to "meeting in the light of mutual understanding." (Malcolm X)
"Needlessly antagonistic" sums up much of what I see within the AI Skepticism movement, unfortunately -- and sorry not sorry for calling it that. Social activist Todd Gitlin once said, back in 1995, “while the Right has been busy taking the White House, the Left has been marching on the English department.” Substitute "BlueSky" for the English department and I think that's a pretty astute insight into why we're getting our ass kicked in the broader political arena (with we = people resisting AI intrusion across all walks of life). I plan to write a longer essay about this soon. Your comment might get worked in, unless you object.
I hope you continue to read and comment on future essays -- you seem to get what I've been trying to do. Thanks again.
Agreed with all of this!
That is a very helpful article. I do have a question of how self-critique in a model fits into your explanation (e.g. Claude's 'Constitutional AI' approach).
Thanks! Short version of what probably needs to be an essay: the "constitutional approach" is mostly just adding some universal text to the prompts we enter into the models. It's more complicated than that, but I -- controversially -- think Anthropic is the worst of the worst insofar as they are trying to project the illusion of being a responsible actor as cover for their weird cultish organizational culture.
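To make the "universal text plus a feedback loop" idea concrete, here is a toy sketch of what a constitution-style self-critique pass looks like in the abstract: draft an answer, critique it against each written principle, revise. Everything here is hypothetical -- `fake_model`, the principles, and the prompt wording are stand-ins I made up for illustration, not Anthropic's actual pipeline or API.

```python
# Toy sketch of a "constitutional" self-critique loop.
# `fake_model` is a hypothetical stand-in for an LLM call;
# a real system would query a model at each step.

CONSTITUTION = [
    "Choose the response that is least likely to be harmful.",
    "Choose the response that is most honest about uncertainty.",
]

def fake_model(prompt: str) -> str:
    # Canned behavior so the sketch runs without any model:
    # critique prompts get a critique, revise prompts get a revision.
    if "Critique" in prompt:
        return "The draft overstates certainty."
    if "Revise" in prompt:
        return "A hedged, revised answer."
    return "A confident first draft."

def constitutional_revision(question: str) -> str:
    # Step 1: produce an initial draft answer.
    draft = fake_model(question)
    # Step 2: for each principle, critique the draft, then revise it.
    for principle in CONSTITUTION:
        critique = fake_model(
            f"Critique this answer against the principle: {principle}\n"
            f"Answer: {draft}"
        )
        draft = fake_model(
            f"Revise the answer to address the critique: {critique}\n"
            f"Answer: {draft}"
        )
    return draft
```

The point of the sketch is only that the "constitution" enters as plain text in the prompts: the loop structure adds a feedback step, but the principles themselves are just strings prepended to each critique and revision request.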
OK. I had thought/hoped there was more of a feedback loop, but I can't cite any sources and it might well have been some random comment on a podcast.
However, I'll push back a bit on Anthropic's hypocrisy. This is more sociology than AI, but hypocrisy can serve a useful purpose. Paying lip service to a value can influence public discourse even when actual behavior does not back it up.
https://www.youtube.com/watch?v=O87Q3igXYZY
To me, you've laid out the case for why *ethics* is necessary. And no, I don't mean the narrow sort of "AI ethics" that most people want to talk about. I'm talking about what philosophers of technology have been interested in for quite some time. What does it mean for a technology to promote *the good*? Naturally, that question is different from whether it is "effective" or "useful" in a strictly technical sense - as you make very clear. We don't have to deny that AI is technically useful to say that it is not a good thing for society, for certain communities, for ourselves, for the planet, etc.
That is very eloquently stated. Unfortunately, I think "AI ethics" as a term has been co-opted, as you note. This is another reason why I embrace the imperfect term "AI skepticism," because it captures inquiry and doubt. But I'm open to other descriptions!
I was living in a truck in the woods in the mid-1970s, thinking about natural language processing (NLP). I had a library card, and read about early attempts by the U.S. Navy to use cybernetic approaches to inventory control. My interest was grammar and semantics. I read grammars, studied English syntax, and became focused on idiomatic expression as the nexus between symbol and semantic implication.

My first computer was a SCELBI 8H, with an Intel 8008 and 1k of RAM. My next was an IMLAC PDS-1, with 8k of 16-bit magnetic core (non-volatile) RAM. Next a Commodore 64 with a tape drive, then with a disk drive. I figured if only I had two disk drives, then I could put a dictionary on one, and compress text on the other. On and on. I went through the IBM XT format as a Compaq "luggable". Actually lugged it to Nicaragua and back. In Nicaragua the Technica team was doing inventory for the Ministry of Health (MINSA). I was doing dBase III (something like that), then Clipper. But I had nothing they didn't already have.

I could go on until everyone here was asleep. My point being that, when the LLM came around, I was ready for it. I appreciated it for what it was, and did not expect it to be what it was not. I would like to carve out space in the political argument that allows for the opinions of people who are not naive to what LLMs are, and who find them to be interesting in their own right, and useful for investigating and elucidating how language is used. https://johntinker.substack.com/p/misunderstanding-as-a-commutator
Terrific comment John, and I wasn't falling asleep. Man that Commodore 64 was something, as was the follow-on Amiga. I think my dad thought of it almost as family. I completely agree with your end point. I'll look into your Substack. Many of the newsletters I recommend here are aligned with the intellectually curious perspective -- top of mind, Melanie Mitchell and Sean Trott are two you might like, if you haven't found your way to them already.
Thanks for the link to the AI metaphor observatory. The entire AI project seems to be about blurring meaning, and the first casualties are the critical meanings of intelligence and creativity. Then we have opinions on Substack like this from Noah Smith, ‘You Are No Longer the Smartest Type of Thing on Earth’, about the ascendant superintelligence of AI. How much further can we degrade the value of humanity?
Thanks Joe. Sadly I fear we’re far from the bottom. Gotta fight back! I know you are doing your part.
A really good article. Thank you.
Thank you Marcie, that's much appreciated. I really had fun writing it. I had less fun when some of the co-authors of the paper I was praising decided to attack me, as discussed in the "correction." Bygones!