Thou shalt not falsify the AI bubble
Serenity now, serenity now
My goal with this newsletter is predominantly to keep readers informed of the latest scientific developments at the intersection of cognitive science and AI. That’s it, that’s the beat. But sometimes I dip into the business side of the AI industry more broadly—and this, I fear, may drive me insane. Or at least that’s how I felt reading Ben Thompson’s defense of the AI bubble in his Stratechery newsletter last week.
Before we get to that, a brief refresher on Karl Popper’s theory of falsifiability. For the unfamiliar, Popper argued that what makes a claim scientific is that it can be falsified in some manner, meaning, there exists the possibility of presenting some sort of evidence that can disprove the assertion being made about the world. Popper and his devotees were prone to overstating their case—in reality, we often fiddle with scientific theories when we find contradicting evidence, rather than chucking them out entirely—but I’ve nonetheless often found falsification to be a useful heuristic when evaluating an argument. If someone makes a claim about the way the world is, or soon will be, is there some way I can disprove this claim? If not, we’re treading into the realms of faith and dogma.
Ok, back to Ben Thompson. If you don’t subscribe to his (pricey!) newsletter, he’s one of the most prominent analysts of the technology sector who writes for a public audience. I’ve been reading him for the past decade, and truth be told, my aspiration within the field of cognitive science is to play a role akin to the one he plays with respect to the tech sector: a sort of barometer of where things are going. Which is to say, I respect Ben Thompson, and I rely on his work to help keep me abreast of the Silicon Valley zeitgeist.
But that zeitgeist, man, it’s getting pretty zeity and geisty.
Three weeks ago, I wrote about my ongoing sense that OpenAI, as a company, is mirroring many of the behaviors of Enron around the turn of the century. In passing, I noted that Thompson, “he who gushes about AI regularly,” had casually admitted that “we’ve crossed the line into bubble territory” (his words), which struck me as noteworthy. In the short time that’s passed since, the evidence for AI bubbliciousness has only continued to grow, as Michael Burry—the famed investor at the center of The Big Short who made a fortune when he correctly anticipated the housing bubble collapse circa 2008—recently revealed that he’s made major bets against Nvidia and Palantir. “Sometimes, we see bubbles,” Burry posted on social media, and “sometimes, there is something to do about it.” Money, meet his mouth.
Yet here’s where Ben Thompson dared to ask last week: But what if the AI bubble is good? I think his essay is not paywalled, but to summarize it here:
First, he believes that we are in an AI bubble that will pop, and thus we are headed for some serious economic pain. “What goes up must come down…with the end result being a recession and lots of bankrupt companies. And, not to spoil the story, that will almost certainly happen to the AI bubble as well.” Oh! Seems bad.
But not to worry, Thompson argues, because our glorious abundant future will eventually be delivered unto us nonetheless. This is so because, while bubbles may be harmful in the short term, they can lead to long-term benefits if they result in (a) the building out of physical infrastructure that later fosters innovation, and (b) the spurring of collective “cognitive capacity” around a transformative vision of the future. He cites Technological Revolutions and Financial Capital by Carlota Perez to support the first claim and Boom: Bubbles and the End of Stagnation by Byrne Hobart and Tobias Huber for the second.
Accordingly, the dotcom bubble of the late 1990s/early 2000s was ultimately a net positive for society because it “brought nearly the entire U.S. population online, thanks to that virtuous cycle that Hobart and Huber describe. This not only provided the market for the consumer Internet giants that followed, but also prepared an entire generation of future workers to work on the web, unlocking the SaaS enterprise market. Second, the intense competition of the dotcom era led to one of my favorite inventions of all time, both because of its impact and because of its provenance.” (The magical invention he’s referring to is XMLHttpRequest, a browser API that allows a web page to retrieve data from a URL without having to do a full-page refresh.)
The AI boom will thus also be beneficial long term, Thompson contends, because we are both building out “fabs, the places where chips are made,” as well as new power supply. “If AI does nothing more than spur the creation of massive amounts of new power generation it will have done tremendous good for humanity.”
For brevity’s sake, I’m not going to get into the environmental costs of AI, which Thompson ignores, nor the rising price of power that consumers face as AI companies hoover up more and more of it. (No small things to ignore, I grant.) Instead, let’s note that at no point does Thompson reflect on the question of whether bringing the entire US population online has been a net long-term benefit to our society. Remember that Donald Trump began his rise to political power through his use of Twitter to question whether Barack Obama was an American citizen—as Dave Chappelle aptly noted nearly a decade ago, we elected an Internet troll as president. In fact, around that same time Ben Thompson himself astutely observed that the disintermediating power of the Internet was fueling Trump’s takeover of the Republican Party. As we see agents of the Trump Administration in our streets today acting to disappear preschool teachers as young children scream (among a thousand other daily horrors), perhaps we might at least question whether it’s been worth the annihilation of our civil liberties to, uh, unlock the SaaS enterprise market?
But even if you come out on the other side of that and believe convenient online shopping is worth the price of tear gassing the citizenry, the problem remains that, per Popper, there’s simply no way to falsify Thompson’s pro-AI bubble thesis. If I may channel my inner Matt Levine: If AI hyperscalers promise “omniscient robots, coming soon!” and I say “well no I don’t think you’re close” and they say “but look at all the money we have” and I say “yes but it’s all being passed around in fishy fashion using off-balance sheet entities and complicated equity swaps” and they say “the robots will deliver abundance” and I say “but wealth inequality has only metastasized due to Big Tech’s economic dominance” and they say “the omniscient robots will fix everything” and I say “but what if they don’t?” and then Ben Thompson butts in, Kool-Aid pitcher style, and declares “well even if they don’t it doesn’t really matter because the bubble-fueled infrastructure and collective cognitive enthusiasm will also be good for us all”…
…After all that, do you see how there’s nothing I can do, there’s nothing any of us can do, to falsify this story? Either omniscient robots deliver utopia or we eventually get the modern-day equivalent of XMLHttpRequest or whatever due to all the chips and allegedly cheap power that’ll be lying around. Every AI investment is thus justified, every crazy promise is thus rendered credible, based on a vision of our glorious future. We’ve entered the realm of faith rather than reason.
Perhaps you think I’m overstating my case. Perhaps you think, as I sometimes think, that I’ve lost my mind. And so now I present to you the final two paragraphs of Thompson’s essay as proof positive that Silicon Valley is embracing AI mysticism while shunning scientific discourse and economic rationality:
What is fascinating about the AI bubble is that there is at its core a quasi-spiritual element. There are people working at these labs that believe they are building God; that is how they justify the massive investment in leading edge models that never have the chance to earn back their costs before they are superseded by someone else. That’s why they push for policies that I think are bad for innovation and bad for national security. I don’t like these side effects, to be clear, but I appreciate the importance of the belief and the motivation.
And, I must say, it certainly is fun and compelling in a way that tech was not a few years ago. Bubbles may end badly, but history does not end: there are benefits from bubbles that pay out for decades, and the best we can do now is pray that the mania results in infrastructure and innovation that make this bubble worth it.
Ah yes, the fun and compelling excitement of finding the One True Path to the Divine. This is the definition of a cult. Indeed, one might invoke the same “appreciation for the importance of the belief and motivation” to justify a wide range of unorthodox activities, so long as salvation is at hand. “To be sure, I don’t like the side effects of Aum Shinrikyo’s policies on public transportation, but I appreciate the importance of their pursuit of Supreme Truth.” And I’m sorry, the only thing we can do regarding the forthcoming bursting of the AI bubble is…pray? Are we so helpless in the face of our tech overlords that we must hope for the Almighty to save us, rather than, say, enacting some regulations and employing some critical thinking?
Back in July, I interviewed Adam Becker, author of More Everything Forever, about all this, and something he said about the “leaders” within Silicon Valley—Elon Musk, Peter Thiel, Jeff Bezos, etc—is worth revisiting here. “There actually is a sort of rejection of modernity and cult of tradition that is happening,” Becker said, “because what they’re doing is they’re advocating for essentially the end of modern science. They want a completely different way of evaluating empirical claims. They have rejected the scientific establishment, and rejected proper scientific evaluation of their claims.”
So this isn’t just about Ben Thompson, it’s about an entire intellectual ecosystem that seems completely divorced from reality. I mean, Mark Zuckerberg crowed just last week, “on a day-to-day basis we have conversations with biologists who think [it’s] wildly ambitious to try to prevent and cure all diseases. And then you talk to the AI people [and they ask] ‘Why are you so unambitious?’” Gee I dunno, Mark, maybe because biologists are rational human beings who have some basic familiarity with the scientific method? Of course, even Mark’s AI people lack real ambition, because who cares about curing all diseases when AI can save us from death itself?
Serenity now, serenity now, I find myself chanting—but we know how that ended for Lloyd Braun, and perhaps it’s my fate as well. The irony is that it is leaders of faith who seem the most rational to me in this moment of mass AI delusion. Over to the Chicago White Sox-loving Pope to close things out for us:
Catholicism. I’m looking into it.