The government doesn't know that AGI is coming
And neither do Ezra Klein, AI insiders, or anyone else
Goddammit, Ezra Klein.
Yesterday he dropped a podcast with former Biden AI official Ben Buchanan wherein he (Klein) announces that AGI—artificial general intelligence, a technology that’s as smart as humans—is gonna happen, and soon. How does Klein know? Because AI insiders—that is, people who work at the Big Tech companies that make these products—have told him so. As have government officials, or at least, Ben Buchanan. And thus Klein prognosticates:
“We are on the cusp of an era in human history that is unlike any of the eras we have experienced before. And we’re not prepared in part because it’s not clear what it would mean to prepare.” What’s more, while American democracy may be burning in flames, Klein thinks that “there’s a good chance that, when we look back on this era in human history, AI will have been the thing that matters.”
Wow! As they say, big if true. Amazing to think of the progress these companies have made so quickly. After all, it seems like it was just last week that OpenAI, the undisputed world leader in generative AI, released its latest flagship model (GPT-4.5) with a slew of caveats, such as “this is not a frontier model,” and also, “performance is below that of o1, o3-mini, and deep research on most preparedness evaluations”—that is to say, models that came out in 2024. To be sure, OpenAI then quietly scrubbed this concession from its press release—and has yet to offer any explanation as to why—but hey, if AI insiders are whispering to Klein that digital sentience is nearly upon us, surely we should trust them, right? Right?
Klein is smart, but his suspension of even a modicum of critical thinking here is indicative of the general stupidity of our times. The good team over at AI Snake Oil have been doing heroic work trying to fix this, and way back in (checks notes) December 2024 they wrote a lengthy explanation of why we absolutely should NOT trust AI insiders when it comes to these predictions. Here, Ezra, let me summarize their points for you (and without the use of AI):
“Industry leaders don’t have a good track record of predicting AI developments. A good example is the overoptimism about self-driving cars for most of the last decade.”
While it’s true AI insiders have proprietary information access, “given how many AI companies are close to the state of the art, including some that openly release model weights and share scientific insights, datasets, and other artifacts, we’re talking about an advantage of at most a few months, which is minor in the context of, say, 3-year forecasts.”
Similarly, while AI insiders have technical expertise, “there is just as much AI expertise in academia as in industry,” and “expertise isn’t that important to support the kind of crude trend extrapolation that goes into AI forecasts,” oh and also “overemphasizing the technical dimensions tends to result in overconfident predictions.” (We’ll come back to this point.)
Finally and most importantly, we shouldn’t trust AI insiders for the very obvious reason that they benefit from hyping their own products—and just in the last year we’ve seen them totally change their story on how AGI will be achieved. Put simply, the “industry’s sudden about-face has been so brazen that it should leave no doubt that insiders don’t have any kind of crystal ball and are making similar guesses as everyone else, and are further biased by being in a bubble and readily consuming the hype they sell to the world.”
Delusional, overconfident predictions about AI have been occurring with regularity since the dawn of AI itself. Here’s Herbert Simon, one of the godfathers of cognitive science, in 1965: "Machines will be capable, within twenty years, of doing any work a man can do." That’s a hard nope. Or how about Marvin Minsky, the person perhaps most responsible for developing AI as a research field, in 1970: “In three to eight years we will have a machine with the general intelligence of an average human being." Wrong again.1 There’s a history here! It’s readily available, Ezra! You could even prompt Deep Research, your new favorite tech toy, to summarize it for you!
But wait, there’s more.
Although Klein appears to be partially relying on whispers from AI insiders, the headline of Klein’s podcast is that the government knows AGI is coming. By the government Klein seems to mean, well, Ben Buchanan, his podcast guest and the former AI special advisor to the Biden Administration. Their conversation is wide-ranging and, to his credit, Buchanan occasionally tries to walk Klein back from some of his (Klein’s) frothier hysteria regarding what AI portends for the future.
At the same time, however, vitally important context around the Biden Administration’s approach to AI—and the huge mistaken prediction it made about the future—is completely glossed over. I’ll summarize this very briefly. The big AI-related policy move of the Biden Administration was a massive Executive Order to regulate large models based on their size, paired with export controls, justified on national security grounds, on selling Nvidia chips to China.2 Whether you agree with that policy play or not, what is indisputable at this point is that it completely failed. And the reason we know it completely failed is that a Chinese company called DeepSeek has managed to produce generative AI models that are highly competitive with American-made ones, and seemingly at a fraction of the cost of the American frontier models.
In response to this, Buchanan concedes that the Biden Administration whiffed on this and that a fundamental reimagining of AI policy is needed. Just kidding, he doesn’t do that at all. Instead, he doubles down: “The key thing here is when you look at what DeepSeek has done, I don’t think the media hype around it was warranted, and I don’t think it changes the fundamental analysis of what we are doing. They still are constrained by computing power. We should tighten the screws and continue to constrain them.”
To his limited credit, Klein pushes Buchanan a bit here, but unfortunately the conversation turns into a debate over Nvidia’s stock price. What Klein does not do at any point is reflect on the fact that if the Biden Administration made a big policy bet on trying to stop the Chinese AI industry from catching up to the US AI industry, and yet utterly failed to achieve this goal, then perhaps Buchanan’s predictions about the future should, you know, not be credited—much less be translated as the government knows AGI is coming?
I’m still not done. Again, I’ve never met Buchanan, and I’d like to assume that someone who served as Biden’s special advisor on AI is generally intelligent himself. But I’m sorry, when he says that he’s “someone who reads an essay like ‘Machines of Loving Grace,’ by Dario Amodei, the chief executive of Anthropic, that is basically about the upside of AI, and says: There’s a lot in here we can agree with,” I am obligated to remind you that this essay is a flaming hot pile of unhinged techno-fascist bullshit that is as embarrassing as it is wrong. I’m curious if Buchanan agrees with Amodei that AI may free us from death itself?
If I may channel Ed Zitron for a moment, and with apologies for the obscene gerunds—what the actual fuck is going on here? Ezra Klein is an enormously influential journalist and thinker, someone I generally enjoy and one who typically does his homework. So what the fuck happened here? As we see American democracy getting smashed into pieces by a racist, fascist technology baron who predicted—wrongly—that AI should have already surpassed humans in intelligence, is it too much to ask that Klein—who has a direct line into the leadership of the Democratic Party—not serve as the mouthpiece for AGI bullshit?
Klein might be right that, when we look back on this era, AI will be the thing that matters. But here’s my prediction: In a few years, nothing remotely resembling “AGI” will have arrived. Instead, continuing the trend we already see unfolding today, AI will be used as a tool of autocracy, as a method of replacing human labor, and as a means of misshaping our shared sense of reality. And people of good conscience, including Ezra Klein and Ethan Mollick (another AI influencer who eagerly passes along what AI insiders tell him), will feel embarrassed about the predictions they made during this time.
Embarrassed, and maybe even ashamed.
Tim Lee wrote a helpful summary if you want to go deeper on the Executive Order.
The various tech companies are in for major restructuring. First, because they aren't profitable or even close to it. Second, because they are horribly inefficient (as DeepSeek showed us). Third, because they are losing industry partners, especially Microsoft.
The first bailout attempt was Stargate. The mix of public and private funds has not materialized so far and it seems like the whole thing is being forgotten. Maybe the administration is doing crypto things now instead? Meanwhile the economy is teetering on the brink of recession with inflation poised to eat into VC bottom lines once again (we all remember how tech startups suddenly had to make money in 2022 because interest rates weren't 2% anymore).
My conclusion is that they're looking for new money, maybe even buyers. Either funding from less tech-literate firms or outright trying to sell themselves. Let's think about Ezra's audience. Riley mentions that he's got a direct line to Democratic leadership, but that's not super important in this case. They are out of power until at least the next election, and the AI companies are running out of money now. Same goes for the kinda policy-wonk types who work in government agencies or as contractors. Well, they used to, anyway. I doubt there are many in the Trump admin who pay Klein much attention. My gut feeling is that the rest of his audience probably includes enough finance and tech people to make it worth promoting AI on his show. They need good press. They need to continue the narrative that their version of an AI company is the future. Most of all, they need someone to help pay to keep the lights on. You'll be seeing their surrogates make more appearances in mainstream media as the situation becomes more dire.