16 Comments
Rob Nelson

The timing of Klein taking a pro-AGI turn is so weird. Same with Kevin Roose's over-the-top essay last week. Given how generally bad things are going in AGI land, it's like they decided to help keep the wheels spinning.

Tommy C

I felt like a crazy person when I saw the clip of Amodei at Davos claiming that the human lifespan will be doubled by 2030 and everyone sat there in awe instead of throwing tomatoes at that insane man who said that ridiculous thing. These guys can say anything because it's the Terminator technology. I'd never be able to get away with claiming I've invented a flying car that doesn't require any sort of fuel and can go supersonic, because people know what cars are. But when you're an AI CEO, you can say you'll be able to bring your dead dog back to life by June of this year and the media just runs with it.

Benjamin Riley

Honestly, I think the Amodei essay is what tipped me into the you've-got-to-be-f'ing-kidding-me camp. I may start an AI company to revive dead dogs myself.

Tommy C

Seems like an easy job considering tech is the only science industry where you can just say stuff and never have to show your work

James

The various tech companies are in for major restructuring. First, because they aren't profitable or even close to it. Second, because they are horribly inefficient (as DeepSeek showed us). Third, because they are losing industry partners, especially Microsoft.

The first bailout attempt was Stargate. The mix of public and private funds has not materialized so far and it seems like the whole thing is being forgotten. Maybe the administration is doing crypto things now instead? Meanwhile the economy is teetering on the brink of recession with inflation poised to eat into VC bottom lines once again (we all remember how tech startups suddenly had to make money in 2022 because interest rates weren't 2% anymore).

My conclusion is that they're looking for new money, maybe even buyers: either funding from less tech-literate firms or outright trying to sell themselves. Let's think about Ezra's audience. Riley mentions that Klein has a direct line to Democratic leadership, but that's not super important in this case. The Democrats are out of power until at least the next election, and the AI companies are running out of money now. Same goes for the kinda policy-wonk types who work in government agencies or as contractors. Well, they used to, anyways. I doubt there are many in the Trump admin who pay Klein much attention. My gut feeling is that the rest of his audience includes enough finance and tech people to make it worth promoting AI on his show. They need good press. They need to continue the narrative that their version of an AI company is the future. Most of all, they need someone to help pay to keep the lights on. You'll be seeing their surrogates make more appearances in mainstream media as the situation becomes more dire.

Benjamin Riley

This prediction is...better than Ezra's. (I'll show myself out.)

Matthew Byrne

Great post.👍 Minor typo in Ed Zitron's surname.

Benjamin Riley

Thank you, and thank you! Fixed.

Herbert Roitblat

No, you are correct. We might someday achieve AGI, but not on the road we are on now. The current models will not and cannot achieve it. The short explanation is that current models suffer from anthropogenic debt. The intelligence comes from humans, not the machines. https://arxiv.org/abs/2502.07828

For example,

Human contributions to GenAI problem solving:

• Training data

• Number of neural network layers

• Types of layers

• Connection patterns

• Activation functions

• Training regimen for each layer

• Number of attention heads

• Parameter optimization method

• Context size

• Representations of words as tokens and vectors

• Training task

• Selection of problems to solve

• Training progress measures and criteria

• Human feedback for reinforcement learning

• Rules for modifying parameters as a result of human feedback

• Prompt

• Temperature and other meta-parameters

Machine contribution to GenAI problem solving:

• Parameter adjustments through gradient descent

Also, language models have just about run out of room for improvement. Satya Nadella, for example, has argued that intelligence grows with the log of compute. Let's assume that he is right: that would mean that every time we want to increase intelligence by one unit (whatever that means), we have to multiply the available compute by some constant factor. The technical report for GPT-4 reported that decreasing error by one bit required an increase in compute of about 10,000 times. Whatever compute resources GPT-3.5 needed, GPT-4 needed 10,000 times as much to cut error by one bit. By their estimate, derived from their flattening curve, the next one-bit reduction will require 17 orders of magnitude more compute. That is simply unachievable. Even another 10,000x increase in computing would be challenging.

NVIDIA produced under 4 million data center GPUs in 2023, and chip production is growing linearly, not exponentially. Even if all of those chips were used for GenAI, the industry would need to produce around 40 billion units to deliver another 10,000x step, by the most conservative estimate. Those chips are not likely to be available for the foreseeable future.
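To make the arithmetic concrete, here is a back-of-the-envelope sketch (in Python) using the rough figures above taken at face value: roughly 10,000x compute per one-bit error reduction, about 4 million data-center GPUs produced in 2023, and the claimed 17 orders of magnitude for the next step. It is an illustration of the numbers, not a rigorous model.

```python
# Back-of-the-envelope sketch of the scaling figures cited above.
compute_factor_per_bit = 10_000      # claimed compute multiplier per bit of error reduction
gpus_2023 = 4_000_000                # claimed NVIDIA data-center GPU output in 2023

# If intelligence grows with log(compute), each fixed gain costs a constant
# multiplicative jump in compute, so another 10,000x step implies:
gpus_for_next_step = gpus_2023 * compute_factor_per_bit
print(f"GPUs for another 10,000x step: {gpus_for_next_step:,}")   # 40,000,000,000

# The flattening-curve estimate of 17 orders of magnitude is far larger still:
print(f"17 orders of magnitude over 2023 output: {gpus_2023 * 10**17:.3e} GPUs")
```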

All of this assumes that the measures being used to estimate intelligence actually measure something related to AGI. They do not. Performance on benchmarks does not tell us anything about whether models are getting closer to AGI or are simply "benchmark hacking" (a term used by Nadella).

Conclusion: If we are to achieve AGI, it will require capabilities that are not only unavailable but, at this point, mostly unimagined. For some direction on this point see: https://thereader.mitpress.mit.edu/ai-insight-problems-quirks-human-intelligence/

Benjamin Riley

Lots to chew on here! I read your piece for MIT, good stuff, thanks for sharing.

Roman's Attic

I know this is kinda an old comment, but I feel like it’s wrong in a lot of ways. People who talk about AI scaling don’t usually think increasing compute is the only way to do it. The amount of training data is also incredibly important, as well as the type and amount of post-training and various improvements to the algorithms. When people talk about AGI coming soon, a lot of the assumptions rely on AIs being able to assist in the coding process more and more, essentially making each programmer several orders of magnitude more productive and greatly refining the algorithms.

You have this whole list saying “intelligence comes from humans, not the machines,” and, like, yeah, humans DID in fact build the AI machines and their training data, but that doesn’t mean they’re not capable. Yes, humans created almost everything in that list, in the same way that humans created Stockfish, but it would be wrong to say that Stockfish’s capabilities are simply humans’ capabilities.

I’m honestly just really confused as to where your whole argument is coming from.

Benjamin Riley

Always enjoy someone digging into my past essays. A couple of quick responses:

1. "People who talk about AI scaling don’t usually just think increasing compute is the only way to do it." With respect, this is simply false -- "scale is all you need" was a major mantra of AI enthusiasts as recently as one year ago, with scale simply referring to data and compute. They have now shifted their arguments, but I refuse to let this claim be memory-holed. More here: https://buildcognitiveresonance.substack.com/p/what-if-todays-llms-are-as-good-as

2. I'm not sure what list you are referring to from this particular post -- I think you may be referencing my recent Stone Soup essay? In any event, I am not arguing that LLMs are incapable, though I admit I'm less impressed by their capabilities than many. What I've been arguing for some time is that whatever capabilities they possess, it is very different from human cognition, which I think is far more capable across a huge array of real-world tasks.

Thanks for the comment. I hope this clarifies things a bit!

Roman's Attic

I’m replying to Herbert Roitblat’s list here, btw. I agree, scaling refers to increased data and compute, but Roitblat’s example here only talks about how increasing compute is difficult, without talking about data.

Roman's Attic

To be honest, your whole argument kinda just sounds like you’ve been consulting the rock that has “THERE WILL BE NO MAJOR CHANGE IN THE WORLD” written on it (https://open.substack.com/pub/astralcodexten/p/heuristics-that-almost-always-work?r=5fcaw0&utm_medium=ios). This isn’t a response to various scaling theories; it’s normalcy bias.

Kyle Liburd

It’s incredible how much, and for how long, AI has overridden any level of common sense amongst so many respected voices, whether that respect is earned or not. They’re collectively spending hundreds of billions while claiming that the government, which hasn’t grown proportionally in four decades, is the problem with everything.

Jim Covello of Goldman Sachs and the economist Daron Acemoglu are the only two people with major economic influence who I’ve seen say anything smart. Covello’s life must feel like working amongst a ton of cult members.

I mean to be honest, the Tupac hologram was cooler than anything AI has done & that came out in 2012!

Morgan Bird

Klein has always been a credulous wanker. Long ago he was also taken in by Paul Ryan as a serious Republican budget wonk when it was already crystal clear Ryan was an ideologue. This feels very similar.
