The various AI companies are in for major restructuring. First, because they aren't profitable, or even close to it. Second, because they are horribly inefficient (as DeepSeek showed us). Third, because they are losing industry partners, most notably Microsoft.

The first bailout attempt was Stargate. The promised mix of public and private funds has not materialized so far, and it seems like the whole thing is being forgotten. Maybe the administration is doing crypto things now instead? Meanwhile, the economy is teetering on the brink of recession, with inflation poised to eat into VC bottom lines once again (we all remember how tech startups suddenly had to make money in 2022, once interest rates weren't 2% anymore).

My conclusion is that they're looking for new money, maybe even buyers: either funding from less tech-literate firms, or outright selling themselves.

Now let's think about Ezra's audience. Riley mentions that he's got a direct line to Democratic leadership, but that's not super important in this case; they are out of power until at least the next election, and the AI companies are running out of money now. Same goes for the kinda-policy-wonk types who work in government agencies or as contractors. Well, they used to, anyway; I doubt there are many in the Trump admin who pay Klein much attention. My gut feeling is that the rest of his audience includes enough finance and tech people to make it worth promoting AI on his show. They need good press. They need to continue the narrative that their version of an AI company is the future. Most of all, they need someone to help pay to keep the lights on. You'll be seeing their surrogates make more appearances in mainstream media as the situation becomes more dire.

This prediction is...better than Ezra's. (I'll show myself out.)

Great post.👍 Minor typo in Ed Zitron's surname.

Thank you, and thank you! Fixed.

The timing of Klein taking a pro-AGI turn is so weird. Same with Kevin Roose's over-the-top essay last week. Given how badly things are generally going in AGI land, it's like they decided to help keep the wheels spinning.

I felt like a crazy person when I saw the clip of Amodei at Davos claiming that the human lifespan will be doubled by 2030 and everyone sat there in awe instead of throwing tomatoes at that insane man who said that ridiculous thing. These guys can say anything because it's the Terminator technology. I'd never be able to get away with claiming I've invented a flying car that doesn't require any sort of fuel and can go supersonic, because people know what cars are. But when you're an AI CEO, you can say you'll be able to bring your dead dog back to life by June of this year and the media just runs with it.

Honestly, I think the Amodei essay is what tipped me into the you've-got-to-be-f'ing-kidding-me camp. I may start an AI company to revive dead dogs myself.

Seems like an easy job, considering tech is the only science industry where you can just say stuff and never have to show your work.

No, you are correct. We might someday achieve AGI, but not on the road we are on now. The current models will not and cannot achieve it. The short explanation is that current models suffer from anthropogenic debt. The intelligence comes from humans, not the machines. https://arxiv.org/abs/2502.07828

For example, compare the two lists below (a code sketch follows them):

Human contributions to GenAI problem solving:

• Training data

• Number of neural network layers

• Types of layers

• Connection patterns

• Activation functions

• Training regimen for each layer

• Number of attention heads

• Parameter optimization method

• Context size

• Representations of words as tokens and vectors

• Training task

• Selection of problems to solve

• Training progress measures and criteria

• Human feedback for reinforcement learning

• Rules for modifying parameters as a result of human feedback

• Prompt

• Temperature and other meta-parameters

Machine contribution to GenAI problem solving:

• Parameter adjustments through gradient descent
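
To make that division of labor concrete, here is a minimal sketch of a toy training loop in Python (PyTorch). Every specific value in it is hypothetical and chosen purely for illustration; the point is that each line reflects a human design decision, and the machine's sole contribution is the gradient-descent update at the end.

```python
import torch
import torch.nn as nn

# Toy next-token-style classifier. Every choice below is made by a human;
# all sizes and values are arbitrary illustrations, not anyone's real config.

torch.manual_seed(0)                       # human: training regimen detail

model = nn.Sequential(                     # human: connection pattern (feed-forward)
    nn.Embedding(1000, 64),                # human: token/vector representation
    nn.Flatten(),                          # human: type of layer
    nn.Linear(64 * 8, 128),                # human: number and width of layers
    nn.ReLU(),                             # human: activation function
    nn.Linear(128, 1000),                  # human: output layer / vocabulary size
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # human: optimization method, meta-parameters
loss_fn = nn.CrossEntropyLoss()            # human: training task and progress measure

tokens = torch.randint(0, 1000, (32, 8))   # human: training data (random stand-in here)
targets = torch.randint(0, 1000, (32,))    # human: selection of problems to solve

for _ in range(10):                        # human: training regimen
    logits = model(tokens)
    loss = loss_fn(logits, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                       # machine: parameter adjustments via gradient descent
```

The lists above map almost one-to-one onto lines of this sketch; the only line the machine "owns" is the final optimizer step.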

Also, language models have just about run out of room for improvement. Satya Nadella, for example, has argued that intelligence grows with the log of compute. Assume that he is right: that would mean every time we want to increase intelligence by one unit (whatever that means), we have to multiply the available compute by some constant factor. The technical report for GPT-4 reported that decreasing error by one bit required a 10,000-fold increase in compute: whatever compute resources GPT-3.5 needed, GPT-4 needed 10,000 times as much to cut error by one bit. By their estimate, derived from their flattening curve, the next one-bit reduction will require 17 orders of magnitude more compute. That is simply unachievable; even another 10,000x increase in compute would be challenging.
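
A few lines of Python make clear why log scaling is so punishing: if intelligence really grows with the log of compute, then each additional unit of intelligence costs a constant multiplicative factor in compute, no matter where you start. The constant k below is a hypothetical fit to the 10,000x-per-bit figure, not a number from the GPT-4 report itself.

```python
import math

# If intelligence I = k * log10(C) for compute C, then raising I by
# delta units multiplies the required compute by 10 ** (delta / k),
# regardless of the starting point.

k = 1 / math.log10(10_000)  # hypothetical: fitted so one bit costs 10,000x

def compute_multiplier(delta_i: float) -> float:
    """Compute multiplier needed to gain delta_i units of intelligence."""
    return 10 ** (delta_i / k)

print(f"{compute_multiplier(1):,.0f}x")  # 10,000x for one more bit
print(f"{compute_multiplier(2):,.0f}x")  # 100,000,000x for two more bits
```

Note that the 17-orders-of-magnitude estimate quoted above is far worse than even this constant-factor model predicts; that is what a flattening curve means.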

NVIDIA produced under 4 million data-center GPUs in 2023, and chip production is growing linearly, not exponentially. Even if all of those chips were used for GenAI, the next jump would require producing 40 billion units, by the most conservative estimate. Those chips are not likely to be available in the foreseeable future.
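
As a back-of-the-envelope check, the 40 billion figure follows directly from the two numbers in this comment (assuming, as the comment implicitly does, that current training already consumes roughly a year's GPU supply):

```python
gpus_2023 = 4_000_000    # NVIDIA data-center GPUs produced in 2023 (upper bound, per above)
factor_per_bit = 10_000  # compute multiplier for one 1-bit error reduction

# If today's models already use on the order of a year's supply,
# the next 10,000x jump implies:
gpus_needed = gpus_2023 * factor_per_bit
print(f"{gpus_needed:,} GPUs")  # 40,000,000,000 -- the 40 billion units cited above
```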

All of this assumes that the measures being used to estimate intelligence actually measure something related to AGI. They do not. Performance on benchmarks does not tell us whether models are getting closer to AGI or are simply "benchmark hacking" (a term used by Nadella).

Conclusion: if we are to achieve AGI, it will require capabilities that are not only unavailable but, at this point, mostly unimagined. For some direction on this point, see: https://thereader.mitpress.mit.edu/ai-insight-problems-quirks-human-intelligence/
