EXCLUSIVE: AI Insider reveals secrets about artificial general intelligence
A watershed moment for Cognitive Resonance
Two weeks ago, I wrote a response to Ezra Klein’s podcast wherein he declared that, according to AI insiders he’s heard from, we’re truly on the cusp of developing “artificial general intelligence” aka AGI, meaning AI that is as smart as we humans are. Surprisingly, my rejoinder has turned into my second-most popular post since launching this Substack a year ago. Clearly, y’all like snark. And ironically, I almost scrapped the piece, because (1) we’re trying to articulate a positive vision here at Cognitive Resonance headquarters, not just react to AI hype; and (2) I was worried I might sound jealous that AI insiders aren’t, you know, talking to li’l old me.
But then something exciting happened last week: A true AI insider within one of the major “hyperscalers”—meaning, one of the major companies developing generative AI—shared his thoughts with me about how close we are to achieving AGI. And now, with his express permission, I am passing along his insights to you, dear readers. Because of the importance of what he told me, I’ve taken extra care to transcribe his remarks near-verbatim.
On what large language models do versus human cognition
“Chatbots are trained on an extraordinary amount of knowledge which is purely text, and they’re trying basically to regurgitate, to retrieve, to essentially produce the answers that are conformed to the statistics of whatever text they’ve been trained on. [But] they are incapable of inventing new things.”
“In the future AI systems, we might be able to turn abstract thoughts into language in the human brain. But we [humans] don’t think in language. We think in mental representations of a situation. We have mental models of everything we think about. That’s where real intelligence is. And that’s the part we haven’t reproduced, certainly with LLMs.”
“LLMs are really good at retrieval. They’re not good at solving new problems, [or] finding new solutions to new problems.”
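If you want a concrete picture of what “conformed to the statistics of whatever text they’ve been trained on” means, here is a toy sketch in Python. To be clear, this is my illustration, not my insider’s, and a bigram counter is nothing remotely like the scale or architecture of a real LLM; the point is only that a system like this can recombine word sequences it has already seen, and nothing else.

```python
import random
from collections import defaultdict

# The training "corpus": the only knowledge this toy model will ever have.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word follows which word in the training text.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start="the", length=8):
    """Emit words one at a time, each sampled from the training statistics."""
    word, out = start, [start]
    for _ in range(length):
        followers = counts[word]
        if not followers:
            break
        # Pick the next word in proportion to how often it followed this
        # word in training: pure reproduction of the corpus statistics.
        word = random.choices(list(followers), weights=list(followers.values()))[0]
        out.append(word)
    return " ".join(out)

print(generate())  # e.g. "the cat sat on the rug . the dog"
```

Real LLMs condition on vastly more context using learned representations rather than raw counts, but the retrieve-what-the-statistics-support flavor my insider describes is what this little model is gesturing at.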
On whether the new so-called “reasoning models” actually reason the way humans do
“We have to figure out what reasoning really means. Everyone is trying to get an LLM to be able to check whether the answers they produce are correct. And the way people are approaching the problem at the moment is that they are basically modifying the current paradigm without completely changing it.”
“So can you bolt a couple of words on top of an LLM so that you have some kind of primitive reasoning function? That’s essentially what a lot of the reasoning systems are doing. You basically tell them to generate more tokens than they really need in the hope that, in the process of generating those tokens, they’re going to devote more computation to answering your question. And to some extent that works, surprisingly, but it’s very limited. You don’t actually get real reasoning out of this.”
“Reasoning in the classical sense involves a search through a space of potential solutions. There is no mechanism at all in LLMs for this search mechanism. So a big issue there is that when humans or animals reason, we don’t do it in token space.”
“In other words, when we reason, we don’t have to generate text that expresses our solution and then generate another one and then generate another one, and then among the ones we produce, pick the one that is good. We reason internally, right? We have a mental model of the situation and we manipulate it in our head. Humans do this all the time. Animals do this all the time.”
“And this is what we cannot yet reproduce with machines.”
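To make the “generate more tokens, then pick the good one” pattern concrete, here is a minimal sketch of best-of-n sampling. The names best_of_n, sample_answer, and score_answer are hypothetical stand-ins I have invented for an LLM call and a verifier; no actual reasoning system is this simple. Notice that nothing below manipulates a mental model of the problem: it only generates and ranks strings, which is exactly the limitation being described.

```python
import random

def best_of_n(question, sample_answer, score_answer, n=8):
    """Sample n candidate answers in 'token space', then keep the highest-scoring one."""
    candidates = [sample_answer(question) for _ in range(n)]  # spend more tokens...
    return max(candidates, key=score_answer)                  # ...then rank the generated text

# Toy stand-ins so the sketch runs on its own; a real system would call an
# LLM to sample answers and use a learned or rule-based verifier to score them.
def sample_answer(question):
    return f"I guess the answer is {random.randint(0, 9)}"

def score_answer(answer):
    # Pretend the checker happens to know the right answer is 7.
    return -abs(int(answer.split()[-1]) - 7)

print(best_of_n("Pick my number between 0 and 9.", sample_answer, score_answer))
```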
On whether the training of LLMs is hitting a wall
“I don’t know if I would call it a wall, but it’s certainly diminishing returns in the sense that we’ve kind of run out of natural text data to train LLMs. Most of the basic systems actually don’t understand basic logic, for example. It’s going to be slow progress with synthetic data and hiring more people to plug the holes in the background knowledge of those systems.”
“We are not going to get to human level AI by just scaling up LLMs. That is just not going to happen. Absolutely no way. It’s not going to happen within the next two years. The idea that we’re going to have a country of geniuses in a data center, that’s complete BS.”
On whether we are on the cusp of AI agents
“So we need a new paradigm. We need a new kind of architecture of systems that are capable of searching for a good solution and planning for a sequence of actions to arrive at a particular goal, which is what you would need for an agentic system to really work. Nobody has any idea how to build them other than basically regurgitating plans that the system has already been trained on.”
“The really complex thing is how you build a system that can solve new problems without being trained to solve those problems…The essential ability of humans and many animals is that when you face a new situation, you can think about it, figure out a sequence of actions [to] accomplish a goal. Basically that’s what’s missing.”
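And if “planning for a sequence of actions to arrive at a particular goal” sounds abstract, here is what it looks like in miniature: a breadth-first search over an explicit model of a made-up, trivially simple world (a robot on a ten-square track). The example and its names are mine, offered only as illustration; the point is that this kind of system searches its model of the situation to find a plan, rather than regurgitating one it has seen before.

```python
from collections import deque

# Two primitive actions and their effect on the robot's position.
ACTIONS = {"left": -1, "right": +1}

def plan(start, goal, lo=0, hi=9):
    """Breadth-first search for a sequence of actions that reaches the goal."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for name, delta in ACTIONS.items():
            nxt = state + delta
            if lo <= nxt <= hi and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, actions + [name]))
    return None  # no plan exists

print(plan(2, 5))  # ['right', 'right', 'right']
```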
On whether there’s a better way forward for AI
“There are ideas about how to have systems that are capable of doing what every intelligent animal and human are capable of doing that current AI systems are not capable of doing. I’m talking about understanding the physical world, having persistent memory, being able to reason and plan. These are the four characteristics that need to be there to acquire common sense. And that’s a big challenge. That’s what we have to figure out. And it’s not going to happen within the next three years.”
Wow, right? I know what you’re thinking—the words of this AI insider sound almost suspiciously congruent with everything I’ve been contending for the past year (or for that matter, everything Gary Marcus and Melanie Mitchell and many others have been arguing for many years). So can you trust me? Might I be making these quotes up just to hammer yet again at my enduring point that the hype around AI has raced way beyond what the science can justify, and is largely driven by a complete misunderstanding of what makes human cognition special?
Well, decide for yourself. Every word above is taken directly from this Big Technology Podcast featuring Yann LeCun, the Turing Award winner who is co-responsible for developing the artificial neural network architecture that underlies generative AI (and who currently serves as Meta AI’s Chief Scientist). LeCun is the epitome of an AI insider, but unlike the secret sources whispering in the ears of Ethan Mollick and Ezra Klein, he’s unafraid to go on the record. He understands how this technology works, and how human cognition functions, and he’s stating plainly to anyone who will listen that these two things are very different—and that in comparison to humans, LLMs suck.1
He’s right.
Memo to AI-in-education enthusiasts: I need you to take this seriously. With the ed-tech conference circuit in full swing again, I’ve received a slew of invitations to “share my thoughts” on the future of AI. But they’re right here in this Substack and other freely available publications, folks; there’s not some secret thought repository I’m keeping hidden (sorry). Using AI chatbots to tutor children is a terrible idea—yet here’s NewSchools Venture Fund and the Gates Foundation choosing to light their money on fire. There are education hazards of AI anywhere and everywhere you might choose to look—yet organization after organization within the Philanthro-Edu Industrial Complex continues to ignore or diminish this very present reality in favor of AI’s alleged “transformative potential” in the future. The notion that AI “democratizes” expertise is laughable as a technological proposition and offensive as a political aspiration, given the current neo-fascist activities of the American tech oligarchs—yet here’s John Bailey and friends still fighting to personalize learning using AI as rocket fuel.
Everyone needs to wise up. Democracy is crumbling in America and there’s a direct throughline from what’s taking place in our politics to what looms for our education system, with AI sitting at the heart of it. If you don’t believe me, I invite you to again contemplate the sentiment of another AI insider:
This is the future we’re facing right now. Which side are you on?
Once more unto the breach, dear friends, once more: I’ll be speaking at the ASU+GSV conference in San Diego in a few weeks; the title of my solo session is “AI Will Not Revolutionize Education.” If you’re planning to attend and are unafraid to hear the message of this essay delivered in person, let me know.
Yes, I know that LeCun speculates that something akin to AGI may arrive in three to five years’ time. I mean, maybe! But amusingly, LeCun makes that prediction shortly before exploring the loooooong history of AI underdelivering on its promises. In his words: “We had super impressive autonomous driving demos 10 years ago, but we still don’t have level 5 self-driving cars, right? It’s the last mile that’s really difficult (so to speak). If we go back several years and we look at what happened with IBM Watson, it was going to generate tons of revenue and be deployed in every hospital. And it was basically a complete failure and sold for parts.” When it comes to future technological developments, neither he nor any other AI insider knows what will happen. Think for yourself.
Clever!
Genuine Follow-up: I still don’t know of any educators using AI to tutor their students, do you? I know of students using it to “tutor” themselves, or learn new things, but I haven’t met any teachers who are using AI to tutor their students.
I know of teachers using it for lesson planning, creating learning experiences that center on critical consumption of AI outputs, and for creating materials. But unless I’m missing something, the “don’t use AI to tutor your students” angle feels like tilting at windmills. I haven’t met a single teacher who has done that.
Again, maybe you know teachers who are—but I legitimately haven’t met any. Most hate AI, and for good reason. But the extent of their use rarely goes past using it to draft emails or report cards.
But this isn't true: "But we [humans] don’t think in language. We think in mental representations of a situation. We have mental models of everything we think about. That’s where real intelligence is.” I've been writing non-stop about this. Those of us with aphantasia don't think in mental representations at all. I wrote about it here. https://hollisrobbinsanecdotal.substack.com/p/aphantasia-and-mental-modeling