The role of knowledge in the age of AI
My conversation with Bror Saxberg, longtime advocate for learning engineering
Earlier this year, I wrote an essay criticizing what I’ve long called “knowledge nihilism,” the dangerous yet shockingly common view that it’s not important for people to add ideas and knowledge to their long-term memory. This led to an extended dialogue with my friend Bror Saxberg, a longtime advocate for learning science in education with a fascinating career—among other things, he has both a medical doctorate and a PhD in artificial intelligence. Also, it was a lunch conversation with Bror that inspired me to investigate the inner workings of generative AI, thus birthing Cognitive Resonance, so it seemed a fitting way to close out 2024.
With permission and some slight editing, our dialogue from over the summer is reprinted below. Happy New Year!
From Bror:
Years ago, I was invited to a major tech company to talk about learning science and its relationship to education. Their stance at that time was that education is basically a new flavor of "search"—if you could just look stuff up in the right way, you would solve the world's learning problems. Just leave it to search.
So I started the talk by having them look up something unfamiliar to almost all of them, the de Rham Cohomology Theory. It's a fancy theory within differential geometry, and you can quickly access some good Wikipedia info on it. I asked them to just look up anything in the description they found that they didn't understand, and said “let’s see how it goes.”
After a few minutes, many people had a dozen or more search windows open, as they went down the rabbit hole of looking up every technical term/theorem they had never heard of before. And then I asked: "Soooo, who here now understands de Rham Cohomology Theory based on all these great searches you've opened up?"
Surprise: No one.
As you’ve often argued, the point is that if knowledge is not in your head, then it is not usable to you. What’s more, with complex interconnected content that is new to your brain, it's going to take real work to "move it in" to working memory for creative thought. We do not have Matrix-like download capacities (yet)!
Here’s another example of why knowledge must be added to long-term memory. In my first days in medical school, I had to pick up *Harrison's Principles of Internal Medicine*. This was 2,000-plus pages of evidence-based goodness (it's now 4,000-plus pages), right there in my hand. My work at medical school was done! Uhh, except for one little thing—all that empirical heft on paper did nothing for me when I was talking to a patient, without a ton of structure and work to begin to move it into my brain in ways that would help me recognize situations, solve problems, propose treatment plans consistent with the complex setting and resources, etc.
New technologies change expert practice. The key is to clarify what it is we need experts to become able to decide and do, with the new tools in hand. For writing, we no longer need to be good at shaping goose feathers into quills (the literal risks of "blotting your copybook" are gone!), yet the skills of rhetoric from Pericles and earlier are still crucial because mind-to-mind connections work the way they have for many millennia. The capacity to critique and improve requires knowledge to be resident in our heads.
This goes way back: Wasn't it Socrates who tried to convince Plato that reading and writing would destroy people's minds, and so should be stopped? Weirdly, he was right—yet with that new technology, we adapted human learning and practices in ways that ended up excelling. We still needed to get stuff "into our heads," though not via raw memorization: Books were never enough on their own.
From Ben:
This is thought provoking in many ways, but I’ll focus on two.
First, your description of Bror in medical school, equipped with the weighty tome of Harrison's internal medicine principles but no real experience in the practice of medicine, strikes me as an almost perfect metaphor for generative AI. It's got all the written principles about basically everything ever, yet it has no way to connect any of that knowledge other than linguistically—it can't do anything with it. Yet you hint at trying to define expertise in a way that would put practitioners on a path to harness AI in ways that improve the quality of their decisions and practice—or so I'm inferring. What I'm wondering is whether AI requires anything new to be added to, say, doctor training or teacher training or any professional training, really, or whether expertise is just...expertise, to be applied using whatever tool is handy or useful.
Second, your Socrates example is the very same one I point to when trying to "steelman" the counterarguments to what I'm claiming about AI in education. I worry that there's a big shift happening here and I may be stuck just, you know, living in the past, man. I haven't read any history of how human cognition changed as reading and writing became more prevalent, have you? Something was surely lost when we moved away from oral traditions, but the gains to cultural cognition were substantial. Is it possible the same may happen with AI? What's the bull case here?
From Bror:
On the point about medical school and a textbook, I still think I was better off than an AI might be. I already had pre-med levels of science to draw on, and so had scientific frameworks on which to attach a wide array of information once I began reading Harrison's. So it was not a purely linguistic/statistical exercise—there already were lots of ideas, grounded in decades of evidence and convention, that I had embedded in long-term memory. I just had almost no connection between those pure science frameworks and the real and new areas of health, disease, symptoms, diagnoses, patient care, etc. My human neural network, however, definitely made use of those scientific frameworks to help me pull myself up the new ladders of expertise.
Note that over time, human expertise does get turned into non-verbal, tacit expertise, so that the expert may need some extra effort to come up with the reasoning, but the way it got laid down in the first place was through the reasoning process, not mere linguistic correlations. I remember being on an infectious disease rotation at a major Boston hospital, and we called in an expert to figure out a particularly complex case that the rest of us could not. He walked up quickly, white lab coat flapping, listened to our brief survey of what we knew, flipped through the chart, walked into the patient's room, waved at him, shook his hand, turned around, and made the diagnosis. He then tapped his watch and headed out for lunch!
We stopped him at the door, mere medical students that we were, and asked how he had reached his conclusion. At first he looked startled and confused, as if he had no idea how he'd done that. (A bit concerning!) And then he began to describe how all the things he had taken in through all his senses had quickly narrowed down the diagnostic tree of possibilities from the chart and our description: the lack of certain smells in the room meant a bunch of things weren't likely; the fact that the patient responded clearly and coherently to his hand wave did the same; the muscle strength on the handshake took out more branches; the color of the eyes and skin took out more branches; etc.
He was a walking diagnostic machine—but his working memory was running through his lunch options, while his long-term memory, deeply trained in an evidence-grounded way, rapidly flew through the diagnostic process. He was able to reconstruct the principled approach to the problem because that was how it had been trained in, but his decision-making, as an expert, had moved well beyond that. The result WAS explainable by this neural network, though!
Your question about whether AI will require new skills to be added is an excellent one. "It's complicated," at best.
On the one hand, we may be able to reduce some of the things we now force human minds to deeply master. This has been true about technology for generations—that "carving the quill pen" example is one. I remember spending hours, back in the day, learning how to use paper concordances to search for research articles in the library science stacks, and how to use card catalogs and Dewey decimal numbers to find interesting and relevant books about a topic. Not so much needed these days in modern settings!
What about the capacity to quickly produce a grammatically correct set of paragraphs from scratch to make an argument? This is complicated too: We already have tools that support grammar and spelling, so you don't have to be a perfect speller to get the work done in a timely way. AI can take this a step further, producing paragraphs in response to a prompt, but we still need the skills to refine our argument.
Your question about how AI might change cognition is serious crystal-ball stuff! Sticking with the Socrates analogy, I would argue that the biology of our brains has not yet had time to evolve into something different in response to reading and writing. It likely took at least several hundred thousand years to evolve the specialized machinery for spoken language we have in our heads—Broca's area, Wernicke's area, and all the connections that support them. We’ve added reading and writing on top of this, which is very effortful compared with speaking and listening. It has worked out, as you say!
I don't think AI will change the biology or architecture of how brains learn, just as the book did not change these either. Rather, there's a good chance that things that "most" learners master will change (just as very few learners do the "big memory" work now).
It's very interesting to speculate about what, in fact, the long-lived skills will be that humans find add value for their entire lives. For example, there's a good chance that Python programming is not on that list—AI is likely to increasingly take on the bulk of detailed coding work from the humans who do it, although not necessarily the specification work of what "it" should do.
What kinds of things might be on that list?
Editorial skills, as described earlier: The capacity to specify and improve outputs for audiences.
Creative and design insights, especially around combinations of domains.
Brain-to-brain skills—things like communication, perspective-taking, empathy, meaning-making, rhetoric, motivation, group leadership, planning, prioritizing.
Learning itself and metacognition—how to change your own skills efficiently.
And so we might want to spend a lot less time training minds to fluently produce things that our tools will produce faster than we can, and instead make sure more learners master skills like those above, from early on.
Similarly in domains. Conrad Wolfram has done a lot of work with his team at Wolfram Research on what are likely to be the skills that define analytic expertise over time, in the presence of increasingly capable tools like Mathematica. (Conrad's book *The Math(s) Fix* talks about this.) He points out that we continue to need learners to become good at four different aspects of analytic problem-solving:
Recognizing and creating well-posed problems
Turning well-posed problems into symbolic language (mathematical or computer)
Using tools to find solutions to the symbolic representations of well-posed problems (including testing if the tools are the right ones to use)
Interpreting the solutions back into the real world. (Sometimes you throw away a negative number; sometimes it is meaningful.)
We spend arguably way too much time pushing students to become fluent at the third step (often without tools!), and not nearly enough on the core human value-added skills in the other areas. Those can and should be started early.
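To make those four steps concrete, here is a minimal sketch in Python using the sympy symbolic-math library (the rectangle problem and its numbers are purely illustrative, not an example from Conrad's book):

```python
# A toy walk-through of the four steps, assuming the sympy library is installed.
import sympy as sp

# Step 1: recognize/create a well-posed problem. Illustrative example:
# "A rectangular plot has an area of 120 m^2, and its length is 7 m greater
# than its width. How wide is it?"

# Step 2: turn the problem into symbolic language.
w = sp.symbols("w", real=True)
equation = sp.Eq(w * (w + 7), 120)

# Step 3: use a tool to solve the symbolic representation.
solutions = sp.solve(equation, w)   # -> [-15, 8]

# Step 4: interpret the solutions back into the real world.
# A width of -15 m is meaningless here, so we throw it away and keep 8 m.
width = [s for s in solutions if s > 0][0]
print(f"Width: {width} m")          # Width: 8 m
```

The tool does almost all the work in step three; the distinctly human contributions are posing the problem, formalizing it, and knowing that the negative root should be discarded here (while in another context it might be meaningful).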
I like the conversational form here quite a bit, and what a lovely collection of ideas this essay bundles.
One insight I take away is how important context is to expert knowledge. As the anecdote about the diagnosis shows, the experience of solving a complex problem can be broken down into parts, but actually doing the work is a stream.
That seems true in the classroom (I'm thinking of Dan Meyer's great posts about how effective teachers teach), where the teacher takes in the context as a whole and uses contextual awareness to inform the observable act of engaging a learner, just as the doctor did in Bror's example. Breaking that process into steps is instructive, but simply understanding the discrete moves does not make you a good diagnostician. It requires an active mind engaged in evaluating a flow of information.
Our habit of breaking things apart and understanding and measuring them in parts has led us to understand a great deal about the world, but solving problems, either small or large, sometimes requires holistic approaches.
On the history of how human cognition changed with the rise of reading and writing, one older but still useful starting point is Walter J. Ong's *Orality and Literacy: The Technologizing of the Word* (1982). A lot of work on orality and textuality has been done since then, especially in medieval studies (or at least that's what I'm most familiar with). M. T. Clanchy's *From Memory to Written Record: England 1066-1307* (1979) is one of the foundational studies in that field.