AI and the Pursuit of Optimism
Babel fishes, whales, "Baba is You," scientific theorizing about why we dream...this post has it ALL
Well, last week was exciting! No, we’re not dipping into American politics (though that’s been exciting too) – I’m referring instead to the healthy number of new subscribers to the Cognitive Resonance Substack who arrived via my interview in The 74. So welcome, new readers, and thanks for joining this growing community.
And there’s more exciting news: In the next week or so, Cognitive Resonance will be releasing its first formal publication, titled The Educational Hazards of Generative AI, which hints at the tenor and theme. The purpose is to help inform educators about how these tools actually work, and the challenges they pose to improving learning. It’s a counterbalance to the egregious AI hype that’s happening in education right now, and I hope you will find it a valuable resource – and will share it with others.
So stay tuned for that. In the meantime, all the AI-in-education snake oil is starting to make me angry, which isn’t healthy. As a minor antidote, I thought this week I’d share a few of the things that excite me about AI, both in education and in society more broadly. There’s reason for optimism about what AI portends, truly, though we must always think critically about it.
Here then are three things that make me optimistic about AI:
The Babel Fish made real
In The Hitchhiker’s Guide to the Galaxy, Douglas Adams solved the problem of aliens talking to each other by inventing the Babel Fish, a tiny leech-like creature you can stick in your ear and immediately understand anything said to you in any language in the universe. Well, we may not be far off from something similar. Large-language models are, as their name suggests, models of language; indeed, the technology underpinning them arose in part to address the challenge of translating between human languages. It feels to me that we aren’t all that far away from being able to communicate in real time with one another regardless of our native language.
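For the technically curious, machine translation with these models is already surprisingly easy to wire up. Here’s a minimal sketch, assuming the open-source Hugging Face transformers library and one of the publicly available Helsinki-NLP translation models; the specific model and sentence are illustrative choices of mine, not a recommendation:

```python
# A rough sketch of machine translation with an off-the-shelf model.
# Assumes the Hugging Face `transformers` library (plus a backend such as
# PyTorch) is installed; the Helsinki-NLP checkpoint is one illustrative
# choice among many.
from transformers import pipeline

# Load a pretrained English-to-French translation model.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fr")

result = translator("So long, and thanks for all the fish.")
print(result[0]["translation_text"])
```

A true Babel Fish would also need speech recognition on the way in and speech synthesis on the way out, but those pieces exist too; stitching them together is increasingly an engineering challenge rather than a research one.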
That is a huge deal! And, on my most optimistic days, a development I imagine might have a transformative effect on cross-cultural empathy. Of course, we should also pay attention to the risks and consequences of eroding linguistic diversity, and I do worry it may spell the end of bilingualism as we know it…but that’s a trade I might be willing to make if it leads to our ability to better understand and appreciate the wondrous diversity of human cultures, and to build greater solidarity among us as a species.
And here’s the mind-blowing kicker – this technology may lead to interspecies communication on this very planet. Like, we might be able to TALK WITH THE WHALES!!! Can you even imagine? I mean, besides this being the plot of Star Trek IV?
Intelligent games
As I’ve written about recently, I’m delighted that so much research on the capabilities of large-language models is happening through games. Just this week, I discovered a new paper involving a game called “Baba is You,” a fun retro-styled puzzle game that plays like a mashup of Sokoban and Calvinball from Calvin & Hobbes (the game where the rules are arbitrary and subject to change at any time). As with so many things, LLMs do great initially, but once you start changing the rules on ‘em…confusion reigns.
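If you’re wondering what “changing the rules mid-game” looks like concretely, here’s a toy sketch of my own devising (not the benchmark from the paper) in which the rules are just mutable state, very much in the spirit of Baba is You:

```python
# Toy illustration (mine, not the paper's actual benchmark): a game where
# the rules themselves are mutable state, in the spirit of "Baba is You."
rules = {"WALL": "STOP", "FLAG": "WIN", "BABA": "YOU"}

def can_enter(tile: str) -> bool:
    """An agent may step onto a tile unless the current rules say it stops you."""
    return rules.get(tile) != "STOP"

print(can_enter("WALL"))   # False: walls block you under the starting rules

# Mid-game, a rule changes, which is exactly the move that trips up a
# player (human or LLM) who has latched onto the original rule set.
rules["WALL"] = "WIN"
print(can_enter("WALL"))   # True: the very same tile now behaves differently
```

The whole trick of the game is that the rulebook itself is part of the game state, which is precisely where the models start to flounder.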
Greg Toppo from The 74 (yep, the reporter who interviewed me) wrote a book 10 years ago about gaming in education that I read recently. A decade is a lifetime in the world of technology and games, yet some of the underlying principles of effective game design map incredibly well to the principles of effective instructional experiences for students. I think there’s real potential for developing games that both humans and LLMs can play as a way of assessing our cognitive capabilities in ways that don’t feel like traditional assessment. That’s exciting.
AI as an end, not the means
Large-language models are tools. I say this all the time; the phrase is sprinkled throughout my writings, and I believe it is indisputably true. LLMs are tools we’ve created that we can use for various purposes – a means toward varied ends. And yet…if I’m being honest…I’m more interested in LLMs as ends unto themselves, as objects of study to play with and think about, both philosophically and scientifically, than I am in using them as tools.
Lemme try to explain using a specific example – dreams.
Why do we dream? When we go to sleep, why does our mind remain active, throwing together bizarre images and storylines that can leave us anywhere between elated and distraught, and even cause us to physically respond to something wholly in our heads?
So dreams are weird; we all know this. But LLMs may help us understand why they happen. A few years ago, I ran across an interesting hypothesis offered by Erik Hoel, a neuroscientist and writer. Hoel suggests that dreams are the mind’s method of introducing “noise” into our experience of the world so that we don’t “overfit” to the data we take in from our waking experiences. Put another way, we dream so that we can throw out “corrupted data” (his words) about our life experiences, which strengthens our ability to form generalizations and abstractions based on our real-world, waking experiences. He explicitly draws on lessons from training AI models, where the creators of such systems deliberately inject noisy or randomized data in the hope that it will improve the AI’s ability to generalize to new tasks the model hasn’t been explicitly trained to solve.
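For the curious, here’s a minimal sketch of the training trick Hoel is drawing on, assuming PyTorch; the tiny model, the noise level, and the random placeholder data are all stand-ins of my own, not anyone’s actual recipe:

```python
# Minimal sketch: inject noise into the inputs during training so the
# model can't simply memorize them. Assumes PyTorch; the model, noise
# level, and data below are placeholders for illustration only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def training_step(x: torch.Tensor, y: torch.Tensor, noise_std: float = 0.1) -> float:
    # Corrupt the "waking experience" slightly before learning from it;
    # this kind of noise injection tends to discourage overfitting.
    noisy_x = x + noise_std * torch.randn_like(x)
    optimizer.zero_grad()
    loss = loss_fn(model(noisy_x), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# One training step on random placeholder data, just to show the mechanics.
print(training_step(torch.randn(64, 10), torch.randn(64, 1)))
```

The specifics don’t matter; what matters is the analogy: deliberately corrupting the inputs pushes a model toward generalization rather than memorization, and Hoel’s suggestion is that dreaming does something similar for us.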
This is the subject of a longer post, but the point I’m making here is that the efforts to improve LLMs are helping to generate plausible – and testable – hypotheses about complex phenomena that happen in human minds, such as dreams. My friend and correspondent Sean Trott, a cognitive scientist, has coined the tongue-twisting term “the science of LLM-ology” to describe what I’m getting at here, and I find it intellectually thrilling. We have fake minds to play with now. Incredible.
I’ll close this out with a note about the title of this essay. It’s a mini-homage to someone I considered a late-in-life mentor, Jim March, who passed away a few years ago. Nearly 50 years ago, he wrote a remarkable essay titled Education and the Spirit of Optimism, wherein he advanced a complex claim that I’ll share with you now in brief.
Jim’s claim is that education is a fundamentally human endeavor that is an arbitrary assertion of optimism, a proclamation of human will that defines who we are, not in terms of the consequences that may follow from our actions but simply because it – it meaning education – is essential to being human. We pursue education despite all the many complexities and difficulties we face in the world, a world that can often be cruel. But like Don Quixote, we tilt at windmills nonetheless, because we “embrace the foolishness of obligatory action. Justification for knight-errantry lies not in anticipation of effectiveness but in an enthusiasm for the pointless heroics of a good life. The celebration of life lies in the pleasures of pursuing the demands of duty.”
In the future that lies ahead, we will wrestle with the many negative consequences of AI, and we may lose hope. But we should, we must, maintain the optimistic spirit that is vital to the very essence of education, the acts of teaching and learning that bring us together. This is our collective duty.