Rockets on butterfly wings
A refreshing new resource to help maintain your—or at least my—sanity regarding AI in education
When it comes to the role of AI in education, what’s the right posture to take? Last week, we looked at AI guidance for educators issued by Chicago Public Schools with some help from the AI for Education organization…and found serious cause for concern. Then, earlier this week, I was pointed to an AI policy document newly issued by the Louisiana Department of Education that is…likewise riddled with contradictions and confusions.
It can start to feel like a lost cause fighting against the AI hype.
But then along came MIT’s Teaching Systems Lab with a single-page “infoposter” titled “Teachers on AI,” a resource which hit me like a cool breeze of the freshest fresh air. You can download it here and I’m about to do a mini-close reading to highlight all the great things about it.
Before getting to that, however, a quick disclosure – Justin Reich, who heads this lab at MIT, is someone I’ve lightly collaborated with over the years and hold in the highest regard. That’s because Justin understands the complexities of teaching, is curious about how technology can support better instruction, yet always maintains a healthy skepticism in his approach. I highly recommend his book Failure to Disrupt—the title is a tell—for sharp insights into why ed-tech enthusiasts so often overpromise and underdeliver.
Back to the infoposter. Why do I love it so? Let me count the ways:
Embrace measured optimism. It’s right there at the top: AI won’t replace teachers, but it might support them (italics in original). Straight off the bat the framing centers the role of humans, not technology. Scroll down, and you’ll likewise find the astute observation that the self-discipline “to know when not to use a new piece of technology is a transferable skill we can teach students.” Picture me waving my arms and singing hallelujah when reading this. We need to teach this skill to adults too, by the way.
Center teacher voice. Hey, would ya look at that—quotes from actual teachers, by name, with specific use-cases related to AI. Note how the examples are practical and small-bore—generating word problems, say, or summarizing an academic paper for 9th graders. This is what we call keeping it real.
Lift up student voice. Remember them, the students? One of the weird ironies right now is that states and districts are simultaneously falling all over themselves to hype up AI while also stumbling around trying to figure out how to stop students from cheating with it. This is a non-trivial challenge, but one thing we might do is, you know, talk more directly with students about how they are using AI, and whether they should be using it for certain tasks. There’s an ethical component to this, and I love the idea of creating student AI advisory boards—nerdy me would have been all over that in high school.
Consider the bigger picture. Speaking of ethics, I deeply appreciate that this poster highlights various social concerns about AI, including its energy and water demands, the impact it has on authorship and creativity, and its potential to “regurgitate falsehoods and misinformation.” In three tiny blocks this poster tees up key issues that other AI policy documents ignore or spend pages tap-dancing around.
“Gift students with offline spaces.” Again, I’m in my hallelujah pose. It is deeply ironic and profoundly strange that, only a few years after we realized during the pandemic that mass online education is abysmal, we’re back to seeing so many promote the digitization and personalization of education using technology. Did we learn nothing? Human beings are social creatures, and our superpower as a species is our ability to learn from one another. Give students the gift of offline spaces. Hell, gift everyone with this.
So this poster is terrific, but here’s my question—why does it feel like such an outlier among the policies and pronouncements surrounding AI in education right now? There’s nothing particularly edgy here, and it’s possible to acknowledge AI’s limitations while still being optimistic about its potential as we explore and learn more about it. We can play with it, be curious about how it works, while also maintaining a pragmatic mindset that recognizes its potential educational costs as well as benefits.
Why isn’t that sufficient? Why have so many people reached out to me on the digital down-low to say that they feel like “outliers in the wilderness among the booster voices for gen AI” or versions thereof? (That’s an actual quote from an email I received just yesterday from a university professor.) Why is this poster the aberration rather than the norm?
Well, because technology is the god of our age, and there’s a massive industry of “thought leaders” and grifters in education who latch onto any new techno-development as the cure-all for our alleged education woes. But there’s an eloquent line in Justin’s book that we’d do well to remember in the days ahead: “Trying to accelerate learning by ramping up technology is like putting rockets on butterfly wings. More force does not lead linearly to more progress.”
Let’s resist the misguided uses of this new force while exploring it with cautious optimism. Now please download and share MIT’s infoposter far and wide!
Update (Aug 29): In response to this post, Justin Reich emailed me to note that Jesse Dukes and Natasha Esteves of the MIT Teaching Systems Lab served as lead authors and researchers on this project. Justin, however, still claims credit for “championing being weird.”
And here I was thinking that the title was a reference to The Smashing Pumpkins...
One of the biggest contrasts between this infographic and the AIxEducation conversation is its simplicity. Educators don't have time. The length of the videos, articles, and tutorials explaining AI tools exemplifies why they are not moving into classrooms as fast as people may lead you to think. Educators are curious, sure. But they're not in a rush, and they're definitely not in a rush to take the relationships out of education. Talking to students about AI is part of that. Facilitating experiences that encourage students to talk to each other is more a part of that.
And there one gets to the core issue of AI: it doesn’t know. It’s not capable of knowing.