Resistance to the overhyping of AI in education is not futile
What we lose when we automate the process of human cognition
Next week, the online journal Education Next will run my op-ed questioning whether we should be unleashing tutoring chatbots such as Khan Academy’s Khanmigo on students before we have any evidence that they can improve learning at scale. As lawyers like to say, to ask the question is to answer it. I’ll share a link when the essay runs, but here’s a sneak peek at the penultimate paragraph:
One of the great ironies in education is that, though we profess to want to develop critical thinking in all our students, whenever a seemingly transformative new technology is introduced into society writ large, prominent and powerful actors in the education system seem unwilling to think critically about whether its costs might outweigh the alleged benefits.
As I sit here writing this, an old friend of mine is visiting Austin with his eighth-grade son, and over dinner I was explaining to him my nascent efforts with Cognitive Resonance and my aspiration to get people to think more critically about whether and how to make use of AI. He nodded along but then repeated a counterargument I’ve heard frequently:
“Ok sure, but don’t you think AI is inevitable in schools?”
Maybe, maybe. There’s no question that AI chatbots using large language models are useful for certain tasks, such as drafting essays. But will it benefit students educationally to outsource and automate the effort it takes to produce them? Or might there be some harm in reducing the cognitive work of writing? Jane Rosenzweig, director of the Harvard College Writing Center, makes this point powerfully:
I tell my students that writing — in the classroom, in your journal, in a memo at work — is a way of bringing order to our thinking or of breaking apart that order as we challenge our ideas. We look at the evidence around us. We consider ideas we disagree with. And we try to bring a shape to it all.
Sometimes my students see the process differently. They see writing a paper as a hoop they are being asked to jump through, a way for me to evaluate them and pronounce them successful or not. In other words, they see writing solely as a product. If the end point rather than the process were indeed all that mattered, then there might be good reason to turn to GPT-3. But if, as I believe is the case, we write to make sense of the world, then the risks of turning that process over to AI are much greater.
This all feels so eerily familiar. Ten years ago, as smartphones became ubiquitous in our daily lives, there was a sense that educators needed to figure out how to incorporate them as learning tools. If students were distracted by them in class, so the claim went, well, that was the fault of teachers for not making their lessons more engaging and interesting.
You won’t hear that claim much nowadays. Instead, numerous entities – ranging from school districts to states to entire countries – are enacting bans on bringing phones to class. As common sense suggested then, and suggests now, smartphones are distracting, and instead of contorting classroom practices to incorporate them, we can just…remove the distraction.
I grant that chatbots present a bigger challenge – for one, it’s not clear whether students can be stopped from using this technology outside of school settings (though here we might also ponder the equity implications of uneven student access). But at a minimum, we should assess the harms that may result from a technology that displaces the process we use to bring order to our thinking, the method we use to discover what we think.
AI in education will only be inevitable if we refrain from thinking critically about it.