What does AI literacy mean? What should it mean?
I’ve been pondering these questions lately because they strike at the core of what I’m hoping to accomplish with Cognitive Resonance. Broadly speaking, Cognitive Resonance is aimed at improving AI literacy – but my understanding of what that means differs from that of many others, particularly in education circles. So I thought I might sketch out the principles that inform my view, and then contrast them with the current discourse around “AI literacy.”
First, AI literacy means understanding how these tools work. This is no easy task! Generative AI systems are complex, and “getting under the hood” of how they produce their output takes a fair amount of effort – after ChatGPT launched, I spent a good year reading research papers and talking to scientists to build my own knowledge. Most people lack the time to dive this deep, but if you’re going to use these tools, you should have at least a basic understanding of the processes they employ, the data they’ve been trained on, and so on.
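To make that concrete, here is a deliberately minimal sketch of the one process worth internalizing: a generative language model repeatedly scores candidate next words, turns those scores into probabilities, and samples one. Everything here – the vocabulary, the scores, the context – is invented for illustration; real models derive comparable scores from billions of parameters learned from training data.

```python
import math
import random

# Toy illustration of next-word prediction, the core loop behind chatbots.
# The "model" here is a hand-written scoring table; real systems learn
# comparable scores from enormous amounts of training text.

def toy_scores(context):
    # Invented scores for words that might follow "the cat sat on the".
    return {"mat": 4.0, "floor": 2.5, "moon": 0.5}

def softmax(scores):
    # Convert raw scores into a probability distribution that sums to 1.
    exps = {word: math.exp(s) for word, s in scores.items()}
    total = sum(exps.values())
    return {word: e / total for word, e in exps.items()}

def sample_next_word(context):
    probs = softmax(toy_scores(context))
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights)[0]

context = "the cat sat on the"
print(context, sample_next_word(context))
# Usually "mat", sometimes "floor", occasionally "moon": the choice is
# made by probability, and no step checks any facts about the world.
```

Toy as this is, the loop is the whole game: nowhere in it is there a lookup of facts, which matters for what comes next.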
Second, AI literacy means recognizing that these tools have strengths and limitations. This understanding flows from learning about AI’s inner workings. Do that, and you may begin to see why they hallucinate, why they are bad at math, why they so often produce images that reflect racial biases or other existing social inequities – to pick just three examples. You might even start to think that these challenges may not easily be solved in future product iterations, but may instead be inherent to the design of the technology itself.
Third, AI literacy means thinking critically about the broader social and historical context in which AI is situated. Dr. Abeba Birhane, a cognitive scientist who researches the intersection of AI and culture, pulls no punches when she writes:
This requires challenging the mindset that portrays AI with God-like power and as something that exists and learns independent of those that create it. People create, control, and are responsible for any system. For the most part such people consist of a homogeneous group of predominantly white, middle-class males from the Global North. Like any other tool, AI is one that reflects human inconsistencies, limitations, biases, and the political and emotional desires of the individuals behind it and the social and cultural ecology that embed it. Just like a mirror that reflects how society operates – unjust and prejudiced against some individuals and communities.
Grappling with this may be uncomfortable, but it’s vital.
Finally, AI literacy means embracing nuance and humility about the future. As a technology, AI is a remarkable achievement, and lest the tone of this post suggest otherwise, I recognize there are many ways in which it might positively benefit humanity. Might! If the last decade has taught us anything, however, it’s that a seemingly exciting new technology can quickly erode existing institutions and norms – lookin’ at you, social media – and do real damage to our cultural ecology, as Birhane aptly notes. We need to wrestle with this complexity, and not be blind to the downside scenarios.
OK, so that’s my first attempt at sketching out what AI literacy should mean. But this is not what “AI literacy” means to many in education, at least if the website for the recent “National AI Literacy Day” is any indication. That event, put on by a handful of education organizations (with the support of some big corporate donors), appears to embrace AI in education as both desirable and inevitable. Spend some time spelunking on the event’s website and you’ll see what I mean – there’s very little critical thinking of any sort. Instead, “AI literacy” seems to mean figuring out how to write better prompts, perhaps assisted by something called EdBrAIn:
Ah, that quirky humor.
EdBrAIn is ridiculous, of course, but when education leaders say things such as, “the Internet democratized knowledge, AI democratizes expertise,” I find more troubling evidence that meaningful AI literacy is lacking. Let’s leave aside the fact that education-focused chatbots struggle to do simple algebra, or that using chatbots to “supplement” essay writing may supplant the effortful thinking that writing entails…wait, let’s not leave that aside, because those are real problems, and ones we shouldn’t just gloss over.
But just as importantly, this statement reflects a fundamental misunderstanding of what chatbots actually do, and it implicitly diminishes the still-uniquely human capability of applying knowledge, reason, and judgment to novel circumstances. That is typically what we mean by expertise, and AI neither possesses it nor democratizes it in any meaningful way. Rather the opposite. Consider that vocal AI enthusiasts responsible for ChatGPT’s creation now contend that large language models are akin to “dream machines” that make statistical word predictions that only sometimes accord with reality – “hallucinations are a feature, not a bug,” on this view.
Dreams are cool, dreams are interesting, dreams can inspire creativity, but they are not sources of expertise.
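To see why “dream machine” is an apt description, here is a small, self-contained sketch – again with invented, hypothetical word scores – of temperature sampling, the knob that controls how adventurous the statistical predictions are. Note that at no setting does the procedure ever consult reality:

```python
import math
import random

# Self-contained sketch of temperature sampling with invented scores.
# One knob makes the same statistical prediction more or less adventurous;
# no step at any setting checks whether a word is true or apt.

SCORES = {"mat": 4.0, "floor": 2.5, "moon": 0.5}  # hypothetical next-word scores

def sample(scores, temperature):
    # Dividing scores by a higher temperature flattens the distribution,
    # so unlikely words like "moon" get chosen more often.
    exps = {word: math.exp(s / temperature) for word, s in scores.items()}
    total = sum(exps.values())
    words, weights = zip(*((w, e / total) for w, e in exps.items()))
    return random.choices(words, weights=weights)[0]

for temp in (0.2, 1.0, 2.0):
    picks = [sample(SCORES, temp) for _ in range(1000)]
    print(f"temperature {temp}: 'moon' chosen {picks.count('moon')} of 1000 times")
```

Run it and “moon” shows up more often as the temperature rises: plausible-sounding words are promoted or demoted purely by probability, which is precisely why fluency and truth can come apart.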
I hope this all doesn’t feel too esoteric or too removed from practical questions regarding the role that AI should play in our society generally, and education in particular – because the stakes are high. The development of human literacy – meaning, our ability to read and write – is arguably the greatest advancement of the human species, the most important “cognitive gadget” we’ve ever created. If generative AI ends up eroding our collective capabilities in this regard, the impact will be damaging in ways that are hard to fathom.
To be AI literate is to worry about this sometimes. And to want to shape a different future.