AI "agents," man
They are infiltrating education, and we should be concerned

This week, I want to use the launch of the redesigned Cognitive Resonance website to explore the value proposition of AI “agents.” But before we get to that—hey, why not swing by the new Cognitive Resonance website! We’ve got a new tagline—Building Human Knowledge to Halt AI Hype—and more information about our offerings, who we’ve worked with, etc etc. And if you happen to work within an organization that might benefit from better understanding how AI tools actually work, perhaps one interested in using cognitive science as the lens of analysis, please get in touch! I love delivering workshops on this and the feedback has been very positive to date.
Ok, sales pitch over, let’s talk about AI “agents.” I’ve written next to nothing on this particular topic because, well, it’s felt a little beside the point, scientifically speaking. If we manage to develop Artificial General Intelligence—and boy is that a big if!—I think we’d reasonably expect it to be able to accomplish a wide range of complex tasks with only minimal guidance, using a variety of digital tools. Broadly speaking, that’s the essence of general intelligence, or so I think. On this view, AI agents essentially arrive as a downstream effect of achieving AGI.
But there are many other ways we might conceive of AI agents, of course, one being to think of it simply as a product feature. Understood this way, an agent is essentially a customized set of instructions coupled to a specific set of software tools that have been folded into a broader AI system. If your eyes are starting to glaze over, I get it, but that’s kind of my point—on this view of AI “agency,” we aren’t talking about emulating human intelligence, we’re basically talking about how to code something useful. And I submit to you that this is how AI agents, as a practical matter, are cashing out in LLMs today. Just look at OpenAI’s Practical Guide to Building Agents if you don’t believe me, it’s all about how product and engineering teams can build things to improve “workflow automation.”
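To make the "product feature" framing concrete, here's a toy sketch of what such an agent amounts to: customized instructions, a registry of tools, and a loop. Everything here (the llm() stub, the tool names, the TOOL: reply convention) is invented for illustration; a real system would wire the loop to an actual model API, but the shape is the same.

```python
# A minimal sketch of "agent as product feature": instructions + tools + loop.
# The llm() stub stands in for a real model call; names are hypothetical.

from typing import Callable

SYSTEM_PROMPT = "You are a workflow-automation agent. Use tools when needed."

# The "specific set of software tools" folded into the broader system.
TOOLS: dict[str, Callable[[str], str]] = {
    "lookup_order": lambda order_id: f"Order {order_id}: shipped",
    "send_email": lambda body: f"Email queued: {body[:40]}",
}

def llm(transcript: str) -> str:
    """Stand-in for a real model call; returns a canned, deterministic reply."""
    if "Tool lookup_order" in transcript:      # tool result already present
        return "Your order 12345 has shipped."
    return "TOOL:lookup_order:12345"           # otherwise, ask to run a tool

def run_agent(user_message: str, max_steps: int = 5) -> str:
    transcript = f"{SYSTEM_PROMPT}\nUser: {user_message}"
    for _ in range(max_steps):
        reply = llm(transcript)
        if reply.startswith("TOOL:"):          # the model asked to run a tool
            _, name, arg = reply.split(":", 2)
            transcript += f"\nTool {name} -> {TOOLS[name](arg)}"
        else:                                  # a plain answer ends the loop
            return reply
    return "Stopped after max_steps without an answer."

print(run_agent("Where is order 12345?"))
```

That's the whole trick. Useful, perhaps, but there's no "intelligence" in the loop itself; it's plumbing.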
Sure. But also, yawn.
As I was wrapping up work on my website revamp with my wonderful human designer, however, I found myself fantasizing about a very specific AI “agent” that I probably would pay $20 per month for. The information on the Cognitive Resonance site is up-to-date right now, but as new engagements roll in (as I fervently hope they do!), I’d like to be able to quickly update it. If I could just tell an AI model, “hey we used Framer to develop my site, and it’s hosted on Squarespace, please go update it with the logos of all these new clients,” that would be useful to me. (I’m bracketing the many ethical concerns that surround AI for purposes of this thought experiment, don’t kill me.)
But can AI models do this at present? Knowing what I know about them, I doubt it, even with so-called “vibe coding.” What’s more, I’m not even sure how I’d experiment with one to find out, at least not without putting my entire online presence at risk. And sure, perhaps I’m wary because of my general AI skepticism, but please consider this recent and unintentionally hilarious essay by Geoffrey Fowler, a tech columnist for the Washington Post, who claims he’s found “the best new thing” that AI agents can do, and it’s…to quit online subscriptions? Which, in Fowler’s trials, only worked with half of the accounts he wanted to cancel? This is the killer app? Couldn’t we just, you know, pass a law to require tech companies to provide one-button cancellation?
But there are less trivial and more worrying possibilities presented by AI “agents,” and once again education is squarely in the cross-hairs. A great deal of university coursework is now delivered and managed online through learning management systems, which creates fertile and obvious territory for AI “agents” to invade and co-opt. Indeed, Perplexity recently launched a new tool, “Comet AI,” that it’s explicitly marketing to students as a way to do their coursework for them. Searching for product-market fit, Perplexity has settled on helping kids cheat.
This has led to many AI-in-education commentators freaking out, and rightfully so:
Marc Watkins is incensed that Perplexity is callously using “student-aged influencers to portray the most nihilistic depiction of how AI is unfolding in higher education,” and he’s calling for a boycott. I’m in, Marc! But leaving aside the low odds of this working, I suspect we’ll just be playing a game of ed-tech whac-a-mole—for every bad actor we knock down, another will pop up.
Anna Mills echoes Marc’s concerns and issues a plea for AI companies to erect technical barriers between AI agents and education technology. But as Stephen Fitzpatrick notes in the comments, the incentives for tech companies to do this are “not aligned”; I’d say they’re nonexistent so long as institutions of higher education and others fall all over themselves to incorporate AI into the educational process. (If you peruse those comments, you’ll find Stephen pushing back on my quick comment suggesting we consider AI Abolition—I’ll save that idea for a future essay.)
Josh Brake calls for valuing human relationships to mitigate the moral hazards of AI, which is a beautiful sentiment that unfortunately runs into the hard reality that many (most? all?) young students have grown up in a world where their community is largely mediated digitally. Every time I’ve attended a protest this year, I’ve been struck by how the vast majority of attendees are my age or older (I’ll be 49 next month). The idea of earnest discourse through human solidarity is “cringe.” So while I’m with Josh that we need to work to change these norms, time is not on our side.
Tressie McMillan Cottom’s justly famous “AI is a mid technology” essay wasn’t about AI agents specifically, but it’s worth recalling her warning that AI acts as a parasite on robust learning ecosystems and threatens to starve its host. As she shared, “Every day an Internet ad shows me a way that AI can predict my lecture, transcribe my lecture while a student presumably does something other than listen, annotate the lecture, anticipate essay prompts, research questions, test questions and then, finally, write an assigned paper.” This feels particularly apt with AI agents—perhaps we should call them AI parasites instead?
I will close on the same point I’ve been making since Cognitive Resonance launched last year: AI is a tool of cognitive automation. That’s it, that’s its central value proposition. Once we accept this, we can see that AI parasites (née “agents”) are simply part of the continuum of capabilities being built into these tools so that humans can avoid thinking. This, sadly, will be endlessly seductive to students, because the process of effortful thinking is often unpleasant. I refer you again to this essay co-written by William Liang, a high school student in California, for his straightforward description of the reality we face:
For me, William, and my classmates, there’s neither moral hand-wringing nor curiosity about AI as a novelty or a learning aid. For us, it’s simply a tool that enables us not to have to think for ourselves.
Not good!



I share your concern about the drift toward cognitive off-loading, though in my world agents mean something more grounded. At work I sit firmly in that yawn: trying to leverage AI to code something useful. “Agents,” to me, are useful encapsulations of logic and reason, built around instructions, memory, and access to highly specific, purpose-built tools, preferably always with a human in the loop. They help draft code or transcribe meetings, not to replace judgment but to extend it: relieving people of tedious tasks while amplifying their own agency, so they can focus on what they do best.
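To make that concrete, here's a toy sketch of the human-in-the-loop pattern I mean: the agent proposes a tool call, and a person approves or declines it before anything runs. All the names here are invented for illustration, not taken from any particular framework.

```python
# A sketch of human-in-the-loop gating: the agent proposes, a person disposes.
# Names (ProposedAction, transcribe_meeting, etc.) are hypothetical.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    tool: str      # which purpose-built tool the agent wants to run
    argument: str  # what it wants to run it with

def execute(action: ProposedAction) -> str:
    # Stand-in for actually invoking the tool.
    return f"ran {action.tool}({action.argument!r})"

def run_with_approval(action: ProposedAction) -> str:
    # The human gate: nothing executes without an explicit yes.
    answer = input(f"Agent wants {action.tool}({action.argument!r}). Allow? [y/N] ")
    if answer.strip().lower() == "y":
        return execute(action)
    return "skipped: human declined"

print(run_with_approval(ProposedAction("transcribe_meeting", "standup.wav")))
```

The approval prompt is the whole point: the tool extends someone's reach without ever acting on their behalf unseen.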
Where I fully agree is in education: we have to protect the habit of effortful thought. Yet I’ve also seen how interactive AI can speed real learning when curiosity is what drives it. I know that from personal experience. For me the difference comes down to intent—learning with these systems, not through them. Your piece sharpened that line for me, and reminded me why it matters to keep it clear.