A general understanding of the human
We've got classroom examples, we've got data, we've got movie reviews

This week is a grab-bag of thoughts on the definition of “conceptual understanding,” recent data on AI and tech usage in schools, and reviews of the recent Frankenstein movie and the not-so-recent Gattaca. So feel free to—cue House of Pain—jump around!1
Teaching for conceptual understanding
The education space is riddled with something I’ve long called “head nodder” phrases, pithy little statements that everyone agrees we should be pursuing with students, yet that often lack an agreed-upon definition, much less a method of ensuring they are properly learned. Critical thinking! [Heads nod] Growth mindsets! [Heads nod] Creative collaboration! [Heads nod] It all sounds good until you dive into the details and find that people lack a shared mental model of what these phrases really mean.
“Conceptual understanding” is another potential head nodder, but last week my friend and teacher extraordinaire Michael Pershan wrote a great essay on making this idea more concrete—check it out here. His central proposition is that what we really mean by “conceptual understanding” is that students know “true and useful generalizations.” (He adds “about mathematics” because he’s a math teacher, but I’d say it applies to other subjects as well.) And what I so often love about Pershan’s essays is how he uses real-world examples from his teaching experience to explain what he’s getting at. Consider the following task:
This is a neat little geometry problem, and here’s how it played out one day in Mr. Pershan’s classroom:
When I asked my 7th Graders to answer this, one student pointed out 100% correctly that since it’s four units up from E to D, the diagonal has to be longer than that.
I was about to move on, when I caught myself. I turned back to the kid. “So is the diagonal always going to be more than the vertical distance? Why would that be?”
I’m glad I pushed, because my students responded with two smart generally true answers:
Yes, the diagonal is always longer than the horizontal or vertical distance, because when going diagonally you’re going in two directions, not just one.
The Pythagorean Theorem says the diagonal length is found by adding the squares of the horizontal and vertical distances, and that’s always going to be bigger than just the vertical distance squared.
The point of an explanation isn’t just to eliminate doubt—it’s about connecting this particular situation to some generally true fact about the mathematical world.
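The students’ first answer can even be checked numerically. Here’s a tiny Python sketch (my own illustration, not from Pershan’s essay) confirming that the diagonal always beats either leg alone:

```python
import math

def diagonal(run: float, rise: float) -> float:
    """Diagonal length for a given horizontal run and vertical rise."""
    return math.hypot(run, rise)  # sqrt(run**2 + rise**2)

# The generalization: whenever both distances are nonzero, the
# diagonal is strictly longer than the horizontal or vertical
# distance by itself.
for run, rise in [(3, 4), (1, 1), (2, 7), (0.5, 10)]:
    d = diagonal(run, rise)
    assert d > run and d > rise  # holds for every positive pair
    print(f"run {run}, rise {rise} -> diagonal {d:.3f}")
```

Not a proof, of course—that’s what the Pythagorean Theorem is for—but a nice way to see the general fact holding across cases.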
What I hope you notice is the pedagogical move Mr. Pershan used here to build conceptual understanding with his students. The probing question he asked was vital to learning something general, something that goes beyond just getting the right answer. Curriculum cannot teach itself, and the best teaching involves pushing toward this general understanding of a broader principle. (If you think I’m implicitly throwing shade at AI “tutors” here, you’d think right.)
Over in cognitive-science world, the word used to describe applying general knowledge to solve novel problems is transfer. Grant Wiggins, the co-creator of Understanding by Design (who passed away several years ago), described transfer as the central goal of education: the capacity to apply knowledge in new contexts. And while I yield to no one in my advocacy for cognitive science in education, I think my fellow cog-sci enthusiasts sometimes fail to appreciate the vital role that transfer plays in our cognition. As Pershan concludes:
We all want generalization. We want to see how it all fits together. We want to know how a particular idea flows from something bigger. We want to know how to do things—we also want to know things. We want the whole picture. And when we get it—whether we land on it on our own or someone makes us think of it with words or a picture or anything else—the feeling is terrific, like it all finally makes sense.
Yep.
Data on digital device and AI usage in US schools

I miss Dan Meyer. Don’t worry, he’s still around, but he paused his newsletter back in April. That’s been a loss on a number of fronts, but I counted on Dan to keep me informed of the latest data on actual AI usage in schools. Come back to us, Dan!
I’m not going to fill the Meyerian void on the regular, but a recent New York Times story about how technology is being used in US schools caught my eye. The main reason: the reporters actually surveyed teachers—350 of ‘em—rather than relying on one or two anecdotal quotes. To be sure, this is far from a scientific approach to polling, but we try to figure out what’s happening in our education system with the data we have, not necessarily the data we want.2
Here’s what they found:
99% of teachers report that their students are provided digital devices in school
81% of elementary school teachers said this was true even in kindergarten
70% of the teachers who use digital devices in their classroom said they distract from schoolwork, with 36% saying “a lot” and 34% saying “a little”
70% of teachers nonetheless said they would continue to use digital devices at least some of the time
~66% said student work time on devices had increased since the pandemic
64% said digital devices are used in class for standardized testing
40% of middle and high school teachers said students spend three or more hours per day on digital devices (!!!)
And a whopping…6% of teachers…said students use their digital devices for school-approved use of AI (whereas 29% said students are using AI for “non-school activities”)
There’s obviously much we don’t know here, but it’s nonetheless very interesting to me that despite the saturation of schooling with digital devices, students don’t seem to be using AI all that frequently. Is that because schools are prohibiting them from doing so? Maybe; we don’t know. Is it perhaps because kids don’t like learning by typing things into a box and having a non-human thing generate the statistically most probable text back at them? Maybe, but again we don’t know. What I think we might very tentatively conclude from this data, however, is that there’s a substantial gap between the enthusiasm “in the discourse” around AI’s potential to transform the education system and the reality on the ground. Warrants mentioning.
The article is also replete with juicy quotes from teachers that I ruefully enjoyed reading—“Kids just want to use A.I. for everything. SO MUCH CHEATING!” says one high school teacher in Texas—but then there’s this glaring bit of stupidity:
Andy Russell, a product manager at Google who oversees Chromebooks in schools, envisions them ushering in a new era of education—one in which teachers aren’t lecturing in front of the room, but rather acting as facilitators to students learning on computers and using online tools to creatively show what they’ve learned.
“So much of what teaching is today is not the when and the what, like I learned; it’s a lot more of the how—this is how we make a video, this is how we build infographics,” he said.
How many times must a man cite cognitive science, before they will call him a man? How many times must we hear this tired cliché rolled out in support of education technology, a cliché that contradicts the very basic and frankly scientifically indisputable proposition that our ability to understand new ideas depends on the knowledge we have in our heads, what we might call “the when and the what” of basic facts? And while I’m ranting, just how blinkered is Google’s vision of education, or at least Andy Russell’s vision of education, if it’s about making videos and building f’ing infographics? Schools do not exist to create content for YouTube, my dude.
I really think a backlash is growing to all this. I think there are many parents, myself included, who do not want tech-saturated schooling for their kids. There certainly are some in Malden, Massachusetts—check out their letter demanding their local district ban AI. And I’ve started to fantasize about what the alternative might look like. That’s a teaser for a future essay. Maybe.
Frankenstein and Gattaca
Ok, time for two movie mini-reviews. Let’s get weird.
I confess I’ve never read Mary Shelley’s Frankenstein. What’s more, until recently I’d never even watched a movie rendition. I am a sentient human, so of course I’ve been dimly familiar with the basic contours of the tale—crazy scientist creates artificial life by lighting up a blockheaded-looking creature, it gets angry and marches around with its arms out—but this has been a gap in my education.
Into that breach stepped Guillermo del Toro last week with his new version of Frankenstein, now streaming on Netflix. I can’t say I’m a devoted fan of del Toro’s films; I’m not really a goth or horror guy. I did see Pan’s Labyrinth in the theater decades ago, but all I remember is that weird creature with its eyes in its hands (see above). That said, more recently I’ve appreciated del Toro becoming a very vocal anti-AI advocate who’s been leading film festival crowds in chants of “FUCK AI.” Right on, brother.
But how’s the film? Well…it’s visually stunning, for sure, but I found the first hour pretty ponderous. Victor Frankenstein, the mad scientist, is played by Oscar Isaac—the guy who played Poe Whatever in the newer Star Wars films—and he plays him pretty f’ing mad. Too mad, in my view; it’s a bit over the top. There is, however, one brilliant scene in the first act, where we see Frankenstein prototyping the creature he plans to create, which del Toro breaks down in detail here. I think it’s telling that he eschewed CGI in favor of actual puppets, actual physical things.
It’s in the second act, when The Creature becomes animated (alive?), that the movie takes off. Jacob Elordi portrays The Creature with such beautiful (and human?) sensitivity, it was impossible to watch without thinking about The Big Important Questions about what it would mean to be something brought into the world with no history, with no like companions, a being of almost unimaginable solitude. Some subsequent investigation reveals that in the novel, The Creature demands that Frankenstein make him a companion, but sadly del Toro chose not to emphasize this in his telling of the story; instead, a relationship of sorts develops with a human woman, Elizabeth, that feels a bit forced. Still, the core exploration of what makes us human, what animates our wants and desires, what leads to love, and what happens when this is denied—these feel like the right questions to be asking right now, do they not?
What makes us human sits at the center of another great movie I re-watched recently, Gattaca, albeit viewed through a very different lens. I remembered loving this film when it was released in 1997, and it holds up. It also makes for a worthy companion watch to Frankenstein, for reasons I’ll try to explain.
You probably know this: Gattaca, directed by Andrew Niccol, is set in a dystopian eugenicist future where one’s life prospects are entirely dictated by one’s genes. The genetically “superior,” who have been bred to be that way, live a life of comfort and purpose. The genetically inferior get to clean up after them, literally. The story centers around Vincent Freeman, played by Ethan Hawke, classified as an “in-valid” (get it?) at birth due to his genetic defects, and Jerome Morrow, played by Jude Law, who in contrast is genetically near-perfect—but has been paralyzed by an accident. In order to get into the elite space program, Hawke’s Vincent must pretend to be Law’s Jerome, going through a series of contorted routines to borrow Jerome’s blood, his hair, and so on, in order to pass as someone genetically fit. In other words, he must borrow another human body. The scenes between Hawke and Law are brilliant; it’s really a love story in some ways (if only they’d gone that far!), and the retro-futuristic film noir cinematography endures beautifully.
Gattaca feels like a time capsule we need to reopen. When the movie was released, The Human Genome Project was in full swing, cloning featured heavily in our monocultural discourse (baa, says Dolly), and it seemed we were on the verge of being able to identify the influence of genes across all our behavioral traits. Our capacity to become ourselves would be mapped from the beginning. Gattaca of course is a warning against that future, a testament to the horrific dangers of denying our essential humanity through deterministic science.
The future the film imagines did not come to pass in the short term, but nowadays? “Corporate eugenic companies,” in the words of Eric Turkheimer, are all the rage in Silicon Valley, funded by the same tech billionaires pursuing AI at all costs. They are pursuing the reduction of our human bodies to genetic programming just as rapidly as they are pursuing the reduction of our human cognition to statistical autoregression.
You can connect these dots, I hope, and we’ve covered this ground before. So maybe there is a throughline to this week’s essay. Maybe we need to build greater conceptual understanding of why our essential humanity is worth preserving so that we are better equipped to fight the mounting efforts to degrade it.
The artists are warning us!
Ever wonder what makes that high-pitched squeal at the beginning of the track, the one we’ve been hearing ad nauseam at sporting events for three decades now? It’s a saxophone.
Here’s all they disclose about the methodology:
The Times’s survey was circulated to members of the American Federation of Teachers, a union; Educators for Excellence, a teacher-led advocacy group; Teach for America’s alumni group; and teachers’ Facebook groups. Responses were recorded from Oct. 6 to Oct. 25. The results are not a statistical sample of all U.S. schools.
The 350 teachers who responded taught in 40 states, Puerto Rico, the Virgin Islands and Washington, D.C. Roughly 60 percent taught in urban schools, more than the national share. Thirty percent were from suburban schools and 10 percent from rural ones. Two-thirds said they worked in low-income schools that receive federal Title I funding, similar to the share of those schools nationwide.





"I really think a backlash is growing to all this."
You are definitely right, Ben. I'm conducting research for Ed3 DAO to find out how teachers are actually using GenAI -- the good, the bad, and the otherwise whatever -- in their professional lives, with students or without.
A signal that keeps emerging is:
A) Many KIDS are asking teachers to get them off the computer. This is mostly in HS. They know about the negative impacts of screen time and are saying "For chrissakes, can we do something OFF the computer?" This is happening in Higher Ed too in my experience, though it's a bit more complicated because students are thinking about the job market.
B) In the lower grades, it's not the kids but the parents who are looking for schools that DON'T offer 1:1 devices. My wife was a K teacher last year. The school had iPads for every kid. She was not a fan, but that's beside the point. When parents came in to tour the classroom, they tensed up when they saw the iPads in the corner of the room. The school told her -- back away from using the iPads, parents don't want them. They gathered dust all year.
I expect this backlash to continue. As you know, I still think "AI literacy" is a real thing that we should teach to students. But I see it as a once-a-semester experience with meaningful lead-up time, analysis, and preparation. It's not a tutor, it's a test - and we only give tests once or twice a semester anyway. Shifting the perception in this direction is my personal crusade.
In that vein, I think it would be possible to thread the needle where we keep kids off screens and still give them valuable conceptual understanding of the risks, benefits, and limitations of the technology itself.
Thanks so much for the math lesson; it was a perfect example of transfer for me, teaching life drawing in college. The simple diagram you shared is a clear explanation for the difficulty of drawing space for art students. Many of my students chose art because of how math was taught and don’t understand that the principles of geometry are central to drawing. Learning should be porous, and you’re right that the A.I. tutors will fail with flexibility and divergence, as they are task-driven and not context-aware.