22 Comments

The cognitive model you describe is Cartesian, and it was subject to numerous philosophical critiques throughout the 20th century, though without much apparent impact. Cartesian approaches in neuroscience, cognitive science, and AI trundle on despite not having much in the way of credible responses to their various well-documented failings. Hubert Dreyfus joked that cognitive science and AI bought a philosophical lemon just as philosophers were dumping it. Yet current discussions of AI make very little reference to these critiques. There appears to have been much more of a debate on these matters in the 1980s and 1990s, and I am not sure why that is.

I had not encountered Paul Cisek's work until your post, so I'm not sure what to make of it. I'd be interested to know more about how he connects his work to Dewey and Vygotsky. I have added a link to a paper by Rom Harré on Vygotsky and AI. The title is a bit misleading; it's one of several critiques he discusses. As you are citing neuroscientists as an antidote to irrational AI exuberance, another work, which I have not read but which is often cited in this context, is Bennett and Hacker's book attacking Cartesianism in modern cognitive neuroscience.

Bennett, M. R., and P. M. S. Hacker. Philosophical Foundations of Neuroscience. 2nd edition. Wiley-Blackwell, 2022.

https://www.wiley.com/en-us/Philosophical+Foundations+of+Neuroscience%2C+2nd+Edition-p-9781119530978

Harré, Rom. “Vigotsky and Artificial Intelligence: What Could Cognitive Psychology Possibly Be About?” Midwest Studies in Philosophy 15, no. 1 (1990): 389–99. https://doi.org/10.1111/j.1475-4975.1990.tb00224.x.

Thanks for the comment Alan, and the references. Cisek agrees with you that the cognitive model is Cartesian, and this in general seems to be one of the main complaints of those who like the idea of "embodied cognition" -- I'm thinking of Annie Murphy Paul's book The Extended Mind, where she really unloads on poor René.

I must confess I don't share the sentiment that by believing that something is happening mentally in our brains, I'm therefore embracing a mind-body dualism that separates the two. Some of this may be touched on in the essay I may or may not write next week, tentatively titled "Why not both?" Stay tuned.

I think there is something happening in our brains -- I am not sure that is in question -- but just what exactly it is and how it relates to everything else is the question.

In “Twenty-Five Theses against Cognitivism,” Jeff Coulter writes: "No one in his right mind thinks that brains can ride bicycles or drive cars. These are things that (able) people can do. However, the projection onto brains of person-level capacities and activities (viz., their personification) is a central feature of much cognitivist theorizing, subserving the computational conception of brains as physical-symbol manipulators and information-processors. Brain functions properly described in the logically appropriate terms of biochemistry, electrophysiology and anatomy do indeed facilitate persons’ doings and accomplishments, but they do not engage in parallel ‘activities’ of, for example, parsing utterances, contextualizing behaviors, following rules, calculating distances and the myriad phenomena theoretically ascribed to them in a host of cognitivist models. Such purely conjectural attributions to brains and central nervous systems are made in complete independence of their actually satisfying the logical criteria for such ascriptions." (https://doi.org/10.1177/0263276407086789)

What Coulter is discussing is what Peter Hacker calls the mereological mistake. He discusses this in this video: https://www.youtube.com/watch?v=EMcmQPdi0Fs?t=240s

Thanks for this pointer to Cisek. And for the Dewey footnote.

It sounds like Cisek is retelling the story of how the human mind works that you'll find in William James's Principles of Psychology and in what gets called the Chicago School. Not the one with economists in the 1950s. The one with social scientists, including Dewey, in the 1890s.

The idea of a stream of consciousness and the notion that language is a process or technology that humans use to adapt to their environment were new, post-Darwinian ideas when James pulled it all together. Ben Breen, who writes the Res Obscura newsletter, is writing what sounds like an amazing book about how James's approach to studying the mind lost out to people like Galton, who preferred to measure things as precisely as possible and speculate from there.

As someone who would desperately love to see these ideas revived outside the weird group of historians and an even smaller group of philosophers who think about this stuff, I'm thrilled that there is someone I can point to who speaks the language of twenty-first-century neuroscience.

Always excited when you comment, Rob. What's interesting is that Cisek actually contrasts Dewey with James in another lecture. I'm saving exploration of that for a future post! You might need to be involved. :)

The great Richard Bernstein said that what pragmatists are up to is not like the idealized sort of conversation philosophers imagine as philosophy. Pragmatism is "a conversation more like the type that occurs at New York dinner parties where there are misunderstandings, speaking at cross-purposes, conflicts, and contradictions, with personalized voices stressing different points of view (and sometimes talking at the same time)." That is what is so great about it, and why it resists being turned into a movement or school.

Can't wait to see what you're going to add to the conversation.

This essay by cognitive scientist Epstein is a good complement to this article and its argument. https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer Essentially, mechanical materialism has dominated our worldview and thus science and AI development. The world is a machine. Humans too. Language too. Just figure out the rules, how the gears and interactions work, and you can reproduce the world, be G_d. Everything is just material, particles bumping in space following immutable, unchanging laws. Purposeless. See Rupert Sheldrake's The Science Delusion.

Thanks for the thoughtful comment, David. My quick thoughts, some of which may work their way into next week's newsletter:

I read this article from Epstein years ago and I continue to find it lacking. For one thing, he claims that metaphors are "just a story we tell to make sense of something we don’t actually understand." This suggests that there's some vocabulary we can arrive upon that will accurately reveal what we "actually understand." Richard Rorty described this as the correspondence theory of truth, the mirror of nature...and I think he successfully eviscerated it as being impossible to ever achieve, because what we consider "true" about the world, what we think we "actually understand," will always be contingent on our current descriptions of the world, which will always be grounded in language. Metaphors, on this view, are not what we use to cover for that which we find mystifying, but rather the way in which we expand or transform our vocabulary when we find the current one is no longer sufficient.

This means that when Epstein points to metaphors about thinking and behavior from the past that are now discarded, it's not the "gotcha!" he thinks it is. A pragmatist is happy to say, yes, of course, and guess what, our current description of the world will one day get discarded or redescribed too. Every useful metaphor eventually becomes our common sense, until new metaphors come along. That's not a flaw; that's the beauty of human culture. (More here: https://buildcognitiveresonance.substack.com/p/there-is-no-artificial-irony)

Along these same lines, Epstein is playing pretty loose with history when he suggests that computers just kind of appeared in the 1940s and that this in turn played the causal role in creating the dominant cognitive metaphor of the mind as information processor. We might flip this around and instead say that computers appeared largely as a result of Alan Turing and Claude Shannon and others conceiving of the mind as an information processor, and then figuring out how to emulate that process digitally (or "artificially"). And lo, it worked! What's more, and this may make some of my readers shudder, the same might be said of large language models today -- Hinton, Bengio, LeCun and others were convinced that the networked neurons in our minds could be emulated digitally. And lo, this works too!

This doesn't mean that what these tools are doing is the same as what's happening in our minds. What it does suggest, I think, is that these metaphors have proven extraordinarily useful to us in many respects. Of course, not always -- hence the famous catching-a-baseball example.

And this brings me back to Cisek. I think the story he's told, the behavioral control model, is useful in describing animal behavior of the most fundamental sort. But as I note at the end of my piece, it seems incomplete insofar as it may leave out the hugely important role that culture plays in shaping the mechanisms of the mind. I think that's the bit that "embodied cognition" really misses.

I leave you with this: Why not both?

As someone with roots in anthropology, I think culture is vastly underappreciated as a driver; many smart scientists are even recognizing epigenetic inheritance, once a complete taboo since Mendel. We still don't know what triggers "form" - why things appear as they do ... There may be a lot of things to challenge in Epstein's article/argument, no doubt. But as a clear-eyed take on how ed-tech and tech in general think, he does a good job communicating the essential Cartesian worldview governing science and passed down to us laity. And it is completely wrong, this cause-effect, input-output, processed-banking, data-laden model of human thought. I'm on the side of Lu Chen, spouse of a Nobel Laureate and a remarkable thinker at Stanford in her own right: "We are still in need of an understanding of the fundamentals." We have too much hubris. Remember the Human Genome Project? So invested in, so hyped. It would map everything, and once we knew all the genetic instructions we'd live forever, have no disease. How did this work out? And people made careers on this dead end, this false knowledge about genetics. AI will be the same, trust me. I am convinced more needs to be found out through field theory; there is a lot more going on. The brain IS a receiver, not a repository. There is so much beyond our senses, beyond current science, going on. https://www.scientificamerican.com/article/revolution-postponed/

It's funny, I was doing some classroom planning today and thinking about how I might get my Year 12 students, in particular, to think about what learning is. I'm fairly confident no one has done this. The following is part of where I got to, and your article took me back to how I was grappling with the cognitive function that is learning. Turns out, and this is what interested me most, that experience of the environment is a significant driver. Here's the guts of what I'm going to get them to work over; perhaps they'll learn something.

Learning can be defined as “a relatively permanent change in behaviour based on an individual's interactional experience with [their] environment” (Hansbøl et al., 2016).

Breaking this down, we can see that change is expected; that that change affects behaviour; that it is individual, because it is based on an experience that causes the changed behaviour; and that it has to do with the environment the person is in - in other words, there are complex influences from multiple contexts.

Learning is individual. In the context of the class:

1. The individuals doing the learning,

2. Why they are learning - what behavioural change is expected,

3. What is being learned - the learning intentions and the success criteria that indicate learning has taken place,

4. How they learn.

(Engeström, 2001).

From what Paul Ramsden says, we can expect that the change in behaviour is a visible account that illustrates where students “expand and test knowledge” (cited in Laurillard, 2012, p. 11).

“The goal of education is better conceived as helping students to develop the intellectual tools and learning strategies needed to acquire the knowledge that allows people to think productively about history, science and technology, social phenomena, mathematics, and the arts” (Bransford, Brown, and Cocking, cited in Laurillard, 2012, p. 25).

.....

Because LLMs are not sensory, and because, as you point out, cultures and social structures contribute to cognitive development, it's hard to see how they can in fact 'learn'. They haven't really learnt how to execute advanced mathematical formulae and string sentences together; they have been programmed to do that, albeit at a highly advanced level.

Thanks for the stimulating article.

K

Thanks for this thoughtful comment Dr. Price, lots to think about here. I'm not familiar with Hansbøl, but I'm a little nervous about defining learning as exclusively (?) a change in behavior -- this of course was B.F. Skinner's claim, at least to my understanding. This may be tautological, but I'm inclined to think of learning as "an enduring change to one's cognitive capabilities." (Riley, just now.)

Delightful post;

The contrast between 'classical AI' as serial input-compute-output and Cisek's model of continuous parallel circuits is exactly paralleled in video game AI development from the '80s into the '90s, as programmers were afforded more RAM & processing cycles to improve a game or simulation AI's immersion. With many 2D RPG or action games, we used simple 'finite-state machines' acting in serial, with inputs such as 'how far is the player, how much health do I have' and outputs of 'attack player or pathfind to hiding place' for an enemy guard AI. We transitioned to having tiers of fuzzy-state machines with probabilities and many inputs, such as sounds (can the AI guard hear the player nearby?), sights (is the line of sight visible or blocked?), and daily plans (get food, sleep, converse with other guards, etc.).
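To make that concrete for readers who haven't built game AI, here's a minimal sketch of the serial finite-state-machine style described above; the state names, thresholds, and actions are invented for illustration, not taken from any shipped game:

```python
from enum import Enum, auto

class GuardState(Enum):
    PATROL = auto()
    ATTACK = auto()
    FLEE = auto()

def guard_update(state, distance_to_player, health):
    """One serial tick: read the inputs, pick the next state, emit a single action."""
    if health < 20:
        state = GuardState.FLEE        # low health overrides everything else
    elif distance_to_player < 10:
        state = GuardState.ATTACK      # player is close enough to engage
    else:
        state = GuardState.PATROL      # nothing interesting: walk the route

    action = {
        GuardState.PATROL: "follow patrol waypoints",
        GuardState.ATTACK: "attack player",
        GuardState.FLEE:   "pathfind to hiding place",
    }[state]
    return state, action

# Example tick: the player is nearby and the guard is healthy, so it attacks.
print(guard_update(GuardState.PATROL, distance_to_player=8, health=75))
```

The fuzzy, parallel version replaces those hard thresholds with weighted, probabilistic inputs evaluated every frame, which is what improved the sense of immersion.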

The notion of 'intelligence', as shown repeatedly in science fiction 'literature', always requires 'wants', whether it's basic drives such as sleep-eat-reproduce in 'Ultima V' or complex scores such as 'feeling-liked via attendance-attention vs gifts' in 'The Sims'.
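Those 'wants' can be sketched just as minimally as a drive-scoring loop; the drive names, numbers, and action effects below are invented for illustration and aren't from Ultima V or The Sims:

```python
drives = {"sleep": 0.2, "eat": 0.7, "socialize": 0.5}   # 0 = satisfied, 1 = urgent

actions = {
    "nap":       {"sleep": -0.6},
    "cook meal": {"eat": -0.8, "sleep": +0.1},
    "chat":      {"socialize": -0.5, "eat": +0.1},
}

def pick_action(drives, actions):
    """Choose the action that leaves the least total unmet 'want' across all drives."""
    def total_after(effects):
        return sum(max(0.0, min(1.0, drives[d] + effects.get(d, 0.0))) for d in drives)
    return min(actions, key=lambda name: total_after(actions[name]))

print(pick_action(drives, actions))   # with these numbers, "cook meal" wins
```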

The same industry that birthed GPUs, now marketed more for AI than gaming, is where most 'classical AI' in commercial markets has evolved and continues to explore the 'trickery/Chinese-room' aspects of AI versus the utility/user-benefit side.

Wonder why the gulf between academia/published-papers and video-games continues to widen despite such strong abstract & concrete overlap?

Really appreciate the neuroscience-supported claim that language is not thinking, which resonates with me as "expressions, whether spoken language, body gestures, sounds, etc., are not 'meaning'". We continue to attach hard meanings to expressions like language, and then go on to subvert them, evolve them, and shift them in a given context or intent (sarcasm, irony, twist, disengagement, etc.). Would love to live in a world where all 'expressions', as some form of communication, are annotated with 'here are the intended meanings behind all the words/movements/colors/sounds/etc.'

Hope you choose to write more on culture-cultivated mind-models, as there is deep evidence for such systems and there are strong claims made about them.

Thanks for this thoughtful comment, and that's a really interesting point you make about early-era video-game development in some sense foreshadowing what's happened with AI. There's a very nerdy indie film (or prestige TV series) to be made about Zork, or so I think!

And if I do my homework, you should have an essay on "culture-cultivated mind-models" soon! DeepSeek gazumped my plans to write a follow-up on Cisek this week, but soon, soon.

Applause on such future culture-cultivated plans;

& charming to learn 'Zork' being filmed? Infocom were pioneers in entertainment, player agency, and the illusion of a Wizard of Oz (Zork?) behind it all. Sierra and later LucasArts built on their foundation in visually seminal ways; praise to Steve Meretzky & his gang for their cleverness (authentically nice guy to work with, too).

Thanks again & good hunting (thinking?) on upcoming shared-thoughts;

LLMs also lack interiority

As a veteran high school teacher (19 yrs) who has also spent 15 yrs in the tech sector, I have been shocked by the aggressive push to place GenAI into edu. My shock fairly quickly turned into "no duh" when I realized all of this is ordinary and well aligned with Capitalism and the political purpose of state-sponsored edu (Raised to Obey, Agustina Paglayan). I say this because it is essential to understand the political purpose of education - social control - in writing about education. Without that, it is, IMHO, impossible to understand why so many are pushing GenAI with little more than FOMO to justify their efforts.

That said, I perked up when I saw explore/exploit in one of the Cisek diagrams. This pair has driven a lot of the ideas behind my pedagogy. In formal education beyond the first few grades we have largely given up on "Explore" and have inserted lecture and what Freire calls "banking," or information deposits. The trade-off between explore and exploit and the development of insight and intuition are the external attributes of learning that I think are the most profitable to focus on. AND it is critical, IMHO, that we NOT use the simplistic input --> output model of cognition.

Studying Explore/Exploit in children has led to breakthroughs in robotics, but we haven't bothered trying to use that research to help in the development of humans.
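For anyone who hasn't met the explore/exploit trade-off outside Cisek's diagrams, here is a minimal epsilon-greedy sketch of it; the numbers and the "activities" are invented for illustration and aren't from Cisek or the robotics work:

```python
import random

def epsilon_greedy_choice(estimated_values, epsilon=0.1):
    """With probability epsilon, explore a random option; otherwise exploit the best-known one."""
    if random.random() < epsilon:
        return random.randrange(len(estimated_values))                           # explore
    return max(range(len(estimated_values)), key=lambda i: estimated_values[i])  # exploit

# Toy run: three activities whose payoff estimates are learned from experience.
values, counts = [0.0, 0.0, 0.0], [0, 0, 0]
true_payoffs = [0.2, 0.5, 0.8]                     # hidden from the learner
for _ in range(1000):
    i = epsilon_greedy_choice(values, epsilon=0.2)
    reward = 1.0 if random.random() < true_payoffs[i] else 0.0
    counts[i] += 1
    values[i] += (reward - values[i]) / counts[i]  # running-average update
print([round(v, 2) for v in values])               # estimates drift toward the true payoffs
```

Too small an epsilon and the learner locks in early on whatever it tried first; too large and it never exploits what it has learned.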

I appreciate the comment, but I again must disagree with the claim that the essential political purpose of education is "social control." Such a sentiment simply cannot be squared with the long history of *denying* education to certain groups in order to control them. Frederick Douglass understood his path to liberation depended upon literacy when his male owner told his wife to stop teaching him (Douglass) to read, as it would "forever unfit" Douglass to be a slave. Education is liberatory, and public education, the idea that all children are entitled as a matter of right to be educated, arose in tandem with democracy. Both ideas are worth fighting for.

As Paglayan discusses in her book, there are three ways to control a population: repression, concession and indoctrination. Slavery was the first. State sponsored education is focused on indoctrination. Douglass believed in being educated. He was famously self-educated and later advocated for an end to segregation in schools and for universal free education.

Horace Mann, Frederick's contemporary and a staunch abolitionist, believed that the purpose of education was to keep the population from taking up arms. Mann copied the public education system from an autocratic Prussia. While his goal was not autocracy, he was very much interested in social control. At the end of slavery, like many white abolitionists, Mann was concerned about the black population rebelling.

There were a few exchanges between Mann and Douglass in the 1850s that I found, but the language is too difficult for me to understand the nuances.

Education is not liberatory by default, and to suggest that it is would be ahistorical. Democratic nations can and have used education as a way to indoctrinate students, especially low-income students, into Democracy without practicing any of the elements of Democracy in the process. This is well documented as far back as Dewey and more recently by Z. Hammond and Larry Cuban, and others in between. Liberatory education, as Freire understood it, has very specific goals related to student agency and the ability to overcome barriers. This is a far cry from anything that education as a national system has ever intended for poor children.

"There’s a reason we’re the only people in the history of the United States for whom it was ever illegal to learn to read and write, because we know — right? — that education leads to liberation. And you can’t keep a people down who understand their history." -- Nikole Hannah-Jones

Nikole Hannah-Jones has a rich history of critical assessments of American education as it is practiced in reality. Yes, learning is the path to liberation, but quoting Hannah-Jones as an advocate for how poor kids are educated in America is like saying MLK just wanted us all to get along.

I think Cisek points to brain development creating changes in behaviour, thus evolution occurs.

And we measure behavioural change in our students, based on how they progress in what they know, how well they are able to do certain things, and how they apply their practice to different experiences.

Behavioural change also impacts the learning environment, so new experiences emerge.

I'm not wed to the concept, but I think the symbiotic nature of it and your observation of cognitive evolution make for interesting bed partners.

K

Thanks for the comment. I think my follow-up essay may touch on the idea you posit here, with an emphasis on *may*. Stay tuned!
