I like the conversational form here quite a bit, and what a lovely collection of ideas this essay bundles.
One insight I take away is how important context is to expert knowledge. As the anecdote about the diagnosis shows, the experience of solving a complex problem can be broken down into parts, but actually doing the work is a stream.
That seems true in the classroom--I'm thinking of Dan Meyer's great posts about how effective teachers teach--where the teacher takes in the context as a whole and uses contextual awareness to inform the observable act of engaging a learner, just as the doctor did in Bror's example. Breaking that process into steps is instructive, but simply understanding the discrete moves does not make you a good diagnostician. It requires an active mind engaged in evaluating a flow of information.
Our habit of breaking things apart and understanding and measuring them in parts has led us to understand a great deal about the world, but solving problems, either small or large, sometimes requires holistic approaches.
As always, well said, Rob. Many activities we label as "professional" involve a great deal of tacit knowledge, meaning that professionals know how to do them but may not always be able to explain how or why (because of the complexity). The same is true for riding a bike.
On the history of how human cognition changed with the rise of reading and writing, one older but still useful starting point is Walter J. Ong's *Orality and Literacy: The Technologizing of the Word* (1982). A lot of work on orality and textuality has been done since then, especially in medieval studies (or at least that's what I'm most familiar with). M. T. Clanchy's *From Memory to Written Record: England 1066-1307* (1979) is one of the foundational studies in that field.
Useful pointers, thank you! After this dialogue with Bror took place, I ended up reading *Origins of the Modern Mind* by Merlin Donald, which explores this topic on an anthropological time scale.
When I started reading this piece I thought it was going to go a particular way, but it then veered off and did something quite different.
Another way of looking at this is that it's a mistake to think that all knowledge is symbolic, or could potentially be captured in some symbolic form involving symbols and rules in the brain/mind. This is what Peter Hacker would label a mereological fallacy. A lot of AI and cognitive science takes Cartesian metaphysics, the binary inner/outer problematic, as a background assumption. But knowledge is not resident in our heads. We have bodies, we live in a world, we are social. Language is a social institution. Its meaning is in its use within a 'form of life'. If you really wanted to understand knowledge, you'd look at communities and practice (see T. S. Kuhn, The Structure of Scientific Revolutions, 2nd edition, pp. 44-46).
Bror: "Note that over time, human expertise does get turned into non-verbal, tacit expertise, so that the expert may have some extra effort to come up with the reasoning, but the way it got laid done in the first place was through the reasoning process, not mere linguistic correlations."
This appears to assume that one starts with explicit concepts and rules, knowledge 'in the head', and that tacit knowledge develops from these. This is not how tacit knowledge is generally understood (see Michael Polanyi). The symbolic does not give rise to practice; rather, it is inextricably tied up in it.
Bror: "We stopped him at the door—we were mere medical students who asked how he reached his conclusion. At first he looked startled and confused, as if he had no idea how he'd done that. (A bit concerning!) And then he began to describe how all the things he had taken in through all his senses had quickly narrowed the diagnostic tree of possibilities from the chart and our description down..."
I would argue that the description is an after-the-fact reconstruction to satisfy the students' metaphysical belief that learning involves knowledge storage and processing, just like a computer.
Bror: "...while his long-term memory, deeply trained in an evidence-grounded way, rapidly flew through the diagnostic process."
What on earth does this mean? He has all this stored knowledge that he subconsciously processes in a logical manner? Riley brings up riding a bike in the comments. How do you think that happens? Does one rapidly fly through the stored physical rules of the riding process? The example of riding a bicycle appears in various places in Polanyi's book *Personal Knowledge*. On page 49 he writes that he talked to manufacturers, physicists and engineers who have described the physics of what happens when a person rides a bike: "But does this tell us exactly how to ride a bicycle? No. You obviously cannot adjust the curvature of your bicycle's path in proportion to the ratio of your unbalance over the square of your speed; and if you could you would fall off the machine..." He concludes that "Rules of art can be useful, but they do not determine the practice of an art; they are maxims, which can serve as a guide to an art only if they can be integrated into the practical knowledge of the art. They cannot replace this knowledge."
Thanks for the extended comment. I agree with you about Kuhn and Polanyi, and have cited both frequently in my writing on knowledge -- if you peruse the archive here, you may be interested in the conversation I had with Sean Trott about developing a new paradigm to understand LLMs.
Thanks. I will take a look back through the archive.
In current critiques of cognitive science, neuroscience, and AI, I find there is surprisingly little reference to Polanyi, or for that matter Wittgenstein, Ryle, or, now that Dreyfus has departed, Heidegger and Merleau-Ponty. I think the critique of Cartesian thinking in cognitive science is devastating to many of the field's pretensions, but the critiques (examples below) have for the most part simply been ignored.
Bennett, M. R., and P. M. S. Hacker. Philosophical Foundations of Neuroscience. 2nd edition. Hoboken, NJ: Wiley-Blackwell, 2022.
Button, Graham, Jeff Coulter, John R. Lee, and Wes Sharrock. Computers, Minds, and Conduct. Cambridge, MA, USA: Polity Press, 1995.
Shanker, Stuart G. Wittgenstein’s Remarks on the Foundations of AI. London: Routledge, 1998.