8 Comments

Human thinking doesn’t require language; that makes sense. But how does that imply that human thinking is not encoded in language, especially the thinking of all humans who have ever lived?

Also, why do language models need to think “like humans think”? They can achieve the same outcomes through different means. A plane doesn’t flap its wings to become airborne, but it achieves the same outcome.

Don’t get me wrong, claims of AGI are flawed for many reasons, but I don’t think your argument quite tracks without further elaboration.


This is a great comment. My response:

1. The product of human thinking is very much encoded in language: ideas, theories, poems, novels; the list is long. And the fact that we can do this is what enables civilization. But my argument is that the product is not the process, and that there's a long list of cognitive capabilities we possess that cannot be encoded in language, e.g., dealing with novel problems. I touched on this in a previous essay: https://buildcognitiveresonance.substack.com/p/what-grover-and-good-will-hunting

2. LLMs don't need to think like humans think! They can still be useful tools. My strong claim is in some sense a narrow one: I'm simply arguing that adding more data and computing power to these models will not create a form of intelligence that is equal to our own (much less surpass it).

And you've chosen a perfect example to illustrate why that is. At the turn of the 20th century, the overwhelming scientific consensus held that "heavier-than-air flying machines are impossible" (in the words of Lord Kelvin, president of the Royal Society). It took a few intrepid entrepreneurs, armed with a novel theory about aerodynamics, to prove them wrong. We humans can imagine a different future, a different world, than the one that is presented to us, and we can act upon that imagination. I do not see a path to LLMs doing anything like this, do you?

For more on this, I recommend this paper: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4737265


1. I am going to read the previous essay, but I am not claiming that the product is the process; it certainly isn’t. The product does, however, encode the process. As for processes not encoded in language, models don’t learn from language alone; they can learn from facial expressions, from body language, from signs and symbols, and from art. It is going to be a stretch to prove that all human process is not somehow encoded in the cumulative products of human civilization as a whole. We know that thought can exist independent of language, but do we know that the same thought isn’t captured by someone else in language?

Of course, it is a fact that humans don’t deal with novel problems by learning from the cumulative products of human civilization. But no such constraint needs to be placed on AI.

2. I don’t think your essay supports the claim that “adding more data and computing power to these models will not create a form of intelligence that is equal to our own (much less surpass it)”. The gap is that you have shown that a single human can have a thought not encoded in language, but you haven’t shown that the same thought isn’t encoded in language by another human.

It is theoretically possible to argue, without being refuted by your essay, that there exists sufficiently varied data (text, image, video, audio etc.) in sufficiently high quantity to encode process.

I agree with your conclusion. I’m just trying to show that your argument doesn’t lead to that conclusion (I think, humbly :) ).


Thank you for sharing this! I wasn’t into Chomsky until I read a paper on the neuroscience of his Merge concept. Whatever his ideas in the past were, his most recent notion of universal grammar comes down to one function: taking two things and turning them into one thing. That’s Merge. It also doesn’t have anything to do with language, per se; it applies to motor activity, object perception, etc.

Arguably, large language models don’t have anything to do with language either. Hence the successes in applying language models to chemical synthesis, etc.

But, actually, I just wanted to share a recent paper I wrote on the concept of resonance in AI. This was published just before LLMs hit, so they aren’t mentioned. You might enjoy it. https://www.frontiersin.org/journals/neurorobotics/articles/10.3389/fnbot.2022.850489/full


Thanks for commenting. I like anything that supports resonance, and I like that the paper makes reference to "the vibe." But I'm not vibing with Universal Grammar, in part because the theory has morphed so many times in response to empirical evidence disproving it. I suspect that in 10-20 years it will be seen as very weird that we ever thought it was true.


You had me at 'You may be dimly familiar with someone called “Ludwig Wittgenstein” but have avoided figuring out why he’s important...'


I had you in mind, Mike, when I wrote that. What's up with your sad Substack? Are you writing here or what? Get back in the game, man.


Thank you for your opinion about that article; the more views the better!

Will you be surprised if I tell you that the role of language is NOT communication? Let's say that language is a module performing a tiny part of the communication process.

I discussed this with Ev via Twitter/X, and afterwards I wrote a post: https://alexandernaumenko.substack.com/p/compositionality-in-action

Philosophy of language has been so flawed all this time. Let's fix it together.
