Discussion about this post

Sreecharan Sankaranarayanan:

Human thinking doesn’t require language — makes sense. How does that imply that human thinking is not encoded in language? Especially that of all humans, ever.

Also, why do language models need to think “like humans think”? They can achieve the same outcomes through different means. A plane doesn’t flap its wings to become airborne, but it achieves the same outcome.

Don’t get me wrong, claims of AGI are flawed for many reasons, but I don’t think your argument quite tracks without further elaboration.

Derek Lomas:

Thank you for sharing this! I wasn’t into Chomsky until I read a paper on the neuroscience of his Merge concept. Whatever his ideas in the past were, his most recent notion of universal grammar comes down to one function: taking two things and turning them into one thing. That’s Merge. It also doesn’t have anything to do with language, per se. Meaning, it applies to motor activity, object perception, etc.
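For readers unfamiliar with the idea, here is a minimal sketch of that “two things into one thing” operation. The Python tuple representation and the example phrase are illustrative assumptions only, not Chomsky’s formalism or anything from the paper mentioned above.

```python
# A minimal sketch of the Merge idea described above: a single operation
# that takes two objects and returns one new object containing both.
# The tuple representation and the example phrase are assumptions for
# illustration, not a claim about any particular formalism.

def merge(a, b):
    """Combine two objects (words, percepts, motor primitives) into one."""
    return (a, b)

# Repeated application of the same binary operation builds arbitrarily deep
# hierarchical structure out of flat atoms.
vp = merge("eat", "apples")        # ('eat', 'apples')
tp = merge("will", vp)             # ('will', ('eat', 'apples'))
clause = merge("she", tp)          # ('she', ('will', ('eat', 'apples')))

print(clause)
```

The point of the sketch is only that nothing in the operation itself is specific to words; the same combination step could just as well take motor primitives or perceived objects as inputs.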

Arguably, large language models don’t have anything to do with language either. Hence the successes in applying language models to chemical synthesis, etc.

But, actually, I just wanted to share a recent paper I wrote on the concept of resonance in AI. This was published just before LLMs hit, so they aren’t mentioned. You might enjoy it. https://www.frontiersin.org/journals/neurorobotics/articles/10.3389/fnbot.2022.850489/full
