Nice overview of AGI in relation to the question of the educational value of generative AI models. I especially like your pointing to what I think of as the "Grover paper," as it calls attention to the limits of the datasets the models are trained on relative to human experience. While the more famous "Stochastic Parrot" paper focuses on the significant differences in how LLMs and humans generate words, the difference in inputs is perhaps more important in explaining why LLMs are not "thinking" in any way similar to the way humans do.
Thank you, Rob, and I think that's very well said. And your recent book review essay is terrific!
So glad you said so! Thanks!
Do you think tutoring can happen through text processing without reasoning?
Sure, it's happening right now. Whether such tutoring is as effective or more effective than human tutoring, however, is a different matter entirely.
What if it is?