6 Comments
author

I wish I'd thought to do this before publishing, but things continued to be weird when I asked GPT o1 to explain why the words Jumping, Soy, and Lady popped into its analysis.

https://chatgpt.com/share/66e448b4-4f6c-8006-a7b6-9ee5c6662386


I appreciate this summary a lot more than "how many R's in strawberry". The ability to check out the steps and see those as outputs is pretty fascinating...

author

Thanks Joseph. It really is. And, if the articulation of these steps aligns (even loosely) to how we humans think about various tasks and concepts, the potential of these tools to be valuable to education just shot way up. Maybe.


True, rather than "do this" and get an answer we get to open the black box and recognize a process that we/learners can also follow...


The examples I have seen from early testers are word puzzles, exactly the type of puzzle on which a word predictor should be able to simulate thinking.

I work in learning strategy and instructional design for workplace learning. I have a mental model for making decisions from discovery to assessment. I gave the new gpt a summary of client needs, performance objectives, and prioritized learning objectives. I asked it to develop a learning program and describe its decision-making at each step.

To help the gpt along, I gave it system instructions stating my goal as a learning strategist and a dense paragraph about how the gpt persona should think about learning design.

I thought the results were mediocre, and no different than what I've experienced until now. In fact, I ran the same test on 4o and thought the 4o output was better - it gave me some interesting things to think about.

Typically, I would not try to achieve a program design in one prompt. I would spend a long time foregrounding background information, models, examples, etc. to generate outputs that support each step of my process.

With this limited experiment, I think that for real-world work you have to tell the gpt how to think so it has a model to ape. This is no different from what I have been doing. Is it better now? That's TBD.


I want to add that reasoning about something is different from generic reasoning skill. It's arguable whether generic skill is a skill at all; it's really a procedure or method. A gpt can adopt reasoning methods, but it still doesn't have a schema for the subject.
