4 Comments

I know this isn't your main point, but I also found the frequent interruptions (by humans, of ChatGPT-4o) to be one of the stranger parts of the videos. I admittedly do find myself "interrupting" ChatGPT in text mode sometimes when I can tell the answer just isn't what I'm looking for. But there's something about the fact that these are now spoken interactions that makes it feel weirder to me.

Not because I think ChatGPT-4o is going to be offended, to be clear. But it makes me think of this (I think prescient) op-ed by Paul Bloom and Sam Harris in the New York Times about the (human) costs of treating artificial systems callously or even cruelly: https://www.nytimes.com/2018/04/23/opinion/westworld-conscious-robots-morality.html

Author:

That's so funny, Sean. I hadn't seen this op-ed until just now, but I've pointed to season one of Westworld as my example of how AI might induce our worst behavior. Dan Dennett likewise warned loudly of the dangers of counterfeiting people (https://www.bbc.com/future/article/20240422-philosopher-daniel-dennett-artificial-intelligence-consciousness-counterfeit-people).

Some good news, maybe: Among educators I've spoken to, younger students' tendency to anthropomorphize chatbot behavior plays out as the kids wanting to treat the chatbot fairly, not lie to it, etc. -- in other words, to behave ethically toward it. Let's hope that holds.


It's encouraging that kids have the impulse to treat the chatbot fairly! Probably another case where adults need to keep modeling the appropriate kind of behavior lest we send the wrong signal.


This is such an important breakdown. Thank you so much for sharing it!
