9 Comments

Thanks for a great overview with lots of references I could use. And it was funny too, starting with the illustration. I’d say it probably was worth exhausting yourself.


Seinfeld reference is perfect.


“Newton’s take: AI Skepticism has two main “camps,” the “AI is real and dangerous” camp versus “AI is fake and it sucks” camp.” And a third camp: people who respect intellectual and creative property and don’t want it stolen to benefit billionaires who chop up our work and spit it out in an AI slop mashup.

People who appreciate real human curiosity, creativity, imagination, heart, soul, and yes, intelligence.

I keep hearing, “It’s just training, it has to learn!” If so, why did a Google search summary spit out a big chunk of my writing verbatim?


I am honored to be included on this list. Thank you!


Enjoyed this article as there is so much back and forth about good and bad, pros and cons.


What we need is more skepticism, not less. Let's carefully examine the evidence and draw some, always tentative, conclusions. That's what science is all about. We should have fewer, not more, ad hominem arguments. It does not matter who makes an argument; the argument should be evaluated on its merits.

Artificial Intelligence is not limited to just GenAI and language models. There is a much longer history and a much greater range of tools available than just those created in the last 5-10 years. Many of these tools are useful for a range of tasks. When the tasks match the capabilities of the model and the resources are available, these models can be very useful.

The problems come, and the reasons for skepticism rise, when we over-attribute the capabilities of the models. For example, none of the current or past models is even on the path to artificial general intelligence. They solve one kind of problem, but intelligence requires the ability to deal with many kinds of problems. Current models solve problems where some intelligent human has set up the problem so that only simple computations (e.g., gradient descent) are needed to reach the solution. Models cannot be autonomous or exceed the capabilities of their human designers until they are capable of the complete process of problem solving, not just the last step.
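To make the "only the last step" point concrete, here is a minimal sketch (a toy example of my own, not any particular model): a human chooses the loss function, the parameterization, and the step size, and gradient descent merely carries out the final, mechanical computation.

```python
# Toy illustration: the human frames the problem (loss, parameters,
# learning rate); gradient descent only performs the last step.
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimize a function given its gradient -- the 'simple computation'."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Human-supplied setup: minimize (x - 3)^2, whose gradient is 2*(x - 3).
minimum = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(minimum, 4))  # converges toward 3.0
```

Everything intelligent here (deciding what to minimize and how to represent it) happened before the algorithm ran.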

GenAI models are trained to fill in the blanks. To claim that they do more than this would be an extraordinary claim that should require extraordinary evidence. Instead, we are mostly treated to the logical fallacy of affirming the consequent. The models perform as if they are reasoning (for example), so they must be reasoning. The alternative hypothesis is that they are copying (approximately) the language that was used by reasoning humans.
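The "fill in the blanks" objective can be sketched with a deliberately tiny stand-in (a bigram count model, far simpler than any real language model, but the same basic idea): learn from a corpus which token tends to follow which context, then emit the most likely continuation.

```python
# Minimal sketch of next-token prediction: count which word follows
# which context in a toy corpus, then 'fill in the blank' with the
# most frequent continuation seen in training.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Next-token frequencies for each one-word context (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def fill_in_the_blank(context):
    """Return the continuation most often observed after this context."""
    return following[context].most_common(1)[0][0]

print(fill_in_the_blank("the"))  # 'cat' -- the most common word after 'the'
```

The output can look as if the model "knows" what cats do, but it is only reproducing the statistics of the language it was trained on.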

Here are a couple of resources for thinking about artificial general intelligence: https://open.substack.com/pub/herbertroitblat/p/super-intelligence-no-not-even-high

https://thereader.mitpress.mit.edu/ai-insight-problems-quirks-human-intelligence/

https://mitpress.mit.edu/books/algorithms-are-not-enough


Benjamin, I think you need to do a little more research on AI scepticism - you appear to have overlooked Searle, and quite a few others.

Cf. Artificial Intelligence is stupid: https://tinyurl.com/4efj8m5n


Hi Dr. Bishop -- Searle is of course a giant in philosophy. He's also 92 (was pleased to learn he's still with us) and to my knowledge not active in fostering AI skepticism/scepticism as a movement today. That's the movement that Casey Newton described in his essay last week and the one I'm redescribing in my post here. There's a separate post to be written perhaps about the intellectual history of AI skepticism and your article is an excellent guide to that, thanks for sharing!


I appreciated this tour of perspectives and I picked up several new people to follow on this platform. Cheers!
