15 Comments
Jed Sundwall

Seinfeld reference is perfect.

Janet Salmons PhD

“Newton’s take: AI Skepticism has two main ‘camps,’ the ‘AI is real and dangerous’ camp versus the ‘AI is fake and it sucks’ camp.” And a third camp: people who respect intellectual and creative property and don’t want it stolen to benefit billionaires who chop up our work and spit it out in an AI slop mashup.

People who appreciate real human curiosity, creativity, imagination, heart, soul, and yes, intelligence.

I keep hearing, “it’s just training, it has to learn!” If so, why did a Google search summary spit out a big chunk of my writing verbatim?

Tom Mullaney

I am honored to be included on this list. Thank you!

Rich James

I appreciated this tour of perspectives, and I picked up several new people to follow on this platform. Cheers!

Jurgen Gravestein

Brilliant analysis. Thanks for sharing!

Maurice Blessing

Thanks for a great overview with lots of references I could use. And it was funny too, starting with the illustration. I’d say it probably was worth exhausting yourself.

Teckedin tech information hub

Enjoyed this article as there is so much back and forth about good and bad, pros and cons.

SorenJ


It got the fork on top of a potato right for me, first try.

Steve Wright

Sorry, one more: "Technology was taking over so much [1980s]... Technology was heavily distracting people... They spent weeks trying to get the perfect snare drum with gated reverb sound."

- Jack White, It Might Get Loud

Steve Wright

"In the digital world, the principle ‘everything is information’ considers territories as a simple mode of the map’s existence. The violence of digitalization thus resides not in some project of domination [science over nature], but rather in the negation of all forms of alterity and singular identity to make room for a dimension of pure abstraction. Anything in the territory (the reality of bodies, of ecosystems . . . ) that resists attempts at modeling thus becomes, in the world of digital models, ‘noise in the system.’"

- Miguel Benasayag, Tyranny of Algorithms

Herbert Roitblat

What we need is more, not less, skepticism. Let's carefully examine the evidence and draw some, always tentative, conclusions. That's what science is all about. We should have fewer, not more, ad hominem arguments. It does not matter who makes an argument; the argument should be evaluated on its merits.

Artificial Intelligence is not limited to just GenAI and language models. There is a much longer history and a much greater range of tools available than just those created in the last 5-10 years. Many of these tools are useful for a range of tasks. When the tasks match the capabilities of the model and the resources are available, these models can be very useful.

The problems come, and the reasons for skepticism rise, when we over-attribute the capabilities of the models. For example, none of the current or past models is even on the path to artificial general intelligence. They solve one kind of problem, but intelligence requires the ability to deal with many kinds of problems. Current models solve problems where some intelligent human has set up the problem so that only simple computations (e.g., gradient descent) are needed to reach the solution. Models cannot be autonomous or exceed the capabilities of their human designers until they are capable of the complete process of problem solving, not just the last step.

GenAI models are trained to fill in the blanks. To claim that they do more than this would be an extraordinary claim that should require extraordinary evidence. Instead, we are mostly treated to the logical fallacy of affirming the consequent. The models perform as if they are reasoning (for example), so they must be reasoning. The alternative hypothesis is that they are copying (approximately) the language that was used by reasoning humans.

Here are a few resources for thinking about artificial general intelligence: https://open.substack.com/pub/herbertroitblat/p/super-intelligence-no-not-even-high

https://thereader.mitpress.mit.edu/ai-insight-problems-quirks-human-intelligence/

https://mitpress.mit.edu/books/algorithms-are-not-enough

Prof J. Mark Bishop

Benjamin, I think you need to do a little more research on AI scepticism; you appear to have overlooked Searle, and quite a few others...

Cf. Artificial Intelligence is stupid: https://tinyurl.com/4efj8m5n

Benjamin Riley

Hi Dr. Bishop -- Searle is of course a giant in philosophy. He's also 92 (was pleased to learn he's still with us) and to my knowledge not active in fostering AI skepticism/scepticism as a movement today. That's the movement that Casey Newton described in his essay last week and the one I'm redescribing in my post here. There's a separate post to be written perhaps about the intellectual history of AI skepticism and your article is an excellent guide to that, thanks for sharing!

Steve Wright

To that point, my favorite quote from the history books: “We started with a big ‘cosmic question’: Can we make a machine to rival human intelligence? Can we make a machine so we can understand intelligence in general? But AI was a victim of its own worldly success. People discovered you could make computer programs so robots could assemble cars. Robots could do accounting!” — Seymour Papert, 2002
