Discussion about this post

Emi Ruff:

Thank you for writing this. I recently designed and conducted my first DIY AI experiment, and the process reminded me that the framework of the scientific method exists to disprove one's own beliefs. All that pre-work of theorizing and researching leads up to the experiment, and the experiment should be designed to independently test whether that theorizing holds up in the real world. AI evangelists follow the inverse of this, which is so fundamentally absurd that it's mind-boggling to even begin correcting their logic. Definitely printing this one out to come back to when I need a sanity check to pull myself out of the churn ...

Roman's Attic:

I just wrote a long note on the affect heuristic, a bias that I think is pretty prevalent in AI discourse. I'm not necessarily accusing anyone in particular of anything; I just think it's good to keep in mind. I'll paste it here:

“The psychologist Paul Slovic has found that when people have positive feelings toward a technology, they tend to list many reasons in favor of it and few against it. When they have negative feelings toward a technology, they list many risks and very few benefits. What's interesting is that, when graphed, these results show an incredibly strong inverse correlation, even though that isn't how technologies work in the real world. Many technologies are high risk AND high reward, and many are low risk AND low reward.

To show the strength of the underlying bias, Slovic took the group of people who disliked a technology and showed them a strong list of that technology's benefits. He found that afterward, people began to downplay the significance of the risks of using it, even though their knowledge of those risks stayed the same. If they were acting fully rationally, their assessments of the risk should not have changed. Our minds are not designed to hold incredibly nuanced understandings of things.

I feel like I see this happening in AI discourse sometimes. People who are pro-AI might downplay the environmental costs and the dangers of piracy. People who are anti-AI might dislike it because of its effects on the art industry, but they also might refuse to ask GPT questions because of the “environmental impact,” despite the fact that conversations with chatbots account for only 1-3% of the total electricity cost of AI, and streaming Spotify or YouTube for any significant amount of time is much more environmentally costly than talking to GPT. In both cases, people form inaccurate assessments of the risks and benefits of using AI.

The takeaway from this note is to understand your biases and to be careful about how you evaluate things. Even if something has strong negatives, it might also have strong positives, and unless you're careful, you might be evaluating things based on emotion rather than reason.

(A summary of Slovic's work appears in Thinking, Fast and Slow by Daniel Kahneman, which is where I found it. The name of this bias is the affect heuristic.)”
