Techno-optimism as digital eugenics
A tech CEO shares his vision for the "marginal returns to intelligence"
Don't place faith in human beings. Human beings are unreliable things.
Where my Gen X readers at? Do you vaguely remember hearing these lyrics repeated on MTV’s 120 Minutes via the video for “Butterfly Wings” from Machines of Loving Grace? This memory resurfaced for me last week after reading the newest “techno-optimist” manifesto emerging from Silicon Valley, authored by Dario Amodei, CEO of Anthropic—the company that makes the large-language model Claude. Amodei’s essay is also titled Machines of Loving Grace, perhaps in homage to the band, or maybe the Richard Brautigan poem, or both.
Anyway, MOLG the Essay is a vision of our future after the advent of something Amodei calls “Powerful AI.” What is Powerful AI, you ask? He defines it as follows:
Powerful AI is “smarter than a Nobel Prize winner across most relevant fields…This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc.”
Powerful AI will be able to interact with the world digitally the same way humans can, such as by “taking actions on the internet, taking or giving directions to humans, ordering materials, [and] directing experiments…”
Powerful AI will be autonomous and capable of controlling physical tools, and even “design robots or equipment for itself to use.” It will also be 10 to 100 times faster than humans at absorbing information and completing tasks.
Finally, Powerful AI will be able to replicate itself such that it will be able to run “millions of instances” simultaneously, each acting independently or “if needed [they] can work together in the same way humans would collaborate, perhaps with different sub-populations fine-tuned…”
Wow! Powerful AI would really be something. And Amodei suggests we might have it as soon as 2026—double plus wow. Of course, as we sit here today, our current not-quite-so-powerful AI models struggle with relatively simple counting and grammatical tasks.
But we’re in the world of pure imagination, and not, you know, reality. So let’s grant Amodei’s premise and imagine that Powerful AI may arrive soon. What happens next? He argues we will need a new mental model for such a world, one that is firmly grounded in a capitalistic understanding of intellectual pursuits:
Economists often talk about “factors of production”: things like labor, land, and capital. The phrase “marginal returns to labor/land/capital” captures the idea that in a given situation, a given factor may or may not be the limiting one—for example, an air force needs both planes and pilots, and hiring more pilots doesn’t help much if you’re out of planes. I believe that in the AI age, we should be talking about the marginal returns to intelligence, and trying to figure out what the other factors are that are complementary to intelligence and that become limiting factors when intelligence is very high. We are not used to thinking in this way—to asking “how much does being smarter help with this task, and on what timescale?”—but it seems like the right way to conceptualize a world with very powerful AI.
But how will we capture these marginal returns to intelligence? At this point, Amodei offers a few “to be sure” style caveats regarding real-world constraints. These include actual physical laws that slow things down (damn); limits on raw data (irritating); the fact that some things are just inherently chaotic (lame); and “constraints from humans” (ugh the worst). On that last point, he observes that “many human societal structures are inefficient or even actively harmful, but are hard to change while respecting constraints like legal requirements on clinical trials, people’s willingness to change their habits, or the behavior of governments.”
Don’t place faith in human beings. Human beings are unreliable things.
Sorry, those lyrics popped back in my head, no idea why. Back to Amodei: After quickly rattling off these pragmatic limitations, he then sets forth a MUCH longer list of the socially transformative benefits that Powerful AI might bestow upon us. When I say longer, I mean like 10,000 words long. It’s a lot.
But also hilarious! Here are a few highlights of what’s to come in our Brave New Imaginary World:
Powerful AI will prevent or cure most infectious and genetic diseases; eliminate most cancers; cure most if not all mental illnesses; and oh, double the human lifespan. Did I say double? That’s just to start: “Once human lifespan is 150, we may be able to reach ‘escape velocity’, buying enough time that most of those currently alive today will be able to live as long as they want….”
As to economics and world governance, it's plausible that “AI finance ministers and central bankers could replicate or exceed 10% annual GDP growth in countries in the developing world.” Cool! On the other hand, “AI seems likely to enable much better propaganda and surveillance, both major tools in the autocrat’s toolkit.” So that’s a bummer.
But not to worry, as we humans can “tilt things in the right direction.” How? Democratic countries simply must band together and seize the AI supply chain. Yep, you read that right—we should form some sort of League of AI Nations that will “block or delay adversaries’ access to key resources like chips and semiconductor equipment.” Then, presumably after we triumph in our (nuclear?) war with China, we will find ourselves “in an ‘eternal 1991’—a world where democracies have the upper hand and Fukuyama’s dreams are realized.”
So there you have it, once we have Powerful AI, history will—finally—just fucking end once and for all. And then it’ll be an eternal 1991. You know, like in The Matrix.
But what will we do all day? Well, Amodei confesses he’s not really sure. We will, however, have lots of stuff to play with, perhaps distributed among us based on a “capitalist economy of AI systems, which then give out resources (huge amounts of them, since the overall economic pie will be gigantic) to humans based on some secondary economy of what the AI systems think makes sense to reward in humans (based on some judgment ultimately derived from human values).”
From each AI according to its ability, to each human according to what AI decides.
My initial reaction to all this was maniacal laughter, which started about the time I read the line about the “escape velocity” of human aging. I mean, come on. If you watch HBO’s Succession, you may remember the scene where Brian Cox’s character, the billionaire patriarch of a sprawling media empire, confronts his idiot children and tells them, “I love you…but you’re not serious people.”
If this essay is any indication of Amodei’s depth of understanding of history, economics, political science, and the myriad other socio-cultural institutions he’s opining about…I’m sorry, he’s not serious people. This manifesto does not deserve to be taken seriously.
So why then have I just spent 1,200 words summarizing it? Fair question. I think it comes back to that unsettling phrase Amodei italicized, the marginal returns to intelligence. I’ve spent the last 15 years of my life pondering human intelligence and cognition, and advocating for a broader understanding of the scientific principles related to these ideas. At the same time, I’ve tried both to remain conscious of the sordid, indefensible, and truly horrific history of genocidal racism that has accompanied this scientific discipline from its very beginning, and to acknowledge that it hangs over this science still.
I’m talking about eugenics, of course. We can’t ignore this history. As has been oft-observed, eugenics was birthed by thinkers and advocates who were considered “progressive” for their time. We all know the basic eugenic gist—that intelligence is largely the product of genes (nature), and therefore human society should be set up to encourage the breeding of intelligence over time…and to discourage the breeding of the unintelligent. We also know the ideas underlying eugenics were eventually made very real via the policies and practices of Nazism.
For the past century, eugenics has (mostly) been marginalized in democratic societies, and rightfully so—although the current Republican nominee for President hints at reviving it. But what about the pursuit of the marginal returns to intelligence from Powerful AI? Amodei’s vision, and perhaps Anthropic’s vision, is that our world will improve if we breed—sorry, train—AI models that can unlock these wide-ranging benefits across every facet of life as we know it.1 Remember, Powerful AI will be able to build its own robots, direct experiments, and work in coordinated fashion with other instances of itself.
“Different subpopulations could even be fine tuned.”
The current fashion is to call this techno-optimism. But we might see it as something else, something we might call digital eugenics, the selective breeding of superior digital creatures who will deliver human utopia. I recognize this analogy is charged. But it’s charged with good reason, namely, that it is an indisputable historical fact that scientific notions about intelligence—and the explicit pursuit of its marginal returns—were once used to support fantasies of social engineering that quickly tipped into fascism, and ultimately mass human slaughter.
“But Ben, surely we don’t have the same ethical obligations toward AI that we do other humans.” Maybe not. Here’s the thing, though—if we’re going to imagine something called Powerful AI that can write epic novels, cure cancer, and make us live to infinity and beyond, might we also imagine that said AI could develop some sense of itself, some form of consciousness? And, granting that possibility, might we also want to imagine whether creating a mass form of slave labor is something worth celebrating in 15,000-word “techno optimist” manifestos?
Of course, none of this will happen, because Amodei’s speculative claims about Powerful AI are ludicrous. I’ll even “boldly” predict that none—none!—of what he’s prognosticated will happen in the next five to 10 years. But if Amodei wants us to take his ideas seriously, we must situate them on a continuum with the same ideas that have led to the worst horrors in human history.
I’ll turn it back over to Machines of Loving Grace, the band, to close us out:
…
You can't place faith in a new regime
That fascist faith will kill you
A hurricane triggered by a butterfly's wings
Your conspirators betray you
…
Don't place faith in human beings
Human beings are unreliable things
Don't place faith in human beings
Human beings or butterfly's wings
When I decide to live in the mind
The heart dies
The heart dies
The heart dies
If this wasn’t enough of me ranting to kick off your week, why not listen to my recent podcast debate on the future of AI in education? Things got spicy.
Post-publication update: I was remiss in not remembering this article from Timnit Gebru and Émile P. Torres titled “The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence,” which covers some of the same ground I do here.
https://firstmonday.org/ojs/index.php/fm/article/view/13636
I was pointed to Amodei’s essay via a tweet from Stuart Ritchie, a scientist and author who currently works in Anthropic’s communications department. Remarkably, in Ritchie’s book Intelligence: All That Matters, he describes Francis Galton, the godfather of eugenics, as someone who “would likely have been appalled to see the end results of the eugenics movement.” (p. 102) Ritchie’s conjecture is hard to square with Galton’s statements that “the Jews are specialised for a parasitical existence upon other nations” and that “there exists a sentiment, for the most part quite unreasonable, against the gradual extinction of an inferior race”—to cite only two of Galton’s many racist and genocidal claims.