It was the summer of 2000. A young Ben Riley, still in law school, is thrilled that his lifelong plan to become an investment banker on Wall Street is on the verge of being realized.1 I’d joined the Energy division of investment bank JP Morgan as a “summer associate,” which is code for “glorified and incredibly well-paid intern.” And, just a few days on the job, I’d been handed a plum gig—valuing a new business venture being launched by the hottest of companies in America: Enron.
The new venture was called The NewPower Company, and the basic idea was pretty simple—eBay, but for consumer power. That’s right, instead of being forced to buy electricity from your local, highly regulated power company, consumers across the nation would go to an online trading platform operated by Enron (along with partners AOL and IBM) and bid to obtain power on the “spot market.”
Who among us hasn’t wanted to gamble on the provision of basic utilities to our home?
Enron was seeking funding from JP Morgan’s private equity division to get NewPower Co started, and boy let me tell you, the PE gang was super gung-ho to put money into it. And with good reason, really. Enron was one of the fastest growing companies in the country, racking up massive profits—or so we thought!—and promising to bring innovation and financial acumen to our nation’s power grid. Plus, they were one of JP Morgan’s biggest clients in terms of generating lucrative I-banking fees. Not only would an equity infusion in NewPower likely generate direct return on investment, it would also continue to grease the wheels for other Enron business deals.
So trust me when I say, I was excited to get placed on the small, three-person deal team assigned to value NewPower Co. There was just one problem—there was no actual business to value. That’s right, the NewPower Co. had no clients, no existing revenue, not even a discernible cost model. It was all vibes. Well, to be fair, it was vibes plus a sense that energy markets would soon be radically deregulated plus the obligatory tie-in to “the Internet,” which was a thing we were still trying to understand at the time. (Perhaps we are still?)
These hindrances, however, did not stop Enron from valuing the NewPower Co. at somewhere between $10 and $20 billion, if memory serves.2 And my job was to find a way to justify this valuation so that the Private Equity team could take a slice of this action, and the banking fees would keep flowing.
But reader, I couldn’t do it. Believe me, I tried—I remember spending a weird weekend in the JP Morgan offices trying to come up with some way of putting meat on the financial bones I’d been given. I had the will, but I couldn’t find a way. You can’t really financially value a story. (Or can you? Hold that thought.)
Here’s where the story takes a slightly unexpected twist—my co-workers completely agreed with me. The vice president overseeing the work was offended by the whole exercise. He ranted, “How am I supposed to put a number on horseshit?”, or words to that effect. I remember sitting in on a call with him and a managing director from private equity who chewed his ass out for an hour, telling him if he didn’t find a way to support the investment he (the PE asshole) would make sure that the VP would never get an Enron deal, or any other deal, sent his way again. Still, my boss didn’t blink. We stood by our judgment. What we did say, and I’ll never forget this, is that “the only justification for this deal is partnering with Enron, a company with a sterling reputation and culture of innovation.”
As it turned out, the private equity dudes ignored us and invested in NewPower Co anyway. What’s more, over that summer, the poor VP who’d taken a principled stand against inventing a justification for an imaginary business was indeed marginalized from other work in our division. When some of my fellow junior bankers heard that I’d bucked against Enron, they looked at me as if I were some sort of socialist radical with dreadlocks and a penchant for listening to Tracy Chapman.
Enron? We were questioning Enron?
Perhaps you remember how the Enron story ended. What you may not remember, and I have to confess I did not remember myself until digging up these old news reports, is that the NewPower Co was a bit of a canary in the coal mine that helped to hasten Enron’s demise. You see, many of the so-called “special purpose” limited partnerships that Enron used for accounting fraud had investments in NewPower Co, and it was NewPower’s public reporting of these relationships that helped to pull at Enron’s house of cards until the company collapsed.
And it wasn’t just about Enron, of course. The entire “dot com” boom imploded around this time as people—consumers, investors, the broader public—realized that while this newfangled technology called the Internet was surely significant, many tech companies had no business model that would justify their valuations. “Irrational exuberance,” in the words of Alan Greenspan, the Ayn Rand-loving chair of the Federal Reserve. We were running high, until we weren’t.
Flash forward to the present.
In November, I wrote an essay that pointed to a rash of stories suggesting that “scaling up LLMs with more data and computing power is not—repeat, not—continuing to drive exponential improvement.” Then three weeks ago, the Wall Street Journal reported that OpenAI’s efforts to develop the next major version of its flagship model, GPT-5—code-named Orion internally—aren’t going well:
Around this same time, OpenAI declares its “12 days of Shipmas” and puts out a variety of new things, including one new model variation I’m very intrigued by…but no GPT-5. Then, on January 5 (two days ago), OpenAI CEO Sam Altman publishes a lengthy blog post that includes this remarkable claim, with my emphasis:
We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies. We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes.
We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity.
The Wall Street Journal story and Sam Altman’s pronouncements cannot both be true. If OpenAI is struggling to create GPT5 and has had to undertake multiple training runs, the company cannot—or should not—be confident that it’s on the path to AGI, much less on the verge of something deemed superintelligence. Something has to give here.
In that same blog post, Altman briefly addresses that strange period when he was fired in late 2023. In his words, the experience “kicked off a really crazy few hours, and a pretty crazy few days. The ‘fog of war’ was the strangest part. None of us were able to get satisfactory answers about what had happened, or why.”
Helen Toner, who served on OpenAI’s board at the time, has since described exactly what happened and why. She says that Altman “did not inform the board that he owned the OpenAI startup fund.” He gave “inaccurate information about the small number of safety processes the company had in place.” At least two senior executives reached out to the board in confidence and told them that they didn’t trust Altman and he’d created a toxic atmosphere. Thus, in Toner’s words, OpenAI’s board “came to the conclusion we just couldn’t believe things Sam was telling us.”
So they fired him. But then powerful actors with vested financial stakes in OpenAI, such as Microsoft, got involved in support of Altman. (Does that sound familiar?) The OpenAI board goes quiet and gets crucified in the court of public opinion, while Altman’s leadership of the company is lionized. In just a few days, he’s back in and the board that dismissed him is, eventually, out.
What’s happened since? Ilya Sutskever, OpenAI’s co-founder and chief scientist, leaves. Andrej Karpathy, another co-founder, leaves. John Schulman, another co-founder, leaves. Mira Murati, chief technology officer, leaves. Bob McGrew, chief research officer, leaves. Barret Zoph, VP of research, leaves. Lilian Weng, another VP of research, leaves. Miles Brundage, senior leader of AI readiness, leaves (and his team is disbanded). (Sources here.)
Does it strike you as a little…strange…that all these vested leaders would bolt a company that has already figured out how to emulate human intelligence?
And then of course there’s OpenAI’s famously byzantine corporate structure—see graphic above—with a for-profit (but capped) enterprise nestled inside a larger nonprofit form, a riddle wrapped in a mystery inside an enigma. All of this is legal, by the way, as I later learned to my surprise when working in “venture philanthropy.” But it definitely is complicated and weird. Go dig up Matt Levine’s articles on this if you don’t believe me.
With that said, after my Bluesky thread on this went viral yesterday, some people took my analogy too far, so let me be clear: I am not suggesting, and there is zero evidence I’m aware of, that OpenAI is committing accounting fraud of the sort that Enron engaged in. What I am suggesting is that our current sociocultural moment, combined with a major new company led by a CEO with known challenges around telling the truth, feels eerily familiar to my time on Wall Street in the early 2000s and the unique role Enron played within the hype cycle of that era.
During the dot-com period, smart people suggested traditional economics was no longer relevant. Nowadays, the CEOs of leading AI companies are suggesting traditional human intelligence may soon be irrelevant. It’s not just Altman, either—here I must refer you to my essay analyzing Anthropic CEO Dario Amodei’s ludicrous predictions about AI helping us to reach “escape velocity” from death itself. I swear I’m not making that up.
I mean, maybe! It’s very possible this essay will be thrown in my face for the rest of my life as proof I just didn’t get it. Perhaps we really are on the verge of entering Altman’s “glorious future,” an epoch of unimaginable prosperity and abundance.
But I remember the shattered visage and colossal wreck of Enron. And perhaps that history tells us something about the days ahead.
Ozymandias
By Percy Bysshe Shelley

I met a traveller from an antique land,
Who said—“Two vast and trunkless legs of stone
Stand in the desert. . . . Near them, on the sand,
Half sunk a shattered visage lies, whose frown,
And wrinkled lip, and sneer of cold command,
Tell that its sculptor well those passions read
Which yet survive, stamped on these lifeless things,
The hand that mocked them, and the heart that fed;
And on the pedestal, these words appear:
My name is Ozymandias, King of Kings;
Look on my Works, ye Mighty, and despair!
Nothing beside remains. Round the decay
Of that colossal Wreck, boundless and bare
The lone and level sands stretch far away.”

https://www.poetryfoundation.org/poems/46565/ozymandias
I know. I KNOW. For what it’s worth, my brief experience in the belly of capitalism put the final stake in the heart of my youthful libertarian phase, so I remain grateful for the experience.
If anyone has an old NewPower Co pitch deck, please get in touch! It’s possible my memory is off on this—it was 25 years ago, after all. I am sure the valuation was in the billions, but I’d like to know the precise number.