Schumpeter
Don’t be fooled. AI bosses are regular capitalists
January 30, 2026
Press releases reveal almost nothing about how a company works. To really lift the corporate veil a firm must end up in court. Consider Elon Musk’s legal battle with OpenAI, in which a treasure trove of documents was released this month. Mr Musk is demanding a giant payout from the maker of ChatGPT, which he co-founded and now says defrauded him by abandoning its non-profit structure. The docket reads like a gossipy bestseller. Why did OpenAI partner with Microsoft rather than Amazon? “I think Jeff is a bit of a tool,” Mr Musk wrote of its founder in 2016. How does Sam Altman, OpenAI’s boss, feel about his very public scrap with Mr Musk? “It really fucking hurts,” he confided in 2023.
Investors are fixated on the revolutionary potential of AI. Some talk of little else. Primary evidence illuminating the inner workings of its vanguard thus invites careful study. A growing chorus of critics worries that AI will develop more slowly and predictably than its boosters expect: it will, in other words, be a normal technology. But corporate anthropologists studying the behaviour of OpenAI will discover something that Silicon Valley is even less keen to admit: its bosses are normal capitalists.
When a California jury hears the case in April, normality will not be the first impression left on it. As he considered changes to OpenAI’s governance in 2017, Ilya Sutskever wrote that his deliberations with Greg Brockman, a fellow co-founder, might be the “highest stakes conversation the world has seen”. A deal sheet from 2018 warned investors that OpenAI didn’t know “what role money will play” in a world where artificial general intelligence (AGI) beats humans at most economically valuable tasks.
Yet the battle for OpenAI’s soul shows how, even as AI has become more extraordinary, the way its most famous company is run has become more ordinary. The enterprise was conceived in 2015 as a civilisation-saving non-profit funded by Silicon Valley donors. Almost immediately the struggle to balance growth with charity forced it into corporate gymnastics. One idea was issuing cryptocurrency. Another was attaching to a corporate sugar daddy in the form of Tesla, Mr Musk’s car company.
Neither happened. Eventually Microsoft, and then others, invested in a subsidiary of OpenAI via deals which at first capped their returns at an uncharitable 100 times. In October the model-maker went further and reorganised as a public-benefit corporation (PBC). Although a non-profit entity still has formal control, OpenAI now operates as a for-profit business that commits to balancing the interests of shareholders with “all of humanity”. Anthropic, which makes the Claude chatbot, is also a PBC. Its charter has equally vague obligations to everyone in the world.
Silicon Valley has good reasons to cloak its profit-maximising instincts in piety. Anthropic may be the industry’s most do-goody lab, but it is also the developer of its hottest new product, Claude Code. Popular resentment over AI’s potential impact on jobs and electricity prices is a worry, particularly as it could lead to regulation. Four factors ultimately explain why AI’s star-gazing leaders now look more like standard-issue shareholder capitalists.
The first is self-interest. There is little reason to believe those in charge are altruists. “What will take me to $1bn?” wondered Mr Brockman in his diary (selectively exhumed by Mr Musk’s lawsuit), while OpenAI was considering its future in 2017. He should have asked what might take him to $134bn, the top end of what Mr Musk now demands. The huge sum assumes a $500bn valuation for OpenAI, based in part on flattering work by its own bankers. More than a quarter belongs to the non-profit entity—and most of that, says Mr Musk, belongs to him. Not bad for a $38m donation.
Then add competition. The constitutions adopted by OpenAI and Anthropic are built on fuzzy language. Yet the intensifying rivalry between model-makers is clear to all. When high-minded commitments encounter hard commercial reality, the former are destined to become window dressing rather than self-imposed constraints. Fights over regulation and courtroom battles over matters such as copyright will also force bosses to be less woolly.
To stay in the race, model-makers need capital—a third explanation. Investors demand returns, which is why cashflow targets have quickly replaced safety concerns as the industry’s idée fixe. If OpenAI and Anthropic go public, as they are rumoured to be considering, the disciplining effect that markets have on their operations will become much stronger. Good luck explaining to prickly Wall Street activists or retail investors that quarterly profits have been sacrificed for the good of the species.
Finally there is recent history, which is unambiguously unkind to bosses who pretend to ignore the bottom line in pursuit of a higher calling. The AI industry has more in common with the environmental, social and governance (ESG) movement, which reached its zenith in the early 2020s, than it would like to admit. Back then, bosses delivered existential warnings about climate change and inequality. Some even embraced new corporate forms, including the PBC. Little of the ESG project was made of strong enough stuff to survive the return of Donald Trump.
Nobody likes to be called normal, least of all billionaires. Those who believe AI could imperil humanity if not properly controlled may also despair at the assessment: the thought that AI bosses might handle extinction-level risks with the diligence that Wall Street applies to preventing financial crises is a worrying one.
Yet if AI is a normal technology marshalled by normal capitalists, the true risk may be of a different nature. It suggests that the AI boom is part of a normal cycle of hype, over-investment—and eventual crash. That wouldn’t be the end of the world, but it would be painful nonetheless. ■