OpenAI Tells Investors It Has a Computing Advantage Over Anthropic - And It's Not Wrong
A leaked memo reveals the infrastructure gap between OpenAI and Anthropic is widening. OpenAI is at 1.9 gigawatts. Anthropic is at 1.4. The question is whether compute equals quality.
OpenAI has a message for its investors: the compute gap is real, and it’s not close.
A leaked memo, reported this week, shows OpenAI told investors it has pulled ahead of Anthropic on available computing capacity by aggressively scaling its infrastructure early. OpenAI says it now has 1.9 gigawatts of available compute; Anthropic has 1.4. By next year, OpenAI expects “low double-digit” gigawatts, while Anthropic is targeting 7 to 8.
For context: one gigawatt can power about 750,000 American homes.
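To make that conversion concrete, here is a minimal back-of-envelope sketch using the article's own figure of roughly 750,000 American homes per gigawatt (the constant and function name are illustrative, not from the memo):

```python
# Rough rule of thumb from the article: ~750,000 US homes per gigawatt.
HOMES_PER_GW = 750_000

def homes_equivalent(gigawatts: float) -> int:
    """Approximate number of American homes a given capacity could power."""
    return round(gigawatts * HOMES_PER_GW)

print(homes_equivalent(1.9))  # OpenAI's stated capacity -> 1,425,000 homes
print(homes_equivalent(1.4))  # Anthropic's stated capacity -> 1,050,000 homes
```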
Why this matters for products
The memo puts it bluntly: “This gap is crucial because computational power has now become a product constraint.” That’s a significant shift in how to think about AI companies. The limiting factor for AI quality used to be the model architecture. Now it’s physical infrastructure.
OpenAI is essentially arguing that compute availability is now a product feature. If you can’t run enough inference at scale, you can’t ship reliable products. And if you’re Anthropic, your Claude models - impressive as they are - are bottlenecked by the fact that you simply have less iron backing them.
Anthropic’s response has been to lean into partnerships. A $50B US data center commitment. Deals with Broadcom and Google for an estimated 3.5 gigawatts of capacity by 2027. And notably, Anthropic works with all three major cloud providers: Google, Microsoft, and Amazon. OpenAI, by contrast, is building its own infrastructure, having committed $600B to data centers and chips by 2030.
The IPO timing question
Anthropic is reportedly considering an IPO. The interesting dynamic here is that an IPO would force Anthropic to tell a very specific story: that its more conservative infrastructure spend is a feature, not a bug. That efficient capital deployment beats YOLO spending. That you can win a race by being the smarter runner.
OpenAI’s memo, as reported by Zhitong Finance, essentially calls that a misjudgment of demand. “In retrospect, this caution appears less like self-discipline and more like underestimation of how quickly demand would arrive,” the memo said.
Dario Amodei has characteristically dismissed the aggressive spenders’ approach as a “YOLO” strategy; OpenAI’s memo turns that framing around, casting the caution itself as the misread of the market.
The engineering reality
There are good reasons Anthropic has been more conservative. Compute is expensive. GPUs sit idle. Capacity planning for AI workloads is genuinely hard. And Anthropic has had strong revenue growth - reportedly hitting a $30B revenue run rate in roughly half the time it took OpenAI, while spending only about a quarter as much on model training.
But OpenAI’s argument is that the game has changed. The question isn’t just how efficiently you spend - it’s whether you have enough capacity to keep your product available when demand spikes. The memo cites Anthropic having “sometimes struggled to maintain service availability” as evidence that the conservative approach has real costs.
What’s the actual takeaway?
Both strategies are defensible. OpenAI is betting that compute leadership translates to product leadership, and that you need to build ahead of demand to avoid being capacity-constrained at the wrong moment. Anthropic is betting that disciplined spend and strong model quality will win on margins and efficiency.
The honest answer is that we don’t yet know which bet pays off. The AI infrastructure race is genuinely new territory. But the memo OpenAI sent tells you how they’re thinking: this is a land grab, and you don’t win land grabs by being careful.
Source: Bloomberg | Futunn / Zhitong Finance