AI Bubble or Structural Boom? $805B CapEx Forecast vs. Every Prior Tech Bubble Compared

Morgan Stanley forecasts $805B in hyperscaler CapEx for 2026. Larry Fink says it's not a bubble. Here's how the numbers compare to prior tech cycles.

MindStudio Team

$805 Billion Says This Isn’t a Bubble — But So Did Every Bubble

You are making a bet right now, whether you know it or not. If you’re building on AI infrastructure, hiring AI engineers, or allocating budget toward AI tooling in 2026, you’ve implicitly decided that the capital flowing into this sector represents real demand rather than speculative excess. Morgan Stanley raised its five-hyperscaler CapEx forecast to $805 billion for 2026 and $1.1 trillion for 2027. Larry Fink stood at Milken and said, “There is not an AI bubble. There is the opposite. We’re short power. We’re short compute. We’re short chips.” Both of those statements could be true, and you could still be wrong about your bet.

The question isn’t whether the AI industry is large. It clearly is. The question is whether the current capital structure resembles the dot-com bubble, the railroad bubble, or something structurally different — and what that answer means for decisions you make this quarter.


What the Prior Bubbles Actually Looked Like

Before comparing, you need a baseline. The railroad bubble of the 1840s and 1850s involved massive overbuilding of track capacity relative to near-term freight demand. Capital poured in because the technology was obviously real and obviously important. The infrastructure was not wasted — it shaped the American economy for 150 years. But the investors who funded it mostly lost their money, and the companies that survived did so by buying distressed assets from the ones that failed.


The dot-com bubble had a different shape. The technology was real. The long-run demand was real. But the revenue wasn’t. Companies were valued on eyeballs and page views and “first mover advantage” in markets that didn’t yet exist. When the question “but where does the money come from?” finally got asked loudly enough, the answer was “we don’t know yet” — and that was enough to collapse the whole structure.

The key distinction between a bubble and a structural boom is whether the capital is chasing real, current, measurable demand or speculative future demand. Railroads: real long-run demand, speculative near-term demand. Dot-com: real long-run demand, speculative near-term demand. The pattern is the same.

So what’s different now?


The Backlog Problem (Which Is Actually a Demand Problem)

The single most important number in this debate isn’t the CapEx figure. It’s the backlog figure.

The five hyperscalers spent over $400 billion in CapEx in Q1 of this year. Their reported and projected backlog — the committed future orders from customers — sits at approximately $1.3 trillion. That backlog is not only larger than current spend, it’s diverging upward. The gap between what’s being built and what’s been promised is growing, not shrinking.

This is structurally unlike the dot-com era. In 2000, Cisco and Nortel were building fiber capacity ahead of demand. The demand projections were extrapolations from early adoption curves. When enterprise customers slowed purchasing, the backlog collapsed. The fiber sat dark for years.

The current backlog includes Anthropic’s $200 billion commitment to Google Cloud spread over five years. That single deal represents over 40% of Google’s entire $462 billion reported backlog — the number that sent Google stock to an all-time high. This is not a speculative order. It’s a contractual commitment from a company whose annualized revenue, according to SemiAnalysis, has gone from $9 billion to over $44 billion in a single year. Anthropic is reportedly adding roughly $96 million in ARR per day. AWS took 13 years to reach $35 billion in annual revenue. Salesforce took over 20 years to pass $20 billion.
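
Those growth figures are internally consistent. A quick back-of-envelope check, taking the SemiAnalysis numbers quoted above at face value:

```python
# Sanity check on the Anthropic ARR figures cited above
# (SemiAnalysis numbers; treat them as reported, not verified).

arr_start = 9e9   # ARR roughly a year ago, in dollars
arr_now = 44e9    # ARR today, in dollars

daily_add = (arr_now - arr_start) / 365
print(f"ARR added per day: ${daily_add / 1e6:.0f}M")  # ~$96M/day
```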

When the customer making the commitment is growing that fast, the commitment looks less like speculation and more like a company trying to secure supply before it runs out.


The Token Economy vs. the Seat Economy

The dot-com analogy breaks down further when you examine the revenue model.

In 2000, the question was how many users you could acquire and whether you could eventually monetize them. The unit economics were unclear. In 2026, the question is how many tokens you can supply, and the unit economics are increasingly clear.

The shift from seats to tokens matters enormously here. In the seat model, a company sells a subscription — $20 a month, maybe $200 for enterprise. There’s a natural ceiling on how many seats exist. In the token model, a single power user running Claude Code or Codex can consume hundreds or thousands of dollars of compute per month. There’s no obvious ceiling on token consumption because the demand is driven by work output, not by user count.
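
To make the ceiling difference concrete, here is a minimal sketch with hypothetical numbers. The seat price, per-token price, and usage volumes below are illustrative assumptions, not figures from any vendor:

```python
# Hypothetical comparison of seat-model vs. token-model revenue ceilings.
# All numbers below are illustrative assumptions.

seats = 10_000                      # addressable users in an org
seat_price = 20 * 12                # $20/month subscription, annualized
seat_ceiling = seats * seat_price   # revenue is capped by headcount

# Token model: revenue scales with work produced, not user count.
tokens_per_user_per_year = 2_000_000_000  # a heavy agentic-coding user
price_per_million_tokens = 3.00           # assumed blended $/M tokens
token_revenue = seats * (tokens_per_user_per_year / 1e6) * price_per_million_tokens

print(f"Seat-model ceiling:  ${seat_ceiling:,.0f}")   # $2,400,000
print(f"Token-model revenue: ${token_revenue:,.0f}")  # $60,000,000
```

The exact numbers don’t matter; the point is that the seat model has a hard ceiling set by headcount, while the token model’s ceiling is set by how much work the agents do.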


Atlassian’s recent earnings illustrate this concretely. Revenue grew 32% year-over-year, up from 23% the prior quarter. Their AI search tool Rovo was a significant driver — customers using Rovo were growing their own ARR at twice the pace of non-Rovo customers. What’s interesting about Rovo technically is that it reduces token costs by using the existing Jira and Confluence knowledge graph rather than token-hungry RAG retrieval. Twenty years of structured relationships between work, teams, people, and code means Rovo does a graph lookup instead of a vector dump. The token efficiency is a feature, not a limitation. (This is the same insight behind Karpathy’s LLM wiki approach to knowledge bases, which cuts token use by up to 95% on small knowledge bases — structured data beating brute-force retrieval.)
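
Rovo’s internals aren’t public, but the shape of the saving is easy to sketch. A hypothetical comparison of a top-k RAG prompt versus a lookup over already-structured relationships (the graph, chunk sizes, and issue IDs below are invented for illustration):

```python
# Hypothetical sketch of why a structured graph lookup beats top-k RAG
# on token count. This illustrates the shape of the saving, not Rovo's
# actual implementation.

# RAG-style: dump the top-k retrieved chunks into the prompt.
TOP_K = 10
CHUNK_TOKENS = 800
rag_context_tokens = TOP_K * CHUNK_TOKENS  # 8,000 tokens of raw context

# Graph-style: follow explicit edges and emit only the relevant facts.
graph = {
    "PROJ-42": {"assignee": "dana", "status": "blocked", "blocked_by": "PROJ-17"},
    "PROJ-17": {"assignee": "lee", "status": "in_review"},
}

def answer_context(issue_id: str) -> str:
    """Traverse known relationships instead of retrieving document chunks."""
    facts = [f"{issue_id}.{k} = {v}" for k, v in graph[issue_id].items()]
    return "\n".join(facts)

ctx = answer_context("PROJ-42")
graph_context_tokens = len(ctx.split())  # crude token proxy: ~a dozen
print(f"RAG context:   ~{rag_context_tokens} tokens")
print(f"Graph context: ~{graph_context_tokens} tokens")
```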

Palantir’s Q1 numbers tell a similar story from a different angle. 85% year-over-year revenue growth, their fastest pace since their 2020 IPO. Net income up 4x year-over-year to $870 million. Government revenue growth accelerated from 66% to 84%. CTO Shyam Sankar’s framing: “Tokens are the new coal. Palantir is the train.” That’s not a metaphor about future potential. It’s a description of current revenue.


Where the Bubble Argument Still Has Teeth

None of the above means the bubble argument is wrong. It means the strongest version of the bubble argument has shifted.

The weak version — “there’s no revenue” — is clearly dead. The revenue is there and accelerating.

The stronger version is the circular spending argument. Google invested up to $40 billion in Anthropic. Anthropic committed $200 billion back to Google Cloud. Google reports that commitment as backlog, which sends the stock to an all-time high. The circularity is real. Some portion of the “demand” in the backlog is money flowing between entities that are financially intertwined. Microsoft, Oracle, Google, and Amazon have reported a $2 trillion backlog between them, with OpenAI and Anthropic accounting for nearly half of it.

The question is whether the underlying demand — from enterprises, governments, and developers who are not hyperscalers — is sufficient to justify the infrastructure being built. If Anthropic’s revenue growth is real and independent, the circular argument weakens. If Anthropic’s revenue is substantially driven by hyperscaler investment flowing back through the system, the circular argument strengthens.

The Cerebras IPO is an interesting data point here. The company planned to raise $3.5 billion at a $26.6 billion valuation, and private investors submitted requests for $10 billion in allocations. Demand so far exceeded supply that the pre-sale turned into an auction, with investors naming desired allocations and maximum prices, a break from standard IPO protocol. That’s not the behavior of a market that thinks it’s in a bubble. It’s the behavior of a market that thinks it’s being rationed.


The CapEx-as-GDP-Tailwind Argument

David Sacks has argued that AI CapEx will contribute roughly 2.5% to GDP growth this year and over 3% next year, based on the Morgan Stanley numbers. His point is that this understates the impact because CapEx is just the investment to build the token factories — it doesn’t count the economic activity generated inside them.

This is worth taking seriously as a structural argument. The tokens being produced are being used to generate code, which increases productivity throughout the economy. The ROI on the CapEx, if the productivity gains are real, could dwarf the CapEx itself. That’s why investment keeps growing even as the numbers get large.
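
Sacks’s figure is about contribution to growth, but a simpler scale check shows the magnitude involved. Note the GDP figure below is our assumption, not a number from his argument:

```python
# Rough scale check. The GDP figure is an assumption (~$30T nominal US
# GDP), not a figure from the article or from Sacks's argument.

capex = 805e9    # Morgan Stanley 2026 hyperscaler CapEx forecast
us_gdp = 30e12   # assumed nominal US GDP

print(f"CapEx as a share of GDP: {capex / us_gdp:.1%}")  # ~2.7%
```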

The railroad analogy is instructive here too, but in the opposite direction from how bubble-callers use it. Yes, railroad investors mostly lost money. But the railroads themselves created enormous economic value — they enabled the industrialization of the American interior, the growth of Chicago, the cattle industry, the grain trade. The infrastructure outlasted the investors who funded it. If AI infrastructure follows a similar pattern, the question for builders isn’t whether the investors will be right, but whether the infrastructure will exist and be accessible when you need it.

Larry Fink’s prediction that compute will become a financialized commodity — traded on futures markets like oil or wheat — is the logical endpoint of this argument. If compute becomes a commodity with futures markets, it becomes permanently accessible infrastructure, not a speculative asset class. That’s a very different world than dot-com, where the infrastructure (dark fiber) sat unused for years.


What This Means for What You Build

The practical question for builders and engineers is: how should this analysis change your decisions?

If you’re building on top of AI infrastructure, the backlog divergence is actually good news. It means the hyperscalers have strong incentives to keep building capacity, which means supply will increase over time even if there are near-term constraints. The shortage Fink describes (power, compute, chips) is a real current constraint, not a permanent structural feature.

Token efficiency matters more in a constrained supply environment. The Atlassian Rovo example — using structured knowledge graphs instead of RAG — is a concrete engineering decision with real business impact. If you’re building agents or workflows that consume tokens at scale, the Claude Code vs Codex comparison is worth understanding not just for capability differences but for token consumption patterns. Similarly, the GPT-5.5 vs Claude Opus 4.7 real-world coding performance comparison shows that GPT-5.5 uses 72% fewer output tokens than Opus 4.7 on the same tasks — a meaningful cost difference at production scale.
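
At production scale, that percentage compounds into real money. A hypothetical cost comparison, with illustrative prices and volumes (neither the per-token rate nor the monthly volume below comes from the article):

```python
# What a 72% output-token reduction means in dollars. The price and
# volume are illustrative assumptions; only the 72% figure is from the
# comparison cited above.

monthly_output_tokens = 5e9   # assumed fleet-wide agent output volume
price_per_million = 15.00     # assumed $/M output tokens

baseline = monthly_output_tokens / 1e6 * price_per_million
efficient = baseline * (1 - 0.72)  # 72% fewer output tokens, same tasks

print(f"Baseline:  ${baseline:,.0f}/mo")   # $75,000/mo
print(f"Efficient: ${efficient:,.0f}/mo")  # $21,000/mo
```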

The startup formation data is also relevant for builders. Stripe Atlas reported 130% year-over-year growth in new incorporations in Q1 2026, crossing 100,000 cumulative incorporations. That’s not a lagging indicator; it’s a leading indicator of demand for the tools, infrastructure, and platforms that new companies will need. If you’re building developer tools or enterprise software, your addressable market is expanding faster than the job displacement narrative would suggest.

For teams building agentic workflows that need to connect to business systems at scale, platforms like MindStudio handle the orchestration layer — 200+ models, 1,000+ integrations, and a visual builder for chaining agents and workflows — which matters when the bottleneck shifts from “can AI do this?” to “can we deploy this across our actual stack?”

The Stripe data also shows AI-sector startups achieving faster revenue growth than historical norms. If you’re building something in the AI sector and your revenue growth looks slow relative to Anthropic’s numbers, that’s not necessarily a problem — Anthropic’s growth rate is historically anomalous. But it does suggest that the market is rewarding AI-native products with faster adoption curves than previous software categories.


The Verdict

The honest answer is that the current AI buildout doesn’t look like the dot-com bubble or the railroad bubble in the ways that matter most. The revenue is real. The demand backlog is real and growing faster than supply. The unit economics are improving — Anthropic’s inference margins reportedly went from 38% to 70% in a year. The customers making the largest commitments are themselves growing fast enough to plausibly fulfill those commitments.

The circular spending concern is legitimate and worth watching. If Anthropic’s revenue growth slows significantly, the Google backlog story changes. If the hyperscalers’ enterprise customers don’t convert AI pilots into production deployments at the rate the backlog implies, the divergence between backlog and spend will eventually correct.

But “this could correct” is different from “this is a bubble.” Every capital cycle could correct. The question is whether the underlying demand is structural or speculative. The weight of the evidence (the Palantir revenue numbers, the Atlassian earnings, the Stripe startup data, the Cerebras IPO auction, the software engineering job posting data) points toward structural.

When you’re building something that will take six to eighteen months to ship, the question isn’t whether there’s a bubble. The question is whether the infrastructure you’re building on will still be there when you’re done. On that question, the $805 billion forecast and the $1.3 trillion backlog are the most relevant numbers you have.

For teams thinking about how to build production applications on top of this infrastructure, Remy takes a different approach to the code generation layer: you write a spec — annotated markdown where prose carries intent and annotations carry precision — and it compiles into a complete TypeScript backend, SQLite database, frontend, auth, and deployment. The spec is the source of truth; the generated code is derived output. That matters when the underlying models are improving fast enough that recompiling from a clean spec is cheaper than maintaining hand-written code.

The builders who will be wrong are the ones who treat the current moment as either obviously safe or obviously doomed. The data says structural growth. The circular spending concern says watch the revenue carefully. Both of those things can be true at once, and acting on both simultaneously is the only reasonable position.

Presented by MindStudio
