Anthropic's $50B Raise at a Near-$1T Implied Valuation: Why Secondary Shares Now Trade Above OpenAI
Anthropic confirmed a $50 billion raise. On secondary markets, Anthropic shares are now trading above OpenAI — and some individual trades have implied a valuation of $1 trillion. That’s not a typo, and it’s not a rounding error.
If you’re building on top of these models, or deciding which labs to bet on, this matters. Not because the exact valuation multiple is meaningful, but because of what it signals about how the market is reading the next five years of AI infrastructure.
Here’s what happened, what the numbers actually mean, and how to think about it.
What the Numbers Say
Bloomberg reported mid-week that Anthropic had begun talks to raise at a valuation above $900 billion. That would put it beyond OpenAI’s last primary-round valuation of $825 billion, set in March. By Thursday, TechCrunch had the confirmation: the raise was expected to close at $50 billion.
Then the secondary market data started surfacing. Secondary markets — where existing shareholders sell shares to new buyers outside of official funding rounds — are often the most honest signal of what sophisticated investors actually believe, because they’re putting real money on the line without the PR machinery of a formal round. And on secondary markets, Anthropic shares have flipped above OpenAI. Some trades have implied a valuation as high as $1 trillion.
That’s a meaningful shift. For most of the last two years, OpenAI was the default “safe” bet in private AI markets. The fact that Anthropic has flipped that — even on secondary, even in some trades — is worth understanding.
Why Anthropic’s Valuation Makes Sense Right Now
The logic isn’t primarily about Anthropic’s current revenue multiple. It’s about scarcity.
OpenAI CFO Sarah Friar described token demand as “a vertical wall of demand with compute being the bottleneck.” Dylan Patel from SemiAnalysis made the same point on Patrick O’Shaughnessy’s podcast: the question of which lab is technically “in first place” is almost irrelevant, because “even tier two or tier three labs are going to be sold out of tokens.” Every token that can be produced will be sold. The constraint is physical compute, not demand.
In that environment, the handful of companies that can actually produce frontier tokens at scale are not competing for customers. They’re rationing access. That changes the valuation math entirely.
Anthropic sits in that group. So does OpenAI. So does Google. The secondary market bet on Anthropic isn’t that it will beat OpenAI on some benchmark — it’s that there are roughly half a dozen companies writing the story of the next decade of computing, and Anthropic is one of them. Given that framing, the question isn’t whether the valuation is too high. It’s whether you can get in at all.
The Compute Constraint Is Real and Getting More Visible
You can see the compute scarcity showing up in places that would have seemed absurd six months ago.
AWS Q1 2026 cloud revenue grew 28% year-over-year — its best performance since climbing out of a trough in 2021. Andy Jassy, discussing demand for Trainium chips, said: “We have such demand right now for Trainium from various companies who will consume as much as we make. I expect over time there’s a good chance we’re going to sell racks.” Microsoft Azure grew 40% year-over-year. Google Cloud grew 63% — resulting in the second-biggest single-day market cap jump in Google’s history. Analyst Joseph Carlson described the backlog chart as “so crazy, it literally looks fake.”
And then there’s the Mac mini: Apple’s small desktop is sold out for at least several months, and Tim Cook discussed it on the earnings call. Consumer AI hardware — not data center GPUs, not H100 clusters — is now compute-constrained. That’s how deep the demand runs.
For builders, this is the context in which Anthropic’s valuation makes sense. When you’re building on Claude, you’re building on infrastructure that is, by any reasonable measure, undersupplied relative to demand. That’s not a bad place to be as a supplier.
How Anthropic Compares to OpenAI Right Now
The Anthropic-OpenAI comparison is more interesting than just “who has the higher valuation.”
The Microsoft-OpenAI deal was restructured this week in ways that reveal a lot about OpenAI’s position. Microsoft gets free (not revenue-share) access to OpenAI models for another roughly five years, plus the removal of the AGI clause that could have cut off Microsoft’s access on a whim. In exchange, OpenAI is now free to sell its models through AWS and Google Cloud. The framing from analyst Raizo was direct: OpenAI has grown too big for any single cloud to fully serve.
That’s a sign of strength, but it’s also a sign of a company that has outgrown its original infrastructure relationships. OpenAI is now a multi-cloud model vendor. That’s a different business than it was two years ago.
Anthropic, meanwhile, has its own cloud relationships — primarily with AWS and Google — and has been more deliberate about its enterprise positioning. The Anthropic vs OpenAI vs Google agent strategy comparison is worth reading if you want the full picture of how each lab is thinking about the agentic layer, because the valuation story and the product strategy story are connected.
One concrete example of that product strategy: Anthropic split its developer-facing tools into Claude Code (for technical work) and Claude Cowork (for non-technical knowledge workers), while OpenAI’s Codex is betting on a single interface for everyone. These are real bets with real implications for which user base each company captures — and that shapes the revenue trajectory that justifies the valuation.
The Mythos Wrinkle
There’s one factor in Anthropic’s valuation story that deserves its own paragraph: Mythos.
Anthropic’s most capable unreleased model — Claude Mythos — has been in restricted preview with roughly 70 companies. Anthropic’s plan was to expand that access incrementally. But the White House told Anthropic it opposes broader rollout due to national security concerns. Some officials are reportedly worried that Anthropic won’t have the compute to serve more entities without degrading the government’s own access.
Anthropic says compute isn’t the constraint. The White House isn’t convinced.
AI governance expert Dean Ball described this as “the very first case that we know of of the US government restricting rollout of a new AI model based on policy considerations.” He called it an informal but real licensing regime. His conclusion: “The training wheels have come off on AI policy. The trial runs are over.”
For the valuation story, this cuts both ways. On one hand, government scrutiny of your most capable model is a friction. On the other hand, the fact that the US government is paying this much attention to Anthropic’s model rollout is itself a signal of how seriously Mythos is being taken. You don’t restrict something you think is irrelevant.
If you want the full technical picture of what Mythos actually does, the Mythos vs Opus 4.6 capability comparison is a good place to start.
What This Means If You’re Building on These Models
Here’s the practical read for AI builders.
The end of the AI subsidy era is real. GitHub Copilot’s move to usage-based billing — CPO Mario Rodriguez said the current premium request model is “no longer sustainable” — is the clearest signal yet. Satya Nadella said on the Microsoft earnings call: “Any per user business of ours… will become a per user and usage business.” That’s not a GitHub-specific statement. That’s a direction-of-travel statement for the entire industry.
If you’re building agents or workflows that make heavy use of frontier models, you need to be thinking about cost architecture now. The companies that will do well in a usage-based world are the ones that use premium models for tasks that actually require them, and cheaper models for everything else. Platforms like MindStudio make this kind of multi-model orchestration practical — 200+ models, 1,000+ integrations, and a visual builder for chaining agents — so you’re not locked into a single provider’s pricing as the market shifts.
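As a sketch of what that cost architecture can look like in practice, here is a minimal complexity-threshold router. The model names, per-token prices, and the upstream complexity score are all hypothetical, chosen only to show the shape of the technique, not any provider's real pricing:

```python
# Minimal cost-aware routing sketch. Model names, prices, and the
# upstream complexity score are hypothetical, not real provider pricing.
from dataclasses import dataclass

@dataclass(frozen=True)
class Model:
    name: str
    usd_per_1k_tokens: float  # illustrative price

FRONTIER = Model("frontier-model", 0.015)
BUDGET = Model("budget-model", 0.0008)

def route(complexity: float, threshold: float = 0.7) -> Model:
    """Send only high-complexity tasks to the expensive model."""
    return FRONTIER if complexity >= threshold else BUDGET

def cost(tokens: int, model: Model) -> float:
    return tokens / 1000 * model.usd_per_1k_tokens

# A batch where 1 task in 5 actually needs frontier capability:
scores = [0.9, 0.2, 0.3, 0.1, 0.4]
routed = sum(cost(2000, route(s)) for s in scores)
all_frontier = sum(cost(2000, FRONTIER) for _ in scores)
```

With these illustrative prices, the routed batch costs roughly a quarter of sending everything to the frontier model; the real savings depend entirely on how accurately task complexity can be scored upstream.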
The Anthropic compute shortage is also real and getting more visible. If you’ve noticed Claude rate limits tightening, the compute shortage context explains what’s driving it. The short version: Anthropic underinvested in compute relative to demand, and the $50B raise is partly about fixing that. But it takes time to build out infrastructure, and in the meantime, the scarcity is the point — it’s what’s driving the valuation.
The Secondary Market Signal
Secondary market valuations are noisy. Individual trades at an implied $1 trillion valuation don’t mean Anthropic is worth $1 trillion today. They mean that some buyers, in some transactions, were willing to pay that price for a small number of shares.
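To make “implied valuation” concrete: a secondary trade’s per-share price multiplied by the fully diluted share count gives the company-level figure. The numbers below are hypothetical, chosen only to show the arithmetic:

```python
# Hypothetical numbers showing how one secondary trade "implies" a
# company-level valuation: price per share times fully diluted shares.
price_per_share = 500.0                 # USD paid in the trade (hypothetical)
fully_diluted_shares = 2_000_000_000    # hypothetical share count
implied_valuation = price_per_share * fully_diluted_shares
# A $500-per-share trade against 2B shares implies a $1 trillion
# valuation, even though only a sliver of shares changed hands.
```

This is why a single trade can "imply" a trillion-dollar company without anyone having paid anything close to a trillion dollars.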
But the direction of the signal matters. A year ago, OpenAI was the unambiguous leader in private market sentiment. The fact that Anthropic has flipped that — even partially, even on secondary — reflects something real: a belief that Anthropic’s technical positioning, its enterprise relationships, its safety reputation (which matters to regulated industries), and its model quality have converged into a credible long-term bet.
The $50B raise, if it closes at that number, will be one of the largest private fundraises in history. It will give Anthropic the capital to build out compute infrastructure, expand its model lineup, and compete for enterprise contracts at a scale that wasn’t possible before.
For builders, the most useful frame isn’t “is the valuation justified” — it’s “is Anthropic going to be a major infrastructure provider for the next decade.” The secondary market is saying yes. The cloud earnings numbers are saying the demand is there. The government’s attention to Mythos is saying the capability is real.
Where This Goes Next
The Anthropic raise is part of a broader consolidation story. A small number of companies — Anthropic, OpenAI, Google DeepMind, maybe one or two others — are pulling away from the rest of the field in terms of frontier capability, compute access, and enterprise trust. The secondary market is pricing that consolidation.
For builders, this means the model layer is becoming more like cloud infrastructure: a few dominant providers, significant switching costs, and pricing power that will only increase as demand outstrips supply. The smart move is to build in a way that doesn’t assume any single provider’s pricing or availability will stay constant.
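One way to avoid baking a single provider's pricing or availability into your stack is to code against a thin interface and make the provider a configuration choice. This is a minimal sketch; `ProviderA` and `ProviderB` are hypothetical stand-ins, not real SDK clients:

```python
# Provider-abstraction sketch. ProviderA/ProviderB are hypothetical
# stand-ins, not real SDKs; the point is the seam, not the calls.
from typing import Protocol

class Completion(Protocol):
    def complete(self, prompt: str) -> str: ...

class ProviderA:
    def complete(self, prompt: str) -> str:
        return f"[A] {prompt}"

class ProviderB:
    def complete(self, prompt: str) -> str:
        return f"[B] {prompt}"

PROVIDERS: dict[str, Completion] = {"a": ProviderA(), "b": ProviderB()}

def answer(prompt: str, provider: str = "a") -> str:
    # Switching providers is a config change, not a rewrite.
    return PROVIDERS[provider].complete(prompt)
```

In a real system the provider classes would wrap actual client libraries, and the routing key could come from live pricing or availability data rather than a hardcoded default.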
The Claude Code source code leak surfaced some interesting details about how Anthropic is thinking about developer tooling — worth reading if you’re building heavily on Claude. And if you’re thinking about the full stack of what gets built on top of these models, tools like Remy represent one answer to the “how do you go from spec to deployed application” question: you write annotated markdown, and a complete TypeScript backend, database, auth layer, and frontend get compiled from it. The source of truth is the spec; the code is derived output. That abstraction layer starts to matter more as the underlying model costs become variable.
The valuation numbers will keep moving. The underlying dynamic — more demand than supply, a handful of companies with the infrastructure to serve it, and a market that’s starting to price that reality — is going to be stable for a while.