XAI Is Dead: 5 Surprising Facts About Elon Musk's U-Turn on Anthropic
Elon called Anthropic 'missanthropic' in March 2026. Weeks later he leased them his entire data center and dissolved XAI. Here's the full story.
XAI is being dissolved as a separate company and folded into SpaceX AI, and Elon Musk went from tweeting “is there a more hypocritical company than Anthropic?” to praising their team and handing them 220,000 Nvidia GPUs. These 5 facts explain how that happened — and why the reversal matters more than the headline suggests.
From “Missanthropic” to “No One Set Off My Evil Detector”: The Timeline
On March 10, 2026, Elon Musk retweeted Holly Elmore’s post telling Anthropic employees to quit, adding his own comment: “Is there a more hypocritical company than Anthropic?” That tweet got 8.8 million views. He’d previously called them “missanthropic” and said “winning was never in the set of possible outcomes for Anthropic.” He had Grok generate a “vulgar roast” of Dario Amodei. He accused Anthropic of “stealing training data at massive scale.”
Weeks later, he tweeted: “Everyone I met was highly competent and cared a great deal about doing the right thing. No one set off my evil detector.” That tweet got 20 million views.
The gap between those two moments is the story.
What Actually Happened
Anthropic announced a compute partnership with SpaceX that gives them full use of the Colossus 1 data center — not a partial lease, not a shared arrangement, the entire facility. Colossus 1 sits in Memphis, Tennessee, houses over 220,000 Nvidia GPUs (mostly H100s), and runs at 300 MW capacity. Anthropic is using 100% of it.
Elon explained his position in a follow-up tweet: “I was okay leasing Colossus 1 to Anthropic, as SpaceX AI had already moved training to Colossus 2.” Colossus 2 is XAI’s Blackwell-based cluster containing around 550,000 GPUs, where XAI has moved its own model training. So the picture is: Anthropic gets the H100 cluster, XAI keeps the Blackwell cluster, and everyone pretends this was the plan all along.
Simultaneously, Elon announced that XAI will be dissolved as a separate company. It becomes SpaceX AI — not a subsidiary, not a division with its own identity, just SpaceX AI. The rebrand is a signal about where Elon thinks the value actually is.
The compute deal is already live. Anthropic confirmed inference on Colossus 1 would be available within a month of the announcement, and the usage-limit changes went into effect immediately. Claude Code’s 5-hour rate limit was doubled for Pro, Max, Team, and seat-based enterprise plans. Peak-hour usage reductions for Claude Code were eliminated for Pro and Max accounts. API rate limits for Opus models jumped substantially: max input tokens per minute went from 30K to 500K on Tier 1, from 450K to 2M on Tier 2, from 800K to 5M on Tier 3, and from 2M to 10M on Tier 4.
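To put those tier changes in perspective, here is a minimal sketch that encodes the announced before/after input-tokens-per-minute limits for Opus and computes the headroom multiplier per tier (the numbers come from the announcement; the helper function itself is just illustrative arithmetic, not anything from Anthropic’s SDK):

```python
# Opus input-tokens-per-minute limits before and after the deal,
# as stated in the announcement: tier -> (old limit, new limit).
OPUS_ITPM = {
    1: (30_000, 500_000),
    2: (450_000, 2_000_000),
    3: (800_000, 5_000_000),
    4: (2_000_000, 10_000_000),
}

def headroom_multiplier(tier: int) -> float:
    """How many times more input tokens per minute a tier now allows."""
    old, new = OPUS_ITPM[tier]
    return new / old

for tier in sorted(OPUS_ITPM):
    print(f"Tier {tier}: {headroom_multiplier(tier):.1f}x more input tokens/min")
```

The striking part is Tier 1: a roughly 16x jump means hobbyist and early-stage accounts got the biggest relative increase, not the enterprise tiers.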
If you’ve been hitting Claude limits constantly — and if you use Claude Code heavily, you almost certainly have — Anthropic’s compute shortage has been a real problem. This deal is the first meaningful fix.
Why This Matters Beyond the Drama
Dario Amodei said something at Anthropic’s Code with Claude developer day that reframes the entire situation. His exact words: “We planned for a world of 10x growth per year. In Q1 2026, we saw 80x annualized growth per year in revenue and usage.”
80x annualized, against a plan built for 10x. That’s not a rounding error or a favorable comparison period. That’s a company that correctly identified a massive market, built the best product for it, and then ran out of the physical infrastructure to serve the demand they created. The compute shortage wasn’t a strategic failure — it was a consequence of being right faster than they could build.
Anthropic has been scrambling to fix this from multiple directions simultaneously. They have an up-to-5 GW deal with Amazon AWS (targeting nearly 1 GW of new capacity by end of 2026). They have a 5 GW deal with Google and Broadcom coming online in 2027. They have a $30 billion Azure capacity deal with Microsoft and Nvidia. And now they have Colossus 1.
The SpaceX deal is notable specifically because it’s already online. The other deals are future capacity. Colossus 1 is running today, which is why the rate limit changes happened immediately rather than being announced as a future roadmap item.
For developers building on Claude — particularly those using Claude Code for production workflows — the practical implication is that the artificial scarcity that’s been degrading the experience for months has a real fix in place, not just a promise of one.
The Non-Obvious Part: Elon’s Actual Strategic Pivot
The “enemy of my enemy” framing is too simple. Yes, Elon is in an active lawsuit with Sam Altman. Yes, helping Anthropic is indirectly bad for OpenAI. But that’s not the full picture.
The more interesting read is that Elon has concluded his comparative advantage in AI was never model development. It was infrastructure.
XAI’s model trajectory tells the story. Grok 4.3 just launched and it’s a decent model — cheaper than the frontier alternatives, meaningfully improved from previous versions — but nobody puts it in the same tier as Claude Opus or GPT-5. The agentic harness story is even thinner: XAI has no meaningful product competing with Claude Code or Codex. The Cursor deal, announced in April, was supposed to be the answer, but reporting from The Information suggested there were no concrete plans to co-develop a coding model, with Cursor keeping its distance from XAI’s direction.
Meanwhile, Colossus 1 came online faster than anyone expected. Elon’s actual demonstrated skill — compressing money, resources, and time to build known-but-hard things at scale — showed up clearly in the data center buildout. The model side required something different: research breakthroughs, the kind of thing you can’t just throw resources at.
So XAI folds into SpaceX AI. The Grok model doesn’t disappear — it stays integrated into X/Twitter and remains an option for Optimus as embodied robotics mature — but it’s no longer the primary bet. The primary bet is being the compute provider for the labs that are winning the model race.
Chamath Palihapitiya predicted this on the All-In podcast weeks before the deal, saying: “That’s a huge lane for Grok and SpaceX to run through because they have a ton of excess capacity. If I were Elon now, I’d be running all over this market… He and Dario should do a deal tomorrow.” The deal happened roughly on that timeline.
The Terrafab chip manufacturing project in Texas adds another layer. A legal filing in Grimes County put the project cost at $55 billion to $119 billion — significantly higher than the $20-25 billion previously estimated. If completed, it would be the largest chip fab on the planet. That project makes a lot more sense if Elon is positioning SpaceX as infrastructure for frontier labs rather than as a frontier lab itself. Anthropic is now, in effect, a guaranteed anchor customer.
The Cursor Wrinkle Nobody Is Talking About
There’s a complication buried in the deal structure that deserves attention.
In April, Cursor announced a partnership with SpaceX to use Colossus infrastructure for model training. The deal included an option for SpaceX to acquire Cursor later in 2026 for $60 billion, or pay a $10 billion breakup fee. Cursor’s announcement said they’d be “leveraging XAI’s Colossus infrastructure.”
Then Anthropic announced they’re using 100% of Colossus 1.
Elon’s tweet clarified that XAI had already moved its own training to Colossus 2. The implication is that Cursor’s training work is also happening on Colossus 2, not Colossus 1. But the sequencing here is worth watching. Cursor signed a deal with SpaceX, then SpaceX leased its entire first data center to a competitor in the coding AI space. If you’re a Cursor founder, that’s a non-trivial thing to process.
The $10 billion breakup fee gives Cursor some protection, and the Colossus 2 capacity is real. But the strategic picture shifted. Cursor was supposed to be XAI’s answer to Claude Code. Now XAI is providing compute to Claude Code’s parent company. These things can coexist operationally, but the alignment is messier than either announcement made it sound.
What the Secondary Market Is Saying
Anthropic’s implied valuation on secondary markets has crossed $1 trillion, surpassing OpenAI’s $850 billion. That gap is recent — a few months ago the comparison would have looked different.
The valuation gap reflects something real about the enterprise market. Anthropic has 42-54% market share in AI coding use cases. OpenAI has 21%. Coding is now 51% of all enterprise generative AI usage according to Menlo Ventures’ State of Generative AI report. Claude Code alone is doing $2.5 billion in annualized revenue.
For teams evaluating which models to build on — whether through direct API access or through orchestration platforms — the market signal matters. If you’re building agents that chain Claude with other tools, platforms like MindStudio handle this orchestration across 200+ models and 1,000+ integrations, which gives you flexibility to route between providers as the competitive landscape shifts. The Anthropic-SpaceX deal changes the capacity picture, but the multi-model strategy remains sound insurance.
The Anthropic vs OpenAI agent strategy comparison is worth reading in this context — the compute deal doesn’t change the fundamental architectural bets each company is making, but it does change Anthropic’s ability to execute on theirs.
The Orbital Compute Footnote
One line in the SpaceX announcement is easy to skip past: “SpaceX AI and Anthropic AI have also expressed interest in partnering to develop multiple gigawatts of orbital AI compute capacity.”
This is not a product announcement. It’s not even a commitment. But it’s notable that Anthropic put their name on it. Sam Altman has publicly dismissed orbital compute as impractical. Jensen Huang has been more open to the idea. Elon has been bullish on it as part of SpaceX’s long-term vision.
The fact that Anthropic is willing to express interest — even in vague terms — signals something about how seriously they’re taking the infrastructure relationship with SpaceX. It’s also a useful data point for thinking about where the compute ceiling actually is. If demand is growing at 80x annualized and terrestrial buildout has physical limits (power, land, cooling), the question of what comes next is not purely theoretical.
What to Watch
The immediate practical question is whether Anthropic’s rate limit improvements hold as more users pile onto the newly available capacity. Doubling the 5-hour rate limit and eliminating peak-hour reductions for Pro and Max accounts is meaningful, but the weekly limits haven’t changed yet. Anthropic’s head of growth noted that only a small percentage of users hit weekly limits while a much larger portion hit the 5-hour limit — so they fixed the more common problem first. Weekly limits are the next thing to watch.
The longer-term question is what SpaceX AI actually becomes. XAI dissolving as a separate company is a structural change, not just a rebrand. The personnel story at XAI has been rough — co-founders leaving one after another, acknowledged rebuilds from scratch. Folding into SpaceX gives the remaining team a different organizational context and a clearer mission: build the infrastructure, not the frontier model.
If you’re building applications on top of Claude — whether that’s using Claude Code in production or building agents that use Claude as a reasoning layer — the capacity situation just improved materially. The deal is live, the limits are already higher, and Colossus 2’s Blackwell GPUs are still coming online for future capacity. For developers who’ve been routing around Claude’s limits with workarounds, it’s worth retesting your actual throughput against the new numbers.
Tools like Remy are worth thinking about in this context too — when you’re compiling a full-stack app from an annotated spec rather than hand-writing TypeScript, the underlying model’s throughput and reliability become load-bearing in a different way than in interactive chat. Consistent API availability matters more when your build pipeline depends on it.
Tom Brown, one of Anthropic’s founders, tweeted: “Grateful to be partnering with SpaceX. We’re going to need to move a lot of atoms.” That’s the honest summary of where things stand. The model quality is there. The demand is there. The constraint has always been physical infrastructure, and the SpaceX deal is the most direct fix Anthropic has found yet.
The fact that it required Elon Musk to reverse two years of public hostility to make it happen is either a sign of how desperate the compute situation was, or a sign of how good the business case is. Probably both.