Claude Code Hourly Limits Just Doubled — Here's the Compute Deal That Made It Possible
Claude Code's hourly limits just doubled. The reason is Anthropic's takeover of SpaceX's Colossus 1 data center. Here's what changed and what's still limited.
Claude Code’s hourly limits doubled this week, and the direct cause is Anthropic taking over the entire capacity of the Colossus 1 data center — the same facility Elon Musk’s XAI built in Memphis to train Grok. If you’ve been hitting rate limits mid-session and watching your work queue up until 2am, this change is immediately relevant to you.
The announcement came paired with a second deal: Anthropic committed $200 billion to Google Cloud over five years. Two massive compute agreements landing in the same week, both pointing at the same underlying problem Anthropic has had for a while: not enough infrastructure to match the demand their models generate.
This post covers what changed, what didn’t, why the Colossus deal happened at all, and what it means for how you build with Claude Code going forward.
What Actually Changed (and What Hasn’t Yet)
The hourly limits for Claude Code doubled. That’s the concrete, immediate change.
The weekly limits have not changed yet as of this writing. So if you were hitting weekly caps, you’re still hitting them. The hourly change matters most for intensive sessions: the kind where you’re running parallel agentic workflows across multiple branches or chaining long multi-step tasks without wanting to babysit a queue.
Anthropic’s own announcement framed it clearly: the SpaceX partnership “substantially increases compute capacity,” and that capacity increase is what enabled the limit change. This isn’t a pricing adjustment or a policy decision. It’s a direct function of having more physical compute available.
The Claude API limits also increased alongside Claude Code. If you’re building on the API — running agents, automations, or anything that hammers the endpoint — you have more headroom now too.
Why Colossus 1 Was Available at All
Here’s the part that doesn’t make obvious sense until you look at the numbers.
XAI, Elon Musk’s AI company, stood up Colossus 1 in record time at the end of 2024. It’s a serious piece of infrastructure. But Elon himself has stated that XAI is using only approximately 11% of its compute capacity. Grok is a real model, but it hasn’t captured the usage share that Anthropic, OpenAI, and Google have. Most developers and enterprises aren’t routing their workloads through Grok.
That leaves 89% of Colossus 1 sitting idle, generating costs without generating revenue. Selling that capacity to Anthropic isn’t charity — it’s basic asset utilization.
XAI is also ceasing to exist as a separate company and is being fully folded into SpaceX. The organizational consolidation makes the compute-as-infrastructure play even cleaner: SpaceX becomes a compute provider, not just a rocket company, and the Anthropic deal is the anchor tenant that makes the economics work.
The irony is hard to miss. As recently as March 2026, Elon called Anthropic “missanthropic” and described it as “the most hypocritical company.” He said, in his words, that “winning was never in the set of possible outcomes for Anthropic.” Weeks later, he signed a deal giving them his entire data center. The most charitable read is that this is an “enemy of my enemy” play — Elon’s lawsuit with Sam Altman is ongoing, and anything that helps Anthropic pull ahead of OpenAI is, from Elon’s perspective, a win against his actual target.
The Google Deal Is the Other Half of the Story
The Colossus deal gets the headlines because of the Musk angle, but the Google Cloud commitment is arguably the larger structural move.
$200 billion over five years is not a rounding error. That number came from reporting by The Information, which put a specific figure on a deal that had already contributed to the $462 billion backlog Google Cloud announced in its earnings. Google’s stock was already up 10% after the backlog announcement. It spiked another 1.5% overnight when the $200 billion figure was reported, and held those gains through the rest of the week.
Markets aren’t treating this as circular funding — the old argument that Anthropic is just recycling Google’s investment back to Google. Analysts are reading it as evidence that Anthropic’s revenue is real and growing fast enough to support massive new infrastructure commitments. That’s a different story than the one people were telling six months ago.
For builders, the practical implication is that Anthropic is now locked into a multi-year infrastructure expansion. The limit increases this week are the first visible output of that. They won’t be the last.
What This Means for How You Build
If you’ve been designing workflows around Claude Code’s rate limits — batching work, staggering requests, building retry logic to handle hourly caps — some of that defensive architecture just became less necessary.
The core Claude Code workflow patterns (subagent loops, parallel branches, schema migrations, test-driven iteration) all become more viable when you’re not rationing tokens against a hard hourly ceiling. You can run longer unattended sessions. You can spawn more parallel agents without worrying that one branch will exhaust the budget before the others finish.
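One way to keep a single branch from starving the others is to reserve an equal token share per branch up front. This is a minimal sketch of that idea; the class name, token figures, and equal-split policy are all illustrative assumptions, not anything Anthropic ships:

```python
import threading

class BranchBudget:
    """Split a token budget across parallel agent branches so no single
    branch can exhaust it before the others finish.

    The equal-share policy and token figures are placeholders; tune them
    to your plan's actual limits and your branches' real appetites.
    """

    def __init__(self, total_tokens, branches):
        self.lock = threading.Lock()
        self.share = total_tokens // branches  # equal reservation per branch
        self.used = {}

    def draw(self, branch, tokens):
        """Try to charge `tokens` to `branch`; refuse past its share."""
        with self.lock:
            used = self.used.get(branch, 0)
            if used + tokens > self.share:
                return False  # this branch has exhausted its reservation
            self.used[branch] = used + tokens
            return True
```

Each worker checks `draw()` before issuing a request, so an overeager branch fails fast instead of silently consuming the shared ceiling.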
That said, a few things haven’t changed and probably won’t change soon:
Weekly limits are still in place. Plan your heaviest workloads accordingly. If you’re running something like the AutoResearch self-improving skills loop overnight, you still need to account for weekly consumption, not just hourly burst.
Cost is still real. More headroom doesn’t mean free tokens. If you’re building production systems that route through Claude Code or the API, your cost modeling should still be conservative. The limit increase means you can use more, not that you should use more indiscriminately.
The API and Claude Code limits moved together. If you’re building agents that orchestrate Claude via API — the kind of multi-model pipelines where you might use MindStudio’s visual builder to chain Claude alongside other models and integrations — the API headroom increase matters as much as the Claude Code change. Platforms that let you compose across 200+ models and connect to business tools without writing orchestration code benefit directly from upstream capacity increases like this one.
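Since hourly and weekly ceilings now move independently, it can help to track spend against both windows at once and let the tighter constraint win. A minimal sketch, assuming placeholder cap values (the real numbers are plan-specific and not published here):

```python
import time
from collections import deque

class TokenBudget:
    """Track token spend against separate hourly and weekly ceilings.

    The cap values are illustrative placeholders, not Anthropic's actual
    limits; substitute the numbers for your plan.
    """

    def __init__(self, hourly_cap=2_000_000, weekly_cap=50_000_000):
        self.caps = {3600: hourly_cap, 7 * 24 * 3600: weekly_cap}
        self.events = deque()  # (timestamp, tokens) pairs

    def record(self, tokens, now=None):
        self.events.append((now if now is not None else time.time(), tokens))

    def spent(self, window, now=None):
        now = now if now is not None else time.time()
        return sum(t for ts, t in self.events if now - ts <= window)

    def can_spend(self, tokens, now=None):
        # The tighter constraint wins: a request must fit under BOTH windows.
        return all(self.spent(w, now) + tokens <= cap
                   for w, cap in self.caps.items())
```

The point of the sketch: an hourly window that clears overnight does not reset your weekly exposure, which is exactly the trap the unchanged weekly limits create.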
The Terrafab Signal You Shouldn’t Ignore
The Colossus deal also adds credibility to something most people dismissed when it was announced: Terrafab, Elon’s chip manufacturing project in Grimes County, Texas.
When Terrafab was announced in March, the cost estimate was $20-25 billion. A legal filing in Grimes County now puts the revised estimate at $55 billion to $119 billion. That’s not a revision at the margins; it’s a complete recategorization of the project’s ambition. If completed at the high end, it would be the largest chip fab on the planet.
Intel was added as a Terrafab partner in April. At the time, skeptics pointed out that even if Elon could execute the construction, the demand justification was thin — Tesla and Optimus alone don’t require the world’s largest fab.
The Anthropic deal changes that calculus. Anthropic is now a committed, long-term customer with essentially unlimited appetite for compute. The demand justification that was missing in March now exists. Whether Terrafab gets built is still an open question — chip fabs are notoriously hard to stand up, and the skepticism from analysts like Tay Kim (who noted it “takes a lot of trial and error accumulated over decades”) is legitimate. But the business case is no longer obviously broken.
For builders, this matters because it signals that the compute expansion isn’t a one-time event. If Terrafab comes online at even a fraction of its projected scale, the infrastructure available to Anthropic — and therefore the limits available to Claude Code users — will continue to expand over a multi-year horizon.
Practical Steps for This Week
If you’re actively building with Claude Code, here’s what to actually do with this information.
Revisit your session architecture. If you built workflows that artificially break work into smaller chunks to stay under hourly limits, test whether those chunks can now be larger. Longer unattended runs are more viable. The self-evolving memory system with Obsidian hooks pattern, for example, becomes more useful when you can run longer review sessions without hitting a wall.
Test your API-based agents under higher load. If you’ve been conservative about how aggressively your agents call Claude, run some load tests. Find the new ceiling empirically rather than assuming the old one still applies.
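One way to find the ceiling empirically is to ramp concurrency until the failure rate crosses a threshold. This sketch assumes you supply your own `call` wrapper (for example, one that catches your client library’s rate-limit exception and returns False); the ramp parameters are arbitrary starting points:

```python
import concurrent.futures

def find_ceiling(call, start=2, factor=2, max_workers=64, failure_limit=0.2):
    """Ramp parallel calls until failures exceed `failure_limit`.

    `call` is any zero-arg function returning True on success and False
    on a rate-limit rejection. Returns the last concurrency level that
    stayed under the failure threshold.
    """
    last_ok = 0
    n = start
    while n <= max_workers:
        with concurrent.futures.ThreadPoolExecutor(max_workers=n) as pool:
            results = list(pool.map(lambda _: call(), range(n)))
        if results.count(False) / n > failure_limit:
            return last_ok
        last_ok = n
        n *= factor
    return last_ok
```

Run this against a cheap, short prompt so the probe itself doesn’t burn meaningful budget, and rerun it periodically rather than treating one measurement as permanent.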
Don’t remove all your retry logic. Limits still exist. Weekly caps are unchanged. Infrastructure deals don’t eliminate the need for graceful degradation — they just push the failure point further out.
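The retry logic worth keeping is the graceful kind: exponential backoff with jitter rather than tight retry loops. A minimal sketch; `RateLimited` is a stand-in for whatever exception your client raises (the Anthropic SDK has its own rate-limit error type, so check its docs and swap that in):

```python
import random
import time

class RateLimited(Exception):
    """Stand-in for your client library's rate-limit exception."""

def with_backoff(fn, retries=5, base=1.0, cap=60.0, sleep=time.sleep):
    """Retry `fn` on rate limits with capped exponential backoff and jitter."""
    for attempt in range(retries):
        try:
            return fn()
        except RateLimited:
            if attempt == retries - 1:
                raise  # out of retries: let the caller degrade gracefully
            # Full jitter: sleep a random fraction of the capped backoff,
            # so parallel workers don't retry in lockstep.
            sleep(random.uniform(0, min(cap, base * 2 ** attempt)))
```

The jitter matters more than the exact constants: synchronized retries from parallel agents are exactly how you turn one rate-limit rejection into a cascade.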
Watch for the weekly limit announcement. Anthropic’s statement said the hourly limits increased; the weekly limits hadn’t changed as of the announcement. That’s likely the next thing to move as the additional capacity gets fully provisioned. If you’re planning a major project that will consume heavily over multiple days, it’s worth waiting to see if weekly limits follow.
If you’re building full-stack apps on top of Claude Code outputs, the abstraction layer matters. Tools like Remy take a different approach to the code generation question: you write an annotated markdown spec, and the full-stack app — TypeScript backend, SQLite database, auth, deployment — gets compiled from it. The spec is the source of truth; the generated code is derived output. As Claude Code’s capacity expands and longer autonomous sessions become practical, the question of what to do with the output becomes more important.
The Broader Pattern
Anthropic was, by most accounts, more conservative in its infrastructure deals than OpenAI. That conservatism showed up as rate limits. Users who hit those limits went back to ChatGPT. The limits weren’t a policy choice; they were a capacity constraint.
The Google and SpaceX deals are Anthropic racing to close that gap. The doubled hourly limits are the first visible output. The $200 billion Google commitment and the Colossus 1 takeover are the structural moves that make sustained limit increases possible.
For builders who chose Claude Code because of the quality of the underlying model but kept one foot in OpenAI because of the limits: the calculus just shifted. Not all the way — weekly limits still exist, and one week of data doesn’t rewrite infrastructure history. But the direction is clear.
The compute is there now. The question is what you build with it.