Google Cloud vs AWS vs Azure Q1 2026 — Which Hyperscaler Is Winning the AI Infrastructure Race?
Google Cloud grew 63%, Azure 40%, AWS 28% in Q1 2026. All three are compute-constrained. Here's what the numbers say about who's winning.
Google Cloud Is Winning, AWS Is Spending Everything, and Azure Is Stuck in the Middle
Google Cloud grew 63% year-over-year in Q1 2026. AWS grew 28%. Azure grew 40%. If you’re deciding which hyperscaler to build on, or trying to understand where the AI infrastructure market is actually headed, those three numbers tell a story — but not the one you might expect.
The headline comparison is Google Cloud +63% vs AWS +28% vs Azure +40% YoY. Google won the quarter by a wide margin. But the more interesting question is why, and what each company’s numbers reveal about their strategic position heading into the rest of 2026.
This isn’t a theoretical exercise. If you’re building AI-powered applications, the hyperscaler you choose affects model availability, latency, pricing, and which frontier models you can access without jumping through hoops. The infrastructure race is also the model access race.
What the Numbers Actually Measure
Before comparing, it’s worth being precise about what “cloud revenue growth” means for each company — because they’re not measuring the same thing.
AWS is a standalone business unit. Its 28% growth is pure cloud infrastructure and services revenue, with no cross-subsidization from advertising or hardware. That number represents AWS’s own economics.
Google Cloud includes Google Cloud Platform (GCP) infrastructure plus Google Workspace. The 63% growth is heavily weighted toward AI workloads — Sundar Pichai said on the earnings call that “AI is now the largest tailwind for cloud” and that “our enterprise AI solutions have become our primary growth driver for cloud for the first time in Q1.” That’s a meaningful statement. A year ago, Google Cloud’s growth story was still largely about catching up to AWS and Azure on general infrastructure.
Azure’s 40% growth is embedded inside Microsoft’s “Intelligent Cloud” segment, which also includes on-premises server products. The AI-specific contribution is harder to isolate, though Satya Nadella’s comments about Copilot and per-user-plus-usage billing give some signal.
The growth rates are real, but they’re measuring different things. Keep that in mind throughout.
Four Dimensions That Separate the Three
1. Backlog and Forward Demand
Google’s $460 billion backlog — up from $240 billion at the end of Q4 2025 — is the single most striking data point from the entire earnings season. Analyst Joseph Carlson described the chart as “so crazy it literally looks fake.” That’s not hyperbole; the curve is genuinely exponential. A significant chunk of that backlog comes from Google’s expanded deal with Anthropic, but the underlying demand signal is real regardless of how you attribute it.
AWS doesn’t publish backlog in the same way, but Andy Jassy’s comments on Trainium give a proxy. He said demand for Trainium chips is so strong that Amazon has to decide how much to allocate to existing customers versus holding back to sell as full racks. If their custom silicon business were a standalone company, Jassy estimated it would be sitting at $50 billion ARR — and he called it “one of the top three data center chip businesses in the world.” That’s a forward demand signal embedded in a supply constraint story.
Azure’s backlog isn’t disclosed, but Microsoft’s CFO Amy Hood raised CapEx guidance by $25 billion to $190 billion for the year — attributing the entire increase to higher component prices rather than new data center projects. That’s a different kind of signal: they’re paying more for the same capacity, not expanding faster.
2. CapEx and Burn Rate
This is where the stories diverge most sharply.
Amazon is spending at a pace that has essentially zeroed out free cash flow. AWS CapEx for Q1 was $43.2 billion, an annualized pace of roughly $173 billion that trails their $200 billion target but still represents a 60% jump from last year. The consequence: free cash flow collapsed from $26 billion in Q1 2025 to $1.2 billion in Q1 2026. Amazon is reinvesting almost every dollar it generates. Jassy’s position is that this is rational because “most of the new supplies are already spoken for.”
Google spent $35.7 billion in Q1 CapEx, which annualizes to roughly $140 billion — below their $180–190 billion guidance range. The market read this as capital discipline, and the stock responded accordingly (up 7% overnight, contributing to what Google described as the second biggest one-day market cap jump in history). Whether Google is genuinely being disciplined or is simply compute-constrained and can’t spend faster is an open question. Pichai acknowledged: “Our cloud revenue would have been higher if we were able to meet the demand.”
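The annualization behind these comparisons is easy to reproduce. A minimal sketch using the Q1 figures cited above (the Google guidance value is the midpoint of the stated $180–190 billion range, an assumption on my part):

```python
# Annualize quarterly CapEx and compare against full-year guidance.
# Figures in billions of USD, from the Q1 2026 numbers discussed above.

def annualized(quarterly_capex: float) -> float:
    """Naive run rate: quarterly spend times four."""
    return quarterly_capex * 4

companies = {
    # name: (Q1 CapEx, full-year guidance)
    "AWS": (43.2, 200.0),
    "Google": (35.7, 185.0),  # assumed midpoint of the $180-190B range
}

for name, (q1, guidance) in companies.items():
    run_rate = annualized(q1)
    gap_pct = (guidance - run_rate) / guidance * 100
    print(f"{name}: run rate ${run_rate:.1f}B vs guidance ${guidance:.0f}B "
          f"({gap_pct:.0f}% below guidance)")
```

The same arithmetic underlies the "capital discipline" read: Google's run rate sits further below its guidance than AWS's does.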
Microsoft raised CapEx guidance to $190 billion but attributed the increase entirely to component price inflation, not new capacity. That framing matters — it suggests Microsoft isn’t accelerating its buildout, just paying more for what it already planned to build.
3. Model Access and Ecosystem
This is where the competitive picture gets complicated for builders specifically.
AWS has Anthropic as a deep strategic partner (Amazon has invested heavily in Anthropic, and Anthropic’s models are available via Bedrock). Now, with OpenAI models also available on Bedrock, AWS has arguably the broadest frontier model access of any cloud. As one observer noted, many companies defaulted to Anthropic and Claude simply because they were already on Bedrock — and that path of least resistance now extends to OpenAI models too.
Google Cloud has Gemini deeply integrated, plus the Anthropic relationship (which contributes to that backlog number). The 40% quarter-over-quarter surge in paid enterprise Gemini customers suggests the integration is working. Google also has a cost-quality advantage for many workloads — the cheaper Gemini models are genuinely competitive, which matters as enterprises start applying cost discipline to their token usage.
Azure had exclusive distribution rights to OpenAI’s models. That’s now gone. Nadella downplayed this on the earnings call, noting Microsoft has “frontier model royalty-free with all the IP rights” through 2032. But the strategic moat is narrower than it was six months ago. Microsoft’s Copilot has 20 million paid enterprise seats (up from 15 million in January), which is real traction — but it’s still a small fraction of the roughly 320 million paid Office 365 seats.
For teams building on top of models like Claude, the strategic differences between Anthropic, OpenAI, and Google’s agent approaches matter as much as the underlying cloud infrastructure choice.
4. Token Throughput and Compute Constraints
Google is processing 16 billion tokens per minute, up 60% quarter-over-quarter. That’s a concrete operational metric, not a financial one, and it tells you something about the scale of inference workloads Google is running.
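To put 16 billion tokens per minute in perspective, a quick back-of-envelope conversion (pure arithmetic on the reported figure, no external data):

```python
# Convert Google's reported inference throughput to other time scales.
tokens_per_minute = 16e9  # 16 billion tokens/min, as reported for Q1 2026

tokens_per_second = tokens_per_minute / 60
tokens_per_day = tokens_per_minute * 60 * 24

print(f"{tokens_per_second / 1e6:.0f} million tokens/second")
print(f"{tokens_per_day / 1e12:.1f} trillion tokens/day")
```

That works out to roughly 23 trillion tokens per day, a useful scale marker for what "large-scale inference" means at the hyperscaler level.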
All three hyperscalers are compute-constrained. OpenAI CFO Sarah Friar described it as “a vertical wall of demand with compute being the bottleneck.” Meta CFO Susan Li said on her own earnings call: “Our experience so far has been that we have underestimated our compute needs, even as we have been ramping capacity significantly.” This isn’t a Google-specific problem — it’s the defining constraint of the current moment.
The practical implication for builders: if you’re running large-scale inference workloads, you may hit capacity limits regardless of which cloud you’re on. The hyperscaler with the most forward capacity — which the backlog numbers suggest is Google — may have an advantage in the medium term.
Google Cloud: The Momentum Story
Google’s quarter was genuinely exceptional. The 63% growth rate, the $460 billion backlog, the 81% year-over-year net income growth to $62.6 billion — these aren’t incremental improvements. Something structural shifted.
The search revenue story is worth pausing on. The prevailing narrative for the past two years was that AI chatbots would cannibalize Google search. Instead, search revenue grew 19% year-over-year and queries hit an all-time high. Google appears to have converted what looked like an existential threat into a growth driver. That’s not guaranteed to continue, but it’s the current reality.
For builders, Google’s cost-to-quality ratio on Gemini models is a real advantage for many workloads. If you’re building applications that need to run inference at scale without premium model pricing for every request, Google’s model tier structure is currently one of the better options. The Gemma 4 local inference options are also worth knowing about if you need edge deployment alongside cloud.
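When cost discipline kicks in, the tier decision reduces to simple per-token arithmetic. A sketch with hypothetical per-million-token prices (the `PRICES` values below are illustrative placeholders, not actual Gemini or competitor list prices):

```python
# Estimate monthly inference cost across hypothetical model tiers.
# Prices are illustrative placeholders, NOT real list prices.
PRICES = {
    # tier: (input $ per 1M tokens, output $ per 1M tokens)
    "premium": (3.00, 15.00),
    "mid":     (0.30, 1.20),
    "budget":  (0.075, 0.30),
}

def monthly_cost(tier: str, input_tokens: float, output_tokens: float) -> float:
    """Dollar cost for a month's token volume on a given tier."""
    in_price, out_price = PRICES[tier]
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

# Example volume: 500M input tokens and 100M output tokens per month.
for tier in PRICES:
    print(f"{tier}: ${monthly_cost(tier, 500e6, 100e6):,.2f}/month")
```

At these illustrative rates the cheapest tier is over 40x cheaper than the premium one, which is why routing only the hard requests to frontier models has become a standard pattern.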
The risk with Google Cloud is execution consistency. Google has a long history of building excellent infrastructure and then failing to maintain enterprise relationships. The 40% QoQ growth in paid enterprise Gemini customers is a positive signal, but enterprise trust takes years to build and quarters to lose.
AWS: The Infrastructure Bet
AWS’s 28% growth looks modest next to Google’s 63%, but context matters. This is the fastest AWS has grown since climbing out of a trough in 2021. The business is at a $152 billion annualized revenue run rate, growing nearly 30% year-over-year. That’s not a company in trouble.
The free cash flow collapse — from $26 billion to $1.2 billion in one year — is the most dramatic single data point in the entire earnings season. Amazon is making a massive forward bet that the infrastructure they’re building will be spoken for before it’s even online. Jassy’s confidence is backed by the Anthropic and OpenAI partnerships, both of which represent committed demand.
The Trainium story is underappreciated. If Amazon’s custom silicon business is genuinely one of the top three data center chip businesses in the world at $50 billion ARR equivalent, that’s a strategic asset that doesn’t show up cleanly in AWS revenue numbers. It also means Amazon has more control over its compute supply chain than it did two years ago.
For builders, AWS’s advantage is breadth and enterprise integration. Bedrock now has both Anthropic and OpenAI models. If your organization is already deeply in the AWS ecosystem — IAM, VPC, existing data pipelines — the path of least resistance for AI workloads runs through Bedrock. Platforms like MindStudio that support 200+ models and 1,000+ integrations can abstract over the underlying cloud, but for teams that want direct API access with enterprise compliance controls, Bedrock’s model catalog is now genuinely competitive.
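The "path of least resistance" point is concrete: on Bedrock, switching between frontier vendors is largely a model-ID change, since the Converse API uses one message shape for every hosted model. A minimal sketch with boto3 (the model IDs in the comments are illustrative, so confirm the exact identifiers for your region against the Bedrock model catalog):

```python
def build_messages(prompt: str) -> list[dict]:
    """Shape a prompt into the Bedrock Converse API message format."""
    return [{"role": "user", "content": [{"text": prompt}]}]

def ask(model_id: str, prompt: str) -> str:
    """Send one prompt to any Bedrock-hosted model and return the reply text."""
    import boto3  # requires AWS credentials and Bedrock model access

    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = client.converse(
        modelId=model_id,
        messages=build_messages(prompt),
        inferenceConfig={"maxTokens": 512},
    )
    return response["output"]["message"]["content"][0]["text"]

# Same call shape regardless of vendor; only the model ID changes.
# (IDs below are illustrative -- check the Bedrock catalog for your region.)
# ask("anthropic.claude-...", "Summarize our Q1 capacity plan.")
# ask("openai.gpt-...", "Summarize our Q1 capacity plan.")
```

Because the request shape is vendor-neutral, A/B testing Anthropic against OpenAI models becomes a configuration change rather than an integration project.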
Azure: The Incumbent Under Pressure
Azure’s 40% growth is solid. It’s not spectacular relative to Google, but it’s consistent — Hood projected the same growth rate to continue into Q2. The problem for Microsoft is narrative, not fundamentals.
The loss of OpenAI exclusivity is real, even if Nadella minimized it. The strategic value of being the only cloud where you could run GPT-4 or GPT-5 was enormous. That’s gone. Microsoft still has royalty-free access through 2032, which is meaningful, but other clouds can now offer the same models.
Copilot’s 20 million paid enterprise seats is the number Microsoft needs to grow aggressively. Weekly engagement is reportedly at the same level as Outlook, which is a strong utilization signal. But 20 million against 320 million Office 365 seats means penetration is still low. The per-user-plus-usage billing shift that Nadella described — “any per user business of ours will become a per user and usage business” — could accelerate revenue per seat, but it also introduces friction for customers used to flat-rate pricing.
The CapEx guidance increase attributed entirely to component price inflation, rather than new capacity, is the most concerning signal for Azure’s long-term position. If Google and AWS are racing to build more capacity and Azure is just paying more for the same capacity, the gap in available compute could widen.
For teams building on Azure, the integration with Microsoft 365 and the enterprise compliance infrastructure remains a genuine advantage. If your organization runs on Teams, SharePoint, and Entra ID, Azure AI services have integration depth that’s hard to replicate elsewhere. The comparison between Claude Code and Codex is relevant here too — both are available via Azure, but the harness and tooling differences matter for developer workflows.
Verdict: Which Cloud for Which Use Case
Build on Google Cloud if: You’re running high-volume inference workloads where cost-per-token matters, you want the deepest Gemini integration, or you’re betting on Google’s infrastructure capacity advantage (the backlog suggests they’re building aggressively). Also if you need Anthropic models and want to stay outside the AWS ecosystem.
Build on AWS if: You’re already in the AWS ecosystem and want to minimize integration friction, you need the broadest frontier model access (Anthropic plus OpenAI on Bedrock), or you’re building infrastructure that needs to scale with enterprise compliance requirements. The Trainium availability for high-volume training workloads is also a differentiator if you’re doing model fine-tuning at scale.
Build on Azure if: Your organization is Microsoft-first — Office 365, Teams, Entra ID, Dynamics — and the Copilot integration depth matters for your use case. Also if you need OpenAI models with enterprise SLAs and your compliance team is already comfortable with Microsoft’s data handling agreements.
The honest answer is that for most AI builders in 2026, the cloud choice is less important than it was two years ago. Model APIs are increasingly available across all three. The differentiation is in ecosystem integration, pricing at scale, and which models you can access with the lowest latency and compliance overhead.
What the Q1 numbers make clear is that all three are running at capacity, all three are spending aggressively, and the demand signal is not slowing down. The question isn’t whether to build on cloud AI infrastructure — it’s which specific combination of models, tools, and integrations fits your workload.
When you’re thinking about how to structure the applications that sit on top of this infrastructure, the abstraction layer matters. Tools like Remy take a spec-driven approach — you write annotated markdown describing your application, and it compiles into a complete TypeScript backend, SQLite database, auth, and deployment. The generated code is real and owned by you; the spec is the source of truth. That’s a different model than hand-wiring cloud APIs, and it’s worth understanding as the infrastructure layer commoditizes.
The $460 billion backlog at Google Cloud, the $43.2 billion quarterly CapEx at AWS, the 16 billion tokens per minute flowing through Google’s infrastructure — these aren’t projections. They’re Q1 2026 actuals. The infrastructure race is already underway, and the builders who understand where capacity is being built will have an advantage in knowing where to place their own bets.