Google Cloud vs AWS vs Azure Q1 2026: Which Cloud Is Actually Winning the AI Infrastructure Race?
Google Cloud grew 63%, Azure 39%, AWS 28% in Q1 2026. We break down which cloud is winning on AI and why Google Search survived the chatbot threat.
Three Cloud Giants, Three Very Different AI Stories
Google Cloud grew 63% year-over-year. Azure grew 39%. AWS grew 28%. If you’re deciding where to build your AI infrastructure in 2026, those three numbers are the starting point — but they’re not the whole story.
The gap between Google and the other two is wide enough to demand an explanation. And the explanation matters to you whether you’re picking a primary cloud, evaluating a vendor’s roadmap, or just trying to understand which platform will have capacity when you need it six months from now.
There’s also a subplot that deserves more attention than it’s getting: Google Search revenue grew 19% year-over-year in the same quarter. The prevailing narrative for the past two years has been that AI chatbots would cannibalize Google’s core business. The opposite happened. That’s either a fluke or it tells you something important about how AI demand actually works in practice.
Here’s the full picture.
What the Numbers Actually Measure
Before comparing platforms, it’s worth being precise about what “cloud AI growth” means in these earnings reports — because the three companies are measuring slightly different things.
Google Cloud’s 63% growth reflects both infrastructure (GCP) and Workspace AI products. The $460 billion backlog — up from $240 billion at the end of Q4, nearly doubling in a single quarter — is forward-looking committed spend, not revenue already recognized. That backlog number is the one that made analyst Joseph Carlson write that it “literally looks fake.”
AWS’s 28% growth is cleaner: it’s almost entirely infrastructure and platform services. The $152 billion ARR figure puts AWS’s absolute scale in perspective. Growing 28% on a $152B base is a different achievement than growing 63% on a smaller one.
Azure’s 39% growth sits in the middle on both dimensions. Microsoft’s cloud includes Azure AI services, but also a lot of traditional enterprise software revenue that’s harder to separate from the AI signal.
The dimensions worth comparing across all three: raw growth rate, capacity availability, enterprise traction, pricing and chip strategy, and strategic positioning for the next 18 months.
Google Cloud: The Clearest AI Signal, With a Catch
Google’s quarter was the kind of result that makes you reconsider your priors. The 63% growth rate is extraordinary. The 40% surge in paid enterprise customers quarter-over-quarter suggests this isn’t just a few large contracts inflating the number. And the infrastructure metrics back it up: 16 billion tokens per minute processed, up 60% quarter-over-quarter.
But CEO Sundar Pichai said something in the earnings call that should be in every enterprise buyer’s decision-making process: “We are compute-constrained in the near term. Our cloud revenue would have been higher if we were able to meet the demand.”
That’s a remarkable admission. Google is leaving money on the table because they can’t build fast enough. Their CapEx guidance for the year is $180–190 billion, but they spent only $35.7 billion in Q1 — an annualized pace of roughly $143 billion. Hitting even the low end of guidance would mean averaging roughly $48 billion per quarter for the rest of the year, a step-up of more than a third from Q1. They’re either back-loading spend or the guidance is aspirational padding. Either way, the constraint is real.
For enterprise buyers, this creates a specific risk: Google Cloud is the fastest-growing platform and arguably the most technically capable, but it’s also the one most likely to have capacity queues, pricing pressure, and service degradation when demand spikes. The backlog doubling in a quarter is a bullish signal for Google’s business. It’s a mixed signal for a buyer trying to provision capacity today.
The search revenue story is worth a separate paragraph. Google Search grew 19% year-over-year, and queries hit an all-time high. The AI-cannibalization thesis was wrong, or at least premature. What seems to be happening instead is that AI tools are expanding the total volume of information-seeking behavior, and Google is capturing a share of that expansion. Whether this holds as AI assistants get better at replacing search for specific query types is still an open question — but for now, the data says Google’s core business is not in distress.
Google’s net income of $62.6 billion, up 81% year-over-year, means they have the financial capacity to solve the compute constraint. The question is execution speed.
AWS: The Infrastructure Bet, Priced in Cash
Amazon’s quarter tells a different story. Revenue up 17% overall, AWS up 28%, net profit up 77% — but that profit number is partly explained by pre-tax income from their Anthropic investment, which makes it a less clean read on operational performance.
The number that stands out is free cash flow: down from $26 billion a year ago to $1.2 billion this quarter. Amazon is spending essentially every dollar it generates on AI buildout. Q1 CapEx was $43.2 billion — the largest absolute number among the four major hyperscalers — and CEO Andy Jassy said they plan to accelerate construction further.
Jassy’s confidence that this spending will convert to profits rests on a specific claim: most of the new capacity is already spoken for. With OpenAI now on AWS Bedrock alongside Anthropic — GPT-5.4 in limited preview, GPT-5.5 coming within weeks — that claim has more credibility than it otherwise would. AWS has positioned itself as the neutral platform where enterprises can access any major model without picking sides.
The Trainium chip story is underappreciated. Jassy said that if their custom silicon business were a standalone company booking revenue from AWS, it would be sitting at $50 billion ARR. He described it as “one of the top three data center chip businesses in the world.” That’s not a chip business most people were tracking two years ago. If accurate, it means Amazon has a cost advantage in AI compute that doesn’t show up cleanly in the growth numbers.
For enterprise buyers, AWS’s value proposition is breadth and neutrality. You can run Anthropic models, OpenAI models, and Amazon’s own Titan models on the same platform, with the same security posture, in the same VPC where your data already lives. AWS CEO Matt Garman put it plainly: “This is what our customers have been asking for for a really long time. Their production applications run in AWS, their data is in AWS, they trust the security of AWS.”
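To make “model neutrality” concrete: with Bedrock’s Converse API, switching providers is a one-string change to the request, while the IAM roles, VPC configuration, and logging around the call stay put. Here’s a minimal Python sketch of that pattern. The model IDs below are illustrative placeholders, not the exact identifiers for the models discussed above.

```python
import boto3

# Same call shape for every provider on Bedrock; only the model ID changes.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def ask(model_id: str, prompt: str) -> str:
    response = bedrock.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 512},
    )
    return response["output"]["message"]["content"][0]["text"]

# Placeholder IDs -- check the Bedrock model catalog for the real ones.
for model_id in [
    "anthropic.claude-example-v1:0",   # an Anthropic model
    "openai.gpt-example-v1:0",         # an OpenAI model, per the Bedrock announcement
    "amazon.titan-text-example-v1:0",  # an Amazon Titan model
]:
    print(model_id, "->", ask(model_id, "Summarize our Q1 cloud spend drivers."))
```

The point isn’t the specific models. It’s that comparing providers doesn’t require a second integration or a second security review.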
The risk is that AWS is growing slower than Google on the AI-specific metrics that matter most. 28% is healthy, but it’s not 63%. If the AI workload shift continues accelerating, AWS needs to demonstrate it can capture more of that growth rather than ceding the high end to Google.
Azure: Solid, Steady, and Strategically Complicated
Microsoft’s quarter was fine. Azure grew 39% — one percentage point faster than the previous quarter. CFO Amy Hood projected that rate to continue into Q2. Top-line revenue came in at $82.9 billion, an 18% year-over-year gain. Nothing broke, nothing surprised.
The Copilot numbers are genuinely interesting: 20 million paid enterprise seats, up from 15 million in January. That’s real adoption. Satya Nadella said weekly engagement is now at the same level as Outlook, which would mean it’s become habitual rather than experimental for a meaningful user base. If you’re evaluating enterprise AI tooling, 20 million paid seats is not a rounding error — though it’s also worth noting that Office 365 has roughly 320 million paid seats, so the penetration rate is still in single digits.
The strategic complication is the OpenAI relationship restructuring. Microsoft is no longer the exclusive cloud partner for OpenAI. OpenAI’s models still have to launch on Azure first, but the length of that exclusivity window isn’t public. In exchange for giving up exclusivity, Microsoft retains a 20% revenue share through 2030 and keeps its 27% equity stake. The financial terms are arguably better for Microsoft than the old arrangement — but the narrative advantage of being “the OpenAI cloud” is gone.
Microsoft raised CapEx guidance by $25 billion to $190 billion for the year, but CFO Hood attributed the entire increase to higher component prices rather than new data center projects. That’s a different signal than Google or Amazon’s capacity expansion. Microsoft is paying more for the same buildout, not building more.
The market’s response — essentially flat overnight — reflects a company that’s executing competently but not breaking out. Azure is the safe choice for enterprises already deep in the Microsoft ecosystem. It’s a harder sell as a primary AI platform for new workloads where you’re not already locked in.
When you’re building AI-powered workflows that need to connect to existing enterprise tools — Salesforce, HubSpot, Slack, the full stack of business software — the orchestration layer matters as much as the underlying cloud. Platforms like MindStudio handle this kind of multi-model orchestration across 200+ models and 1,000+ integrations, which is relevant when your cloud choice doesn’t have to be your only model choice.
The Meta Footnote
Meta isn’t a cloud provider in the same sense, but their earnings are relevant context. Revenue of $56.3 billion, up 33% year-over-year, with CapEx raised from $135 billion to $145 billion. CFO Susan Li said: “Our experience so far has been that we have underestimated our compute needs, even as we have been ramping capacity significantly.”
That quote applies to every major player in this space. The compute shortage is not a Google-specific problem. It’s an industry-wide constraint that’s showing up in every earnings call.
Which Cloud for Which Workload
Use Google Cloud if you’re building AI-native applications where model quality and throughput are the primary constraints, you have flexibility on timeline (given current capacity constraints), and you want access to Gemini’s multimodal capabilities natively integrated with your infrastructure. The 63% growth and 16 billion tokens per minute are signals that Google is winning the AI-native workload competition. The compute constraint is real but temporary — their financial position and CapEx commitments suggest it resolves over the next 12–18 months. For teams evaluating how different AI labs are positioning their agent strategies, Google’s infrastructure depth is increasingly relevant to that comparison.
Use AWS if your data is already there, your security and compliance requirements are non-negotiable, and you want model neutrality — the ability to run Anthropic, OpenAI, and Amazon models on the same platform without architectural changes. The OpenAI-on-Bedrock announcement is significant for teams that have been defaulting to Anthropic because of Bedrock availability. Now you can add OpenAI models to the same stack without a separate integration. The Trainium chip advantage also means AWS’s cost structure for high-volume inference may improve faster than the growth numbers suggest.
Use Azure if you’re an enterprise already running on Microsoft 365, your team uses Copilot, and the path of least resistance through your procurement and security review process runs through Microsoft. Azure’s 39% growth is real, and the Copilot traction is real. But if you’re building net-new AI infrastructure without existing Microsoft dependencies, the case for Azure over Google or AWS requires more justification than it did a year ago.
The multi-cloud reality is that most enterprises of any size will end up using more than one of these. The interesting question isn’t which cloud wins — it’s which cloud becomes the primary AI inference platform for each workload type. Right now, Google is winning the AI-native inference competition on growth metrics. AWS is winning the enterprise-neutral platform competition. Azure is winning the Microsoft-ecosystem extension competition.
What the Search Revenue Number Actually Tells You
The 19% search revenue growth deserves a final word, because it’s the most counterintuitive data point in this entire earnings cycle.
The AI-cannibalization thesis assumed a zero-sum relationship: every query answered by a chatbot is a query not sent to Google. What the data suggests instead is that AI tools are expanding total information-seeking behavior faster than they’re substituting for search. People are asking more questions, not fewer. Google is capturing a share of that expansion through AI Overviews and other integrations.
This has a direct implication for builders. If you’re building AI applications that answer questions or surface information, you’re probably not competing with Google — you’re operating in a market that Google’s own growth is helping to expand. The rising tide is real.
For teams building production applications on top of these clouds, the spec-driven approach is worth considering. Remy compiles annotated markdown specs into complete full-stack applications — TypeScript backend, SQLite database, auth, deployment — treating the spec as the source of truth rather than the generated code. When your infrastructure choices are moving this fast, keeping your application logic in a human-readable spec rather than scattered across cloud-specific configurations has practical value.
The three clouds are all growing. The AI boom is not a narrative — it’s showing up in every earnings line. The question for builders is which platform’s specific strengths align with your specific workload, and which constraints you can tolerate while the industry builds its way out of the compute shortage.
The compute shortage, by the way, is the one thing all three clouds agree on. Pichai said Google’s revenue would have been higher with more compute. Li said Meta keeps underestimating its compute needs. Jassy is spending $43 billion in a single quarter to try to get ahead of it. When the three largest infrastructure companies in the world are all capacity-constrained simultaneously, the right response for builders is to treat compute availability as a first-class architectural concern — not an afterthought.
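What does treating compute availability as a first-class architectural concern actually look like? At minimum, it means the inference path has a preference-ordered list of endpoints and degrades gracefully when one of them is throttling. The sketch below is a hypothetical outline, not a reference implementation: the call_model stub and the endpoint names are stand-ins for whichever SDKs and deployments you actually use.

```python
import time

class CapacityError(Exception):
    """Raised when an endpoint throttles or reports no available capacity."""

def call_model(endpoint: str, prompt: str) -> str:
    # Stand-in for a real SDK call (Vertex AI, Bedrock, Azure OpenAI, ...).
    # In practice, translate provider-specific throttling errors into CapacityError.
    raise NotImplementedError("wire up your provider client here")

def generate_with_fallback(prompt: str, endpoints: list[str], max_passes: int = 3) -> str:
    """Try each endpoint in preference order; back off and retry if all are constrained."""
    for attempt in range(max_passes):
        for endpoint in endpoints:
            try:
                return call_model(endpoint, prompt)
            except CapacityError:
                continue  # move to the next endpoint in the preference list
        time.sleep(2 ** attempt)  # exponential backoff before another full pass
    raise RuntimeError("all configured endpoints are capacity-constrained")

# The preference order encodes the workload decision, not just failover:
# AI-native inference on the primary platform, a neutral platform behind it.
ENDPOINTS = ["primary-inference-endpoint", "neutral-fallback-endpoint"]
```

None of this is exotic. It’s the same posture teams already take with multi-region databases, applied to inference.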
For a sense of how the underlying model competition maps onto these infrastructure choices, the GPT-5.4 vs Claude Opus 4.6 comparison is a useful reference point, since both models are now available on AWS Bedrock and the performance differences have real implications for which cloud you’d want to run them on. Similarly, if you’re evaluating open-weight alternatives that reduce your dependency on any single cloud’s proprietary models, the Gemma 4 vs Qwen 3.5 comparison covers the tradeoffs in detail.
The race isn’t over. But after Q1 2026, the shape of it is clearer than it’s been.