Google's Quantum Attack Estimate vs. Caltech's: Which Timeline Should You Actually Plan Around?
Google says under 500K physical qubits in minutes. Caltech says 26K qubits in days. The numbers differ — here's how to read both for planning purposes.
Two Papers, One Threat, Very Different Numbers
Google says fewer than 500,000 physical qubits, execution in minutes. Caltech and Atom Computing say roughly 26,000 physical qubits, attack completes in days. If you’re an engineer trying to figure out which timeline to actually plan around, the gap between those two estimates is not reassuring — it’s the problem. Both papers are pointing at the same target: the P-256 elliptic curve that secures a significant fraction of the internet. They just disagree, by a lot, on how hard it is to get there.
The specific numbers matter here. Google’s estimate: fewer than 1,200 logical qubits and fewer than 19 million Toffoli gates to attack the 256-bit elliptic curve discrete logarithm problem. An alternate Google configuration: 1,450 logical qubits and fewer than 17 million Toffoli gates. The Caltech/Atom Computing paper: approximately 10,000 reconfigurable atomic qubits for a cryptographically relevant run of Shor’s algorithm, scaling to around 26,000 physical qubits to attack P-256 in a matter of days.
You should read both papers. And you should understand why they’re not actually contradicting each other in the way the headlines suggest.
Why the Numbers Look So Different
The first thing to understand is that logical qubits and physical qubits are not the same unit. They’re not even close.
Physical qubits are the actual hardware — fragile, error-prone, requiring constant correction. Logical qubits are the error-corrected abstractions you build on top of physical ones, typically requiring hundreds to thousands of physical qubits each to maintain reliability. When Google says fewer than 1,200 logical qubits, that number has to be multiplied by a significant overhead factor to get to physical qubits. Google’s own estimate puts the physical qubit requirement at under 500,000 on a superconducting quantum computer.
The Caltech/Atom Computing paper is working in a different hardware paradigm: reconfigurable atomic qubits, controlled by lasers. The claim is that this architecture is inherently more efficient — that the 26,000 physical qubit figure already accounts for error correction under their system’s assumptions.
So the comparison isn’t 1,200 vs. 26,000. It’s more like: two different hardware architectures, two different error correction assumptions, two different ways of counting. The Google number is a logical qubit estimate that expands to ~500,000 physical qubits. The Caltech number is a physical qubit estimate that starts at ~26,000 under more optimistic assumptions about qubit quality.
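To make the unit conversion concrete, here is a rough back-of-envelope sketch of how a logical-qubit count becomes a physical-qubit count. It assumes a surface-code-style overhead of roughly 2d² physical qubits per logical qubit at code distance d, plus a flat allowance for magic-state factories; the specific distance and overhead values are illustrative assumptions, not parameters from either paper.

```python
# Back-of-envelope: logical qubits -> physical qubits under surface-code-style
# error correction. Every overhead parameter below is an illustrative assumption,
# not a figure taken from the Google or Caltech papers.

def physical_qubit_estimate(logical_qubits: int,
                            code_distance: int = 13,
                            factory_overhead: float = 1.3) -> int:
    """Rough physical-qubit count for an error-corrected machine.

    Each logical qubit costs roughly 2 * d**2 physical qubits (data plus
    measurement qubits); factory_overhead adds room for magic-state distillation.
    """
    per_logical = 2 * code_distance ** 2
    return round(logical_qubits * per_logical * factory_overhead)

# Google's headline figure: fewer than 1,200 logical qubits.
print(physical_qubit_estimate(1_200))  # ~527,000 -- the same ballpark as "under 500,000"

# Assume much better qubits (a smaller code distance) and the same logical
# requirement collapses to something Caltech-sized:
print(physical_qubit_estimate(1_200, code_distance=3, factory_overhead=1.2))  # ~26,000
```

The second call isn't claiming Caltech assumes distance-3 codes. It's Thompson's point in numbers: the physical count is quadratically sensitive to the qubit quality you assume, so better qubits shrink the machine fast, at least on paper.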
That’s a meaningful distinction. It’s also where the uncertainty lives.
The Dimensions That Actually Matter for Planning
There are four things worth comparing across these two estimates: hardware assumptions, attack speed, verification rigor, and the role AI played in getting there.
Hardware assumptions. Google’s estimate is built around superconducting qubits, the same architecture Google, IBM, and others have been scaling for years. The error correction overhead is well characterized, which is why the logical-to-physical conversion is relatively tractable. The Caltech paper is built around neutral-atom qubits, an approach that, as Princeton’s Jeff Thompson has noted, rests on many untested assumptions. The qubit quality required to hit 26,000 physical qubits does not currently exist. As Thompson put it, it’s very easy to shrink a computer on paper if you assume better qubits.
Attack speed. Google estimates execution in minutes once the machine exists. Caltech estimates days. This sounds like Google’s scenario is worse, but the framing is slightly misleading — a days-long quantum computation is still a cryptographic catastrophe. The difference in attack duration matters less than the difference in machine size required.
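The minutes-versus-days gap is mostly a statement about assumed gate throughput, and a quick sanity check makes that visible. The Toffoli count below comes from the figures quoted earlier; both throughput numbers are purely illustrative assumptions, not values from either paper.

```python
# Runtime is just gate count divided by throughput. The Toffoli count is from
# the figures quoted in this article; both throughput values are illustrative
# assumptions, not numbers from either paper.

TOFFOLI_COUNT = 17_000_000  # Google's alternate configuration

for label, toffolis_per_second in [("faster machine (assumed)", 1e5),
                                   ("slower machine (assumed)", 1e2)]:
    seconds = TOFFOLI_COUNT / toffolis_per_second
    print(f"{label}: {seconds / 60:,.0f} minutes ({seconds / 86_400:.1f} days)")

# faster machine (assumed): 3 minutes (0.0 days)
# slower machine (assumed): 2,833 minutes (2.0 days)
```

Either way the computation finishes well within the useful lifetime of the keys it targets, which is the point: the duration gap is an engineering detail, while the machine-size gap is the planning question.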
Verification rigor. Google did something unusual: rather than publishing the full attack circuits, they used a zero-knowledge proof to let people verify the claims without exposing the sensitive details. A zero-knowledge proof lets you prove “I know how this works” without showing the method. That’s a deliberate choice, and a responsible one — once quantum attacks become practical enough, publishing every detail becomes a different kind of problem. The Caltech paper, by contrast, had not been peer-reviewed at the time of reporting.
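Google's construction isn't public, but the core idea of a zero-knowledge proof is easy to see in miniature. The sketch below is a textbook Schnorr-style proof of knowledge of a discrete logarithm, made non-interactive with a hash. It illustrates the concept only; it is not Google's protocol, and the toy group is far too small to be secure.

```python
# Toy non-interactive Schnorr proof: prove you know x with y = g**x % p
# without revealing x. Concept illustration only -- not Google's construction,
# and a 5-bit group is obviously not secure.
import hashlib
import secrets

p, g = 23, 5   # tiny prime and a generator of the multiplicative group mod p
q = p - 1      # order of that group

def challenge(*values: int) -> int:
    data = "|".join(str(v) for v in values).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x: int) -> tuple[int, int, int]:
    """Prover: knows the secret x, publishes y and a proof (t, s)."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)     # fresh randomness that hides x
    t = pow(g, r, p)             # commitment
    c = challenge(g, y, t)       # Fiat-Shamir: a hash stands in for the verifier
    s = (r + c * x) % q          # response mixes the secret with the randomness
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    """Verifier: checks the claim without ever learning x."""
    c = challenge(g, y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

y, t, s = prove(x=7)
print(verify(y, t, s))  # True -- in a real-sized group, (y, t, s) leaks nothing about x
```

The verifier ends up convinced the prover knows x without ever seeing it; Google's approach applies the same principle at a much larger scale, letting others check the paper's claims without the sensitive details being published.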
AI’s role. This is where both papers converge on something important, and where the Caltech paper is more explicit. The Atom Computing team used OpenEvolve, an open-source tool that uses large language models to optimize algorithms through a process analogous to natural selection. Instead of a handful of human researchers testing ideas, the AI searched through thousands of possibilities. The team’s early algorithms were reportedly about 1,000 times less efficient than the final version; by their own account, the approach would not have worked without the AI-assisted improvements. John Preskill, one of the paper’s authors and one of the most respected names in quantum computing, told Time he was surprised by how much they reduced the qubit count.
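The mechanism is easier to picture as a loop. The skeleton below is a generic evolution-style search in which a language model proposes variants and a scoring function keeps the cheapest ones. It's a conceptual sketch of the approach described above, not OpenEvolve's actual API; llm_propose_variant and resource_cost are hypothetical stand-ins you would supply.

```python
# Conceptual sketch of LLM-driven evolutionary optimization, in the spirit of
# what's described above. Not OpenEvolve's API: llm_propose_variant() and
# resource_cost() are hypothetical callables the caller supplies.

def evolve(seed_programs, llm_propose_variant, resource_cost,
           generations=1_000, population_size=20, children_per_parent=5):
    """Keep a small population of candidate circuits, ask an LLM to mutate them,
    and retain the cheapest ones according to the scoring function."""
    population = list(seed_programs)
    for _ in range(generations):
        children = [
            llm_propose_variant(parent)            # the LLM plays the role of mutation
            for parent in population
            for _ in range(children_per_parent)
        ]
        ranked = sorted(population + children, key=resource_cost)
        population = ranked[:population_size]      # selection: keep the best candidates
    return population[0]                           # the cheapest circuit found
```

The human contribution shifts from proposing each variant to defining the cost function and vetting what survives; the thousands of possibilities the team searched live inside that loop.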
That last point deserves emphasis. The expert was surprised. When the person who coined the term “quantum supremacy” is surprised by the magnitude of an efficiency improvement in his own paper, that’s a signal worth taking seriously. For a broader look at how competing AI labs are thinking about agent-driven research like this, Anthropic vs OpenAI vs Google: Three Different Bets on the Future of AI Agents is worth reading alongside these papers.
Reading the Google Estimate
Google’s paper is the more conservative and more rigorous of the two. The logical qubit counts — under 1,200 or 1,450 depending on configuration — represent a genuine reduction from prior estimates, but the physical qubit requirement of under 500,000 is still a machine that doesn’t exist. Today’s largest superconducting quantum computers are in the hundreds to low thousands of physical qubits, and the gap between current hardware and 500,000 error-corrected qubits is not a rounding error.
What makes the Google estimate significant isn’t that it describes an imminent threat. It’s that it describes a more efficient path than anyone had previously published. The recipe got shorter. That matters because security timelines have typically treated hardware as the only fast-moving variable, with the algorithmic requirements roughly fixed; now both are moving.
The zero-knowledge proof approach also signals something about how Google is thinking about this research. They’re not trying to publish a how-to guide. They’re trying to establish a credible lower bound on the threat while limiting the information available to bad actors. That’s a careful line to walk, and it suggests the researchers believe the threat is real enough to warrant that caution.
The AI-assisted optimization angle is less prominent in Google’s paper than in Caltech’s, but the broader pattern — LLMs helping researchers navigate large, weird technical search spaces — is the same. This is what AI-assisted research looks like in practice: not replacing scientists, but dramatically expanding the territory they can explore. MindStudio handles this kind of multi-model orchestration at the workflow level — 200+ models, 1,000+ integrations, visual chaining of agents across complex pipelines — which is a different scale of problem, but the underlying dynamic is the same: AI as a search tool over a space too large for humans to navigate manually.
Reading the Caltech/Atom Computing Estimate
The Caltech paper is the more aggressive claim and the less verified one. The 26,000 physical qubit figure is striking precisely because it’s so much lower than Google’s ~500,000. But that gap is almost entirely explained by hardware assumptions.
Neutral atom qubits — atoms controlled by lasers in reconfigurable arrays — have theoretical advantages in connectivity and gate fidelity that could, in principle, reduce error correction overhead significantly. The Caltech paper is essentially arguing that if you build the right kind of quantum computer, the resource requirements drop dramatically. That’s a meaningful claim. It’s also one that depends on achieving qubit quality that hasn’t been demonstrated at scale.
The peer review gap matters here. Google’s estimate has been through more scrutiny. The Caltech paper is a theoretical resource estimate, not a working system. Princeton’s Thompson is right that assumptions about qubit quality can make a machine look much smaller on paper than it would be in practice.
But here’s the thing: even if the Caltech numbers are off by a factor of ten, the direction of travel is clear. Both papers are pointing toward smaller machines than anyone estimated five years ago. The question isn’t whether the Caltech paper is exactly right — it’s whether the trend it represents is real. And the trend is real. The same AI-driven optimization dynamic that produced these results is also reshaping how models are evaluated against each other — GPT-5.4 vs Claude Opus 4.6: Which AI Model Is Right for Your Workflow? covers some of that terrain if you want to understand the current capability landscape.
What Each Estimate Implies for Your Timeline
The honest answer is that you shouldn’t be planning around either estimate in isolation. You should be planning around the combination of both.
Google’s estimate tells you the threat is real and the efficiency improvements are continuing. Caltech’s estimate tells you that hardware architecture choices could compress the timeline further than superconducting-only estimates suggest. Together, they tell you that the comfortable assumption — “we have until the 2030s, maybe later” — is no longer defensible.
Cloudflare moved its post-quantum security deadline from 2035 to 2029. Six years of acceleration, citing this research directly. Cloudflare is not a crypto influencer. It’s an infrastructure company that has to make decisions with real consequences. When they say “it’s a real shock, we’ll need to speed up our efforts considerably,” that’s an operational judgment, not a marketing statement.
The threat isn’t only about decryption, either. Cloudflare’s framing is pointed: the question is no longer when will encrypted data be at risk, but how long before an attacker walks through the front door with a quantum-forged key. Authentication is the underappreciated half of this problem. Long-lived keys — root certificates, API authentication keys, code signing certificates — are high-value targets. A quantum computer that can forge those keys doesn’t just read your messages. It impersonates your bank, your software vendor, your server.
The harvest-now, decrypt-later attack vector makes this urgent even before any quantum computer capable of breaking P-256 exists. The NSA, CISA, and NIST have all warned about this explicitly. Data stolen today can be decrypted later. Government files, medical records, long-term business secrets: the stealing can start before the machine exists. NIST finalized its first three post-quantum cryptography standards on August 13, 2024. The transition has a starting line.
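One way to turn harvest-now, decrypt-later into a planning number is Mosca's inequality: if the time your data must stay secret plus the time your migration will take exceeds the time until a cryptographically relevant quantum computer exists, you are already exposed. A minimal sketch follows; every year figure in it is a placeholder assumption to replace with your own.

```python
# Mosca-style timeline check for "harvest now, decrypt later".
# Every year figure below is a placeholder assumption -- substitute your own.

def already_exposed(shelf_life_years: float,
                    migration_years: float,
                    years_until_crqc: float) -> bool:
    """Mosca's inequality: data harvested today is at risk if its required
    secrecy lifetime plus your migration time exceeds the time until a
    cryptographically relevant quantum computer (CRQC) exists."""
    return shelf_life_years + migration_years > years_until_crqc

# Records that must stay confidential for 10 years, a 4-year migration,
# and an assumed CRQC horizon of 6 years:
print(already_exposed(shelf_life_years=10, migration_years=4, years_until_crqc=6))  # True
```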
There’s also a subtler risk that gets less attention: downgrade attacks. Adding post-quantum cryptography isn’t sufficient if you leave the old cryptography in place. An attacker can trick two systems into negotiating down to the weaker protocol even when a stronger one is available. Cloudflare is explicit about this: disabling quantum-vulnerable cryptography is required, not optional. And after old cryptography is turned off, secrets like passwords and access tokens may need rotation, which creates a dependency chain involving third parties, validation systems, and fraud monitoring. This is not an app update. It’s more like replacing every lock, key, and ID card in a building while the building is still open.
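The downgrade risk is mechanical, and a toy negotiation model shows why leaving the legacy algorithms enabled is the whole problem. This is a simplified illustration, not how TLS actually negotiates, and the algorithm names are just labels:

```python
# Toy model of why a downgrade attack works when legacy algorithms stay enabled.
# Simplified illustration only -- real TLS negotiation is more involved, and the
# algorithm names here are just labels.

def negotiate(client_offers: list[str], server_supports: list[str]) -> str | None:
    """Pick the first mutually supported algorithm, in the client's preference order."""
    return next((alg for alg in client_offers if alg in server_supports), None)

server = ["hybrid-pq-kem", "p256-ecdh"]            # PQ added, legacy still enabled

# Normal handshake: both sides prefer the post-quantum option.
print(negotiate(["hybrid-pq-kem", "p256-ecdh"], server))   # hybrid-pq-kem

# An attacker tampers with the offer and strips the PQ option:
print(negotiate(["p256-ecdh"], server))                    # p256-ecdh -- downgraded

# Only removing the legacy option actually closes the hole:
print(negotiate(["p256-ecdh"], ["hybrid-pq-kem"]))         # None -- connection refused
```

That last call is what Cloudflare means by saying disabling quantum-vulnerable cryptography is required, not optional: in this toy model, as long as the weaker option remains enabled on both sides, anyone who can tamper with the negotiation can steer traffic onto it.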
The engineering work required here is substantial enough that teams building security-sensitive systems should be thinking about it now at the architecture level, not just the implementation level. The kind of structured thinking that goes into a well-annotated spec — where intent is explicit and edge cases are called out — is exactly what makes this kind of migration tractable. Remy takes that spec-driven approach seriously: you write annotated markdown as the source of truth, and the full-stack application — TypeScript backend, database, auth, deployment — gets compiled from it. The discipline of making requirements explicit before building is the same discipline that makes a cryptographic migration survivable. If you’re scoping a post-quantum migration, starting from a spec rather than retrofitting one is the difference between a planned transition and an emergency patch. The 9 AI agents for research and analysis covered in this breakdown are increasingly being used for exactly this kind of structured technical scoping work.
Which Estimate Should You Plan Around?
Plan around the Caltech estimate for threat modeling. Plan around the Google estimate for technical depth.
The Caltech paper gives you the more aggressive timeline and the lower hardware bar. Even if its assumptions are optimistic, it represents the direction the field is moving. If neutral atom qubits continue improving, the 26,000 physical qubit figure could become realistic faster than the 500,000 superconducting qubit figure. Planning around the more conservative estimate is planning to be caught off guard.
The Google paper gives you the more rigorous technical foundation. The zero-knowledge proof verification, the careful logical-to-physical qubit accounting, the two distinct circuit configurations — this is the paper you cite when you need to justify a security investment to a skeptical audience. It’s also the paper that tells you what the attack actually looks like at a circuit level, even if it withholds the details.
The 65% of human traffic that Cloudflare has already moved to post-quantum encryption since 2022 represents the encryption half of the problem. The authentication half — the part involving root certificates and code signing keys — is what the 2029 deadline is actually about.
Both papers agree on the destination. The disagreement is about how far away it is. Given that AI-assisted optimization just moved the goalposts by roughly 1,000x in one research cycle, betting on the more conservative timeline is a bet that the optimization curve stops improving. That’s not a bet worth making.
The world is not prepared. That’s not a headline. It’s a planning constraint.