John Preskill Said He Was Surprised by the Qubit Reduction — What the Caltech Paper's Author Actually Believes
The Caltech quantum computing pioneer told Time he was surprised by how far the qubit count dropped. Here's what his paper actually claims and what it doesn't.
John Preskill Said He Was Surprised — That’s the Part You Should Pay Attention To
John Preskill, one of the most respected names in quantum computing and a co-author of the Caltech/Atom Computing paper, told Time that he was surprised by how much the qubit count dropped. Not mildly surprised. Surprised in the way that makes you stop and recalibrate your priors.
That’s the signal worth tracking. When the person who helped build the theoretical framework for quantum error correction says the numbers came in lower than he expected, you don’t dismiss it as hype. You ask: what exactly did the paper claim, what’s solid, what’s still assumption, and what does it mean for the systems you’re responsible for?
This post is an attempt to answer those questions honestly.
The Numbers in the Paper, Stated Precisely
The Caltech/Atom Computing paper argues that Shor’s algorithm — the quantum algorithm from the 1990s that can theoretically break public-key cryptography — could run at cryptographically relevant scales with approximately 10,000 reconfigurable atomic qubits. That’s the headline figure.
The more operationally specific claim is that around 26,000 physical qubits could be sufficient to attack the P-256 elliptic curve in a matter of days, under what the authors describe as “plausible assumptions.” P-256 is the elliptic curve used in ECDSA, which underpins TLS certificates, code signing, and a significant fraction of the authentication infrastructure holding the internet together.
For comparison, Google’s separate research estimated that attacking the 256-bit elliptic curve discrete logarithm problem would require fewer than 1,200 logical qubits and fewer than 19 million Toffoli gates — or in an alternate configuration, 1,450 logical qubits and fewer than 17 million Toffoli gates. Google further estimated that under standard assumptions, this could run on a superconducting quantum computer with fewer than 500,000 physical qubits, potentially completing execution in minutes.
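To make those figures concrete, a quick division of the numbers just quoted shows the implied error-correction overhead. This is only an illustration of the quoted estimates, not an independent resource analysis; it ignores routing overhead, magic-state factories, and everything else a real estimate has to account for.

```python
# Back-of-envelope arithmetic using only the figures quoted above.
# Illustrative only: it ignores routing, magic-state factories, and the
# other details a real quantum resource estimate must account for.
logical_qubits = 1_200        # Google's estimate for the P-256 discrete log
physical_qubits = 500_000     # Google's estimated superconducting machine size

overhead = physical_qubits / logical_qubits
print(f"~{overhead:.0f} physical qubits per logical qubit")   # ~417
```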
Two different research groups. Two different hardware architectures. Both pointing in the same direction: the machine you’d need is smaller than the field assumed five years ago.
What “Surprised” Actually Means Coming From Preskill
Preskill coined the term “quantum supremacy.” He’s been thinking about fault-tolerant quantum computation since the late 1990s. When he says the qubit reduction surprised him, he’s not expressing wonder at a magic trick; he’s updating a model he’s spent decades building.
The specific thing that moved the numbers was AI assistance. The Atom Computing team used OpenEvolve, an open-source tool that uses large language models to optimize algorithms through a process analogous to natural selection. Instead of researchers manually testing a handful of algorithmic variants, the LLM could search across thousands of possibilities, combining results from niche subfields of quantum computing in ways that wouldn’t have been obvious to any single human researcher.
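The general shape of that search loop is simple even though the real pipeline is not: propose variants, score them, keep the best, repeat. The sketch below is a toy version of that pattern, not the team’s actual OpenEvolve configuration; the mutation and cost functions are stand-ins for the LLM proposal step and the quantum resource estimate.

```python
# Toy sketch of the evolutionary pattern: propose variants, score them,
# keep the best, repeat. In the real pipeline an LLM proposes the variants
# and the score is an estimated quantum resource cost; here both are
# stand-ins so the loop runs on its own.
import random

def propose_variant(parent: list[int]) -> list[int]:
    # Stand-in for an LLM proposing a modified version of an algorithm.
    child = parent.copy()
    i = random.randrange(len(child))
    child[i] = max(0, child[i] + random.choice([-2, -1, 1]))
    return child

def cost(candidate: list[int]) -> int:
    # Stand-in for a resource estimate (e.g. qubit count or gate count).
    return sum(candidate)

def evolve(seed: list[int], generations: int = 200, population: int = 8) -> list[int]:
    pool = [seed]
    for _ in range(generations):
        children = [propose_variant(random.choice(pool)) for _ in range(population)]
        pool = sorted(pool + children, key=cost)[:population]   # selection step
    return pool[0]

seed = [9] * 10
best = evolve(seed)
print(cost(seed), "->", cost(best))   # the estimated cost falls as the search runs
```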
The team’s early algorithms were reportedly about 1,000 times worse than what they ended up with. One author stated plainly that the approach “would not work” before the AI-assisted improvements. That’s not a marginal gain from better tooling — that’s a qualitative change in what was achievable. It’s also a useful data point alongside other recent capability jumps in AI-assisted research; the pattern of models accelerating technical work in ways that surprise even domain experts is showing up across multiple fields simultaneously. For a broader look at how frontier model capabilities are shifting, the comparison between GPT-5.4 and Claude Opus 4.6 is worth reading as context for how quickly the underlying models are improving.
Preskill’s own comment in Time was careful: he noted that humans were still driving the research, asking the right questions and guiding the AI toward useful answers. The AI didn’t discover the algorithm. It helped search a space that was too large for humans to explore manually. That distinction matters for how you think about the trajectory of this field going forward.
Why the Qubit Count Is the Number That Matters
Most public discussion of quantum threats focuses on the wrong variable. People ask “when will quantum computers be powerful enough?” as if raw qubit count is the only axis. But there are two axes: hardware scale and algorithmic efficiency. Progress on either one moves the threat timeline.
For most of the last decade, the field assumed you’d need millions of physical qubits to run Shor’s algorithm at cryptographically relevant scales. That assumption shaped how urgently organizations treated post-quantum migration. If the machine is that big and that expensive, you have time.
The Caltech paper, if its assumptions hold, suggests the relevant threshold might be closer to tens of thousands of physical qubits — not millions. That’s a different planning horizon. Cloudflare, which routes a significant fraction of global internet traffic, moved its post-quantum security deadline from 2035 to 2029 specifically in response to this research. A six-year acceleration in a security migration timeline is not a minor adjustment.
The caveat that Princeton’s Jeff Thompson raised is real: it is easy to shrink a quantum computer on paper if you assume better qubits. The Caltech paper has not yet been peer-reviewed, and many of its assumptions remain untested. But “not yet proven” and “not worth planning around” are different things. The direction of travel is clear even if the exact arrival time isn’t. The same dynamic applies to AI model development — Anthropic’s compute constraints are a useful reminder that even well-resourced labs face real physical limits, which is part of why algorithmic efficiency gains like the ones in the Caltech paper matter so much.
What’s Actually Buried in the Paper
The part that gets less attention than the qubit count is the authentication threat.
Most people, when they hear “quantum computers could break encryption,” think about confidentiality — someone reading your messages. That’s real. The “harvest now, decrypt later” attack vector is well-documented: the NSA, CISA, and NIST have all issued warnings that adversaries can collect encrypted traffic today and hold it until a capable quantum computer exists to decrypt it. Data that needs to stay secret for years — government records, medical files, long-term business communications — is already at risk in this sense.
But the authentication threat is arguably more acute in the near term. Elliptic curve cryptography doesn’t just protect the content of messages. It’s what lets your browser verify that the server it’s talking to is actually your bank, or that a software update was actually signed by the vendor. Root certificates, API authentication keys, and code signing certificates are all built on the same mathematical foundations that Shor’s algorithm attacks.
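To make the authentication point concrete, here is a minimal P-256 signing and verification round trip using the Python cryptography package. The artifact name is made up; the point is that this is the primitive TLS certificates and code signing rest on, and Shor’s algorithm targets exactly the elliptic-curve math that makes the verification trustworthy.

```python
# Minimal P-256 (ECDSA) sign/verify round trip with the `cryptography` package.
# This is the primitive behind TLS certificates and code signing; Shor's
# algorithm attacks the elliptic-curve math that verify() implicitly trusts.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())      # the P-256 curve
public_key = private_key.public_key()

message = b"vendor-update-2.4.1"                            # hypothetical signed artifact
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# Verification passes only if the signer held the private key. Once that key
# can be recovered from the public key, this check proves nothing.
public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
print("signature verified")
```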
If those keys are compromised, an attacker doesn’t just read your data — they can impersonate trusted systems. They can forge a software update. They can present a valid-looking certificate for a site they don’t control. Cloudflare’s blog framed it precisely: the question is no longer when will encrypted data be at risk, but how long before an attacker walks through the front door with a quantum-forged key.
This is why Cloudflare’s 2029 target specifically includes post-quantum authentication, not just encryption. They’ve had post-quantum encryption deployed for all websites and APIs since 2022, and more than 65% of human traffic through their network is already post-quantum encrypted. The harder problem — authentication — is what’s driving the urgency now.
The Downgrade Attack Problem Nobody Talks About
NIST finalized its first three post-quantum cryptography standards on August 13, 2024. That’s the good news. The harder news is that adding post-quantum cryptography to a system is not sufficient. You also have to disable the quantum-vulnerable cryptography.
This is the downgrade attack problem. If a system supports both classical and post-quantum cryptographic methods, an attacker can force a negotiation that selects the weaker classical method — and then attack that. The new standards don’t protect you if the old ones are still available as a fallback.
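Here is a deliberately simplified sketch of why leaving the fallback enabled undermines the upgrade. It models key-exchange negotiation as a plain preference list, not a real TLS handshake, and the group names are illustrative.

```python
# Schematic downgrade illustration -- not a real TLS handshake.
# If the quantum-vulnerable option is still offered, an attacker who can
# tamper with the negotiation can steer both sides onto it.

SERVER_PREFERENCE = ["X25519+ML-KEM-768", "X25519"]   # hybrid PQ first, classical fallback

def negotiate(client_offer: list[str]) -> str:
    # Pick the first mutually supported group, in the server's preference order.
    for group in SERVER_PREFERENCE:
        if group in client_offer:
            return group
    raise ValueError("no common key-exchange group")

print(negotiate(["X25519+ML-KEM-768", "X25519"]))     # -> hybrid post-quantum

# An attacker strips the post-quantum option from the client's offer:
print(negotiate(["X25519"]))                          # -> X25519, quantum-vulnerable

# Removing the classical fallback is what actually closes the gap:
SERVER_PREFERENCE = ["X25519+ML-KEM-768"]
# negotiate(["X25519"]) now raises instead of silently downgrading.
```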
Disabling old cryptography creates its own dependency chain. Secrets like passwords and access tokens may need to be rotated. Third-party integrations need to be validated. Fraud monitoring systems need to be updated. The analogy that comes to mind is replacing the locks, keys, ID cards, alarm systems, and backup access routes across an entire building while it’s still occupied and operational.
For engineering teams building systems today, this means the migration isn’t a single task. It’s a sequenced project with real dependencies, and the 2029 deadline — if you take Cloudflare’s read seriously — is closer than it sounds when you account for the full chain.
The AI-assisted nature of the Caltech breakthrough also has implications for how you think about tooling in adjacent domains. The fact that an LLM optimizer searching through algorithmic space could produce a 1,000x improvement in a research context is a signal about what’s possible when you apply this kind of search to other hard optimization problems. Teams building AI-assisted research workflows are already exploring this — MindStudio is an enterprise AI platform with 200+ models and 1,000+ integrations that lets you chain models and tools visually, which is useful when you’re trying to connect a reasoning model to domain-specific data sources without writing orchestration code from scratch.
What Preskill’s Surprise Should Change in Your Planning
The honest read of the Caltech paper is: not proven, not deployed, but directionally credible enough that serious infrastructure companies are accelerating their timelines.
Preskill’s surprise is meaningful precisely because he’s not an optimist by default. He’s spent his career being careful about what quantum computers can and can’t do. The fact that the qubit reduction exceeded his expectations suggests the field’s intuitions about the lower bound on required hardware were wrong — and that AI-assisted algorithm search may continue to push that bound down.
For anyone building systems that handle sensitive data, authentication, or long-lived cryptographic keys, the practical implication is this: the question of whether to migrate to post-quantum cryptography is settled. NIST has the standards. The question is sequencing and speed.
Long-lived keys are the highest priority. Root certificates, code signing keys, and API authentication credentials that will still be in use in 2029 or beyond should be on a migration plan now. The harvest-now-decrypt-later threat means that data encrypted today with classical methods is already potentially compromised — not by a quantum computer that exists today, but by one that might exist when the data still needs to be secret.
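A starting point for that inventory work is simply finding which certificates rely on classical public keys and remain valid past the planning horizon. The sketch below uses the Python cryptography package; the directory path and the 2029 cutoff are assumptions for illustration, and a real inventory would also cover API keys, SSH keys, and signing infrastructure.

```python
# Sketch: flag certificates with quantum-vulnerable (RSA / elliptic-curve)
# public keys that remain valid past an assumed 2029 planning horizon.
# The certs/ directory and the cutoff date are illustrative assumptions.
from datetime import datetime, timezone
from pathlib import Path

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import ec, rsa

CUTOFF = datetime(2029, 1, 1, tzinfo=timezone.utc)

def flag(pem_path: Path) -> None:
    cert = x509.load_pem_x509_certificate(pem_path.read_bytes())
    key = cert.public_key()
    classical = isinstance(key, (rsa.RSAPublicKey, ec.EllipticCurvePublicKey))
    expires = cert.not_valid_after_utc          # requires a recent `cryptography` release
    if classical and expires >= CUTOFF:
        print(f"{pem_path.name}: classical key, valid until {expires:%Y-%m-%d}")

for pem in Path("certs").glob("*.pem"):         # hypothetical certificate store
    flag(pem)
```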
The AI piece of this story is also worth sitting with separately from the cryptographic threat. The Atom Computing team’s use of OpenEvolve to search algorithmic space is an early example of a pattern that’s going to become more common: LLMs as search engines over technical possibility space, guided by human researchers who know which questions to ask. The results can be surprising even to the experts who designed the search. That’s a different kind of tool than a faster calculator. For teams thinking about how to apply this kind of AI-assisted search to their own technical problems, the research on AI agents for analysis is a useful survey of what’s already practical.
When you’re building systems that need to reason about or act on complex technical domains — security configurations, compliance requirements, cryptographic policy — the architecture of how you connect AI capabilities to domain knowledge matters. Remy takes a spec-driven approach to this kind of structured problem: you write a markdown spec with annotations, and it compiles into a complete TypeScript application with backend, database, auth, and deployment included. The precision lives in the spec rather than scattered across implementation files, which is a meaningful property when the domain knowledge you’re encoding is security-critical.
The Actual Takeaway From Preskill’s Quote
The most important thing Preskill said wasn’t about the qubit count. It was about the mechanism: AI helped the team search through a space that was too large to explore manually, and the results exceeded what the researchers expected.
That’s a statement about the rate of change, not just the current state. If AI-assisted algorithm search continues to improve, the lower bound on the hardware needed to run cryptographically relevant quantum algorithms will continue to fall. The Caltech paper’s 26,000 physical qubit estimate might itself be revised downward in future work.
Preskill has been thinking about this longer than almost anyone. The fact that he was surprised is not reassuring. It’s a calibration signal. The field’s best intuitions about the timeline were wrong in the optimistic direction — wrong in the direction that made the threat feel further away than it was.
The systems you’re building today will still be running in 2029. The cryptographic assumptions baked into them were made when the threat model looked different. That gap is what needs to close.