Scott Aaronson's 2029 Warning: Why the World's Top Quantum Skeptic Is Now Sounding the Alarm
Scott Aaronson — historically skeptical of quantum timelines — now says fault-tolerant quantum computers capable of breaking crypto are expected by ~2029.
The Quantum Skeptic Who Changed His Mind
Scott Aaronson published a blog post on May 1, 2026 titled “Will you heed my warnings?” — and if you know who Aaronson is, that title alone should stop you cold.
For two decades, Aaronson has been the person you go to when you want someone to pour cold water on quantum hype. He’s the Schlumberger Centennial Chair of Computer Science at UT Austin, co-founding director of its Quantum Information Center, and was recently elected to the US National Academy of Sciences. He spent years patiently correcting journalists, investors, and researchers who overstated what quantum computers could actually do. He is not a hype man. He is the opposite of a hype man.
So when he writes that “some of the most reputable people in quantum hardware and quantum error correction — people whose judgment I trust more than my own on these topics — now tell me that a fault-tolerant quantum computer able to break deployed crypto systems ought to be possible by around 2029,” you should read that sentence twice.
That’s not a hedge. That’s not “maybe someday.” That’s a specific year, from a known skeptic, citing sources he explicitly says he trusts more than himself.
Why This Warning Is Different From Every Other Quantum Warning
You’ve probably heard about quantum computing threats to cryptography before. It’s been a background hum since 1994, when Peter Shor published his algorithm showing that a large enough quantum computer could factor big numbers efficiently, breaking RSA and elliptic curve cryptography. Shor followed up in 1996 by showing that fault-tolerant quantum computation was possible in principle.
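To make the threat concrete: Shor’s algorithm reduces factoring to finding the multiplicative order of a random base, and only the order-finding step needs a quantum computer. The sketch below (function names are mine, and the order finder is deliberately a slow classical brute force standing in for the quantum step) shows the classical post-processing on a toy number:

```python
from math import gcd

def order(a, n):
    """Brute-force the multiplicative order of a mod n.
    This is the step a quantum computer does efficiently via
    period finding; classically it takes exponential time."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_part(n, a):
    """Given a base a, use its order r to split n.
    Works when r is even and a^(r/2) is not -1 mod n."""
    if gcd(a, n) != 1:
        return gcd(a, n)          # lucky: a already shares a factor with n
    r = order(a, n)
    if r % 2 == 1:
        return None               # odd order: retry with another a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None               # trivial square root: retry with another a
    return gcd(y - 1, n)          # a nontrivial factor of n

print(shor_classical_part(15, 7))  # order of 7 mod 15 is 4 -> prints 3
```

Everything here is cheap except `order`; swap in an efficient quantum order finder and RSA falls. That is the whole threat model in a dozen lines.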
Here’s the thing: Bitcoin launched more than a decade after Shor’s algorithm. Ethereum launched in 2015, more than two decades after. Both chose quantum-vulnerable cryptography anyway. The threat was known. It just felt distant.
The reason it felt distant is that building a fault-tolerant quantum computer at scale requires solving a brutal engineering problem: quantum error correction. Qubits are noisy. They decohere. They interact with their environment in ways that introduce errors faster than you can compute. Every serious estimate of “when will quantum computers break RSA” has been gated on solving this problem.
That gate is now moving.
In November 2024, Google DeepMind released AlphaQubit — an AI-based decoder that identifies and corrects quantum computing errors with state-of-the-art accuracy. The parallel to AlphaFold is direct: just as Demis Hassabis’s team used neural networks to crack protein folding, they used the same approach to crack quantum error prediction. AlphaQubit doesn’t just improve error correction incrementally. It changes the trajectory of when fault-tolerant quantum computing becomes practical.
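The decoding problem AlphaQubit attacks is, at its core, inference: map noisy parity-check measurements to the most likely correction without ever reading the protected information directly. The simplest classical analogue is a majority-vote decoder for the 3-bit repetition code. This toy sketch (mine, and nothing like the surface-code decoding AlphaQubit actually does at scale) shows the shape of the task:

```python
def encode(bit):
    """3-bit repetition code: protect one logical bit with redundancy."""
    return [bit, bit, bit]

def syndrome(block):
    """Parity checks, analogous to stabilizer measurements: compare
    neighboring bits without reading the logical value directly."""
    return (block[0] ^ block[1], block[1] ^ block[2])

def decode(block):
    """Majority vote: corrects any single bit-flip error."""
    return 1 if sum(block) >= 2 else 0

noisy = encode(1)
noisy[0] ^= 1                 # a single bit-flip error strikes
print(syndrome(noisy))        # (1, 0): the syndrome localizes the error
print(decode(noisy))          # 1: the logical bit is recovered
```

Real quantum codes replace majority voting with a hard statistical inference problem over correlated, drifting noise, which is exactly where a learned decoder earns its advantage.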
This is the part that makes the 2029 timeline feel less like speculation. AI is now directly accelerating the development of quantum computers capable of breaking the cryptography that secures the internet. The two most consequential technology races of this decade are not running in parallel — they’re feeding each other. The broader pattern of AI capabilities advancing faster than expected is relevant context here — we’ve seen repeatedly that timelines compress when multiple enabling technologies converge. AlphaQubit is one such convergence. Google’s qubit efficiency improvements are another.
The Evidence That’s Actually Piling Up
Aaronson’s blog post didn’t appear in a vacuum. It was accompanied by a detailed position paper he co-authored with Dan Boneh — widely considered one of the world’s leading cryptographers — and Justin Drake, a major Ethereum Foundation researcher. The paper, connected to Coinbase, addresses the quantum threat to blockchain specifically. The fact that Boneh is involved matters: this isn’t quantum researchers speculating about crypto. This is the cryptography community taking the quantum timeline seriously enough to publish.
On March 25, 2026, Google published a post on blog.google titled “Quantum Frontiers may be closer than they appear.” The headline is understated. The content is not. Google announced an internal 2029 deadline to migrate all its infrastructure to post-quantum cryptography (PQC). The stated reason: faster-than-expected progress in reducing the number of qubits needed to break current RSA encryption.
Google also disclosed something technically significant: they published a zero-knowledge proof showing they know how to break elliptic curve cryptography with fewer qubits and gates than previously realized. They did not publish the attack recipe. The zero-knowledge proof structure is deliberate — it proves knowledge of the vulnerability without handing anyone a working exploit. That’s responsible disclosure. It’s also a signal that the capability is real and the timeline is not theoretical.
Cloudflare, which sits in front of a substantial fraction of global internet traffic, is also targeting 2029 for full quantum security.
Three independent organizations — Google, Cloudflare, and a research coalition including Aaronson and Boneh — have independently converged on the same year. That convergence is the actual news.
What the 2029 Timeline Breaks (And What It Doesn’t)
Quantum computers don’t break everything equally. Symmetric encryption (AES-256, for example) is relatively resistant — Grover’s algorithm provides a quadratic speedup against it, which is manageable by doubling key sizes. The acute threat is to public-key cryptography: RSA, elliptic curve cryptography (ECC), and Diffie-Hellman key exchange.
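The “doubling key sizes” point can be made concrete with back-of-the-envelope arithmetic: Grover searches a space of 2^n keys in roughly 2^(n/2) steps, so the effective security of a symmetric key is halved, not destroyed. A tiny illustration:

```python
def grover_effective_bits(key_bits):
    """Grover's algorithm searches N items in about sqrt(N) steps,
    so an n-bit symmetric key offers roughly n/2 bits of quantum
    security. Halved, not broken."""
    return key_bits // 2

for cipher, bits in [("AES-128", 128), ("AES-256", 256)]:
    print(f"{cipher}: {bits} classical bits -> "
          f"~{grover_effective_bits(bits)} bits against Grover")
# AES-128: 128 classical bits -> ~64 bits against Grover
# AES-256: 256 classical bits -> ~128 bits against Grover
```

AES-256 at an effective 128 bits remains comfortably out of reach, which is why the migration effort concentrates on public-key algorithms, where Shor’s speedup is exponential rather than quadratic.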
These are the protocols that underpin TLS (the “S” in HTTPS), software signing, certificate authorities, SSH, and essentially all cryptocurrency wallets. When Shor’s algorithm runs on a sufficiently large fault-tolerant quantum computer, the math that makes these hard problems hard stops being hard. It’s not that the lock gets picked slowly — it’s that the lock stops working as a lock.
The exposure list is long: classified government communications, banking infrastructure, satellite control systems, CPU microcode update signing, medical records, power grid authentication, and every blockchain that uses elliptic curve keys — which is most of them.
The “store now, decrypt later” problem makes this worse than it sounds. Governments — including the US, Russia, and China — have been collecting and archiving encrypted internet traffic for decades, specifically planning to decrypt it once quantum computers arrive. This isn’t speculation; it’s the obvious thing to do if you have the storage capacity and a credible belief that decryption will eventually be possible. The implication is that data encrypted today, yesterday, and ten years ago is potentially exposed the moment a fault-tolerant quantum computer comes online.
So the 2029 threat isn’t just about future communications. It’s about everything that was ever encrypted with vulnerable algorithms.
The cybersecurity capabilities of frontier AI models are relevant context here too: the same models that can assist with defensive security research can also be used to probe for vulnerabilities, which is part of why the intersection of AI progress and quantum progress is worth tracking carefully. How capable these models are becoming at reasoning about cryptographic systems bears directly on the combined risk surface.
The Governance Problem Nobody Wants to Talk About
The technical problem is solvable. NIST has already standardized post-quantum cryptographic algorithms. Google, Cloudflare, and Apple (for iMessage, at least) are actively migrating. The engineering is hard but tractable.
The governance problem is harder.
Ethereum has Vitalik Buterin and an active development community capable of coordinating protocol-level changes. A migration to post-quantum cryptography would be enormously complex — everything built on top of the protocol would need to migrate — but the governance structure exists to attempt it.
Bitcoin doesn’t have that. There’s no central coordination mechanism. There’s no Vitalik. And there’s a specific, famous vulnerability: Satoshi Nakamoto’s dormant wallet. If elliptic curve keys can be broken, someone could derive the private key from the public key and move those coins. The question of whether the Bitcoin community could coordinate a response — and whether doing so would itself violate the protocol’s foundational principles — is what Aaronson calls not just a technical problem but a “constitutional” one.
Who gets to change the locks when the old locks stop working? In Bitcoin’s case, the answer is genuinely unclear. That ambiguity is a risk that exists independent of whether the quantum timeline is 2029 or 2035.
This kind of multi-variable risk assessment — tracking technical timelines, governance structures, and second-order effects simultaneously — is exactly where AI-assisted analysis starts to earn its keep. MindStudio is an enterprise AI platform with 200+ models and 1,000+ integrations that lets you chain models and tools visually, which makes it practical to build monitoring workflows that track developments across quantum computing, cryptography standards, and protocol governance without writing orchestration code from scratch. For security teams that need to stay current across multiple fast-moving fronts simultaneously, that kind of orchestration capability is increasingly practical rather than aspirational.
What Aaronson Is Actually Asking You to Do
The blog post title is “Will you heed my warnings?” — and the ask is specific. Aaronson is not predicting doom. He’s not saying quantum computers will definitely break RSA by 2029. He’s saying the probability is now high enough that organizations that haven’t started migrating to post-quantum cryptography are taking on real, unhedged risk.
The migration path exists. NIST’s post-quantum standards are published. Google has been working on PQC since 2016. Chrome and Android have post-quantum work underway. The tools are available. What’s missing is urgency in the organizations that haven’t started.
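One reason the migration is tractable is that deployed PQC is mostly hybrid: the session key is derived from a classical shared secret and a post-quantum shared secret together, so an attacker must break both. Chrome’s X25519MLKEM768 key exchange follows this pattern. Below is a minimal stdlib-only sketch of the combining step, assuming placeholder byte strings where real X25519 and ML-KEM outputs would go; the HKDF helper is a compact rendering of RFC 5869, not a vetted implementation:

```python
import hashlib, hmac, os

def hkdf(salt, ikm, info, length=32):
    """Minimal HKDF (RFC 5869) built on HMAC-SHA256: extract a
    pseudorandom key, then expand it to the requested length."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Placeholder shared secrets; in TLS these would come from an
# X25519 exchange and an ML-KEM encapsulation respectively.
classical_ss = os.urandom(32)
pq_ss = os.urandom(32)

# Hybrid rule: concatenate both secrets, then derive the session key.
# Breaking only one primitive yields nothing about the output.
session_key = hkdf(b"hybrid-demo-salt", classical_ss + pq_ss,
                   b"demo-session-key")
print(len(session_key))  # 32
```

The design choice worth noticing is the concatenation: the hybrid construction never asks you to trust the new post-quantum algorithm alone, which is what makes migrating early a low-regret move.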
Aaronson is also pointing at a dynamic that should feel familiar to anyone who’s watched the AI race: the quantum computing labs are not going to slow down to give the rest of the world time to prepare. The reasoning from those labs — “better that this capability emerges from US-based companies in the open than from Chinese or Russian intelligence in secret” — is structurally identical to the reasoning that has driven AI development forward regardless of safety concerns. Aaronson notes this explicitly, and with some visible frustration.
He used a recent GPT model as a collaborator on a published paper about Quantum Merlin Arthur (QMA) complexity — so he’s not someone who dismisses AI capabilities. But he’s watching the same race dynamic play out in quantum that he watched in AI, and he’s not optimistic that the outcome will be more carefully managed.
The practical implication for anyone building systems today: if your application handles authentication, signs software, manages cryptographic keys, or stores sensitive data, the question isn’t whether to migrate to post-quantum cryptography. The question is whether you’ll do it before 2029 or after.
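The standard first step of that migration is a cryptographic inventory: find every place your codebase touches a quantum-vulnerable primitive. A real audit uses dedicated tooling, but the idea fits in a few lines; the pattern list below is illustrative and incomplete, and the script is a sketch of mine, not a substitute for a proper scanner:

```python
import re
from pathlib import Path

# Primitives broken by Shor's algorithm (factoring or discrete log).
# Illustrative, not exhaustive.
VULNERABLE = re.compile(r"\b(RSA|ECDSA|ECDH|secp256k1|X25519|DSA)\b")

def scan(root):
    """Flag source lines mentioning quantum-vulnerable primitives."""
    hits = []
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), 1):
            if VULNERABLE.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits

for path, lineno, line in scan("."):
    print(f"{path}:{lineno}: {line}")
```

The output of a scan like this becomes the migration backlog: every hit is a call site that will eventually need a post-quantum or hybrid replacement.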
For teams building security-adjacent tooling, the spec layer matters here too. Remy is MindStudio’s spec-driven full-stack app compiler — you write a markdown spec with annotations, and it compiles into a complete TypeScript stack with backend, database, auth, and deployment. When cryptographic requirements change at the protocol level, having the spec as the source of truth means you update the spec and recompile rather than hunting through layers of hand-written code. That’s a meaningful advantage when the underlying security primitives your application depends on are being replaced wholesale.
The Skeptic’s Credibility Is the Point
There’s a version of this story where you dismiss it. Quantum timelines have slipped before. 2029 is still three years away. Maybe AlphaQubit’s progress doesn’t compound the way the optimists expect. Maybe the qubit counts needed to run Shor’s algorithm at scale are still too high.
Those are reasonable objections. Aaronson would probably agree with most of them as probability-weighted concerns.
But here’s what’s different about this moment: the person raising the alarm is the person who spent twenty years telling you not to panic. When a known skeptic updates this sharply, the update itself is information. Aaronson isn’t saying “I’ve become a quantum optimist.” He’s saying “the people I trust most on this specific technical question have told me something that changed my assessment, and I think you should know.”
That’s a different kind of signal than a press release from a quantum computing startup. It’s worth taking seriously.
We’ve seen this pattern before in AI: a researcher known for careful, conservative assessments updates publicly and sharply, and the update turns out to be a leading indicator rather than a lagging one. The compute constraints shaping how frontier AI labs operate offer a useful parallel — resource bottlenecks that seemed permanent have a way of resolving faster than expected when the incentives are strong enough. Quantum error correction may be following a similar curve.
The question isn’t whether quantum computers will eventually break current cryptography. Shor’s algorithm has been published for thirty years. The question is whether 2029 is close enough that you need to act now.
Aaronson thinks it is. The people he trusts most on this think it is. Google’s internal infrastructure team thinks it is. Cloudflare thinks it is.
If you’re building systems that will still be running in three years — and most production systems will be — that’s probably enough people to listen to.