
Scott Aaronson's Quantum Warning: The World's Top Skeptic Now Says Crypto-Breaking Computers Arrive by 2029


MindStudio Team

The World’s Most Prominent Quantum Skeptic Just Changed His Mind

Scott Aaronson published a blog post on May 1, 2026 titled “Will you heed my warnings?” — and if you follow quantum computing at all, you should read it carefully. Aaronson isn’t a hype merchant. He’s spent the better part of two decades as the internet’s most reliable corrective force against quantum overselling. His blog, Shtetl-Optimized, has been the place serious researchers go to get quantum claims stress-tested since 2005. When he says something alarming, it’s alarming precisely because he’s the last person you’d expect to say it.

What he’s saying now: people whose judgment he trusts more than his own on quantum hardware and error correction — some of the most knowledgeable humans alive on these specific topics — are telling him that a fault-tolerant quantum computer capable of breaking deployed cryptographic systems ought to be possible by around 2029.

That’s three years from now.

Why This Warning Lands Differently Than the Others

You’ve probably heard about quantum threats to encryption before. The field has been generating alarming headlines for a decade, and most of them were premature. Aaronson himself spent years explaining why.


The standard rebuttal went like this: yes, Shor’s algorithm (published 1994) can break RSA and elliptic curve cryptography on a fault-tolerant quantum computer. But fault-tolerant quantum computers don’t exist yet. The error rates in current quantum hardware are too high. You’d need millions of physical qubits to get enough logical qubits to run Shor’s at meaningful scale. We’re nowhere near that. Calm down.

That rebuttal was correct for a long time. What’s changed is the error correction problem.

Quantum computers fail constantly. Qubits are fragile — they decohere, they flip, they produce noise. The gap between a noisy intermediate-scale quantum (NISQ) device and a fault-tolerant one has always been the central engineering challenge. And in November 2024, Google DeepMind published results on AlphaQubit: an AI-based decoder that identifies quantum computing errors with state-of-the-art accuracy. The same team that built AlphaFold for protein folding applied the same basic insight — train a neural network to recognize patterns in a domain where humans struggle — to quantum error correction.

This is the part that makes the 2029 timeline feel different. AI didn’t just accelerate quantum computing at the margins. It attacked the specific bottleneck that was keeping fault-tolerant quantum computers theoretical. The obstacle that was supposed to buy us time turned out to be exactly the kind of problem neural networks excel at. If you’ve been following AI progress in protein folding and biological modeling, you’ve seen this pattern before: a domain that seemed intractable to classical approaches yields to a well-trained model. The same thing is now happening to the main technical barrier between us and cryptographically relevant quantum computers.

Aaronson’s credentials matter here. He holds the Schlumberger Centennial Chair of Computer Science at UT Austin, co-founded UT Austin’s Quantum Information Center, and was just elected to the US National Academy of Sciences — one of the highest honors for American scientists. He collaborated with GPT models on published proofs. He moonlighted at OpenAI on the super-alignment team. He is not someone who reaches for alarm lightly. When he writes “don’t you dare come to this blog and tell me that I failed to warn you,” that’s not rhetorical flourish. That’s a careful person who has updated his model of the world and wants you to update yours.

The Evidence He’s Assembling

The warning doesn’t rest on Aaronson’s intuition alone. He co-authored a detailed position paper on the quantum threat to cryptocurrencies alongside Dan Boneh — one of the world’s leading cryptographers — and Justin Drake from the Ethereum Foundation, with Coinbase as an institutional co-author. That paper was still being written when major new results from Google and Caltech/QuEra shifted the timeline further forward.

Google’s own blog post from March 25, 2026 — titled “Quantum Frontiers may be closer than they appear,” published at blog.google — announced an accelerated internal deadline. Google is now targeting 2029 for migrating its own infrastructure to post-quantum cryptography (PQC). The stated reason: faster-than-expected progress in quantum computing has reduced the estimated qubit count needed to break current RSA encryption.

Read that again. Google — the company actively building the quantum computers that would break encryption — has set a 2029 internal deadline to protect its own systems from those computers.


Cloudflare, which sits in front of an enormous fraction of internet traffic, is targeting the same year for full quantum security. These aren’t companies hedging against a remote possibility. They’re companies with direct visibility into the hardware roadmap, and they’re treating 2029 as a real deadline.

Google also published research noting that future quantum computers may break elliptic curve cryptography — the kind protecting cryptocurrency — with fewer qubits and gates than previously estimated. They disclosed this via zero-knowledge proof: demonstrating they know the vulnerability without publishing a complete attack recipe. That’s responsible disclosure. It’s also a signal that the attack surface is more tractable than the field assumed.

The “store now, decrypt later” dynamic makes this urgent even before 2029. Governments — US, Russia, China — have been archiving encrypted communications for years with no ability to read them, banking on future quantum capability to decrypt them retroactively. Past secrets are already compromised in principle. The window for protecting future communications is closing.

What This Means If You Build Systems That Touch Encryption

If you’re an engineer, the practical question is: what’s actually vulnerable, and what do you need to do about it?

The threat isn’t symmetric encryption. AES-256, for example, is weakened by quantum (Grover’s algorithm roughly halves the effective key length) but not broken. The threat is public-key cryptography — RSA, elliptic curve cryptography (ECC), Diffie-Hellman key exchange. These are the systems that underpin TLS, SSH, code signing, certificate authorities, and most blockchain architectures.
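The Grover-versus-Shor distinction can be summarized as a rough bits-of-security table. A small sketch (the numbers are the conventional rough estimates used in migration planning, not precise bounds):

```python
# Effective security (in bits) of common primitives against classical
# vs. quantum attacks. Grover's algorithm gives a quadratic speedup
# against symmetric keys and hashes (halving effective bits); Shor's
# algorithm breaks RSA/ECC outright, so their quantum security is ~0.
primitives = {
    #              (classical bits, quantum bits)
    "AES-128":     (128, 64),    # Grover: halved, now borderline
    "AES-256":     (256, 128),   # Grover: halved, still strong
    "RSA-2048":    (112, 0),     # Shor: broken
    "ECC P-256":   (128, 0),     # Shor: broken
    "SHA-256":     (256, 128),   # Grover on preimage search
}

def quantum_safe(name: str, threshold: int = 128) -> bool:
    """Conservatively quantum-safe: post-quantum security still
    meets the target threshold."""
    return primitives[name][1] >= threshold

for name in primitives:
    status = "OK" if quantum_safe(name) else "MIGRATE"
    print(f"{name:10s} -> {status}")
```

This is why AES-256 survives the transition while every RSA and ECC deployment needs a migration path.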

Shor’s algorithm makes the math behind these systems stop being hard. The analogy Aaronson’s post gestures at: it’s not that quantum computers try harder to factor large numbers. It’s that they make factoring large numbers easy in a way that adding more digits can’t fix. The security assumption that made RSA-2048 safe evaporates.
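To see why, it helps to look at the classical skeleton of Shor's reduction. The only step below that is hard classically is order-finding; a quantum computer replaces the brute-force loop with a polynomial-time subroutine, and the rest is elementary number theory. A toy sketch for small numbers:

```python
from math import gcd

def find_order(a: int, n: int) -> int:
    """Smallest r > 0 with a^r = 1 (mod n). Brute force here; this is
    the one step a quantum computer does exponentially faster."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n: int, a: int = 2) -> tuple[int, int]:
    """Classical skeleton of Shor's reduction: once the order r of a
    mod n is known (and r is even with a^(r/2) != -1 mod n), the
    factors of n fall out of two gcd computations."""
    g = gcd(a, n)
    if g > 1:                       # lucky: a already shares a factor
        return g, n // g
    r = find_order(a, n)
    assert r % 2 == 0, "odd order; retry with a different a"
    y = pow(a, r // 2, n)
    assert y != n - 1, "degenerate case; retry with a different a"
    return gcd(y - 1, n), gcd(y + 1, n)

print(shor_factor(15))  # order of 2 mod 15 is 4; gcd(3,15)=3, gcd(5,15)=5 -> (3, 5)
```

Adding digits to n makes the brute-force loop exponentially slower, but it does nothing to the quantum order-finding subroutine: that is the precise sense in which "adding more digits can't fix" RSA.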

NIST finalized its first post-quantum cryptography standards in 2024: ML-KEM (formerly CRYSTALS-Kyber) for key encapsulation, and ML-DSA (formerly CRYSTALS-Dilithium) for digital signatures. If you’re building anything that handles key exchange or signing, these are the algorithms you should be migrating toward. OpenSSL 3.x has experimental support; BoringSSL (used in Chrome) has had PQC hybrid modes in production. The migration isn’t trivial — certificate chains, protocol negotiation, key sizes all change — but the path exists.
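For intuition about what migrating to a KEM means at the code level: ML-KEM exposes three operations, key generation, encapsulation, and decapsulation. The sketch below is a toy with no security whatsoever (just hashing; the names `keygen`/`encaps`/`decaps` are chosen to mirror the standard interface shape), purely to show the API a migration targets:

```python
import hashlib
import secrets

# Toy key-encapsulation mechanism illustrating the three-operation
# KEM interface (keygen / encaps / decaps). This has NO security at
# all; it exists only to show the shape of the API that ML-KEM
# implementations expose.
def keygen() -> tuple[bytes, bytes]:
    sk = secrets.token_bytes(32)
    pk = hashlib.sha256(b"pk" + sk).digest()   # stand-in public key
    return pk, sk

def encaps(pk: bytes) -> tuple[bytes, bytes]:
    """Sender: derive a fresh shared secret and a ciphertext from pk."""
    nonce = secrets.token_bytes(32)
    shared = hashlib.sha256(pk + nonce).digest()
    return nonce, shared                       # (ciphertext, shared secret)

def decaps(sk: bytes, ct: bytes) -> bytes:
    """Receiver: recover the same shared secret from sk and ct."""
    pk = hashlib.sha256(b"pk" + sk).digest()
    return hashlib.sha256(pk + ct).digest()

pk, sk = keygen()
ct, k_sender = encaps(pk)
k_receiver = decaps(sk, ct)
assert k_sender == k_receiver
```

Unlike RSA key exchange, a KEM never transports a key directly; both sides derive it. That interface difference, plus larger key and ciphertext sizes, is most of what makes the protocol-level migration nontrivial.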

The harder problem is systems you can’t easily update. Blockchains are the obvious case. Bitcoin uses elliptic curve cryptography. When you spend Bitcoin, your public key gets exposed on-chain. A sufficiently powerful quantum computer could derive the private key from that public key and move the funds. Dormant wallets whose public keys were never exposed retain some protection (though many early-mined outputs, including Satoshi-era coins, used pay-to-pubkey scripts that put the public key on-chain from the start), and even that protection disappears the moment the coins move, or the moment quantum computers can attack public keys already exposed in transaction history.
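The reason an unspent pay-to-pubkey-hash address has any quantum protection is that only a hash of the public key sits on-chain, and hash preimage resistance is not broken by Shor. A simplified, stdlib-only sketch (real Bitcoin hashes with SHA-256 followed by RIPEMD-160 and applies Base58Check encoding; plain truncated SHA-256 stands in here):

```python
import hashlib

# Simplified model of a pay-to-public-key-hash (P2PKH) address.
def address_from_pubkey(pubkey: bytes) -> str:
    return hashlib.sha256(pubkey).hexdigest()[:40]

# Before a coin is spent, only the hash is on-chain. Hashes are
# believed quantum-resistant, so the private key can't be attacked.
pubkey = bytes.fromhex("02" + "11" * 32)   # hypothetical compressed pubkey
addr = address_from_pubkey(pubkey)

# Spending reveals the raw public key in the transaction's unlocking
# script. From that point on, Shor's algorithm could in principle
# recover the private key from it.
spend_reveals = {"address": addr, "pubkey": pubkey.hex()}
```

The asymmetry is the whole story: the hash shields the key while coins sit still, and every spend converts a hash-protected output into a Shor-vulnerable one.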

Ethereum has active governance. Vitalik Buterin and the core development community can coordinate a migration to quantum-resistant signature schemes. It’s complex — everything built on top of Ethereum would need to migrate too — but the governance mechanism exists. Bitcoin has no equivalent. The question of who gets to change the locks when the old locks stop working is, as Aaronson frames it, not just a technical problem but a constitutional one.

For teams building security-sensitive applications today, the threat model has to include “store now, decrypt later.” Any data you encrypt today with RSA or ECC that needs to remain confidential for more than three to five years is potentially already compromised — not because someone can read it now, but because they may be able to read it in 2029. Medical records, financial data, classified communications, anything with a long confidentiality horizon needs PQC migration now, not when quantum computers arrive.

If you’re building AI-powered security tooling or compliance workflows — the kind of thing where you’re chaining models together to monitor infrastructure, flag vulnerabilities, or generate remediation plans — MindStudio offers a no-code path to wire those workflows together across 200+ models and 1,000+ integrations without writing the orchestration layer from scratch. The visual builder lets security teams prototype and deploy agent workflows without needing to manage the underlying model infrastructure.

The Race Dynamic Aaronson Can’t Quite Endorse

There’s a section of Aaronson’s post where he describes the reasoning inside quantum computing labs. The argument goes: isn’t it better for fault-tolerant quantum computers to be built first by US-based companies, in the open, than by Chinese or Russian intelligence, in secret?

He calls this reasoning “suspiciously self-serving and convenient.” He notes it’s structurally identical to the argument AI labs have used to justify racing toward dangerous capabilities — that pressing an advantage is safer than ceding it. He doesn’t say the argument is wrong. He says it’s not his place to answer that question. But he’s clearly uncomfortable with it.

This is the same dynamic playing out in AI development right now. The Claude Mythos cybersecurity capabilities discussion touches on a similar tension: models that can find zero-day vulnerabilities at scale are being built by the same companies that provide security infrastructure. The capability and the threat are inseparable. Aaronson is watching the same pattern repeat in quantum hardware and finding it no more reassuring the second time. And as frontier model capabilities expand — see the capability jump analysis between recent Claude releases — the overlap between offensive and defensive tooling only grows more pronounced.

The May 1, 2026 announcement that the War Department entered agreements with eight frontier AI companies — SpaceX, OpenAI, Google, Nvidia, Reflection, Microsoft, AWS, and Oracle — on the same day as Aaronson’s post is worth noting. Anthropic was absent from that list. The US government is clearly treating quantum-vulnerable infrastructure as a national security problem, not just a technical one.

The Specific Thing Aaronson Is Asking You to Do

His ask is concrete. Start switching to quantum-resistant encryption. Urge your company, your organization, your blockchain, your standards body to do the same. He’s not asking you to panic. He’s asking you to treat 2029 as a real deadline rather than a theoretical one.

For engineers, that means auditing your cryptographic dependencies now. Find every place you’re using RSA or ECC for key exchange or signing. Map the migration path to ML-KEM or ML-DSA. Understand which systems you control and which you don’t. The systems you don’t control — legacy certificate infrastructure, blockchain assets, archived encrypted data — are the ones where the damage is already done or will be hardest to fix.
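A first-pass audit can be as simple as scanning a source tree for identifiers that usually signal quantum-vulnerable public-key crypto. A naive, hypothetical sketch (the pattern list is illustrative, not exhaustive, and a real audit also needs to cover certificates, TLS configs, and dependency manifests):

```python
import re
from pathlib import Path

# Identifiers that commonly indicate RSA/ECC usage in source code.
# Illustrative only; extend per codebase.
VULNERABLE_PATTERNS = re.compile(
    r"\b(RSA|ECDSA|ECDH|secp256k1|prime256v1|Ed25519|X25519)\b"
)

def audit(root: str) -> dict[str, list[int]]:
    """Return {file: [line numbers]} of suspected RSA/ECC usage
    across all Python files under root."""
    hits: dict[str, list[int]] = {}
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), 1):
            if VULNERABLE_PATTERNS.search(line):
                hits.setdefault(str(path), []).append(lineno)
    return hits
```

The output is a worklist, not a verdict: each hit still needs a human to decide whether it is key exchange or signing (migrate to ML-KEM/ML-DSA) versus something already quantum-tolerant.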

When teams are building the compliance and monitoring tooling to track PQC migration across complex systems, the spec-driven approach matters. Tools like Remy take a different approach to that kind of application: you write a spec — annotated markdown describing your data model, rules, and edge cases — and the full-stack app gets compiled from it, TypeScript backend, SQLite database, auth, and deployment included. The spec is the source of truth; the generated code is derived output. For security-sensitive tooling where requirements need to be auditable, that traceability matters: knowing exactly what your application does — and being able to prove it from a human-readable document — is a meaningful property when you’re building systems that touch cryptographic key management or compliance reporting.

The AI angle here is underappreciated. AlphaQubit didn’t just solve an engineering problem. It demonstrated that the bottleneck everyone was counting on — quantum error correction being too hard — is the kind of problem neural networks are good at. The same insight that made Claude Code useful for navigating complex codebases — that pattern recognition at scale can surface structure humans miss — applies directly to the error syndrome decoding problem that was supposed to keep fault-tolerant quantum computers out of reach for another decade.

The Credibility of the Warning

Aaronson ends his post with a line that’s worth quoting directly: “If quantum computers start breaking cryptography a few years from now, don’t you dare come to this blog and tell me that I failed to warn you.”

The reason this lands is the source. This is the person who spent years telling the quantum hype cycle to calm down. He corrected breathless claims about quantum supremacy. He explained why NISQ devices weren’t going to break encryption anytime soon. He was right about all of that, for all of that time.

He’s not right about everything — nobody is. But when the world’s most prominent quantum skeptic says the timeline has shifted and the threat is real, the appropriate response isn’t to wait for more evidence. The evidence he’s citing — AlphaQubit, Google’s internal 2029 deadline, the Coinbase/Boneh/Drake position paper, Cloudflare’s migration timeline — is already substantial. The people telling him 2029 is plausible are, by his own account, people whose judgment he trusts more than his own on the hardware specifics.

That’s a specific, credible, falsifiable warning from someone with every incentive to be conservative. You should probably heed it.

Presented by MindStudio
