
Cloudflare Moved Its Quantum Security Deadline from 2035 to 2029: 5 Numbers That Explain Why

Cloudflare accelerated its post-quantum deadline by 6 years. Here are the five specific research numbers that forced the change.

MindStudio Team

Six Years Vanished Overnight

Cloudflare just moved its post-quantum security deadline from 2035 to 2029. That’s a six-year acceleration, announced quietly in a blog post, driven by a handful of research papers that most people outside cryptography circles haven’t read. If you run infrastructure, build APIs, or sign code for a living, those six years are the most important thing that happened in security this month.

The shift isn’t panic. Cloudflare is not a company that does panic. It’s an infrastructure firm that routes a meaningful fraction of the world’s internet traffic, and when it says “credible new research and rapid industry developments suggest that the deadline to migrate is much sooner than expected,” that’s a considered statement from people who have to make sure the web keeps working. Someone at Cloudflare told Time: “It’s a real shock. We’ll need to speed up our efforts considerably.”

Here are the five specific numbers buried in the research that explain why.


Number One: 1,200 Logical Qubits

Google researchers published an estimate that a future quantum computer could attack the 256-bit elliptic curve discrete logarithm problem — the math underpinning most digital signatures and a large chunk of cryptocurrency security — using fewer than 1,200 logical qubits and fewer than 19 million Toffoli gates. A second version of the estimate put it at 1,450 logical qubits and fewer than 17 million Toffoli gates.


Those numbers need a translation. A logical qubit is not the same as a physical qubit. Physical qubits are the actual fragile hardware components inside a quantum computer — they make errors constantly. Logical qubits are the reliable, error-corrected versions you build by combining many physical ones. The ratio between physical and logical is where the real engineering challenge lives.
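
To make that overhead concrete, here is a back-of-envelope sketch in Python. It uses the common surface-code heuristic of roughly 2d² physical qubits per logical qubit at code distance d, which is a textbook approximation rather than the exact accounting in Google's paper.

```python
# Back-of-envelope surface-code overhead (a textbook heuristic, not
# Google's exact resource accounting). A distance-d surface code needs
# roughly d^2 data qubits plus about as many ancillas per logical qubit.

def physical_per_logical(d: int) -> int:
    """Approximate physical qubits per logical qubit at code distance d."""
    return 2 * d * d

for d in (11, 13, 15, 17):
    per_logical = physical_per_logical(d)
    total = per_logical * 1_200  # the ~1,200 logical qubits in the estimate
    print(f"d={d}: ~{per_logical} physical per logical, ~{total:,} total")
```

At a plausible code distance in the low teens, 1,200 logical qubits lands in the hundreds of thousands of physical qubits, which is exactly the scale the next number describes.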

What makes the Google estimate significant isn’t just the qubit count. It’s that Google did something unusual with the results: instead of publishing the exact attack circuits, they used a zero-knowledge proof to let people verify the claims without revealing the full method. A zero-knowledge proof is a way to say “I can prove I know how this works” without handing over the recipe. The fact that Google felt the need to withhold the attack details — to prevent misuse — tells you something about how seriously they’re treating the practical implications.
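
A full zero-knowledge proof is beyond a blog snippet, but a hash commitment captures one ingredient of the idea: you can bind yourself to a secret now and prove later that you had it, without publishing it up front. This toy sketch is purely illustrative and is not what Google actually used.

```python
# Toy hash commitment (illustrative only; NOT Google's construction).
# A real zero-knowledge proof goes further: it lets a verifier check
# properties of the committed data without any reveal step at all.
import hashlib
import os

secret = b"private attack-circuit description"
nonce = os.urandom(32)

# Publish only the commitment; the secret stays private.
commitment = hashlib.sha256(nonce + secret).hexdigest()
print("published:", commitment)

# Revealing (nonce, secret) later proves you had the secret all along.
assert hashlib.sha256(nonce + secret).hexdigest() == commitment
```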


Number Two: 500,000 Physical Qubits, Execution in Minutes

The logical qubit count is the headline, but the physical qubit estimate is the number that connects theory to hardware. Google estimated that the circuits described above could run on a superconducting quantum computer with fewer than 500,000 physical qubits, and potentially execute in minutes.

To be clear: Google has not built that machine. Nobody has. The largest superconducting quantum computers today are in the range of hundreds to low thousands of physical qubits, and they’re nowhere near the error-correction thresholds needed for cryptographically relevant computation. But the estimate for what the dangerous machine would need has dropped significantly from where it was five years ago. That’s the trend line that matters.
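
Running the two published numbers against each other is a useful sanity check. Dividing the physical budget by the logical count implies roughly 345 physical qubits per logical qubit, which under the same surface-code heuristic as above corresponds to a code distance around 13. This is illustrative arithmetic only; the paper's full resource accounting includes overheads it ignores.

```python
# Sanity-check arithmetic on the published numbers (illustrative only;
# the paper's full resource accounting includes overheads this ignores).
physical_budget = 500_000
logical_needed = 1_450  # the revised logical-qubit estimate

ratio = physical_budget / logical_needed
print(f"~{ratio:.0f} physical qubits per logical qubit")    # ~345

# Under the same 2*d^2 surface-code heuristic as above:
print(f"implied code distance ~ {(ratio / 2) ** 0.5:.1f}")  # ~13.1
```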

The question security planners have to answer isn’t “does this machine exist today?” It’s “how long until it might?” And the honest answer keeps getting shorter. For context on how frontier AI labs are thinking about compute constraints and capability timelines more broadly, the Anthropic compute shortage analysis is worth reading alongside this research — the resource pressures shaping AI development and quantum computing progress are increasingly intertwined.


Number Three: 26,000 Physical Qubits (Under Plausible Assumptions)

The second major research contribution came from a team connected to Caltech and Atom Computing, and this is where the numbers get genuinely surprising. Their paper argues that Shor’s algorithm — the quantum algorithm from the 1990s that can theoretically break public-key cryptography — could run at cryptographically relevant scales with as few as 10,000 reconfigurable atomic qubits. More specifically, the paper estimates that around 26,000 physical qubits could be enough to attack the P-256 elliptic curve in a matter of days, under plausible assumptions.
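
For readers who have never seen Shor's algorithm up close, the skeleton is simple: find the period of a function, then use that period to extract a secret. Here is a purely classical toy that factors tiny integers with that recipe. The quantum machine's job is the period-finding step, and the actual papers target elliptic-curve discrete logarithms rather than factoring, but the shape of the attack is the same.

```python
# Classical toy of Shor's skeleton: find a period, then extract factors.
# Brute-force period finding is exponential classically; the quantum
# computer replaces exactly this step with an efficient one.
from math import gcd

def find_period(a: int, n: int) -> int:
    """Smallest r > 0 with a^r = 1 (mod n)."""
    x, r = a % n, 1
    while x != 1:
        x, r = (x * a) % n, r + 1
    return r

def shor_classical(n: int, a: int):
    g = gcd(a, n)
    if g != 1:
        return g, n // g          # lucky guess already shares a factor
    r = find_period(a, n)
    if r % 2 or pow(a, r // 2, n) == n - 1:
        return None               # unlucky base; retry with another a
    p = gcd(pow(a, r // 2) - 1, n)
    return p, n // p

print(shor_classical(15, 7))      # (3, 5)
print(shor_classical(21, 2))      # (7, 3)
```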

The architecture here is different from Google’s superconducting approach. Atom Computing uses neutral atoms controlled by lasers — a technology that allows qubits to be physically repositioned, which changes the error-correction math considerably. John Preskill, one of the most respected names in quantum computing and a co-author on the paper, told Time he was surprised by how much they managed to reduce the qubit count. That’s not a throwaway quote. Preskill coined the term “quantum supremacy.” When he says he’s surprised, it registers.


The paper has not yet been peer-reviewed, and Princeton’s Jeff Thompson cautioned that many of the assumptions are untested — it is, he noted, very easy to shrink a computer on paper if you assume better qubits. That’s a fair warning. But even a theoretical estimate that’s off by a factor of ten is a different world than the estimates from a decade ago.


Number Four: 1,000x

This is the number that changes the threat model structurally, not just incrementally.

The Atom Computing team used OpenEvolve, an open-source tool that uses large language models to optimize algorithms through a process similar to natural selection. Instead of a human researcher trying a handful of approaches, the AI searches through thousands of possibilities, combining results from niche subfields in ways that a single human working through the literature might never find. The team’s early algorithms were reportedly about a thousand times worse than what they ended up with. One author said plainly that the approach “would not work” before the AI-assisted improvements.
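
The pattern is easy to sketch, even if OpenEvolve's real pipeline is far more elaborate. In the toy loop below, llm_mutate and score are hypothetical stand-ins for an LLM rewriting step and a resource-cost evaluator; the point is the structure: propose variants, score them, keep the fittest.

```python
# Minimal evolutionary-search loop in the style tools like OpenEvolve
# build on (a generic sketch, not OpenEvolve's actual code; llm_mutate
# and score are hypothetical stand-ins).
import random

def llm_mutate(candidate: str) -> str:
    """Stand-in for an LLM call that rewrites a candidate algorithm."""
    return candidate + random.choice(["+opt_a", "+opt_b", "+opt_c"])

def score(candidate: str) -> float:
    """Stand-in fitness, e.g. estimated qubit count (lower is better)."""
    return 1_000_000 / (1 + candidate.count("+"))

population = ["baseline_circuit"]
for _ in range(50):
    offspring = [llm_mutate(p) for p in population for _ in range(4)]
    population = sorted(population + offspring, key=score)[:3]  # keep fittest

print("best candidate:", population[0])
print("final score:", score(population[0]))
```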

This is the part of the story that should make security planners genuinely uncomfortable. The threat to encryption has always been framed as a hardware problem: build a big enough quantum computer and you can break the math. But if AI assistance keeps accelerating algorithmic efficiency — if the qubit count required keeps dropping because researchers keep finding smarter ways to use smaller machines — then the dangerous threshold isn’t fixed. It’s a moving target that’s moving toward us.

The combination of improving hardware, improving algorithms, and AI-accelerated algorithm discovery is qualitatively different from any one of those trends alone. Cloudflare’s deadline shift is a direct response to that combination.

This kind of AI-assisted research acceleration is also showing up in other domains. MindStudio is built around the idea of chaining AI models and agents together for complex workflows — 200+ models, 1,000+ integrations, a visual builder for orchestrating multi-step reasoning across specialized tasks. The OpenEvolve use case is a research-grade version of the same underlying pattern: AI models searching a problem space that humans can’t exhaust manually. For a broader look at how AI agents are being applied to research and analysis tasks specifically, the AI agents for research and analysis roundup covers tools that are accelerating exactly this kind of literature synthesis and problem-space search.


Number Five: 65% (and Why It’s Not Enough)

Here’s the number that makes the 2029 deadline feel both reassuring and insufficient at the same time.

Cloudflare says that more than 65% of human traffic passing through its network is already post-quantum encrypted. It enabled post-quantum encryption for all websites and APIs back in 2022. NIST finalized its first three post-quantum cryptography standards on August 13, 2024, and has been urging system administrators to start transitioning as soon as possible.

So the encryption migration is well underway. The problem is authentication.

Cloudflare’s blog makes the distinction explicit: encryption protects the content of a message — it’s the locked box. Authentication proves identity — it’s the proof that the person holding the key is actually the bank, the software company, or the server you think you’re talking to. Post-quantum authentication is what the 2029 deadline is actually about, and it’s significantly harder to migrate than encryption.

The reason is the nature of the keys involved. Long-lived keys — root certificates, API authentication keys, code signing certificates — are the high-value targets. If a quantum attacker can forge one of those, they don’t need to read your old messages. They can walk through the front door pretending to be someone trusted. They can impersonate a software update server and push malicious code. They can impersonate a certificate authority. The attack surface is identity itself.
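
A practical first step is simply knowing where those long-lived keys are. The sketch below, which assumes a recent version of the Python cryptography package and an illustrative certs/ directory, walks a folder of PEM certificates and flags the ones whose public keys a large quantum computer could break.

```python
# Inventory sketch: flag certificates with quantum-vulnerable keys.
# Assumes a recent `cryptography` package; the certs/ path is illustrative.
from pathlib import Path
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa, ec, ed25519

for pem_file in sorted(Path("certs").glob("*.pem")):
    cert = x509.load_pem_x509_certificate(pem_file.read_bytes())
    key = cert.public_key()
    # RSA, elliptic-curve, and EdDSA keys all fall to Shor-style attacks.
    vulnerable = isinstance(
        key, (rsa.RSAPublicKey, ec.EllipticCurvePublicKey, ed25519.Ed25519PublicKey)
    )
    label = "quantum-vulnerable" if vulnerable else "review manually"
    print(f"{pem_file.name}: expires {cert.not_valid_after_utc:%Y-%m-%d} -> {label}")
```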

And there’s a subtler problem on top of that. Adding post-quantum cryptography isn’t sufficient on its own. Systems also have to disable the old, quantum-vulnerable cryptography — otherwise attackers can run what’s called a downgrade attack, tricking two systems into negotiating the weaker protocol even when a stronger one is available. Once the old cryptography is disabled, secrets like passwords and access tokens may need to be rotated. That creates a dependency chain involving third parties, validation systems, and fraud monitoring. It’s not an app update. It’s closer to replacing the locks, keys, ID cards, alarm systems, and backup access routes across the entire digital world while the building is still open and people are still walking through it.


The Harvest Now, Decrypt Later Problem

There’s one more dimension to this that the 2029 deadline doesn’t fully capture, and it’s the reason the NSA, CISA, and NIST have all issued explicit warnings about it.

The threat isn’t only future decryption. It’s present-day collection.

An adversary with sufficient resources can collect encrypted data today — government communications, medical records, business secrets, long-term personal information — and store it until a quantum computer capable of breaking the encryption exists. The data doesn’t need to be readable now. It just needs to be captured. This is the “harvest now, decrypt later” attack vector, and it means the clock on sensitive data started running years ago, not in 2029.

This is why the migration timeline matters even for organizations that don’t think they’re obvious targets. If your data needs to stay confidential for more than a decade — and a lot of data does — the encryption protecting it today may not be adequate for the full period it needs to remain secret.
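
There is a standard way to reason about this, often attributed to cryptographer Michele Mosca: if the time your data must stay secret plus the time your migration takes exceeds the time until a cryptographically relevant quantum computer arrives, you are already exposed. The year values below are assumptions for illustration, not predictions.

```python
# Mosca-style exposure check. All three values are illustrative
# assumptions, not predictions.
shelf_life_years = 10     # how long the data must stay confidential
migration_years = 4       # now through Cloudflare's 2029 deadline
years_until_crqc = 12     # assumed arrival of a relevant quantum machine

exposed = shelf_life_years + migration_years > years_until_crqc
print("harvest-now-decrypt-later exposure:", exposed)  # True under these assumptions
```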

The engineering challenge of migrating at scale is real. For teams building applications that handle sensitive data, the spec-level decisions matter enormously — what cryptographic primitives you’re using, how keys are managed, how authentication flows are structured. Tools like Remy take a different approach to application architecture: you write a spec in annotated markdown, and the full-stack application — TypeScript backend, SQLite database, auth, deployment — gets compiled from it. The spec is the source of truth. When the underlying standards change, you update the spec and recompile, rather than hunting through layers of hand-written code for every cryptographic dependency.


What 2029 Actually Means

A 2029 deadline sounds far away until you account for how long infrastructure migrations actually take.

Cloudflare is one of the most technically sophisticated infrastructure companies in the world, and it’s saying 2029 is the target for full post-quantum security including authentication. That’s not a comfortable buffer — that’s a deadline with real engineering work behind it. The fact that they’re already at 65%+ for encryption and still treating 2029 as a stretch goal for the full picture tells you something about the scope of what’s left.


The research that forced this deadline shift is worth sitting with. Google’s estimate of fewer than 500,000 physical qubits to break 256-bit elliptic curve cryptography. The Caltech/Atom Computing paper’s estimate of 26,000 physical qubits to attack P-256 in days. The 1,000x improvement in algorithmic efficiency that AI assistance made possible. The zero-knowledge proof Google used to verify its attack circuits without publishing the full method. And NIST’s August 2024 finalization of the first three post-quantum standards, which started the official clock on migration.

None of this means a quantum computer is breaking your bank account tomorrow. The machines described in these papers don’t exist yet. But the estimates for what those machines would need to look like keep getting smaller, and the tools for finding more efficient algorithms keep getting better. The threat is not static.

Cloudflare’s decision to move from 2035 to 2029 is the most concrete signal yet that the people responsible for keeping the internet running have looked at the trajectory and concluded that the old timeline was wrong. That’s worth taking seriously — not because the sky is falling, but because the people who have to catch it if it does are already running.

If you’re thinking about the security implications of frontier AI models specifically, the Claude Mythos capability analysis covers how Anthropic is thinking about cybersecurity benchmarks at the frontier — which is increasingly relevant context for anyone thinking about AI and security in the same breath. And for a direct comparison of how the latest frontier models stack up on capability, the GPT-5.4 vs Claude Opus 4.6 comparison is useful context for understanding which AI tools are actually available for the kind of algorithmic research that’s accelerating this threat landscape.

The deadline moved. The question now is whether your migration plan did too.

Presented by MindStudio
