
OpenEvolve Cut the Qubit Count for Breaking Encryption by 1000x — How an LLM Optimizer Changed the Threat Timeline

The Atom Computing team said their quantum attack approach 'would not work' before AI assistance. OpenEvolve's LLM-based optimizer changed that by 1000x.

MindStudio Team

OpenEvolve Reduced the Qubit Count for Breaking Encryption by ~1000x

The Atom Computing team’s quantum attack approach “would not work” before they ran it through OpenEvolve. That’s not a paraphrase — that’s what the researchers told Time. After using the open-source LLM-based optimizer, the required qubit count dropped by approximately 1000x. That single fact is what caused Cloudflare to move its post-quantum security deadline from 2035 to 2029.

If you build systems that handle authentication, sign code, or store data that needs to stay private for years, this is the research you need to understand — not because the threat is here today, but because the trajectory just changed.

OpenEvolve is the tool at the center of this. It uses large language models to optimize algorithms through a process analogous to natural selection: generate candidates, evaluate them, select the best, repeat. Instead of a human researcher trying a handful of approaches, the LLM can search through thousands of possibilities across niche subfields simultaneously. The Caltech/Atom Computing team used it to improve the most important algorithms in their paper, and the results surprised even John Preskill, one of the most respected names in quantum computing and a co-author on the paper, who told Time he had not expected the qubit count to come down as far as it did.

That’s the story. Not “AI is coming for encryption.” More precisely: AI-assisted algorithm search just made the required quantum hardware significantly smaller than anyone expected.


What OpenEvolve Actually Did


The Caltech/Atom Computing paper argues that Shor’s algorithm — the quantum algorithm from the 1990s that can theoretically break public-key cryptography — could run at cryptographically relevant scales with as few as 10,000 reconfigurable atomic qubits. The paper also estimates that around 26,000 physical qubits could attack the P-256 elliptic curve problem in a matter of days, under plausible assumptions.

Those numbers matter because older estimates put the required hardware at a scale that felt safely distant. The “too small and too error-prone” comfort zone has been the standard answer to quantum threat questions for years. This paper moves that line.

OpenEvolve’s role was specific. The team used it to optimize the algorithms that determine how efficiently a quantum circuit can be constructed and executed. Early versions of those algorithms were reportedly about 1000x worse. The AI didn’t design the quantum attack from scratch — humans were still driving the research, asking the right questions, and guiding the search. But the LLM could explore combinations of past scientific results across subfields that no single human researcher would have time to survey manually.

This is a meaningful distinction. OpenEvolve isn’t doing physics. It’s doing combinatorial search over a space of algorithmic choices, using LLMs to generate and evaluate candidates. The researchers said the AI “combined past scientific results in a novel way across niche subfields of quantum computing.” That’s a description of retrieval-augmented synthesis, not autonomous discovery — but the output was still a 1000x improvement in a metric that determines whether a quantum computer needs to be the size of a warehouse or the size of a room.

The paper itself has not yet been peer-reviewed, and Princeton’s Jeff Thompson has noted that many assumptions are untested. Shrinking a computer on paper is easier than shrinking a physical one. But the direction of travel is clear, and the researchers are credible. The broader pattern — AI systems finding non-obvious improvements in algorithm efficiency — is also showing up in frontier model research; for context on how rapidly model capabilities are shifting in parallel, see Claude Mythos Benchmarks: 93.9% SWE-Bench and 59% Multimodal Score.


Why the 1000x Number Changes the Threat Model

The standard framing of quantum risk has always been: yes, Shor’s algorithm can break RSA and elliptic curve cryptography in theory, but the machine you’d need is enormous, expensive, and decades away. That framing depends on the qubit estimates staying large.

When those estimates drop by three orders of magnitude, the threat model changes in two ways.

First, the hardware threshold becomes achievable on a shorter timeline. Google’s separate research estimated that attacking the 256-bit elliptic curve discrete logarithm problem would require fewer than 1,200 logical qubits and fewer than 19 million Toffoli gates — or in an alternate configuration, 1,450 logical qubits and fewer than 17 million Toffoli gates. Google also estimated that under standard assumptions, this could run on a superconducting quantum computer with fewer than 500,000 physical qubits, potentially executing in minutes. That’s still a machine that doesn’t exist yet. But it’s a machine that looks like a plausible engineering target, not a science fiction premise.


Second, the rate of improvement is now faster than expected, because AI is in the loop. If OpenEvolve can find a 1000x improvement in algorithm efficiency in one research cycle, the question becomes: what does the next cycle find? The threat isn’t just that quantum computers are getting bigger — it’s that the algorithms they run are getting more efficient, and AI is accelerating that process.

This is why Cloudflare’s response was to move its deadline six years forward, not to issue a reassuring blog post. Cloudflare is an infrastructure company. When they say “it’s a real shock, we’ll need to speed up our efforts considerably,” that’s an operational statement, not a marketing one.

The same dynamic — AI-assisted search producing results that outpace human intuition — is visible in coding benchmarks as well. The Qwen 3.6 Plus Review: Alibaba’s Frontier-Level Agentic Coding Model covers a model that similarly surprised researchers with its ability to navigate large combinatorial search spaces in software. The OpenEvolve result is a more dramatic version of the same underlying phenomenon.


The Authentication Problem Nobody Talks About Enough

Most coverage of quantum cryptography risk focuses on encryption — the idea that a quantum computer could decrypt intercepted traffic. That’s real, and the “harvest now, decrypt later” attack vector is serious: the NSA, CISA, and NIST have all warned that adversaries can collect encrypted data today and wait for quantum hardware to mature before decrypting it. Government files, medical records, long-term business secrets — anything that needs to stay private for a decade or more is already at risk from this pattern.

But Cloudflare’s framing points at something less discussed: authentication.

Encryption protects the contents of a message. Authentication proves identity — that the server you’re talking to is actually the bank, that the software update you’re installing was actually signed by the vendor, that the API key belongs to who it claims to belong to. Long-lived keys are the specific vulnerability here: root certificates, API authentication keys, and code signing certificates don’t rotate frequently. If a quantum attacker can forge those keys, they can impersonate trusted systems — not just read old messages.

Cloudflare has had post-quantum encryption enabled for all websites and APIs on its network since 2022, and reports that more than 65% of human traffic through its network is already post-quantum encrypted. But post-quantum authentication is the harder problem, and that’s what the 2029 deadline is actually about.

The dependency chain is long. Adding post-quantum cryptography isn’t sufficient — you also have to disable the quantum-vulnerable cryptography, or attackers can use downgrade attacks to force systems back to weaker methods. After that, secrets like passwords and access tokens may need rotation. Third parties need to be coordinated. Fraud monitoring systems need updating. NIST finalized its first three post-quantum cryptography standards on August 13, 2024, and has been encouraging system administrators to begin transitioning — but “begin transitioning” and “fully migrated” are very different states.


What’s Buried in the Google Paper

Google did something unusual with its quantum attack research: instead of publishing the full attack circuits, the team used a zero-knowledge proof to let people verify their claims without revealing the sensitive details.


A zero-knowledge proof lets you prove you know something without showing what you know. In this context, Google is saying: we have the attack circuit, we can prove it works, but we’re not going to hand you the blueprint. That’s a deliberate choice, and it signals something about where the field is heading. Once quantum attacks become sufficiently practical, publishing every detail of an optimized attack circuit becomes a different kind of decision than publishing a theoretical paper.
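For intuition, a Schnorr-style identification protocol is the classic toy example of a zero-knowledge proof of knowledge: the prover convinces a verifier that it knows a discrete logarithm x without revealing it. This sketch uses deliberately tiny parameters and is not Google's construction, which is far more elaborate:

```python
import secrets

# Toy Schnorr identification: a zero-knowledge proof of knowledge of x
# such that y = g^x (mod p). Parameters are illustrative, not secure sizes.
p = 2267                      # prime modulus; p - 1 = 2 * 11 * 103
q = 103                       # prime order of the subgroup (q divides p - 1)
g = pow(2, (p - 1) // q, p)   # generator of the order-q subgroup

x = secrets.randbelow(q - 1) + 1   # prover's secret
y = pow(g, x, p)                   # prover's public key

# Commit: prover sends t, built from a fresh random nonce r.
r = secrets.randbelow(q - 1) + 1
t = pow(g, r, p)

# Challenge: verifier sends a random c.
c = secrets.randbelow(q)

# Response: prover sends s; without r, s reveals nothing about x.
s = (r + c * x) % q

# Verify: g^s == t * y^c (mod p) convinces the verifier the prover knows x.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("verified without revealing x")
```

The verifier learns that the prover knows x, and nothing else; that is the same shape of guarantee Google is relying on, scaled up to attack circuits instead of a single exponent.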

This is the part of the story that doesn’t get enough attention. The research community is already starting to treat quantum attack optimization the way it treats certain classes of vulnerability research — with disclosure norms, not just open publication. That’s a meaningful shift.

It also means that the published estimates — fewer than 1,200 logical qubits, fewer than 19 million Toffoli gates, fewer than 500,000 physical qubits — are the sanitized version. The actual circuits may be more efficient than what’s been disclosed.

The broader pattern here is that AI-assisted algorithm search is now producing results that researchers themselves find surprising. Preskill said he was surprised by the magnitude of the qubit reduction. When the people running the experiments are surprised by the outputs, that’s a signal that the search space is being explored faster than human intuition can track. This is exactly the kind of problem that MindStudio is built around — orchestrating 200+ AI models and 1,000+ integrations through a visual builder to explore large solution spaces that would be intractable for a single model or a single human. The OpenEvolve approach is a research-grade version of the same underlying idea: use LLMs to search, evaluate, and iterate at a scale that humans can’t match manually.


What OpenEvolve Is and How to Think About It

OpenEvolve is open-source. The repository describes it as an implementation of AlphaEvolve-style evolutionary algorithm search using LLMs. The core loop is: generate candidate algorithms (or algorithm modifications), evaluate them against a fitness function, keep the best, repeat. The LLM handles the generation step — proposing modifications based on what’s worked before and what it knows from training.
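A minimal sketch of that loop in Python, under stated assumptions: `llm_propose` is a hypothetical stand-in for the model call, `fitness` is supplied by the caller, and OpenEvolve's real API is richer than this. It shows only the shape of the search:

```python
import random

def evolve(seed_program: str, fitness, llm_propose,
           generations: int = 100, population: int = 20):
    """Evolutionary search: the LLM proposes variants, fitness selects survivors."""
    pool = [(seed_program, fitness(seed_program))]
    for _ in range(generations):
        # Generation step: ask the LLM to modify a randomly chosen survivor.
        parent, _ = random.choice(pool)
        child = llm_propose(parent)          # hypothetical LLM call
        pool.append((child, fitness(child)))
        # Selection step: keep only the highest-scoring candidates.
        pool.sort(key=lambda pair: pair[1], reverse=True)
        pool = pool[:population]
    return pool[0]  # best (program, score) pair found
```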

For the Atom Computing team, the fitness function was presumably something like: how many qubits does this circuit require? How many Toffoli gates? The LLM proposes modifications to the quantum circuit construction algorithm, the evaluator measures resource requirements, and the loop runs until the numbers stop improving.
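If that guess is right, the evaluator might have the shape below. This is an illustrative toy, not the team's code: the circuit representation and the weighting between qubits and Toffoli gates are assumptions.

```python
def count_qubits(circuit: list[tuple]) -> int:
    """Toy estimator: a circuit is a list of (gate_name, qubit_indices) tuples."""
    return 1 + max(q for _, qubits in circuit for q in qubits)

def count_toffoli_gates(circuit: list[tuple]) -> int:
    return sum(1 for gate, _ in circuit if gate == "toffoli")

def circuit_fitness(circuit: list[tuple]) -> float:
    # Negate so higher fitness means a cheaper circuit; weighting is illustrative.
    return -(count_qubits(circuit) + 1e-6 * count_toffoli_gates(circuit))

# Example: a 3-qubit circuit with one Hadamard and one Toffoli gate.
demo = [("h", (0,)), ("toffoli", (0, 1, 2))]
print(circuit_fitness(demo))  # -3.000001
```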

This is not magic. It’s structured search with a very capable proposal generator. But “structured search with a very capable proposal generator” turns out to be enough to find improvements that human researchers missed across decades of work on Shor’s algorithm variants.

The implications for AI-assisted scientific research are significant. This isn’t a case where the AI is doing the science — humans defined the problem, chose the fitness function, and interpreted the results. But the AI is doing the exploration, and the exploration is finding things that matter. That’s a new kind of research workflow, and it’s going to show up in more fields. For a parallel look at how AI is accelerating capability jumps in a different domain, Claude Code Effort Levels Explained: When to Use Low, Medium, High, and Max covers how compute allocation decisions in coding agents produce non-linear output improvements — the same tradeoff structure that makes OpenEvolve’s search loops expensive but productive.

For engineers thinking about what this means for their own work: the pattern is replicable. If you have an optimization problem with a well-defined fitness function and a large combinatorial search space, LLM-based evolutionary search is now a serious option. The quantum algorithm case is dramatic because the stakes are high, but the technique is general. Tools like Remy apply a related abstraction to software development — you write a spec as annotated markdown, and the system compiles it into a complete TypeScript backend, database, auth layer, and deployment. The underlying idea is the same: move the human to a higher level of abstraction, let the automated system handle the search over implementation space.


What to Watch and What to Do Now

The honest assessment: no quantum computer exists today that can break P-256. The Caltech paper is a theoretical resource estimate, not a working system. The assumptions are untested. Peer review hasn’t happened yet.

But the trajectory is what matters. Three things are moving simultaneously: quantum hardware is improving, algorithms are getting more efficient, and AI is accelerating the algorithm improvement process. The combination means the timeline is compressing faster than the security community’s previous models assumed.

For engineers and AI builders, the practical watchpoints are:

The NIST post-quantum standards are finalized and available. ML-KEM (formerly CRYSTALS-Kyber), ML-DSA (formerly CRYSTALS-Dilithium), and SLH-DSA (formerly SPHINCS+) are the three standards from August 2024. If you’re building new systems that handle authentication or long-lived keys, there’s no good reason not to be using these now.
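As a concrete starting point, here is what ML-KEM-768 key encapsulation looks like with the open-source liboqs-python bindings (the `oqs` module from the Open Quantum Safe project), assuming a liboqs build that includes ML-KEM; a sketch for experimentation, not a production key exchange:

```python
import oqs  # liboqs-python bindings (github.com/open-quantum-safe/liboqs-python)

with oqs.KeyEncapsulation("ML-KEM-768") as receiver:
    # Receiver generates an ML-KEM-768 keypair and publishes the public key.
    public_key = receiver.generate_keypair()

    # Sender encapsulates a fresh shared secret against that public key.
    with oqs.KeyEncapsulation("ML-KEM-768") as sender:
        ciphertext, shared_secret_sender = sender.encap_secret(public_key)

    # Receiver recovers the same shared secret from the ciphertext.
    shared_secret_receiver = receiver.decap_secret(ciphertext)

assert shared_secret_sender == shared_secret_receiver
```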

Long-lived keys are the highest-priority target. Root certificates, code signing certificates, API authentication keys — anything that won’t rotate for years is the thing to audit first. The harvest-now-decrypt-later threat is already active for data; the authentication forgery threat becomes active when quantum hardware reaches the thresholds these papers are estimating.

Disabling old cryptography is as important as adding new. A system that supports both TLS 1.2 with RSA and post-quantum key exchange is vulnerable to downgrade attacks. Adding PQC without removing the old path doesn’t solve the problem.
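A toy illustration of the fail-closed principle (the policy logic is hypothetical; the group names are the IANA-registered hybrid TLS groups):

```python
# Hybrid post-quantum TLS key-exchange groups (IANA-registered names).
PQ_GROUPS = {"X25519MLKEM768", "SecP256r1MLKEM768"}

def negotiate_group(client_offered: list[str]) -> str:
    """Pick a key-exchange group, refusing quantum-vulnerable fallbacks."""
    for group in client_offered:
        if group in PQ_GROUPS:
            return group
    # No PQ-capable option: fail closed instead of downgrading to e.g. X25519.
    raise ConnectionError("client offers no post-quantum key exchange")

print(negotiate_group(["X25519", "X25519MLKEM768"]))  # -> "X25519MLKEM768"
```

The design choice is the last branch: a server that merely prefers the post-quantum group, but still accepts classical-only clients, leaves the downgrade path open.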

OpenEvolve itself is worth watching. The repository is active and the technique is general. If you work on algorithm optimization problems — not just quantum computing — this is a tool worth understanding. The quantum cryptography application is the headline, but the method is applicable anywhere you have a large search space and a computable fitness function.

The thing Preskill said that deserves to stay with you: he was surprised by how much the qubit count came down. When the person who coined the term “quantum supremacy” is surprised by the results of his own team’s research, the right response is to update your priors about how fast this is moving.
