How to Start Your Post-Quantum Migration Before 2029: A Practical Checklist for Engineering Teams
NIST published three PQC standards in August 2024. Here's the practical migration checklist for engineering teams who need to act before the 2029 window closes.
You Have Until 2029. Here’s What to Do With That Time.
NIST finalized its first three post-quantum cryptography standards on August 13, 2024. That date is your starting gun. The finish line is somewhere around 2029 — Cloudflare just moved its own internal deadline from 2035 to 2029, citing the new research, and they’re not a company that makes that kind of call casually. If you’re an engineering team responsible for any system that handles authentication, encrypted data, or long-lived keys, you have roughly four years to migrate. That sounds like a lot. It isn’t.
The reason it isn’t is “harvest now, decrypt later.” The NSA, CISA, and NIST have all warned about this attack vector explicitly. An adversary doesn’t need a quantum computer today to threaten your data today. They just need to collect your encrypted traffic now and wait. If your data needs to stay secret for more than a few years — government records, medical data, financial transactions, API keys — the harvesting may already be happening.
This post is a practical checklist. Not a theoretical overview of post-quantum cryptography. Not a history of Shor’s algorithm. A checklist for engineering teams who need to start moving.
Step 1: Know What You’re Actually Protecting
Before you touch a single config file, you need an inventory. The goal is to find every place your systems rely on public-key cryptography — because that’s what quantum computers threaten.
Public-key cryptography is the math behind TLS handshakes, digital signatures, certificate chains, and key exchange. Symmetric encryption is not the target: AES-256 is fine for now, since Grover’s algorithm at most halves effective key strength. The threat is specifically the asymmetric stuff: RSA, elliptic curve (ECDSA, ECDH), and Diffie-Hellman.
Start by asking three questions:
Where do you do key exchange? Every TLS connection your servers make or accept. Every API call that negotiates a session. Every VPN tunnel.
Where do you use digital signatures? Code signing certificates. JWT signing keys. Root certificates and intermediate CAs. SSH host keys.
Where do you have long-lived keys? This is the dangerous category. Root certificates live for years. API authentication keys often never rotate. Code signing certificates can be valid for a decade. These are the high-value targets Cloudflare specifically called out — because if a quantum attacker forges one of these, they can impersonate your software update server, your API gateway, or your certificate authority.
Write this down. Literally. A spreadsheet is fine. You want columns for: system name, cryptographic primitive in use, key lifetime, and data sensitivity. You can’t prioritize what you haven’t mapped.
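If a spreadsheet feels too loose, the same inventory can live in a few lines of code. A minimal Python sketch, where the field names mirror the columns above and every entry is illustrative, not real:

```python
from dataclasses import dataclass

@dataclass
class CryptoAsset:
    """One row of the cryptographic inventory from Step 1."""
    system: str            # e.g. "api-gateway"
    primitive: str         # e.g. "RSA-2048", "ECDSA-P256", "X25519"
    key_lifetime_days: int
    data_sensitivity: str  # e.g. "low", "medium", "high"

# Illustrative entries -- real ones come from your own audit.
inventory = [
    CryptoAsset("public-web-tls", "X25519", 90, "medium"),
    CryptoAsset("code-signing", "RSA-3072", 3650, "high"),
    CryptoAsset("internal-root-ca", "ECDSA-P384", 5475, "high"),
]

# Keys that outlive a year are the harvest-now-decrypt-later targets.
long_lived = [a for a in inventory if a.key_lifetime_days > 365]
```

Even this toy version makes the prioritization query from Step 5 a one-liner instead of a judgment call buried in a spreadsheet.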
Step 2: Understand the Three NIST Standards
NIST finalized three post-quantum cryptography standards in August 2024. You need to know what they’re for, because they’re not interchangeable.
ML-KEM (FIPS 203) — formerly CRYSTALS-Kyber. This is for key encapsulation — the mechanism you use to establish a shared secret over an untrusted channel. It replaces ECDH and RSA key exchange. This is what goes into your TLS handshake.
ML-DSA (FIPS 204) — formerly CRYSTALS-Dilithium. This is for digital signatures. It replaces ECDSA and RSA signatures. This is what you use for code signing, certificate signing, and authentication tokens.
SLH-DSA (FIPS 205) — formerly SPHINCS+. Also for digital signatures, but based on hash functions rather than lattices. It’s slower and produces larger signatures, but it’s a different mathematical foundation — useful as a hedge if lattice-based schemes turn out to have weaknesses.
For most teams, the practical starting point is ML-KEM for key exchange and ML-DSA for signatures. SLH-DSA is worth knowing about but probably not your first deployment.
One thing that trips people up: these standards don’t replace TLS or SSH or JWT. They replace the cryptographic primitives inside those protocols. Your TLS library still does TLS — it just uses ML-KEM for the key exchange step instead of ECDH.
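The shape of a KEM is worth internalizing, because it differs from Diffie-Hellman: one side encapsulates a fresh secret against the other side’s public key and sends back a ciphertext. Here is a deliberately non-cryptographic Python mock of that call pattern — it shows the interface ML-KEM exposes, not the algorithm, and must never be used for real security (a production implementation would come from a vetted library such as liboqs):

```python
import hashlib
import os

# TOY ONLY: hash-based stand-ins for key generation. Real ML-KEM keys
# are lattice objects; only the three-function call pattern is accurate.
def keygen():
    sk = os.urandom(32)
    pk = hashlib.sha256(sk).digest()
    return pk, sk

def encapsulate(pk):
    """Sender side: produce a ciphertext plus a fresh shared secret."""
    ct = os.urandom(32)
    ss = hashlib.sha256(pk + ct).digest()
    return ct, ss

def decapsulate(sk, ct):
    """Receiver side: recover the same shared secret from the ciphertext."""
    pk = hashlib.sha256(sk).digest()
    return hashlib.sha256(pk + ct).digest()
```

The key point the mock captures: unlike ECDH, there is no symmetric “both sides contribute a keypair” exchange; one party generates, the other encapsulates.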
Step 3: Audit Your Dependencies, Not Just Your Code
Here’s where most teams underestimate the work. You probably don’t implement cryptography yourself. You use libraries. And those libraries may or may not support the new standards yet.
Check your TLS library. OpenSSL first gained experimental post-quantum support via the OQS (Open Quantum Safe) provider, and OpenSSL 3.5 added native implementations of ML-KEM, ML-DSA, and SLH-DSA. BoringSSL (used by Chrome and Android) has been adding support. LibreSSL is behind. If you’re on a managed cloud service, check whether your provider’s TLS termination supports hybrid post-quantum key exchange — AWS, Google Cloud, and Cloudflare have all started rolling this out.
Check your JWT library. If you’re signing JWTs with RS256 or ES256, you’ll eventually need ML-DSA support. Most JWT libraries don’t have this yet. Know that gap exists.
Check your certificate tooling. If you issue internal certificates, your CA software needs to support the new algorithms. Step CA and CFSSL are both actively working on this. Let’s Encrypt has announced plans but hasn’t deployed PQC certificates yet.
Check your HSMs and key management systems. Hardware security modules are the hardest part of this migration. Many HSMs have firmware that can’t be updated to support new algorithms — you may need hardware replacement, not just software updates. Find out now, not in 2028.
The output of this step is a dependency gap list: every library or service in your stack that doesn’t yet support the NIST standards, with a rough estimate of when it will or what the migration path is.
Step 4: Start With Hybrid Mode
You don’t have to choose between old cryptography and new cryptography on day one. The right starting point for most systems is hybrid mode — running both simultaneously.
In hybrid TLS, the key exchange uses both ECDH and ML-KEM. The session key is derived from both. This means: if the post-quantum algorithm has a bug, you still have classical security. If the classical algorithm gets broken by a quantum computer, you still have post-quantum security. You get both, at the cost of slightly larger handshakes.
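The “derived from both” property is the whole point of the hybrid construction. A simplified sketch of that derivation step in Python — this is not the real TLS 1.3 key schedule (the IETF hybrid key exchange design concatenates the two shared secrets inside the standard schedule), just the security argument in code:

```python
import hashlib
import hmac

def hybrid_session_secret(ecdh_ss: bytes, mlkem_ss: bytes,
                          salt: bytes = b"") -> bytes:
    """Derive one session secret from BOTH shared secrets.

    An attacker must recover *both* inputs to recover the output:
    breaking the lattice scheme alone, or breaking ECDH alone with a
    quantum computer, leaves the session key intact.
    """
    ikm = ecdh_ss + mlkem_ss  # concatenate, classical secret first
    # HKDF-Extract-style step using HMAC-SHA256 (stdlib only).
    return hmac.new(salt or b"\x00" * 32, ikm, hashlib.sha256).digest()
```

Changing either input secret changes the output, which is exactly the belt-and-suspenders guarantee hybrid mode buys you.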
Cloudflare has been doing this since 2022 for encryption — they report that over 65% of human traffic through their network is already post-quantum encrypted, using hybrid key exchange. That’s not a future plan; it’s already deployed at scale.
For your own services, hybrid mode is the right first deployment because it’s low-risk. You’re not betting everything on algorithms that have been in the wild for less than a year. You’re adding a layer.
The concrete implementation path depends on your stack. If you’re using nginx or Caddy with a modern OpenSSL build, you can enable hybrid key exchange via configuration. If you’re on a cloud load balancer, check the provider’s documentation — most have a toggle for this now. If you’re writing your own TLS client (please don’t, but if you are), look at the Open Quantum Safe liboqs library, which provides implementations of the NIST algorithms.
Step 5: Prioritize Long-Lived Keys First
Not everything needs to migrate at the same pace. The harvest-now-decrypt-later threat is most acute for data and keys with long lifetimes.
Here’s a rough priority order:
Highest priority: Root certificates and intermediate CAs. These have multi-year lifetimes and are the trust anchors for everything else. If an attacker harvests traffic signed by your root CA today, they can potentially forge certificates later. Plan to issue post-quantum root certificates as soon as your toolchain supports it.
High priority: Code signing certificates. Software you ship today may be installed and trusted for years. A quantum-forged code signing certificate could be used to push malicious updates to software that’s already deployed. This is the authentication threat Cloudflare flagged explicitly.
High priority: API authentication keys that don’t rotate. If you have service-to-service API keys that were issued once and never changed, those are targets. Implement rotation now, even before you migrate the algorithm.
Medium priority: TLS certificates for public-facing services. These typically have 90-day lifetimes (if you’re using Let’s Encrypt), so the harvest-now-decrypt-later risk is lower. Still migrate, but this isn’t where the fire is.
Lower priority: Session tokens and ephemeral keys. Short-lived keys are much less valuable to a harvest-now attacker. Migrate these last.
This prioritization is where a lot of teams get it wrong — they focus on the visible stuff (their HTTPS endpoints) and miss the invisible stuff (their internal CA, their code signing pipeline, their long-lived service credentials).
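The priority order above can be encoded directly so nothing falls through the cracks. A rough Python scoring sketch — the role names and thresholds are illustrative choices, not a standard:

```python
def migration_priority(role: str, key_lifetime_days: int,
                       rotates: bool) -> int:
    """Lower number = migrate sooner. Mirrors the priority list above."""
    if role in ("root-ca", "intermediate-ca"):
        return 0  # trust anchors: highest priority
    if role == "code-signing":
        return 1  # shipped software stays trusted for years
    if role == "api-key" and not rotates:
        return 1  # never-rotated service credentials
    if role == "tls-cert" and key_lifetime_days <= 90:
        return 2  # short-lived public certs: lower harvest risk
    if role in ("session-token", "ephemeral"):
        return 3  # migrate last
    return 2      # default: medium
```

Run something like this over the Step 1 inventory and the invisible stuff — the internal CA, the signing pipeline — sorts itself to the top instead of being forgotten.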
Step 6: Don’t Just Add — Also Remove
This is the step that’s easiest to skip and most dangerous to skip.
Adding post-quantum cryptography to your systems is not enough. You also have to disable the old, quantum-vulnerable cryptography. If you don’t, attackers can use a downgrade attack: they trick your system into negotiating with the older, weaker algorithm even though you support the newer one.
Concretely: if your TLS server supports both TLS 1.2 with ECDH and TLS 1.3 with hybrid ML-KEM, an attacker in the middle can sometimes force the connection to use TLS 1.2. You’ve added security, but you haven’t actually secured anything.
The fix is to disable the old cipher suites and key exchange methods after you’ve confirmed that your clients can handle the new ones. This requires knowing your client population. If you have old embedded devices or legacy clients that can’t be updated, you have a harder problem — but you need to know about it now, not after you’ve disabled RSA key exchange.
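In a Python client, for example, refusing pre-1.3 negotiation is two lines; the same idea applies to whatever language or proxy terminates TLS for you (this closes the protocol-version downgrade path, though cipher and group policy still need attention separately):

```python
import ssl

# Refuse to negotiate anything below TLS 1.3, so a machine in the
# middle cannot steer the connection back to TLS 1.2 key exchange.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3
```

Roll this out client-population by client-population, watching the monitoring from Step 8 for connections that start failing.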
Cloudflare made this point explicitly: after old cryptography is turned off, secrets like passwords and access tokens may need to be rotated, which creates a dependency chain involving third parties, validation, and fraud monitoring. Plan for that chain. It’s longer than you think.
Step 7: Build a Migration Spec, Not Just a Migration Plan
A migration plan is a list of tasks. A migration spec is a document that captures the current state, the target state, the constraints, and the decision rationale — something you can hand to a new engineer in six months and have them understand what’s happening and why.
This matters because post-quantum migration is a multi-year project. People will leave. Context will be lost. The spec is the artifact that survives.
Your spec should include: the inventory from Step 1, the dependency gap list from Step 3, the priority order from Step 5, the rollback plan for each migration, and the criteria for declaring each system “done.” Done means: post-quantum algorithm deployed, old algorithm disabled, keys rotated, monitoring in place.
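The “done” criteria are worth making machine-checkable rather than leaving as prose. A small sketch — the field names are invented for illustration, but they map one-to-one onto the definition of done above:

```python
from dataclasses import dataclass

@dataclass
class SystemMigration:
    """Per-system completion state tracked by the migration spec."""
    pqc_deployed: bool
    legacy_disabled: bool
    keys_rotated: bool
    monitoring_in_place: bool

    def is_done(self) -> bool:
        # "Done" means ALL four criteria, not just deployment.
        return (self.pqc_deployed and self.legacy_disabled
                and self.keys_rotated and self.monitoring_in_place)
```

Encoding it this way prevents the common failure mode where a system gets marked complete the day the new algorithm ships, with the old one still enabled underneath.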
For teams managing complex multi-system migrations, it’s also worth deciding early how the work will be coordinated across engineers: who owns the spec, who tracks dependency support as it lands, and who signs off on each system’s cutover.
Step 8: Set Up Monitoring for Cryptographic Failures
Once you start deploying post-quantum algorithms, you need to know when they fail. The new algorithms have different failure modes than the old ones — larger key sizes, different timing characteristics, different error conditions.
Instrument your TLS handshakes. Log negotiated cipher suites. Alert on fallback to classical-only key exchange. Track certificate expiration for any post-quantum certificates you issue.
This is also where you catch downgrade attacks in practice. If you see a sudden spike in connections negotiating with old cipher suites after you’ve disabled them, something is wrong — either a client you didn’t know about, or an active attacker.
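A minimal sketch of the fallback alert: classify each logged handshake by its negotiated key-exchange group and flag classical-only negotiation. The hybrid group names below are the real identifiers used in the IETF hybrid key exchange drafts (X25519MLKEM768 and friends); the flat list-of-strings log format is an assumption for illustration:

```python
# Hybrid post-quantum groups from the TLS ECDHE-MLKEM drafts.
HYBRID_GROUPS = {"X25519MLKEM768", "SecP256r1MLKEM768"}

def classify_handshake(negotiated_group: str) -> str:
    """Label one TLS handshake by the key-exchange family it used."""
    if negotiated_group in HYBRID_GROUPS:
        return "hybrid-pq"
    return "classical-only"

def fallback_alerts(handshake_log: list[str]) -> list[str]:
    """Return the classical-only negotiations that should page someone."""
    return [g for g in handshake_log
            if classify_handshake(g) == "classical-only"]
```

In practice you would feed this from whatever your TLS terminator logs; the important part is that a nonzero alert rate after you have disabled classical key exchange is a signal, not noise.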
Step 9: Test Against the NIST Test Vectors
Before you ship anything, test your implementations against the official NIST test vectors. NIST publishes known-answer tests for all three standards — input/output pairs that any correct implementation must match exactly.
This sounds obvious, but it’s easy to skip when you’re using a library and assuming it’s correct. Libraries have bugs. Especially new libraries implementing new standards. Run the test vectors. If your library doesn’t pass them, don’t deploy it.
The NIST test vectors are available from the NIST post-quantum cryptography project page. Download them. Write a test that runs them. Make that test part of your CI pipeline.
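The CI check itself is small. Here is a sketch of the shape of a known-answer test, with a toy hash standing in for the real library call and an inline stand-in vector — the real vectors come from the NIST files, and every name here is invented for illustration:

```python
import hashlib

# Stand-in for the primitive under test; in practice this would call
# your ML-KEM / ML-DSA library with the vector's seed and inputs.
def primitive_under_test(seed: bytes) -> bytes:
    return hashlib.sha3_256(seed).digest()

# Stand-in known-answer vectors as (input, expected_output_hex).
# Real ones are parsed from the files NIST publishes.
KNOWN_ANSWER_TESTS = [
    (b"\x00" * 32, hashlib.sha3_256(b"\x00" * 32).hexdigest()),
]

def run_kats() -> bool:
    """Fail closed: every vector must match exactly, byte for byte."""
    return all(primitive_under_test(inp).hex() == expected
               for inp, expected in KNOWN_ANSWER_TESTS)
```

Wire `run_kats()` (or your real equivalent) into CI so a library upgrade that silently changes behavior fails the build instead of shipping.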
Step 10: Track the Research, Because the Timeline Can Move
The 2029 deadline isn’t a law of physics. It’s an estimate based on current understanding of how fast quantum hardware is improving and how efficient the attack algorithms are getting.
That estimate just moved. Google published resource estimates suggesting a future superconducting quantum computer with fewer than 500,000 physical qubits could attack 256-bit elliptic curve cryptography — potentially in minutes. The Caltech/Atom Computing paper estimated around 26,000 physical qubits could attack P-256 in days, under plausible assumptions. And the AI-assisted optimization that produced those estimates — using a tool called OpenEvolve to search algorithm space through something like natural selection — reduced the required qubit count by roughly 1,000x compared to earlier approaches.
John Preskill, one of the paper’s authors and one of the most respected names in quantum computing, told Time he was surprised by how much the qubit count dropped. If the people doing the research are surprised, you should assume the timeline can surprise you too.
The practical implication: don’t treat 2029 as a hard deadline you can work backward from. Treat it as the latest acceptable date, and try to finish earlier. The teams that will be in trouble are the ones that start in 2027.
Where You Should Be by End of Year
If you do nothing else this quarter, do these three things:
Complete the inventory from Step 1. Know every system that uses public-key cryptography and the lifetime of the keys involved.
Check your dependency support. Know which of your libraries and services already support ML-KEM and ML-DSA, and which don’t.
Enable hybrid key exchange on at least one public-facing service. Get the operational experience. Learn what breaks. Do it somewhere low-stakes first.
The migration is not a single event. It’s a sequence of decisions made over four years, and the decisions you make in the first year determine how much room you have in the last year. The harvest-now-decrypt-later threat means some of those decisions are already overdue.
The good news is that the NIST standards exist, the libraries are being built, and the industry is moving. The bad news is that “the industry is moving” and “your systems are migrated” are two very different things.
One of those is your job.