
Post-Quantum Cryptography: What Engineers Need to Do Before 2029 (And Why Waiting Is Already Too Late)

Governments are already storing encrypted traffic to decrypt once quantum computers arrive. Here's the engineer's checklist for PQC migration before 2029.

MindStudio Team

Your Encrypted Traffic From 2019 Is Already Sitting in a Government Database

Here’s the uncomfortable truth: the threat isn’t coming in 2029. Part of it already happened. Governments — US, China, Russia, others — have been running “store now, decrypt later” operations for years. They’re capturing encrypted traffic they can’t read today and archiving it with the explicit plan to decrypt it once fault-tolerant quantum computers arrive. If you sent anything sensitive over TLS in the last decade, assume it’s in a database somewhere waiting for the key to exist.

That’s the part most engineers miss when they think about post-quantum cryptography (PQC) migration. They imagine a future threat. The data exfiltration is already done. What’s coming in 2029 is the decryption.

Scott Aaronson — the Schlumberger Centennial Chair of Computer Science at UT Austin, co-founding director of UT Austin’s Quantum Information Center, and the person who spent years as the internet’s most trusted quantum skeptic — published a post in May 2026 titled “Will you heed my warnings?” His message: people whose judgment he trusts more than his own on quantum hardware and error correction are now telling him that a fault-tolerant quantum computer capable of breaking deployed cryptographic systems should be possible by around 2029. When the skeptic sounds the alarm, you pay attention.

This post is the engineer’s checklist. Not theory — what you actually need to audit, prioritize, and migrate before the window closes.


What You’re Actually Protecting Against (And What’s Already Gone)


Before you can build a migration plan, you need to be precise about the threat model. Quantum computers don’t break everything. They break specific things, and they break them completely.

Shor’s algorithm, published in 1994, shows that a sufficiently large fault-tolerant quantum computer can factor large integers and solve discrete logarithm problems efficiently. That breaks RSA and elliptic curve cryptography (ECC) — the two workhorses of public-key infrastructure. It does not break symmetric encryption like AES-256 in any practical sense (Grover’s algorithm halves the effective key length, so AES-256 becomes roughly AES-128 equivalent — annoying, not catastrophic).
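The key-length arithmetic above can be made concrete. A toy calculation, not a security proof; the function and its labels are illustrative only:

```python
# Toy model of the quantum impact described above. Grover's algorithm gives
# a quadratic speedup against symmetric ciphers, so brute-force cost drops
# from 2^n to roughly 2^(n/2). Shor's algorithm breaks RSA/ECC outright,
# so their effective post-quantum strength goes to zero.

def effective_bits(algorithm: str, key_bits: int) -> int:
    """Rough post-quantum security level in bits (illustrative only)."""
    if algorithm in ("RSA", "ECC"):
        return 0              # Shor: polynomial-time break
    if algorithm == "AES":
        return key_bits // 2  # Grover: quadratic speedup only
    raise ValueError(f"unknown algorithm: {algorithm}")

print(effective_bits("AES", 256))  # AES-256 degrades to ~128-bit strength
print(effective_bits("ECC", 256))  # ECC P-256: broken entirely
```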

So the threat is specific: anything that relies on RSA or ECC for key exchange, digital signatures, or authentication is vulnerable. That’s TLS handshakes, SSH key pairs, code signing certificates, JWT signing with RS256 or ES256, OAuth tokens, and yes, every blockchain that uses elliptic curve signatures.

The store-now-decrypt-later attack is what makes the 2029 timeline feel abstract but the actual risk feel immediate. Your historical TLS traffic — API calls, authentication flows, anything that crossed the wire — was protected by ephemeral keys that were negotiated using ECC or RSA. Those handshakes are logged. The encrypted payloads are archived. When Shor’s algorithm runs on a fault-tolerant machine, the session keys become recoverable, and the payloads become readable. Past secrets are already compromised in principle.

The 2029 deadline isn’t when the threat starts. It’s when the second phase begins.


What You Need Before You Start the Migration

You can’t migrate what you haven’t inventoried. Most organizations dramatically underestimate how many places they use asymmetric cryptography. Before touching a single config file, you need:

A complete cryptographic inventory. Every certificate, every key pair, every signing operation, every TLS termination point. This includes: your web-facing infrastructure, internal service-to-service communication, CI/CD pipeline signing, mobile app update signing, database connection strings that use certificate auth, and any third-party integrations that exchange signed tokens.

An understanding of your data sensitivity tiers. Not all data has the same retrospective value. Classify what was transmitted historically: credentials and session tokens (high priority — they may still be valid or reveal patterns), PII and financial data (high priority — regulatory exposure), internal communications (medium), and telemetry (low). This determines where you spend migration effort first.

Clarity on your dependency graph. Many engineers discover mid-migration that a critical vendor doesn’t support PQC yet. Map your external dependencies — CDNs, identity providers, payment processors, cloud KMS services — and get their PQC roadmaps in writing.

Familiarity with NIST’s finalized PQC standards. NIST standardized ML-KEM (formerly CRYSTALS-Kyber) for key encapsulation and ML-DSA (formerly CRYSTALS-Dilithium) for digital signatures in 2024. These are your primary targets. SLH-DSA (SPHINCS+) is the hash-based backup for signatures. Know which algorithm fits which use case before you start.
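The inventory and tiering steps above can be sketched as structured data. A minimal sketch; the field names and priority weights are illustrative, not a standard schema:

```python
from dataclasses import dataclass

# Public-key algorithms broken by Shor's algorithm.
QUANTUM_VULNERABLE = {"RSA", "ECDSA", "ECDH", "Ed25519"}

@dataclass
class CryptoAsset:
    name: str          # e.g. "api-gateway TLS cert"
    algorithm: str     # e.g. "ECDSA", "ML-DSA-65"
    data_tier: str     # "credentials" | "pii" | "internal" | "telemetry"
    expires_days: int  # days until rotation/expiry

# Higher number = migrate sooner, following the tiering in the text.
TIER_PRIORITY = {"credentials": 3, "pii": 3, "internal": 2, "telemetry": 1}

def migration_priority(asset: CryptoAsset) -> int:
    if asset.algorithm not in QUANTUM_VULNERABLE:
        return 0  # already PQC, or symmetric-only
    return TIER_PRIORITY[asset.data_tier]

inventory = [
    CryptoAsset("api-gateway TLS cert", "ECDSA", "credentials", 30),
    CryptoAsset("metrics mTLS cert", "RSA", "telemetry", 1),
    CryptoAsset("release signing key", "ML-DSA-65", "internal", 365),
]
for asset in sorted(inventory, key=migration_priority, reverse=True):
    print(asset.name, migration_priority(asset))
```

Even a flat list like this forces the useful questions: which assets you forgot, and which tier each one actually belongs to.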


The Migration Checklist, Step by Step

Step 1: Audit Your TLS Configuration

Start with your public-facing TLS. This is where store-now-decrypt-later attacks harvest the most data, and it’s also where the migration path is most mature.


Check whether your TLS library supports hybrid key exchange. The group originally deployed as X25519Kyber768 is now standardized as X25519MLKEM768, since Kyber became ML-KEM-768 in FIPS 203. Cloudflare, which is targeting 2029 for full quantum security, has been running hybrid PQC in production since 2023. Google has had it in Chrome since version 116. If you’re running nginx or HAProxy, check your OpenSSL version: OpenSSL 3.x has experimental PQC support via the OQS provider, and OpenSSL 3.5 ships native ML-KEM support.

Enable hybrid key exchange first. “Hybrid” means you run both classical ECDH and ML-KEM simultaneously, and the session key is derived from both. If the quantum algorithm has an unknown weakness, classical security still holds. This is the right default posture for 2025-2029. One side note for teams using AI assistants to automate parts of the inventory work: the same discipline applies to your tooling, and managing context rot in long sessions is worth understanding before you lean on multi-hour audit sessions.
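The hybrid construction can be sketched with stdlib primitives: derive the session key from the concatenation of both shared secrets, so an attacker must break both mechanisms. The real TLS 1.3 key schedule is more involved; this shows the core idea only, with fixed placeholder secrets standing in for real key-agreement outputs:

```python
import hashlib, hmac

def hkdf_extract_expand(salt: bytes, ikm: bytes, info: bytes, length: int = 32) -> bytes:
    """Minimal HKDF-SHA256 (RFC 5869) for illustration."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()   # extract
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                             # expand
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Placeholder outputs of the two key-agreement mechanisms (random in practice):
ecdh_secret = b"\x01" * 32    # classical X25519 shared secret
mlkem_secret = b"\x02" * 32   # ML-KEM-768 decapsulated shared secret

# Hybrid: concatenate both secrets before the KDF. Recovering the session
# key now requires breaking BOTH X25519 and ML-KEM.
session_key = hkdf_extract_expand(b"\x00" * 32, ecdh_secret + mlkem_secret,
                                  b"hybrid tls handshake")
print(session_key.hex())
```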

Now you have: TLS connections that are forward-secret against both classical and quantum adversaries for all new traffic.

Step 2: Rotate Your Signing Infrastructure

Code signing, certificate signing, and token signing are higher-stakes than TLS because signatures are often long-lived and publicly verifiable.

Audit every place you sign something: release artifacts, container images, firmware updates, JWTs, SAML assertions. For each one, identify the algorithm (RS256, ES256, PS256, etc.) and the key size.
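Identifying the algorithm in a JWT takes only the header segment, which is unsigned metadata. A sketch using the stdlib; the sample token is constructed inline for illustration:

```python
import base64, json

# Quantum-vulnerable JOSE algorithms: RSA- and ECDSA-based signatures.
VULNERABLE_ALGS = {"RS256", "RS384", "RS512", "PS256", "PS384", "PS512",
                   "ES256", "ES384", "ES512"}

def jwt_alg(token: str) -> str:
    """Decode the JOSE header (first dot-separated segment), return 'alg'."""
    header_b64 = token.split(".")[0]
    header_b64 += "=" * (-len(header_b64) % 4)  # restore base64url padding
    return json.loads(base64.urlsafe_b64decode(header_b64))["alg"]

# Example token with header {"alg":"ES256","typ":"JWT"}; payload and
# signature segments are fake, since only the header matters here.
token = base64.urlsafe_b64encode(
    json.dumps({"alg": "ES256", "typ": "JWT"}).encode()
).rstrip(b"=").decode() + ".e30.sig"

alg = jwt_alg(token)
print(alg, "quantum-vulnerable:", alg in VULNERABLE_ALGS)
```

Run something like this over captured tokens from staging traffic and you have the algorithm column of your signing inventory for free.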

Start migrating signing keys to ML-DSA-65 (the NIST-standardized Dilithium variant). For JWT specifically, the JOSE working group has drafts for PQC algorithm identifiers — watch that space, but don’t wait for finalization to start testing. For code signing, Sigstore has been working on PQC integration; check their current roadmap.

One practical note: ML-DSA signatures are larger than ECDSA signatures. ML-DSA-65 produces ~3.3KB signatures vs ~64 bytes for ECDSA P-256. If you’re embedding signatures in constrained environments (firmware, mobile payloads), this matters. Plan for it.
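The size delta is easy to budget for mechanically. A quick check using the approximate sizes from the paragraph above plus the published FIPS parameter-set sizes; the budget numbers are illustrative:

```python
# Approximate signature sizes in bytes (ML-DSA per FIPS 204, SLH-DSA per
# FIPS 205; ECDSA/Ed25519 as raw r||s encodings).
SIG_SIZE = {
    "ECDSA-P256": 64,
    "Ed25519": 64,
    "ML-DSA-44": 2420,
    "ML-DSA-65": 3309,
    "SLH-DSA-128s": 7856,  # hash-based fallback: small keys, big signatures
}

def fits_budget(alg: str, budget_bytes: int) -> bool:
    return SIG_SIZE[alg] <= budget_bytes

# A firmware header with a 4 KB signature slot fits ML-DSA-65; a 1 KB slot
# does not, and needs a format redesign before the key migration can land.
print(fits_budget("ML-DSA-65", 4096))  # True
print(fits_budget("ML-DSA-65", 1024))  # False
```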

Now you have: A signing key rotation plan with specific algorithm targets and a size budget for each signing context.

Step 3: Audit Your Key Management Infrastructure

Your KMS is the crown jewel. If you’re using AWS KMS, GCP Cloud KMS, or Azure Key Vault, check their PQC support timelines — all three have announced roadmaps but support is still rolling out. If you’re using HashiCorp Vault, the PQC plugin ecosystem is early but functional.

For any keys that protect long-lived secrets (database encryption keys, backup encryption, secrets management), prioritize migration here. These are the keys where a future quantum attacker gains the most leverage — not just reading one session, but decrypting your entire backup history.

This is also where you think about key wrapping. If your data encryption keys (DEKs) are wrapped with RSA or ECC key encryption keys (KEKs), the DEKs are vulnerable even if the underlying symmetric cipher is AES-256. Migrate your KEKs to ML-KEM first.
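The KEK/DEK dependency can be checked mechanically once your key graph is inventoried. A sketch; the key names and graph shape are illustrative:

```python
# A DEK is only as quantum-safe as the KEK that wraps it: an AES-256 data
# key wrapped under RSA or ECC is recoverable once that KEK is broken.
CLASSICAL_KEK_ALGS = {"RSA-OAEP", "ECDH-ES"}

def vulnerable_deks(key_graph: dict) -> list:
    """key_graph maps DEK name -> algorithm of the KEK wrapping it."""
    return [dek for dek, kek_alg in key_graph.items()
            if kek_alg in CLASSICAL_KEK_ALGS]

key_graph = {
    "backup-archive-dek": "RSA-OAEP",   # migrate this KEK first: long-lived data
    "session-cache-dek": "ECDH-ES",     # lower stakes: short-lived sessions
    "new-tenant-dek": "ML-KEM-768",     # already wrapped post-quantum
}
print(vulnerable_deks(key_graph))
```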

Now you have: A prioritized list of KMS migrations with the highest-leverage targets identified.

Step 4: Address Your Authentication Layer

SSH, mTLS, and certificate-based auth all need attention. For SSH, OpenSSH 9.0+ supports hybrid PQC key exchange by default (sntrup761x25519-sha512). Update your SSH server and client configurations and rotate host keys.
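A corresponding server-side setting might look like the snippet below. Check `ssh -Q kex` on your build for the exact identifiers; older OpenSSH releases use the `@openssh.com` suffix for the hybrid, and newer ones also offer an ML-KEM hybrid:

```
# /etc/ssh/sshd_config -- prefer the hybrid PQC key exchange first,
# with classical curve25519 as a fallback for older clients.
KexAlgorithms sntrup761x25519-sha512@openssh.com,curve25519-sha256
```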


For internal mTLS between services, this is where the migration gets operationally complex. You likely have a private CA issuing short-lived certificates. The good news: short-lived certificates reduce the store-now-decrypt-later window significantly. If your certs rotate every 24 hours, the historical exposure is bounded. Prioritize migrating long-lived certificates first.

For identity providers — Okta, Auth0, Entra ID — check their PQC roadmaps. Most are targeting 2026-2027 for initial support. Don’t wait for them to lead; push your vendor contacts for timelines.

Now you have: Authentication infrastructure that’s either migrated or has a concrete vendor-dependent timeline.

Step 5: Inventory Your Blockchain and Crypto Asset Exposure

If your organization holds or transacts in cryptocurrency, this deserves its own line item. Bitcoin uses elliptic curve cryptography — specifically secp256k1. The public key is exposed on-chain when coins are spent. Wallets that have never transacted (like Satoshi’s dormant wallet) haven’t exposed their public keys yet, but the moment those coins move, the public key is on-chain and permanently visible to future quantum attackers.

The Coinbase paper on quantum risk to blockchain — co-authored by Scott Aaronson, Dan Boneh (one of the world’s leading cryptographers), and Justin Drake from the Ethereum Foundation — lays out the specific attack vectors in detail. Read it. The governance question is as hard as the technical one: Bitcoin has no active governance mechanism to coordinate a migration, while Ethereum has Vitalik Buterin and an active research community that can at least attempt a coordinated upgrade.

For your organization: if you hold Bitcoin in a wallet that has previously transacted, your public key is already on-chain. Migrate those holdings to a fresh address now, before quantum computers arrive, and keep the new address’s public key unexposed until migration paths exist.

Now you have: A clear picture of your blockchain exposure and a plan for cold storage migration.

Step 6: Update Your Threat Model Documentation

This is the step engineers skip and then regret. Document what you’ve changed, what the residual risk is, and what your assumptions are about the 2029 timeline.

Specifically: document which historical data is at risk from store-now-decrypt-later attacks and what the business impact would be if that data were decrypted. This forces the right conversations with legal, compliance, and leadership. It also gives you a baseline for when the threat model needs to be updated — which it will, probably multiple times before 2029.
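A threat-model entry can be as simple as structured data that someone is forced to keep current. A sketch; every field name and value here is illustrative:

```python
# One store-now-decrypt-later entry in a living threat-model register.
entry = {
    "asset": "2019-2023 public API TLS traffic",
    "exposure": "harvested ciphertext; ECDHE key exchange, recoverable via Shor",
    "decryptable_when": "fault-tolerant QC at scale (~2029 per current estimates)",
    "business_impact": "session tokens long expired; PII retention rules still apply",
    "mitigation": "hybrid KEX enabled; historical exposure formally accepted",
    "review_by": "2026-06-30",
}

def stale(e: dict, today: str) -> bool:
    """Flag entries past their review date (ISO dates compare lexically)."""
    return today > e["review_by"]

print(stale(entry, "2026-07-01"))  # True: due for review
```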

If you’re building internal tooling to automate parts of this audit, the spec-driven approach is worth considering. Remy is MindStudio’s spec-driven full-stack app compiler — you write a markdown spec with annotations and it compiles into a complete TypeScript application with backend, database, auth, and deployment included. It’s useful when you need a lightweight internal dashboard for tracking certificate expiry dates, algorithm inventories, and migration status across a large infrastructure without standing up a full project from scratch.

Now you have: A living threat model document that makes the residual risk legible to non-engineers.


The Failure Modes That Will Actually Get You

Thinking “we’ll migrate when the standards finalize.” NIST finalized ML-KEM and ML-DSA in 2024. The standards are done. The migration window is now.


Migrating the perimeter but not the interior. Most organizations will get their public TLS right and then discover their internal service mesh is still running RSA-2048 everywhere. Inventory the interior.

Ignoring your CI/CD pipeline. Code signing is often an afterthought. If an attacker can forge a software update signature, they own every machine that auto-updates. Google’s Android ecosystem is actively integrating post-quantum digital signature protections for exactly this reason. Your pipeline should be on the same timeline.

Assuming your cloud provider handles it. AWS, GCP, and Azure are migrating their infrastructure, but that doesn’t mean your application-layer cryptography is covered. The cloud provider secures the channel; you’re responsible for what you put in it.

Underestimating the operational complexity of certificate rotation at scale. The actual hard part of PQC migration isn’t the algorithms — it’s the operational machinery for rotating thousands of certificates across a distributed system without downtime. Start building that machinery now, before you’re under pressure.

Waiting for vendors. Google is setting a 2029 internal deadline. Cloudflare is targeting 2029 for full quantum security. These are organizations with hundreds of engineers working on this full-time. If you start your migration in 2028, you will not finish in time.


Where to Take This Further

The NIST PQC standards are the foundation — read the actual specifications, not just summaries. ML-KEM (FIPS 203), ML-DSA (FIPS 204), and SLH-DSA (FIPS 205) are the documents your cryptography library choices should be validated against.

The Coinbase/Aaronson/Boneh/Drake paper on quantum risk to blockchain is worth reading even if you have no blockchain exposure — the threat modeling methodology applies broadly.

For teams building AI-assisted security workflows — automated certificate scanning, anomaly detection on cryptographic configurations, compliance reporting — MindStudio connects 200+ models to 1,000+ integrations via a visual builder, making it practical to wire together the monitoring layer without writing all the orchestration code from scratch. That kind of composable tooling matters when you’re trying to maintain visibility across a large, heterogeneous infrastructure during a multi-year migration.

The AI angle here is worth sitting with for a moment. AlphaQubit — Google DeepMind’s AI-based quantum error decoder — identified and corrected quantum computing errors with state-of-the-art accuracy. This is the same dynamic as AlphaFold and protein folding: a problem that seemed intractable until a neural network found the structure in the noise. AI didn’t just accelerate quantum computing in the abstract; it specifically solved the error correction bottleneck that was the main technical barrier to fault-tolerant quantum computers. The thing that’s coming for your encryption was partially built by the same class of tools you’re using to build your products. That’s not a reason to panic. It’s a reason to take the 2029 deadline seriously rather than treating it as someone else’s problem.

The cybersecurity capability gap between AI models is widening fast — which means both the offensive and defensive tooling available to engineers is changing rapidly. Understanding what the most capable models can do in security contexts matters when you’re thinking about what adversaries will have access to on the same timeline as fault-tolerant quantum hardware. The engineers who will handle this transition well are the ones who treat PQC migration as an infrastructure project that starts now, not a research topic that becomes urgent later.

For teams running AI-assisted workflows at any scale, AI agents are increasingly being embedded into security and compliance pipelines — the same orchestration patterns apply to certificate monitoring, cryptographic inventory automation, and migration status tracking.

Aaronson’s warning is simple: the timeline shifted. The people who know the most about quantum hardware now believe fault-tolerant systems capable of running Shor’s algorithm at scale are coming by around 2029. He spent years correcting quantum hype. He’s not correcting it now.

The checklist above won’t take you from zero to fully migrated in a weekend. But it will tell you where you actually stand — and that’s the thing most organizations are still avoiding finding out.

Presented by MindStudio
