Ezra Klein Says the AI Job Apocalypse Probably Won't Happen — Here's the Economic Argument He's Making
Ezra Klein's NYT op-ed cites Alex Immis's Jevons' paradox framework to argue AI creates demand for labor rather than eliminating it. Here's the logic.
Ezra Klein published an op-ed in the New York Times titled “Why the AI Job Apocalypse Probably Won’t Happen,” and the argument he makes is not the one you’d expect from a prominent left-leaning commentator who has spent years covering technology’s social costs.
Klein isn’t saying AI is overhyped. He’s saying the people predicting mass unemployment might be misreading how technology and labor markets actually interact — and he’s leaning on a University of Chicago economist named Alex Immis to make the case.
If you build AI systems for a living, you should understand this argument. Not because it’s definitely right, but because it’s the most rigorous counterargument to the doom narrative, and you need to be able to engage with it seriously.
The Claim That Started This
The Immis essay is called “What Will Be Scarce.” It applies Jevons’ paradox to AI labor, and Klein’s op-ed essentially brings that framework to a mainstream audience.
Jevons’ paradox, which the economist William Stanley Jevons observed in 19th-century British coal consumption, says that when a resource becomes more efficient to use, total consumption of that resource tends to go up, not down. More efficient steam engines didn’t reduce coal usage — they made coal economically viable for more applications, so demand exploded. Immis’s argument is that AI is doing the same thing to cognitive labor.
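To make the rebound mechanism concrete, here is a toy constant-elasticity model. The numbers and the `resource_use` helper are invented for illustration; nothing here comes from Klein or Immis.

```python
# Toy Jevons rebound: better efficiency lowers the effective price of the
# service a resource provides; if demand for that service is elastic enough,
# total resource consumption RISES. All values are illustrative.

def resource_use(efficiency: float, elasticity: float, base_demand: float = 100.0) -> float:
    """Resource consumed to serve demand at a given efficiency level.

    Effective price per unit of service scales as 1 / efficiency, and
    service demand follows a constant-elasticity curve:
        demand = base_demand * price ** (-elasticity)
    Resource use is service demand divided by efficiency.
    """
    price = 1.0 / efficiency
    demand = base_demand * price ** (-elasticity)
    return demand / efficiency

# Double engine efficiency with elastic demand (elasticity > 1):
before = resource_use(efficiency=1.0, elasticity=1.5)
after = resource_use(efficiency=2.0, elasticity=1.5)
print(after > before)  # True: more efficient engines, more total coal burned
```

With inelastic demand (elasticity below 1), the same function shows consumption falling, which is why the paradox is a claim about elasticity, not about efficiency itself.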
Klein’s op-ed cites ASU professor Eldar Maximov: “In every major occupation group that adopted computers heavily, employment grew faster than in groups that did not. Computers eliminated specific tasks within jobs, but the resulting cost reductions created so much new demand that the occupations expanded overall.”
Then Klein makes it personal. He describes how, when he started his podcast ten years ago, he was its only researcher. Now he has a full team. Did the team make his job easier? No — he does more challenging episodes because he can, and he spends more time on research because there’s more to absorb. The capability expansion created more demand for the work, not less.
His most quotable line: “Every enthusiastic AI adopter I know is working harder than ever because there is more they can do.”
Why This Is Non-Obvious (and Why Silicon Valley Gets It Wrong)
The instinctive reaction from people building AI is: “Of course AI will take jobs. I’ve seen what Claude Code can do. I’ve watched it write in days what used to take weeks.” That reaction is understandable but structurally flawed in a specific way.
Klein points this out directly. The people closest to AI — the builders, the researchers, the founders — have two problems when predicting labor market effects. First, they’re observing AI doing the tasks they do every day, which makes it feel like AI can do everything. Second, Silicon Valley doesn’t have a great track record of understanding what happens outside Silicon Valley. It’s full of builders. It is not full of labor economists.
There’s also a cynical read Klein acknowledges: AI labs have strong incentives to tell a story of inevitable displacement. It excites investors. It justifies massive infrastructure spend. And as he notes, it conveniently provides cover for unwinding post-COVID hiring binges. A CEO who says “we’re cutting 14% because crypto trading revenue fell 47% year-over-year” is in a worse position than one who says “we’re cutting because AI is changing everything.” The AI narrative is a more comfortable story to tell.
The macro data, as Klein points out, doesn’t match the anecdotal doom. The US unemployment rate was 4.3% in March 2026. In March 2020, it was 4.4%. Average hourly earnings are stable. And Citadel Securities data shows software engineering job postings up 18% from the May 2025 inflection point — the Federal Reserve confirms software engineering jobs are at their highest since November 2023. These are the most AI-exposed roles in the economy, and demand for them is accelerating. If you want to understand why the WAT framework — Workflows, Agents, and Tools — has become such a common mental model for AI builders, this is part of the context: the tooling layer is expanding because the demand for AI-native work is expanding, not contracting.
The Elasticity Framework — This Is the Actual Argument
The Immis essay introduces a distinction that’s doing a lot of work in this debate: elastic versus inelastic demand for labor.
Some work expands when costs fall. Software development is the canonical example — when it gets cheaper to build software, people build more software, not the same amount of software with fewer engineers. Legal discovery, semiconductor analysis, customer research, security monitoring — these are all categories where demand is elastic. Lower the cost of doing the work, and you discover there was always more work to do than you could afford.
Other work is more capped. Payroll processing, month-end close, basic compliance filing, routine reporting — the demand for these tasks doesn’t expand much when they get cheaper. You still only need to run payroll once a month. These are inelastic.
The doom narrative implicitly assumes most cognitive work is inelastic. The Immis/Klein argument is that most valuable cognitive work is actually highly elastic — and that the savings from AI in one sector flow into expanded demand in another.
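The distinction can be put in numbers. The sketch below uses a constant-elasticity demand curve with made-up elasticity values, not estimates from the Immis essay, to show why the same cost reduction grows total spend in one category and shrinks it in another:

```python
# Hypothetical numbers to make the elastic/inelastic distinction concrete;
# the elasticity values below are illustrative, not empirical estimates.

def hours_demanded(cost_per_hour: float, elasticity: float, scale: float = 1000.0) -> float:
    """Constant-elasticity demand for a category of work:
    hours = scale * cost ** (-elasticity)."""
    return scale * cost_per_hour ** (-elasticity)

# Suppose AI cuts the effective cost of the work by 60% (1.0 -> 0.4).
for label, eps in [("software development (elastic)", 1.8),
                   ("payroll processing (inelastic)", 0.1)]:
    before = hours_demanded(1.0, eps)
    after = hours_demanded(0.4, eps)
    # Total spend on the category = cost * hours. If spend grows, so can
    # the budget for people doing (now more ambitious) versions of the work.
    spend_before, spend_after = 1.0 * before, 0.4 * after
    print(f"{label}: hours x{after / before:.2f}, spend x{spend_after / spend_before:.2f}")
```

Under these assumed elasticities the elastic category more than doubles its total spend after the cost drop, while the inelastic category's spend collapses even though its hours barely move. That is the whole disagreement between the doom narrative and the Immis/Klein view, compressed into one parameter.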
For Immis, the sector where demand is most elastic is what he calls the relational sector: bespoke, human-mediated, high-touch experiences. As AI handles more routine cognitive work, people will want more of the things that AI can’t provide — genuine human judgment, accountability, relationships. The savings from AI-automated tasks become purchasing power for human-intensive services.
This is not a guarantee. It’s a historical pattern that has held across previous technology transitions, and the argument is that there’s no obvious reason AI breaks it. But the burden of proof, Klein argues, should be on those claiming this time is different — not on those pointing to 200 years of technology-driven labor market expansion.
What the Data Actually Shows Right Now
Klein’s op-ed is careful to say the macro data isn’t matching the doom narrative yet. Here’s what’s actually in the numbers.
The Wall Street Journal, analyzing LinkedIn job posting data, found that AI created 640,000 jobs between 2023 and 2025 in the US. These include new white-collar positions that didn’t exist before: head of AI, AI engineer, AI implementation specialist. New college graduate hiring is up 5.6% year-over-year. Unemployment for ages 20-24 with a college degree fell from roughly 9% to roughly 5%.
Stripe Atlas data adds another angle. Q1 2026 saw a 130% year-over-year increase in new startup incorporations, hitting 100,000 all-time incorporations. Stripe’s data also shows AI-sector startups growing revenue faster than historical norms. The argument here is that AI is creating entrepreneurs faster than it’s eliminating jobs — at least so far.
Palantir’s Q1 2026 earnings are worth looking at here too. Revenue grew 85% year-over-year — their fastest pace since their 2020 IPO. Net income was up 4x to $870M. Government revenue growth accelerated from 66% to 84%. CTO Shyam Sankar’s framing: “Tokens are the new coal. Palantir is the train.” The point being that the infrastructure for deploying AI into real enterprise workflows is itself a massive growth industry.
None of this proves the doom narrative is wrong. It proves the doom narrative is premature.
The Harder Version of Klein’s Argument
Klein’s most interesting point isn’t the optimistic one. It’s this: “What’s likelier is that AI doesn’t take all or most of the jobs, but it does take some. And that, strangely, is the possibility we’re least prepared for.”
He draws a comparison to the China trade shock. The best estimates put job losses from Chinese import competition at around 2 million — small in the context of a US economy where roughly 5 million people are hired and 5 million leave or lose jobs every month. But those 2 million losses were devastating for specific communities, and the policy response was almost nothing.
A world where AI displaces 8 million workers might be harder to handle than a world where it displaces 80 million. Mass unemployment would force a wholesale restructuring. Partial displacement gets ignored until it’s too late for the people experiencing it.
This is where Klein’s argument is genuinely uncomfortable, and it’s the part that tends to get lost when people summarize his piece as “AI won’t take jobs.” That’s not quite what he’s saying. He’s saying the apocalyptic framing is probably wrong, but the moderate version of the problem — specific job categories hollowed out, specific communities devastated, inadequate policy response — is both more likely and harder to address.
What This Means If You’re Building AI Systems
If you’re an AI builder, the Jevons’ paradox framework has practical implications for how you think about what you’re building.
The question isn’t “will this automate a task?” It’s “is the demand for this task elastic or inelastic?” If you’re automating something where demand expands when costs fall — code generation, data analysis, content production, customer research — you’re probably not eliminating jobs. You’re enabling more of that work to get done. The engineers using Claude and AI agent tooling aren’t being replaced; they’re doing more ambitious projects than they could have justified before. It’s worth understanding how Anthropic’s compute constraints are shaping Claude’s availability if you’re building on top of these models — capacity limits affect which workloads are practical to run at scale, which in turn shapes which elastic-demand categories you can actually serve.
If you’re automating something with genuinely capped demand — a specific compliance report that only needs to exist once a quarter — you might actually be reducing headcount in that specific function. The distinction matters for how you think about the economic effects of what you ship.
The Atlassian Rovo case is instructive here. Rovo is an AI search tool built on top of Jira and Confluence’s existing knowledge graph — 20 years of structured relationships between work, teams, people, and code. Instead of using token-hungry RAG to retrieve context, Rovo does graph lookups. Customers using Rovo are growing their ARR at twice the pace of non-Rovo customers. Atlassian’s stock was up roughly 30% on earnings day, with revenue growing 32% year-over-year, up from 23% the prior quarter. This is not a story of AI replacing Atlassian’s customers’ workers. It’s a story of AI making those workers more productive, which made the companies more valuable, which accelerated growth. The demand for the underlying work expanded.
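For intuition on why graph lookups can substitute for embedding-based retrieval when the relationships already exist, here is a deliberately tiny sketch. The graph shape, field names, and `context_for` helper are all invented for illustration; this is not Atlassian's implementation.

```python
# Toy knowledge graph: entities and typed edges, the kind of structure a
# Jira/Confluence-style product accumulates over years. Names are invented.
graph = {
    "ISSUE-42": {"assignee": "dana", "epic": "EPIC-7", "commits": ["abc123"]},
    "EPIC-7":   {"team": "payments", "docs": ["Runbook: refunds"]},
    "dana":     {"team": "payments"},
}

def context_for(issue: str) -> dict:
    """Assemble context by following known edges rather than searching
    embeddings: a few dictionary lookups, no tokens spent on retrieval."""
    issue_node = graph[issue]
    epic_node = graph[issue_node["epic"]]
    return {
        "assignee": issue_node["assignee"],
        "team": epic_node["team"],
        "related_docs": epic_node["docs"],
        "commits": issue_node["commits"],
    }

print(context_for("ISSUE-42"))
```

A RAG pipeline would embed every document and run a nearest-neighbor search to guess what is relevant; when the links between work, teams, people, and code are already explicit, traversing them is cheaper and exact.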
The same logic applies to how you think about building AI tools. If you’re building something that helps knowledge workers do more ambitious work — rather than automating a fixed-demand task — you’re probably on the elastic side of the ledger. That’s where the Jevons’ paradox argument says growth happens.
The Honest Uncertainty
Sam Altman tweeted on May 1st: “jobs doomerism is likely long-term wrong.” That’s a significant rhetorical shift from a company whose stated mission used to be building artificial general intelligence that would, by definition, be able to do most things humans do for money.
But Klein’s framework doesn’t require you to be bullish on AI to find it compelling. The historical pattern is clear: technology that reduces the cost of cognitive work has, repeatedly, expanded demand for cognitive work rather than eliminating it. The burden of proof is on those claiming AI breaks that pattern.
That doesn’t mean the transition is painless. It doesn’t mean every job category survives. It doesn’t mean the communities most exposed to displacement will be adequately supported. Klein is explicit that the moderate version of the problem — partial displacement, inadequate policy response — is the one we’re least prepared for.
What it does mean is that the apocalyptic framing — mass unemployment, permanent underclass, the end of work — is probably wrong in the same way the apocalyptic framings around previous technology transitions were wrong. Not because the technology isn’t as capable as advertised, but because human demand for things to do with capable tools turns out to be nearly unlimited.
The engineers I know who’ve gone deep on AI coding tools aren’t working less. They’re building things they couldn’t have justified building before. That’s not a coincidence. That’s Jevons.