AI Job Apocalypse Narrative Is Cracking: 7 Data Points That Tell a Different Story
Software eng jobs up 18%, new grad hiring up 5.6%, Stripe incorporations up 130%. Seven data points that complicate the AI unemployment narrative.
Software Engineering Job Postings Are Up 18% Since Last Year. The Doom Narrative Hasn’t Caught Up.
The AI job apocalypse story has a data problem. Software engineering job postings are up 18% from the inflection point in May 2025, and according to Federal Reserve data, software engineering employment is now at its highest level since November 2023. You’ve probably heard the opposite narrative — the one where AI is quietly hollowing out the tech labor market, where new grads can’t find work, where the only people hiring are the AI labs themselves. The data doesn’t support it.
That doesn’t mean nothing is changing. It means the story is more complicated than the doom-posters are willing to admit. Here are seven data points that complicate the narrative — not to replace one oversimplification with another, but because the specifics actually matter.
Software Engineering Jobs Are at a Two-Year High
Start with the most direct counter-evidence. Citadel Securities published analysis showing that demand for software engineers — the occupation most directly exposed to AI coding tools — has been accelerating since May 2025, not declining. The 18% increase in job postings isn’t a rounding error. It’s a directional signal that contradicts the displacement story at its most basic level.
The Federal Reserve data corroborates it. Software engineering employment in early 2026 is at its highest since November 2023. That’s the period before Claude 3, before GPT-4o, before the agentic coding wave that supposedly made human engineers redundant. If AI coding tools were net job destroyers, you’d expect to see this number moving in the other direction.
The explanation that holds up is Jevons’ paradox: when a resource gets cheaper, total consumption of that resource often goes up, not down. Code is cheaper to produce now. So companies are building more software. More software means more engineers. The logic isn’t complicated, but it runs against the intuitive story that “AI does the coding, therefore fewer coders.”
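As a toy illustration of that logic (a sketch, not a calibrated model — the elasticity value here is an assumption), a constant-elasticity demand curve shows how total spending on code can rise when the unit cost of producing it falls:

```python
def total_spend(price: float, elasticity: float, k: float = 100.0) -> float:
    """Constant-elasticity demand: quantity = k * price**(-elasticity),
    so total spend = price * quantity = k * price**(1 - elasticity)."""
    quantity = k * price ** (-elasticity)
    return price * quantity

# If producing code gets 10x cheaper (price 1.0 -> 0.1) and demand is
# elastic (elasticity > 1), total spending on software goes UP, not down.
spend_before = total_spend(price=1.0, elasticity=1.5)  # 100.0
spend_after = total_spend(price=0.1, elasticity=1.5)   # ~316.2
```

With inelastic demand (elasticity below 1), the same formula predicts falling spend — which is why Jevons is an empirical question about software demand, not a law of nature.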
Sequoia partner Konstantine Buhler flagged this as a “narrative violation” when the Citadel data dropped — which is a polite way of saying the evidence is embarrassing for the doom camp.
New College Grad Hiring Is Up 5.6% Year-Over-Year
The apocalypse narrative has a specific victim in mind: young workers, especially new graduates entering a market where AI can do entry-level tasks. The data on this is also moving the wrong direction for the doom thesis.
New college grad hiring is up 5.6% year-over-year. Unemployment for workers aged 20-24 with a college degree has fallen from roughly 9% to approximately 5%. That’s a substantial drop in a cohort that should, according to the displacement narrative, be getting crushed right now.
Anthony Pompliano, who had previously believed AI would systematically eliminate entry-level roles and work its way up the org chart, publicly changed his position when this data emerged. His reasoning: if AI makes employees more productive, companies want more productive employees, not fewer. That’s not a guarantee that every job category survives intact — it’s a statement about aggregate demand for labor when productivity increases.
The nuance here matters. Some tasks within jobs are getting automated. That’s real. But the jobs themselves — the bundles of tasks, relationships, judgment calls, and context that constitute actual employment — are proving more durable than the task-level analysis suggests.
AI Created 640,000 Jobs Between 2023 and 2025
The Wall Street Journal published an analysis of LinkedIn job posting data showing that AI created 640,000 jobs in the US between 2023 and 2025. These aren’t just AI engineer roles at frontier labs. They include positions like Head of AI at companies across industries — white-collar roles that didn’t exist before and now do.
This is the part of the story that gets systematically underweighted in the doom narrative. The displacement side gets counted carefully. The creation side gets hand-waved as speculative or insufficient. But 640,000 jobs in two years is not a rounding error, and it’s a floor, not a ceiling — the analysis is based on job postings, which lag actual hiring.
The harder question is whether these new jobs are accessible to the workers displaced by AI. That’s a legitimate concern, and it’s where the doom narrative has its most defensible ground. A 45-year-old paralegal whose document review work has been automated doesn’t automatically become a Head of AI. The transition costs are real, and the distribution of new opportunities is uneven.
But “the transition is painful and uneven” is a very different claim than “AI is destroying the labor market.” The first requires policy responses and support systems. The second requires a different economic model entirely.
Stripe Atlas: 130% More Startups in Q1 2026
Stripe Atlas hit 100,000 all-time startup incorporations in early 2026, and Q1 2026 was up 130% year-over-year. That’s not a small uptick — it’s a doubling-plus in the rate at which new companies are being formed through Stripe’s incorporation service.
Stripe’s own data shows that AI-sector startups are showing faster revenue growth than historical norms. Derek Thompson, who co-authored Abundance with Ezra Klein, summarized it cleanly: “AI agents are better at creating firms than destroying jobs.”
The mechanism here is important. AI is dramatically lowering the cost of building software, running operations, and doing the kind of analytical work that previously required a team. That means the threshold for starting a company has dropped. Work that previously required five people to execute can now be done by one person with the right tools. That’s not just a productivity story — it’s an entrepreneurship story.
If you’re building one of those companies and need to go from a product spec to a deployed application, tools like Remy take a different approach than traditional development: you write an annotated markdown spec, and it compiles into a complete TypeScript backend, SQLite database, auth, and frontend — the spec is the source of truth, the code is derived output. The point isn’t that code disappears; it’s that the abstraction layer has moved up, which is exactly what’s been happening with every generation of programming tools.
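As a generic illustration of the spec-as-source-of-truth idea (this is not Remy's actual format — the spec shape and function here are invented for the example), even a few lines of code can derive a database schema from a declarative spec, so that the spec, not the SQL, is what gets edited:

```python
# Hypothetical declarative spec: the spec is authored, the DDL is derived.
SPEC = {
    "users": {"id": "INTEGER PRIMARY KEY", "email": "TEXT NOT NULL"},
    "posts": {"id": "INTEGER PRIMARY KEY", "author_id": "INTEGER", "body": "TEXT"},
}

def compile_schema(spec: dict) -> str:
    """Derive SQLite DDL from the spec; regenerate whenever the spec changes."""
    statements = []
    for table, columns in spec.items():
        cols = ", ".join(f"{name} {sql_type}" for name, sql_type in columns.items())
        statements.append(f"CREATE TABLE {table} ({cols});")
    return "\n".join(statements)

ddl = compile_schema(SPEC)
```

The point of the pattern is that the generated code is disposable output: change the spec, recompile, and the lower layer stays consistent by construction.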
The 130% increase in incorporations suggests that at least some of the workers displaced from traditional employment are becoming founders. Whether that’s a good thing depends heavily on whether those founders succeed — but the data doesn’t support the picture of a workforce sitting idle while AI takes everything.
Atlassian Revenue Up 32% — And Their AI Tool Is Why
Atlassian reported 32% year-over-year revenue growth in their most recent quarter, up from 23% the quarter before. The stock jumped roughly 30% on earnings day. The driver wasn’t just general enterprise software demand — it was their AI search tool, Rovo.
CEO Mike Cannon-Brookes disclosed that customers using Rovo were growing their own ARR at twice the pace of customers who weren’t. That’s a meaningful signal: companies that adopted Atlassian’s AI tool are growing faster than those that didn’t.
The technical story behind Rovo is worth understanding. Rather than using token-hungry RAG (retrieval-augmented generation) to search across documents, Rovo taps into the existing knowledge graph that Jira and Confluence have been building for 20 years — structured relationships between work, teams, people, code, and knowledge. A graph lookup is dramatically cheaper than a vector dump. In a world where token supply is constrained and costs matter, that architectural choice is a competitive advantage.
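A toy sketch makes the cost difference concrete (the entity names, corpus, and 4-characters-per-token heuristic are all illustrative assumptions, not Atlassian's actual schema or Rovo's implementation):

```python
# A Jira/Confluence-style knowledge graph: explicit edges between work items.
GRAPH = {
    "PROJ-42": {
        "assignee": "dana",
        "epic": "checkout-v2",
        "design_doc": "DOC-17",
        "blocked_by": "PROJ-31",
    }
}

# A document corpus a RAG pipeline would search (real chunks are far longer).
CORPUS = [f"chunk {i}: meeting notes about something else entirely" for i in range(500)]

def approx_tokens(text: str) -> int:
    """Rough rule of thumb: ~4 characters per token."""
    return max(1, len(text) // 4)

def graph_context(issue_id: str) -> str:
    """Graph lookup: follow edges, return only the linked structured facts."""
    return "; ".join(f"{k}={v}" for k, v in GRAPH[issue_id].items())

def rag_context(top_k: int = 20) -> str:
    """Naive RAG: dump the top-k retrieved chunks into the prompt."""
    return "\n".join(CORPUS[:top_k])  # stand-in for vector-similarity results

graph_tokens = approx_tokens(graph_context("PROJ-42"))  # tens of tokens
rag_tokens = approx_tokens(rag_context())               # hundreds-plus
```

Same question, orders-of-magnitude difference in context size — and that gap compounds across every query a customer runs.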
Analyst commentary on the earnings call made the point directly: Atlassian isn’t reducing token costs through clever prompting. They’re reducing them because their customers have spent two decades capturing structured data that makes brute-force retrieval unnecessary. That’s a moat that’s hard to replicate quickly. If you’re thinking about how to build AI workflows that connect to your existing business data without burning tokens on unstructured retrieval, platforms like MindStudio handle this kind of orchestration — 200+ models, 1,000+ integrations, and a visual builder for chaining agents against structured data sources rather than just dumping everything into context.
The Atlassian story also pushes back against the SaaS apocalypse narrative — the idea that AI agents will replace enterprise software subscriptions. The evidence from this earnings report is the opposite: the best-performing customers are adopting built-in AI tools rather than building replacements.
Palantir: 85% Revenue Growth, Government Acceleration
Palantir reported 85% year-over-year revenue growth in Q1 2026 — their fastest pace since their 2020 IPO. Net income hit $870 million, up 4x year-over-year. Government revenue growth accelerated from 66% in Q4 to 84% in Q1.
CTO Shyam Sankar’s framing was direct: “Tokens are the new coal. Palantir is the train.”
What Palantir represents in this data set is the enterprise deployment story. The reason AI hasn’t already restructured every knowledge workflow isn’t that the technology isn’t capable — it’s that deploying it in complex, high-stakes environments requires what Palantir calls forward deployed engineers: people who embed in client organizations, understand the specific requirements, and build the harness that makes the model actually work in that context.
The deployment gap is real. Getting AI to work in a hospital or a government agency or a bank isn’t the same as getting it to work in a demo. The Palantir model — take your best engineers and put them inside the customer’s operation — is one answer to that gap. The revenue numbers suggest it’s working.
The broader implication for the jobs narrative: if AI deployment at scale requires significant human expertise to execute, then the transition period creates demand for a specific kind of labor even as it automates other kinds. The people who can bridge the model capability and the real-world deployment context are not being displaced. They’re being hired aggressively.
For builders thinking about what this looks like at smaller scale — connecting AI capabilities to specific business contexts without a Palantir-sized engineering team — the AI agents for research and analysis use case is one concrete entry point where the deployment gap is narrower and the tools are more accessible.
The CapEx Backlog Is Diverging Upward
The final data point is structural. Morgan Stanley raised its CapEx forecast for the five major hyperscalers to $805 billion for 2026 and $1.1 trillion for 2027. But the more telling number is the backlog: the hyperscalers have roughly $1.3 trillion in reported and projected customer demand against approximately $400 billion in Q1 CapEx spend. The gap between what customers want to buy and what exists to sell them is getting wider, not narrower.
Larry Fink, BlackRock’s CEO, said at Milken: “There is not an AI bubble. There is the opposite. We’re short power. We’re short compute. We’re short chips.” He went further, arguing that AI compute will become a financialized commodity traded on futures markets like oil or wheat.
The Cerebras IPO made this concrete in a way that’s hard to dismiss. The company planned to raise $3.5 billion at a $26.6 billion valuation. Private investors submitted requests for $10 billion in allocations. The pre-sale turned into an auction — investors were asked to submit their desired allocation and maximum price, which is a break from standard IPO protocol. Excess demand in an IPO is normally resolved by trimming allocations. When demand is so far beyond supply that you have to run an auction, that’s a different kind of signal.
This matters for the jobs narrative because infrastructure investment at this scale creates employment directly (construction, manufacturing, operations) and indirectly (the software and services built on top of the infrastructure). David Sacks estimated that hyperscaler CapEx alone represents a 2.5% tailwind to GDP growth in 2026, rising to over 3% in 2027. That’s before counting the economic activity generated by what happens inside the infrastructure.
The Anthropic-Google deal is the most striking single example. Anthropic committed to $200 billion in Google Cloud spending over five years — a number that represents over 40% of Google’s entire $462 billion reported backlog. That backlog figure is what sent Google’s stock to an all-time high. A single AI company’s infrastructure commitment now accounts for a meaningful fraction of the total forward revenue visibility of one of the largest companies in the world.
What the Data Actually Says
None of this means the transition is painless. Ezra Klein’s point in his New York Times piece is worth taking seriously: a world where AI displaces 8 million workers might be harder to handle than one where it displaces 80 million, because mass displacement forces a systemic response while moderate displacement gets ignored. The communities that absorbed the worst of the China trade shock in the 2000s — roughly 2 million jobs lost — got very little help. That’s the real risk.
But “the transition will be uneven and some workers will need support” is not the same claim as “AI is destroying the labor market.” The macro data — software engineering jobs at a two-year high, new grad unemployment falling, 640,000 new AI-related jobs created, startup formation up 130% — doesn’t support the apocalypse framing.
The people closest to AI development have strong incentives to tell a maximalist story about its impact. IPOs are coming. Investors need to be excited. The post-COVID hiring binge needs to be unwound and AI is a convenient explanation. That doesn’t mean the technology isn’t significant — it clearly is, and the Claude Mythos benchmark results and the coding agent performance numbers are real. But capability at the frontier and labor market impact are different questions, and the people best positioned to answer the first question are not necessarily the most reliable sources on the second.
The economists are more skeptical of mass unemployment than the AI builders are. The macro data is more optimistic than the anecdotal data from San Francisco. Sam Altman tweeted on May 1st that “jobs doomerism is likely long-term wrong.” That’s a significant messaging shift from a company whose stated mission was, for years, to build artificial general intelligence — with all the displacement implications that implies.
The honest read of the data in mid-2026 is this: AI is creating significant structural change, some categories of work are getting automated, and the labor market is, so far, absorbing it better than the doom narrative predicted. That could change. The models keep improving. The deployment gap keeps closing. But if you’re making decisions — about what to build, what to learn, what to hire for — the data you have right now does not support the apocalypse.