
AI Is Already Doing 25% of Tasks in Half of All Jobs: 6 Data Points That Reframe the Displacement Debate

Anthropic’s Economic Index found that 49% of jobs have already had at least a quarter of their tasks performed with Claude. Here’s what the full data picture actually shows.

MindStudio Team

Half of All Jobs Have Already Changed. Here’s the Data.

Anthropic’s Economic Index found that 49% of jobs have already had at least 25% of their tasks performed using Claude. Not “could be affected.” Not “might see disruption.” Already performed. Past tense. That number landed quietly, without the fanfare you’d expect from a finding that says half the workforce has materially changed how it operates.

You probably didn’t see a headline about it. That’s the problem.

The displacement debate has been dominated by two camps: the doomers who say AI will hollow out the economy, and the dismissers who say it’s all hype and the jobs data is fine. Both camps are selectively reading the evidence. The actual picture is more complicated, more interesting, and more actionable than either side admits.

Here are six data points that, taken together, reframe what’s actually happening.


The Anthropic Number Is the One That Should Unsettle You

Start with the Economic Index finding again, because it deserves more than a passing mention.

49% of jobs. 25% of tasks. Already. Using one model.

This isn’t a projection from a research paper modeling hypothetical LLM capabilities. This is observed usage data from people who chose to bring their work to Claude and did so at scale. The methodology matters here: Anthropic looked at what tasks people were actually completing with the model, not what tasks the model could theoretically do.


The distinction is significant. A lot of AI capability research asks “what could this model do?” Anthropic’s index asks “what are people actually doing with it?” The answer is that nearly half the workforce has already integrated AI into at least a quarter of their work, and they did it without waiting for their company to mandate it, without a formal rollout, and without showing up in most organizations’ official AI adoption metrics.
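
To make the shape of that measurement concrete, here’s a rough sketch in Python — with entirely invented occupations, task lists, and usage logs, not Anthropic’s actual pipeline or data — of how a “share of jobs with at least 25% of tasks observed in usage” statistic gets assembled. The point is only that the metric counts what people actually brought to the model, task by task, rather than what a rater thinks the model could do.

```python
# Hypothetical illustration only: toy occupations, toy task lists, toy usage logs.
# This is not Anthropic's methodology, just the arithmetic behind a
# "share of jobs with >= 25% of tasks observed in usage" statistic.
from collections import defaultdict

occupation_tasks = {
    "paralegal": {
        "summarize filings", "draft letters", "schedule depositions", "research precedent",
    },
    "marketing manager": {
        "write copy", "plan campaigns", "analyze metrics", "brief designers",
    },
    "field technician": {
        "inspect equipment", "replace parts", "file reports", "order supplies",
        "drive routes", "calibrate sensors", "train apprentices", "log hours",
    },
}

# Tasks that actually showed up in model conversations (invented examples).
observed_usage = [
    ("paralegal", "summarize filings"),
    ("paralegal", "draft letters"),
    ("marketing manager", "write copy"),
    ("field technician", "file reports"),
]

observed_by_occupation = defaultdict(set)
for occupation, task in observed_usage:
    observed_by_occupation[occupation].add(task)

THRESHOLD = 0.25  # "at least 25% of tasks observed in real usage"

affected = 0
for occupation, tasks in occupation_tasks.items():
    share = len(observed_by_occupation[occupation] & tasks) / len(tasks)
    if share >= THRESHOLD:
        affected += 1
    print(f"{occupation}: {share:.0%} of tasks observed")

print(f"occupations at or above the threshold: {affected}/{len(occupation_tasks)}")
```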

The implication isn’t that 49% of jobs are about to disappear. The implication is that the economic value of a large chunk of tasks inside those jobs has already shifted. The work is still getting done. The question is who — or what — is doing it, and what that means for how roles get priced and bundled when the next budget cycle forces the question. For context on how capable the underlying models have become, what Claude is and how it’s being used for AI agents gives a useful baseline for understanding why the usage numbers are this large.


OpenAI and UPenn Put a Number on the Exposure

Before Anthropic’s index, the most-cited estimate came from OpenAI and University of Pennsylvania researchers: roughly 80% of US workers could have at least 10% of their tasks affected by LLMs, and about one in five workers could see 50% or more of their tasks affected.

That’s a wide range, and the 80% figure gets quoted a lot. But the more interesting number is the one in five at 50%-plus exposure. That’s not marginal disruption. That’s a structural change to what the job is.

The OpenAI/UPenn research was modeling exposure based on task composition — essentially asking which occupational tasks overlap with what LLMs can do. Anthropic’s index is measuring actual usage. The fact that both approaches converge on “this is already affecting a large fraction of the workforce” is the signal. When your theoretical exposure model and your observed usage data point in the same direction, you’re probably not looking at noise.


Microsoft Looked at 200,000 Conversations and Found Something Boring

Microsoft researchers analyzed 200,000 Bing Copilot conversations to understand what work people actually bring to AI. The answer was not exotic. The most common tasks were gathering information and writing.

Not code generation. Not complex analysis. Not legal research or financial modeling. Gathering information and writing.

This is the part of the displacement debate that gets underweighted. People imagine AI disruption as a dramatic event — the model that can do the thing the expert does. But the Microsoft data suggests the actual disruption is happening at the most mundane layer of knowledge work: the email rewrite, the summary, the research synthesis, the status update. These tasks are not glamorous. They are also not rare. They are what a significant fraction of office hours actually consist of.

The implication of the Microsoft data, read alongside the Anthropic index, is that AI isn’t primarily attacking the high-skill peaks of knowledge work. It’s eroding the base — the routine information-handling and writing tasks that quietly make up the majority of most people’s weeks. That’s a different kind of disruption than the “AI replaces the expert” narrative, and it’s harder to see coming because the tasks being absorbed were never the ones people built their professional identity around.


Software Engineering Jobs Are Up 18% Since May 2025

Here’s where the data gets genuinely counterintuitive.

Citadel Securities published data showing that software engineering job postings — the occupation most directly exposed to AI coding tools — are up 18% from an inflection point in May 2025. The Federal Reserve’s data puts software engineering employment at its highest point since November 2023. This is not a rounding error or a seasonal blip. The most AI-exposed occupation in the economy is seeing accelerating demand.

Sequoia partner Konstantine Buhler flagged this as a “narrative violation.” It is. If you believe the simple displacement story — AI gets good at X, demand for X workers falls — software engineering should be the canary. It’s the opposite.

The explanation that holds up is Jevons paradox applied to software: when the cost of producing code falls dramatically, demand for software expands by more than enough to offset the labor saved per unit. You don’t use fewer builders when bricks get cheaper. You build things that were previously too expensive to justify. University of Chicago economist Alex Imas made this argument in his essay “What Will Be Scarce,” which Ezra Klein later brought to a mainstream audience in the New York Times. The core claim: AI doesn’t reduce demand for the underlying capability. It reduces the cost of the capability, which expands the market for it.
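
As a back-of-the-envelope illustration — with invented numbers, not anything from Citadel, the Fed, or the cited essays — here’s what that elasticity argument looks like written down. Assume a constant-elasticity demand curve for software, and assume AI tooling cuts both the price of a unit of software and the labor each unit needs:

```python
# Toy Jevons-style calculation with invented numbers. Assumes a
# constant-elasticity demand curve (units demanded ~ price ** -elasticity)
# and assumes labor needed per unit falls in proportion to the price drop.

def demand_for_engineers(cost_per_unit: float, elasticity: float,
                         baseline_cost: float = 1.0,
                         baseline_units: float = 100.0) -> tuple[float, float]:
    """Return (units of software demanded, total labor demanded) as index values."""
    relative_cost = cost_per_unit / baseline_cost
    units = baseline_units * relative_cost ** (-elasticity)
    labor = units * relative_cost  # each unit now needs proportionally less labor
    return units, labor

# Hypothetical: AI tools cut the effective cost of a unit of software in half.
for elasticity in (0.5, 1.0, 2.0):
    units, labor = demand_for_engineers(cost_per_unit=0.5, elasticity=elasticity)
    print(f"elasticity {elasticity}: software built = {units:.0f}, "
          f"labor demanded = {labor:.0f} (baseline = 100)")
```

When elasticity is below 1, cheaper production really does mean less total labor; at exactly 1 it’s a wash; above 1, demand for the people doing the work grows even though each unit of output needs fewer of them. The software-hiring data is what you’d expect if software sits in that third row, while routine reporting — capped by how many reports anyone actually wants — sits in the first.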

This is also why the Wall Street Journal’s finding — that AI created 640,000 jobs between 2023 and 2025 in the US, according to LinkedIn’s analysis of job posting data — doesn’t contradict the Anthropic index. Both can be true simultaneously. AI is absorbing tasks inside existing jobs while creating new roles around the infrastructure, coordination, and judgment work that AI can’t do. For builders thinking about where to focus their agent and workflow development, understanding how Claude and GPT-5 compare on agentic tasks matters precisely because different models have different strengths in the task categories that are expanding versus contracting. The capability gap between model generations is also relevant here — how large the jump is between Claude Mythos and Opus 4.6 gives a sense of how quickly the ceiling on automatable tasks is rising.


Startup Incorporations Are Up 130% Year-Over-Year

Stripe Atlas just hit 100,000 all-time incorporations. Patrick Collison announced in May that Q1 2026 was up 130% year-over-year.

That number is doing something important in this debate: it suggests that the people being displaced from traditional employment aren’t disappearing into unemployment. A meaningful fraction of them are incorporating. The tools that make it possible to build a product with a small team — or a team of one — have gotten dramatically better, and the evidence is showing up in formation data.

The unemployment rate in March 2026 was 4.3%. In March 2020, it was 4.4%. The macro data is not matching the doom narrative. That doesn’t mean the doom narrative is entirely wrong — it means the disruption is happening in a way that aggregate statistics are bad at capturing. Specific roles, specific task bundles, specific communities are absorbing real pain while the headline numbers stay stable. Ezra Klein made this point directly: a world where AI displaces 8 million workers might be harder to handle than a world where it displaces 80 million, because mass unemployment forces a policy response, while targeted displacement gets ignored the way the China trade shock was ignored.

The Stripe data is interesting because it suggests the floor on displacement may be higher than expected — not because the jobs aren’t being affected, but because the barrier to creating new economic activity has dropped. If you can build a working product with a fraction of the resources it used to require, the calculus on starting something versus finding a new job changes. MindStudio makes this more concrete: it’s an enterprise AI platform with 200+ models and 1,000+ integrations, with a visual builder for orchestrating agents and workflows — meaning someone who understands a business problem deeply can build and deploy a working solution without needing to hire a development team. That’s a real change in who can be a founder, and it’s part of why the incorporation data looks the way it does.


The Travel Agent Pattern Is Already Repeating

The travel agent analogy is worth taking seriously, because it’s not an analogy about a profession that got destroyed. It’s an analogy about a profession that got restructured slowly, then suddenly, in a way that was invisible until it wasn’t.

Expedia didn’t erase travel agents overnight. The routine booking layer eroded first. The economics of the simple transaction changed. But the employment data didn’t show a dramatic drop immediately — the visible break came later, during downturns, when companies were forced to ask whether the role was still bundled the right way. The agents who survived moved toward complex itineraries, corporate travel, emergency handling, and judgment-intensive work that the booking interface couldn’t do. The ones who didn’t move were fine until they weren’t.

This is the pattern that the Anthropic index data suggests is already underway in knowledge work. The 49% figure isn’t a warning about what’s coming. It’s a description of what’s already happened. The tasks being absorbed by Claude aren’t the hard ones — they’re the information-gathering and writing tasks that Microsoft’s data identified as the most common AI use cases. The routine layer is already eroding. The question is whether the roles built around that routine layer get restructured before the next downturn forces the question.

The six-step response to this — stop performing theater, don’t reinvest recovered time into commodity work, build a private track record of judgment calls, use that record to refuse commodity work, make durable work partially legible, and move roles if there’s no durable work path — is a reasonable individual response. But it requires seeing the data clearly first, which is why the statistics matter.

For builders and engineers thinking about where to build, the implication is that the most durable work involves holding ambiguous questions, not answering specified ones. The work that compounds to the individual rather than to the organization. Claude’s benchmark results and what they reveal about task coverage are relevant here not as a threat but as a map — understanding what the model is genuinely good at tells you something about which task categories are moving toward commodity and which ones still require a person in the loop.


What the Six Data Points Actually Add Up To

Taken individually, each of these numbers can be explained away. The Anthropic index is just one model. The OpenAI/UPenn research is theoretical. The Microsoft data is about Bing Copilot users, not the whole workforce. Software engineering jobs going up could be a lag effect. Startup incorporations could reflect other factors. The unemployment rate is a lagging indicator.

Taken together, they tell a coherent story that’s more nuanced than either the doom narrative or the dismissal narrative.


AI is not replacing jobs wholesale. It is absorbing tasks — specifically the information-handling and writing tasks that make up the base layer of most knowledge work. This is happening at scale (49% of jobs, 25% of tasks, already). The macro economy is absorbing it without a headline unemployment event, partly because new roles are being created, partly because some displaced workers are becoming entrepreneurs, and partly because the disruption is concentrated enough that it doesn’t show up in aggregate statistics even when it’s devastating for specific communities.

The travel agent pattern — slow erosion, then sudden restructuring during a downturn — is the most honest model for what’s coming. The data doesn’t tell you when the downturn happens. It tells you that the routine layer is already exposed, and the roles built primarily around that layer are sitting on thinner ice than the calendar and the performance review suggest.

For builders working on AI tools and agents, the practical question is which task categories are elastic (demand expands as cost falls) versus inelastic (demand is capped regardless of cost). Software is elastic. Routine reporting is not. The 18% increase in software engineering job postings is evidence that building things — real things, with real backends and real deployment — is in the elastic category. Remy is built around this insight: you write a spec in annotated markdown, and it compiles into a complete TypeScript stack with backend, database, auth, and deployment. The spec is the source of truth; the code is derived output. That’s a bet that the demand for working software is elastic enough that lowering the cost of production expands the market rather than shrinking the workforce.

The data points don’t resolve into a clean verdict. They resolve into a more honest question: which parts of the work you’re doing right now would still need you if the routine layer disappeared tomorrow? The Anthropic index says that for half the workforce, that question is no longer hypothetical.

Presented by MindStudio
