What Is the AI Backlash? Why AI Companies Now Poll Worse Than ICE
AI now ranks among the most negatively perceived technologies in the US. Here's what the data shows and what it means for builders and businesses.
Public Trust in AI Has Collapsed — Here’s the Data
Something unusual is happening in the world of technology. AI is everywhere: in business tools, consumer apps, hiring pipelines, and customer service queues. Billions of dollars are flowing into AI companies. And yet, public trust in artificial intelligence has cratered.
The AI backlash isn’t a fringe reaction. It’s showing up in polling data, brand reputation surveys, and consumer behavior studies across the US and Europe. In some measures, AI companies now rank below government agencies that have historically been among the most controversial institutions in the country — including US Immigration and Customs Enforcement (ICE).
That’s not a rhetorical flourish. It reflects a real and measurable shift in how ordinary people perceive AI, the companies building it, and what they think it means for their lives.
If you’re building AI products, deploying AI at work, or advising organizations on technology strategy, this matters. Public sentiment shapes regulation, adoption rates, and whether people actually use the tools you build.
Here’s what the data shows, why it happened, and what it means.
What the Numbers Actually Show
The Harris Poll Finding
The comparison that sparked widespread attention came from the Axios Harris Poll 100, an annual survey that ranks the reputations of the 100 most visible companies in America. In its 2024 edition, several prominent AI companies scored lower than ICE — an agency that has faced sustained public criticism over immigration enforcement practices, detention conditions, and civil liberties concerns.
For context: ICE routinely ranks near the bottom of any favorability survey of US government agencies. The fact that AI companies are now in that same territory says something significant about how fast and hard the public opinion shift has been.
Pew Research: Fear Outweighs Excitement
Pew Research Center has tracked public attitudes toward AI for several years. The trend is consistent: concern is rising, enthusiasm is falling.
By 2023, Pew found that 52% of Americans felt more concerned than excited about AI in daily life — up from 38% in 2022. Only 10% said they were more excited than concerned. The largest group (36%) said they felt equally excited and concerned.
Among the specific concerns people cited:
- Loss of jobs (most commonly mentioned)
- Privacy and surveillance
- AI making decisions about their lives (credit, hiring, healthcare)
- Misinformation and deepfakes
- Loss of human connection
Gallup: Workers Are Worried
Gallup’s workplace data shows a similar picture. In its 2024 work and wellbeing survey, nearly 3 in 5 workers said they were worried about AI’s impact on their job. That number was significantly higher among workers in white-collar roles that have traditionally felt insulated from automation — writers, analysts, customer service professionals, paralegals, and marketers.
This is a notable shift from earlier automation waves (robotics, manufacturing automation), where job anxiety was concentrated in manual labor. Now it has spread to knowledge workers, who are heavily represented both in survey samples and in public discourse.
Global Edelman Trust Barometer
Edelman’s annual trust research shows declining trust in technology companies overall, with AI specifically emerging as a flashpoint. Their data shows a trust gap: people trust AI in narrow, lower-stakes contexts (recommendation algorithms, spam filtering) but distrust it in high-stakes ones (medical diagnosis, hiring, law enforcement, news).
The Edelman findings are particularly notable because they show the divide isn’t just between “tech-savvy” and “non-tech-savvy” people. Educated, high-income respondents show nearly the same concern levels as lower-income respondents on issues like AI in hiring and healthcare.
How Did We Get Here?
The Hype Cycle Backfired
When ChatGPT launched in late 2022, public reaction was a mix of genuine awe and anxiety. But the companies and media coverage that followed leaned heavily into utopian framing: AI was going to cure cancer, eliminate poverty, write all your emails, and free humanity from drudgery.
That kind of hype creates a specific kind of backlash. When the cures don’t materialize and the emails still need editing, people don’t feel neutral — they feel deceived.
The gap between what was promised and what people actually experienced in their daily lives became a source of cynicism. And when AI did intrude in unexpected ways — a resume screened out, a customer service bot that couldn’t help, an AI-generated news story that got facts wrong — it confirmed the skepticism.
High-Profile Failures and Controversies
A string of visible AI failures kept sentiment negative throughout 2023 and 2024:
- Google’s Gemini image generation produced historically inaccurate images that went viral and drew widespread ridicule
- AI-generated misinformation circulated before elections in multiple countries
- AI voice clones were used in robocall fraud targeting voters
- Customer service AI at major companies repeatedly frustrated users and became fodder for viral complaints on social media
- AI hiring tools were found to exhibit discriminatory bias, with Amazon's earlier scrapped recruiting system (which penalized resumes from women) becoming a recurring reference point
- Copyright lawsuits against OpenAI, Stability AI, and others raised public questions about whether AI companies were stealing from creators
Each of these incidents got media coverage. Each one reinforced a narrative: that AI was being deployed recklessly, that companies were prioritizing speed over safety, and that ordinary people were bearing the costs.
The Jobs Question Got Real
For most of 2022 and into 2023, job displacement was somewhat theoretical. Companies were experimenting with AI. Layoffs were happening, but they were attributed to broader economic conditions.
By 2024, AI-related layoffs and job restructurings became more direct. Content agencies cut writing staff and replaced them with AI workflows. Customer service departments announced headcount reductions tied explicitly to AI deployment. Entry-level roles in several fields — paralegal work, data entry, basic coding tasks — started disappearing or paying significantly less.
This is different from past automation waves in an important way: it’s hitting people who are educated, vocal, and active on social media. Their frustration about job displacement is visible and amplified in ways that factory worker displacement often wasn’t.
Trust in AI Labs Specifically Collapsed
The sentiment problem isn’t just about AI in the abstract — it’s about specific companies.
OpenAI’s internal governance crisis in late 2023 (when the board briefly fired Sam Altman) gave the public a window into how chaotic AI development actually is. The story that emerged — of a nonprofit mission being subordinated to commercial interests, of safety researchers being sidelined — matched exactly what critics had been saying.
Subsequent revelations about AI labs training on scraped data without consent, lobbying against regulation, and making safety commitments they didn’t keep added to the picture. The people building the most powerful AI systems started looking less like cautious scientists and more like fast-moving companies trying to outrun oversight.
The Gap Between Enterprise Adoption and Public Perception
Here’s the paradox at the center of the AI backlash: enterprise adoption of AI is accelerating at the same time that public trust is declining.
McKinsey’s annual global survey on AI shows adoption rates among businesses roughly doubling between 2023 and 2024. Companies are deploying AI in operations, customer experience, HR, and marketing at scale. Productivity gains are real and documented.
But most of that deployment is invisible to consumers. They don’t see the internal workflow automation that saved an analyst two hours a week. They see the chatbot that couldn’t resolve their billing dispute. They see the AI-generated marketing email that felt hollow. They see headlines about layoffs.
This creates a trust asymmetry. Businesses experience AI’s upside. Consumers disproportionately experience its downside — or at least its rough edges.
What This Means for Builders
If you’re building AI-powered products or deploying AI in customer-facing contexts, you’re operating in an environment where your users start with skepticism.
That has practical implications:
Transparency matters more than it used to. Telling users when they’re interacting with AI, and giving them options to escalate to a human, reduces friction and builds trust incrementally. Hiding AI involvement tends to backfire when users notice.
Over-promising accelerates distrust. If your AI onboarding claims it will “handle everything,” users will remember that promise when it fails. Undersell and overdeliver.
High-stakes decisions need human oversight. Deploying AI in hiring, healthcare, financial decisions, or legal contexts without meaningful human review isn’t just an ethical problem — it’s a brand problem. The lawsuits and headlines are predictable.
Speed of deployment ≠ quality of experience. Many AI-related trust failures come from companies rushing tools to production before they’re reliable. The backlash from a bad AI experience can be worse than no AI experience at all.
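The transparency and escalation points above can be made concrete. Here is a minimal sketch of a support-bot reply wrapper that discloses AI involvement and routes low-confidence answers to a human; all names (`BotReply`, `build_reply`, the confidence threshold) are hypothetical illustrations, not any particular product's API:

```python
from dataclasses import dataclass

@dataclass
class BotReply:
    text: str
    is_ai: bool
    escalation_hint: str

def build_reply(answer: str, confidence: float, threshold: float = 0.7) -> BotReply:
    """Wrap a model answer with an AI disclosure and a human escalation path.

    `confidence` and `threshold` are illustrative; a real system would derive
    confidence from model signals, retrieval scores, or evaluation data.
    """
    if confidence < threshold:
        # Low confidence: hand off to a person rather than guess.
        return BotReply(
            text="I'm not confident I can answer that correctly.",
            is_ai=True,
            escalation_hint="Connecting you with a human agent.",
        )
    return BotReply(
        # Disclose AI involvement up front instead of hiding it.
        text=f"{answer}\n\n(You're chatting with an AI assistant.)",
        is_ai=True,
        escalation_hint="Type 'agent' at any time to reach a person.",
    )
```

The key design choice is that disclosure and escalation live in the reply type itself, so no code path can return an answer without them.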
Who’s Driving the Backlash?
The AI backlash isn’t monolithic. Different groups have different concerns, and conflating them leads to bad strategy.
Labor Activists and Unions
The SAG-AFTRA strikes of 2023 were partly about AI — specifically about studios using AI to replicate actors’ likenesses and voices. The Writers Guild of America also won protections around AI in their negotiations. These were the first major labor actions explicitly targeting AI, and they won public sympathy.
Organized labor is increasingly building AI into its bargaining agenda. This is a structural force, not a passing mood.
Artists and Creators
Visual artists, writers, musicians, and photographers have organized against AI companies they see as profiting from their work without compensation. Groups like the Human Artistry Campaign have lobbied for legislation and filed lawsuits. The emotional resonance of this argument — that AI is taking something from creators — is high.
Privacy Advocates
AI systems require enormous amounts of data, often scraped or collected without explicit consent. Privacy advocates have focused on data sourcing, surveillance applications, and the use of AI in law enforcement. This group is technically sophisticated and well-organized, with real influence on regulatory outcomes in the EU and, increasingly, the US.
General Consumers
The general public’s concerns are more diffuse but equally real. They don’t trust AI to be accurate (hallucinations), fair (bias), or honest (companies not disclosing AI use). They’re worried about deepfakes affecting elections and public discourse. And many simply don’t want more automation in interactions where they want human contact.
The Regulation Response
Public sentiment drives regulation, and regulation is coming. The EU AI Act, signed into law in 2024, is the most comprehensive AI regulation in the world. It bans certain uses outright (social scoring, real-time biometric surveillance in public spaces) and imposes strict requirements on high-risk AI applications.
In the US, progress has been slower, but state-level action is accelerating. California, Colorado, Illinois, and Texas have all passed or proposed legislation targeting AI bias in employment, disclosure requirements for AI-generated content, and restrictions on AI in healthcare.
The regulatory trajectory matters to anyone building AI products. What’s permissible today in terms of data sourcing, automated decision-making, and disclosure may not be permissible in 18 months.
Regulatory Risk Is Now a Product Risk
For enterprise teams deploying AI, regulatory exposure is a concrete concern. Using AI in hiring? Illinois’s Artificial Intelligence Video Interview Act already requires disclosure and candidate consent. Building AI into credit or insurance decisions? Fair lending laws are increasingly being interpreted to cover algorithmic decision-making.
Keeping up with the regulatory landscape isn’t optional anymore — it’s a product and legal requirement.
What Responsible AI Adoption Actually Looks Like
The backlash doesn’t mean AI is bad or that businesses should avoid it. It means the bar for responsible deployment is higher than many companies initially assumed.
A few practices that differentiate companies seeing genuine adoption from those creating backlash:
1. Be honest about what AI can and can’t do. Don’t call it “AI-powered” if it’s just a rules-based system. Don’t claim AI can make nuanced human judgments if it can’t.
2. Design for graceful failure. Every AI system makes mistakes. The question is whether users have a clear path when things go wrong. A human escalation option is basic product hygiene, not a nice-to-have.
3. Build feedback loops. AI improves when users can flag errors. Products that make it easy to report problems both improve the product and signal to users that the company is paying attention.
4. Separate automation from replacement. There’s a meaningful difference between “AI handles the routine stuff so humans can focus on higher-value work” and “AI replaces humans to cut costs.” The first builds trust; the second destroys it. Even if your internal motivation is cost reduction, what you communicate externally about AI’s role matters.
5. Engage affected stakeholders before deploying. Rolling out AI in HR without talking to employees, or deploying AI in customer service without understanding what customers actually want, tends to create avoidable problems.
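The feedback-loop practice above is mostly plumbing. As one possible shape, here is a sketch of an append-only log for user verdicts on AI outputs; the function name, verdict vocabulary, and JSONL format are assumptions for illustration:

```python
import json
import time
from pathlib import Path

VALID_VERDICTS = {"correct", "incorrect", "harmful"}

def record_feedback(output_id: str, verdict: str, note: str = "",
                    log_path: Path = Path("ai_feedback.jsonl")) -> dict:
    """Append a user's verdict on an AI output to a JSONL log.

    An append-only log is deliberately simple: it preserves every flag
    for later review and model evaluation, and it never overwrites data.
    """
    if verdict not in VALID_VERDICTS:
        raise ValueError(f"unknown verdict: {verdict!r}")
    entry = {
        "output_id": output_id,
        "verdict": verdict,
        "note": note,
        "ts": time.time(),
    }
    with log_path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

Even a log this simple signals to users that flags go somewhere, which is half of what a feedback loop is for.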
Where MindStudio Fits in This Landscape
One reason AI deployments go wrong is the implementation layer. Companies rush to build AI tools using fragile, cobbled-together setups — multiple API accounts, custom code, no monitoring, no reliability guarantees. When something breaks or behaves unexpectedly, there’s no clean way to fix it.
MindStudio is built around the idea that AI agents should be easy to build, easy to audit, and easy to adjust. Its visual no-code builder lets teams create AI-powered workflows in hours rather than weeks — and because the logic is visual, non-technical stakeholders can see and review what the AI is actually doing.
That transparency matters in the current environment. When a manager can look at an AI workflow and understand the logic, they can catch problems before they become customer-facing incidents. When a compliance team needs to audit how AI is being used in a business process, a visual workflow is far easier to review than a dense block of Python code.
MindStudio also connects to 200+ AI models out of the box, which means teams can switch to a different model if one starts producing problematic outputs — without rebuilding their entire stack. That flexibility is practically valuable in a moment when AI models themselves are changing rapidly and controversially.
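In generic form, the model-swapping idea above is a thin provider abstraction: callers depend on one routing function, so changing models is a config change rather than a rebuild. A minimal sketch, with stub backends standing in for real provider SDK calls (all names hypothetical, not MindStudio's API):

```python
from typing import Callable, Dict

# Registry of model backends. Each backend is any callable mapping a
# prompt string to a completion string; the lambdas below are stubs
# standing in for real provider SDK calls.
MODEL_REGISTRY: Dict[str, Callable[[str], str]] = {
    "model-a": lambda prompt: f"[model-a] {prompt}",
    "model-b": lambda prompt: f"[model-b] {prompt}",
}

def complete(prompt: str, model: str = "model-a") -> str:
    """Route a prompt to whichever backend is configured.

    Because callers depend only on this signature, swapping out a model
    that starts producing problematic outputs touches one config value,
    not every call site.
    """
    try:
        backend = MODEL_REGISTRY[model]
    except KeyError:
        raise ValueError(f"unknown model: {model!r}") from None
    return backend(prompt)
```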
If you’re deploying AI in a way that needs to hold up to scrutiny — from regulators, from users, from your own leadership — having clean, auditable infrastructure matters. You can try MindStudio free at mindstudio.ai.
Frequently Asked Questions
What is the AI backlash?
The AI backlash refers to a measurable decline in public trust and favorable perception of artificial intelligence technologies and the companies building them. It includes concerns about job displacement, privacy, misinformation, algorithmic bias, and the perceived recklessness of AI development. It’s reflected in polling data, brand reputation surveys, labor actions, and growing regulatory pressure across the US and Europe.
Why do people distrust AI companies?
Several factors have contributed to distrust in AI companies specifically. High-profile product failures and inaccurate outputs have undermined confidence. Revelations about training data being scraped without creator consent raised ethical concerns. Internal governance crises at major AI labs (particularly OpenAI) showed instability. And many companies made sweeping promises about AI’s capabilities that haven’t been delivered in everyday use.
Is the AI backlash affecting business adoption?
Not directly — enterprise AI adoption continues to grow rapidly. But the backlash is shaping how businesses deploy AI, especially in customer-facing contexts. Companies are increasingly focused on transparency, disclosure, and human oversight as they roll out AI tools, partly in response to user pushback and partly to get ahead of regulatory requirements.
How does public sentiment toward AI affect AI regulation?
Public sentiment is a key input into the regulatory process. When the public expresses concern about a technology — through surveys, social media, media coverage, and political organizing — legislators and regulators respond. The EU AI Act and a growing body of US state-level AI legislation are direct responses to public concern about how AI is being used in high-stakes contexts like employment, healthcare, and law enforcement.
What is the Axios Harris Poll 100 finding about AI?
The Axios Harris Poll 100 is an annual survey measuring the reputations of the 100 most visible companies in the US. In recent editions, several prominent AI companies scored poorly enough to rank below institutions like ICE, which itself consistently ranks low in public perception. This was widely cited as a sign that AI companies — despite their cultural and financial prominence — have a significant trust deficit with the general public.
What can AI builders do to address the backlash?
The most effective responses involve transparency (telling users when they’re interacting with AI), honesty about limitations (not over-promising), designing for graceful failure (clear escalation paths when AI doesn’t work), and meaningful human oversight in high-stakes contexts. Companies that treat responsible deployment as a product requirement — not an afterthought — tend to build more durable user trust.
Key Takeaways
- Public trust in AI has declined sharply since 2022, with major surveys showing concern outpacing enthusiasm by wide margins.
- Some AI companies now rank below historically unpopular institutions in brand reputation surveys — a sign of how fast sentiment has shifted.
- The backlash is driven by a combination of high-profile failures, job displacement anxiety, data privacy concerns, and a gap between AI hype and real-world performance.
- Enterprise adoption of AI continues to grow, but the gap between business upside and consumer experience is a structural trust problem that won’t self-correct.
- Responsible deployment — transparent, auditable, human-reviewed where stakes are high — is no longer optional. It’s the bar for building anything that lasts.
- Tools that make AI workflows visible and adjustable, like MindStudio, reduce the gap between what AI does and what teams actually intended to build.
The AI backlash isn’t a reason to stop building. It’s a reason to build more carefully — with real users, real feedback loops, and honest communication about what AI can and can’t do. That’s how you end up on the right side of where public sentiment is heading.