Make crappy moves around AI and face voter backlash, govts warned

The Register / 4/16/2026

💬 Opinion · Ideas & Deep Analysis · Industry & Market Moves

Key Points

  • The article warns governments that poor or overly self-interested AI policy moves could trigger public and voter backlash over perceived “whose side you are on” issues.
  • It frames AI governance decisions as politically consequential, arguing that taxpayers expect AI to serve broader public goals rather than narrow interests.
  • The piece suggests that authorities should align AI regulation and procurement with outcomes that are demonstrably beneficial to society to maintain legitimacy.
  • It highlights a risk management theme: the political cost of missteps in AI strategy can outweigh technical or administrative rationales.

When the taxpayers are wondering whose side you are on...

Thu 16 Apr 2026 // 14:17 UTC

Britain's government faces a public backlash against AI unless it can show ordinary people that they stand to benefit from its push to inject the technology into every area of the UK in the name of growth.

London, England, UK - Activists hold up a banner during the February 2026 "March Against The Machines" protest – Pic credit: Loredana Sangiuliano / Shutterstock

However, the same lessons will likely apply to other governments including the US, where increasing opposition to AI is already apparent.

The Institute for Public Policy Research (IPPR), an independent think tank based in London, warns that the great unwashed are increasingly worried about AI, now perceived as one of the biggest global risks to humanity, alongside climate change and the threat of war.

This is hardly surprising, given forecasts that glibly talk of millions of workers losing their jobs to AI-based automation of their roles. Forrester recently forecast that 6.1 percent of jobs in the US could be wiped out by 2030, equating to 10.4 million people being laid off.
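As a back-of-envelope sanity check on the Forrester figures quoted above (the implied employment base is our inference, not a number from the article), 10.4 million people at 6.1 percent of jobs points to a base of roughly 170 million US jobs, which is in the right ballpark for total US employment:

```python
# Back-of-envelope check on the Forrester forecast cited above.
jobs_lost = 10.4e6      # people forecast to be laid off by 2030
share = 0.061           # 6.1 percent of US jobs
implied_base = jobs_lost / share
print(f"implied US job base: {implied_base / 1e6:.0f} million")
```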

AI biz Anthropic recently boasted that its latest model, Mythos, is so effective at finding security flaws in systems that it would wreak havoc on the internet if it were made publicly available. These claims are disputed, but they feed the perception of AI as a ticking bomb that will upend many people's lives when it goes off.

At the same time, the UK government has gone all-in on AI, announcing its AI Opportunities Action Plan last year, which will see datacenters peppered across the land, especially in dedicated "AI Growth Zones," declaring its intention to put AI to work across public services, and unveiling Barnsley as the country's first "Tech Town," shoehorning AI into every aspect of local life.

Governments must stand ready to both protect people from the risks of AI and deliberately steer any transformation towards delivering public value, the IPPR says, but there has been little sign of this so far.

Efforts to show the public what AI is for do not go far enough given the current pace of change; efforts to rein in potential abuses by the big technology companies have been modest; and attempts to redistribute the benefits (by giving the public a stake in AI's economic upside, for example) are non-existent.

Or so the IPPR says in its newly published report, "Acceleration is Not a Strategy," which it presents as "a framework for directing AI towards public value before it's too late."

Corporate cull

A paper from the University of Pennsylvania and Boston University shows that AI-driven layoffs can lead to a no-win situation. If human workers are displaced by automation faster than they can find other work, it will eventually undermine the economy by eroding the purchasing power all companies depend on.

However, in a competitive market, firms become trapped in an automation arms race with their rivals, displacing workers beyond the optimal level. The end result harms both workers and company owners.
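The feedback loop can be sketched with a toy simulation. This is our own illustration, not the model from the University of Pennsylvania and Boston University paper, and it makes a crude assumption that aggregate demand is proportional to total wages paid:

```python
# Toy illustration (not the paper's model): firms replace workers with
# automation, shrinking the wage bill that funds consumer demand, which
# in turn shrinks the revenue available to every firm.

def simulate(automation_rate, periods=10, workers=100.0, wage=1.0):
    """Each period, a fraction of remaining workers is automated away.
    Aggregate demand is assumed proportional to total wages paid."""
    revenues = []
    for _ in range(periods):
        workers *= (1.0 - automation_rate)
        demand = workers * wage          # purchasing power of the employed
        revenues.append(demand)
    return revenues

slow = simulate(0.02)   # cautious automation
fast = simulate(0.30)   # arms-race automation
# The fast-automating economy ends up with far less demand left.
print(round(slow[-1], 1), round(fast[-1], 1))
```

Under these assumptions, ten periods of 2 percent automation leave most purchasing power intact, while 30 percent per period collapses demand for everyone, which is the no-win dynamic the paper describes.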

The signs of a growing backlash are there, from opposition to new AI datacenters and disputes over copyrighted material being used to train models, to increasingly heated debates about children's safety, the report states.

It claims there is now a growing coalition of people with strong anti-AI sentiment, and a real risk that justified concerns will harden into blanket opposition to anything AI-related before long.

Governments, especially in the UK and EU, are caught between two modes: pushing on the AI accelerator, or stressing AI safety and governance. Neither approach concerns itself with addressing AI's societal impact.

Instead, governments need policies that steer AI development towards specific public outcomes so that people can clearly see what AI is for, the report says, and market forces alone will not do this. The UK's Department for Science, Innovation and Technology (DSIT) found that AI adoption is concentrated on the low-hanging fruit rather than on transformative, high-impact challenges, according to the IPPR.

It also recommends that priority sectors be assisted in adopting AI systems properly. What often hinders adoption is not technology readiness but infrastructure readiness; if this is not addressed, even well-designed AI tools will fail to deliver public benefit at scale. As an example, the report cites the lack of underlying community health infrastructure as one barrier to AI in health services.

Who controls the bots controls the narrative

Perhaps more contentiously, the IPPR states there is a need to shift the balance of power in the AI economy, as this currently rests with a handful of massive tech corporations, and the big three cloud operators in particular. These are increasingly shaping the application layer and determining which AI products reach consumers.

Without intervention, these megacorps will continue to be the ones that shape AI adoption, risking a market with fewer choices, higher prices, less privacy and ever-larger models with more severe environmental impacts, the report claims.

Sadly, the UK's regulator, the Competition and Markets Authority (CMA), doesn't seem in any rush to do much about it.

The IPPR says if governments want AI to deliver public benefit, politicians must act now to ensure the benefits of AI are broadly distributed.

As a first step, governments should begin to rebalance tax and subsidy schemes so businesses are rewarded for raising worker productivity rather than automating away jobs. Currently, tax breaks are a fiscal incentive to automate rather than augment workers, it claims.

The report argues 2026 should be the year when governments adjust their policy approach, in recognition that simply growing the AI sector and hoping for spillover benefits is not a winning strategy.

Governments need to become more interventionist, steering AI towards delivering clear public value and confronting the extreme concentration of power that currently exists in the AI economy, to ensure any benefits from AI are broadly felt. ®
