The votes are in: AI will hurt elections and relationships

The Register / 4/14/2026


Key Points

  • A new Stanford AI research report warns that unsafe AI usage practices could worsen harms in high-stakes areas such as elections and personal relationships.
  • The report highlights widespread public anxiety about AI’s societal impacts, alongside evidence of ongoing unsafe or risky behavior.
  • It also notes shifting geopolitical dynamics, with China reportedly “catching up” to the USA in AI capabilities.
  • The article frames these findings as a pressing call to address security, governance, and responsible deployment to reduce real-world harm.


Latest report from Stanford's AI boffins finds unsafe usage practices, widespread anxiety about impacts, and China catching up to the USA

Tue 14 Apr 2026 // 00:05 UTC

Artificial intelligence has achieved mass adoption faster than the personal computer or the internet, reaching 53 percent of the population in just three years. The number of harmful AI incidents has increased correspondingly. And both experts and laypeople agree AI will do harm in two areas: elections and relationships.

According to the 2026 AI Index Report [PDF], from Stanford University's Institute for Human-Centered Artificial Intelligence (HAI), "Responsible AI is not keeping pace with AI capability, with safety benchmarks lagging and incidents rising sharply."

Documented AI incidents – defined as "harms or near harms realized in the real world by the deployment of artificial intelligence systems" by the AI Incident Database – reached 362 in 2025, up from 233 in 2024, the report says.

That coincides with an increase in AI adoption: 88 percent of organizations say they're using AI and about 80 percent of university students admit as much.

One possible explanation for that finding is that AI models have become quite good at programming, with scores on the SWE-bench test of success tackling real-world GitHub issues rising from 60 percent to close to 100 percent in the space of a year.

High scores on a particular benchmark don't tell the full story because all AI models tend to be deficient in different areas. On the AA-Omniscient Index, designed to assess whether models will admit when they're unsure about something instead of just guessing, hallucination rates across 26 models varied from 22 percent to 94 percent.
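The benchmark described above rewards models that admit uncertainty rather than guess. A minimal sketch of how such a metric might be computed, treating hallucination rate as the fraction of non-abstaining answers that are wrong (the labels and formula here are assumptions for illustration, not the AA-Omniscient Index's actual scoring scheme):

```python
# Hypothetical hallucination-rate metric: penalize wrong answers,
# but not honest abstentions. Labels are assumed, not the benchmark's schema.

def hallucination_rate(responses):
    """Fraction of non-abstaining answers that are incorrect.

    Each response is one of: "correct", "incorrect", "abstain".
    """
    answered = [r for r in responses if r != "abstain"]
    if not answered:
        return 0.0
    wrong = sum(1 for r in answered if r == "incorrect")
    return wrong / len(answered)

# A model that guesses on everything it doesn't know:
guesser = ["correct"] * 3 + ["incorrect"] * 7
# A model that abstains when unsure:
cautious = ["correct"] * 3 + ["abstain"] * 6 + ["incorrect"] * 1

print(hallucination_rate(guesser))   # 0.7
print(hallucination_rate(cautious))  # 0.25
```

Under a scheme like this, two models with identical knowledge can score very differently, which would help explain the 22-to-94 percent spread across the 26 models tested.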

When attorneys use AI models to make "over two dozen fake citations and misrepresentations of fact," and get called out for it by the US Sixth Circuit Court of Appeals, that's an example of what the Stanford HAI researchers mean when they say responsible AI hasn't kept pace with usage.

And despite all the talk about AI superintelligence, AI lags behind people when it comes to telling time – OpenAI's GPT-5.4 High managed to read analog clocks correctly just 50.6 percent of the time as of March 2026, compared to about 90 percent for "unspecialized humans," as described in the ClockBench benchmark [PDF].

Robots demonstrate even less competence, succeeding in only 12 percent of household tasks, based on the BEHAVIOR-1K simulation benchmark.

The HAI report, at 423 pages, represents the Stanford group’s summary of the current state of AI research and its impact on society. Written by human researchers with help from ChatGPT and Claude, not to mention financial support from Google, OpenAI, and others, the report's findings extend beyond the scarcity of "responsible AI" to touch on various aspects of the AI industry.

In terms of public opinion, the report finds "AI experts and the US public disagree on nearly everything about AI's future, except that it will hurt elections and personal relationships."

Sixty-four percent of the American public expect AI to reduce the number of jobs available to humans over the next two decades, while just five percent foresee it creating more jobs. Among experts, only 39 percent anticipate fewer jobs and 19 percent project more employment. Experts, however, believe that generative AI will contribute to 80 percent of US work hours by 2030, compared to the public's prediction of 10 percent.

Just 31 percent of US respondents said they trust their government to regulate AI responsibly, the lowest level of any country. With OpenAI backing an Illinois state bill that would limit the liability of AI companies in the event their models cause catastrophic harm, and the White House pursuing an "industry-friendly AI policy," it's not difficult to see how Americans might have doubts about their government's interest in protecting them.

The HAI report observes that Chinese AI models have closed the performance gap with US AI models. As of March 2026, the top US model, Claude Opus 4.6, scored 1,503 on the Arena benchmark, just 2.7 percent above ByteDance's Dola-Seed Preview at 1,464. That lead had narrowed as of April 9, 2026, with Claude Opus 4.6 Thinking at 1,548, closely followed by Z.ai's GLM-5.1 at 1,530.
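The 2.7 percent figure appears to be the relative gap between the two scores; a back-of-envelope check (the formula is an assumption about how the figure was derived, not stated in the report):

```python
# Relative gap between the two Arena scores quoted above.
claude_opus = 1503   # Claude Opus 4.6, March 2026
doubao_seed = 1464   # ByteDance's Dola-Seed Preview

gap_pct = (claude_opus - doubao_seed) / doubao_seed * 100
print(round(gap_pct, 1))  # 2.7
```

By the same arithmetic, the later April gap (1,548 vs 1,530) is closer to 1.2 percent, consistent with the report's "catching up" framing.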

The US continues to lead in AI investment, said to have reached $285.9 billion in 2025. That's 23 times more than the $12.4 billion invested in China, though the report notes it may have under-counted government funding. Even so, the US is losing technical talent. "The number of AI researchers and developers moving to the US has dropped 89 percent since 2017, with an 80 percent decline in the last year alone," the report finds. ®
