Why Do LLMs Struggle in Strategic Play? Broken Links Between Observations, Beliefs, and Actions
arXiv cs.AI / 5/4/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper examines why LLMs struggle at strategic decision-making under incomplete information, tracing the failures to two gaps in the internal pipeline from observations to beliefs to actions, uncovered through experiments.
- It finds an “observation–belief gap”: LLMs form internal beliefs about hidden game states that are more accurate than the beliefs they verbalize, but those internal beliefs are brittle and degrade under multi-hop reasoning.
- It reports bias and coherence issues in those beliefs, including primacy and recency effects, as well as drift away from Bayesian coherence over longer interactions (a worked example of the coherence check appears after this list).
- It identifies a “belief–action gap”: LLMs act on their own internal beliefs more weakly than on beliefs supplied directly in the prompt, and conditioning actions on elicited beliefs does not reliably improve payoffs.
- The authors conclude that analyzing LLM internal processes reveals systematic vulnerabilities, suggesting caution when deploying LLMs in strategic domains without strong guardrails.
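
To make the Bayesian-coherence point concrete, here is a minimal Python sketch, not taken from the paper, of one plausible way to quantify belief drift: maintain the exact Bayesian posterior over a hidden game state under a known observation model, then measure how far a model's reported beliefs fall from it. The game setup, the `bayes_update` and `tv_distance` helpers, and the `llm_reported` numbers are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code): quantifying drift from
# Bayesian coherence. We compute the exact posterior over a hidden
# state under a known observation model and compare it to a model's
# reported beliefs via total variation distance.
from typing import Dict

def bayes_update(prior: Dict[str, float],
                 likelihood: Dict[str, float]) -> Dict[str, float]:
    """Exact Bayesian posterior: P(s|o) proportional to P(o|s) * P(s)."""
    unnorm = {s: prior[s] * likelihood[s] for s in prior}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

def tv_distance(p: Dict[str, float], q: Dict[str, float]) -> float:
    """Total variation distance between two belief distributions."""
    return 0.5 * sum(abs(p[s] - q[s]) for s in p)

# Hidden state: which card the opponent holds (illustrative toy game).
prior = {"ace": 0.5, "king": 0.5}
# Observation model: P(opponent raises | state).
raise_lik = {"ace": 0.9, "king": 0.3}

exact = bayes_update(prior, raise_lik)  # posterior after observing a raise
# Hypothetical beliefs an LLM reported after the same observation:
llm_reported = {"ace": 0.6, "king": 0.4}
drift = tv_distance(exact, llm_reported)
print(f"exact posterior: {exact}, coherence drift (TV): {drift:.3f}")
```

A drift of 0 would mean the reported beliefs are exactly Bayesian; the paper's finding, per the key points above, is that this kind of distance tends to grow as interactions lengthen.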