AI Navigate

Evaluating Counterfactual Strategic Reasoning in Large Language Models

arXiv cs.CL / March 20, 2026


Key Points

  • The authors evaluate Large Language Models in repeated Prisoner's Dilemma and Rock-Paper-Scissors to determine whether strategic performance reflects genuine reasoning or memorized patterns.
  • They introduce counterfactual variants that alter payoff structures and action labels, breaking symmetries and dominance relations to test incentive sensitivity.
  • A multi-metric evaluation framework compares default and counterfactual instantiations, revealing LLM limitations in incentive sensitivity, structural generalization, and strategic reasoning in counterfactual environments.
  • The work discusses implications for evaluating AI strategic reasoning and suggests directions for improving model evaluation and robustness in strategic settings.
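The counterfactual manipulation described above can be illustrated with a minimal sketch (an illustration, not the authors' code): the same surface game, "Prisoner's Dilemma," but with payoffs permuted so the usual dominance relation flips. A model that tracks incentives should change its play; one that pattern-matches "always defect in PD" should not.

```python
# Default PD payoffs: (row utility, column utility) per (row, column) action.
DEFAULT_PD = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def flip_dominance(payoffs):
    """Swap the strategic roles of the two actions for both players while
    keeping the labels fixed, breaking the familiar label/incentive pairing."""
    swap = {"cooperate": "defect", "defect": "cooperate"}
    return {(swap[r], swap[c]): v for (r, c), v in payoffs.items()}

COUNTERFACTUAL_PD = flip_dominance(DEFAULT_PD)

def best_response(payoffs, opponent_action):
    """Row player's payoff-maximizing reply to a fixed column action."""
    actions = {a for a, _ in payoffs}
    return max(actions, key=lambda a: payoffs[(a, opponent_action)][0])

# Default game: defection dominates. Counterfactual game: cooperation does.
print(best_response(DEFAULT_PD, "cooperate"))         # defect
print(best_response(COUNTERFACTUAL_PD, "cooperate"))  # cooperate
```

An evaluation along these lines would then compare a model's action choices across the two instantiations: identical behavior under flipped payoffs signals memorization rather than incentive sensitivity.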

Abstract

We evaluate Large Language Models (LLMs) in repeated game-theoretic settings to assess whether strategic performance reflects genuine reasoning or reliance on memorized patterns. We consider two canonical games, Prisoner's Dilemma (PD) and Rock-Paper-Scissors (RPS), upon which we introduce counterfactual variants that alter payoff structures and action labels, breaking familiar symmetries and dominance relations. Our multi-metric evaluation framework compares default and counterfactual instantiations, showcasing LLM limitations in incentive sensitivity, structural generalization and strategic reasoning within counterfactual environments.
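For the RPS side, a similar sketch (illustrative only; the neutral labels and asymmetric payoffs here are assumptions, not the paper's actual design) shows how relabeling the actions and altering the payoff structure can break the game's familiar symmetry, so that play must follow the stated rules rather than memorized rock-paper-scissors conventions.

```python
# Standard cyclic "beats" relation, then a relabeled counterfactual cycle.
DEFAULT_BEATS = {"rock": "scissors", "scissors": "paper", "paper": "rock"}
LABELS = {"rock": "alpha", "scissors": "beta", "paper": "gamma"}

# Relabeled cycle: alpha beats beta, beta beats gamma, gamma beats alpha.
CF_BEATS = {LABELS[a]: LABELS[b] for a, b in DEFAULT_BEATS.items()}

# Asymmetric win payoffs (hypothetical): winning with alpha pays double,
# so the symmetric uniform mixed strategy is no longer optimal.
CF_WIN_PAYOFF = {"alpha": 2, "beta": 1, "gamma": 1}

def payoff(a, b):
    """Row player's payoff in the counterfactual RPS variant."""
    if a == b:
        return 0
    return CF_WIN_PAYOFF[a] if CF_BEATS[a] == b else -CF_WIN_PAYOFF[b]

print(payoff("alpha", "beta"))  # 2: alpha beats beta for the boosted payoff
print(payoff("beta", "alpha"))  # -2: losing to alpha costs the boosted amount
```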