Thinking Fast, Thinking Wrong: Intuitiveness Modulates LLM Counterfactual Reasoning in Policy Evaluation

arXiv cs.AI / 4/14/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper introduces a benchmark of 40 peer-reviewed economics/social-science policy evaluation cases, labeled by whether empirical results are obvious, ambiguous, or counter-intuitive relative to common priors.
  • In experiments using four frontier LLMs, the authors find that chain-of-thought prompting strongly boosts performance on “obvious” cases but largely fails on counter-intuitive ones (OR = 0.053, p < 0.001), indicating a “CoT paradox.”
  • The study finds that “intuitiveness” of the target outcome is the dominant driver of accuracy, explaining more variance than either model choice or prompting strategy (ICC = 0.537).
  • A “knowledge-reasoning dissociation” is reported: citation/familiarity signals do not correlate with accuracy (p = 0.53), suggesting LLMs may know the relevant facts but struggle to reason with them when the evidence conflicts with intuition.
  • The results are interpreted through dual-process theory (System 1 vs. System 2), arguing that current LLM “slow thinking” may amount to slow narration rather than substantively reliable counterfactual reasoning for policy evaluation.

Abstract

Large language models (LLMs) are increasingly used for causal and counterfactual reasoning, yet their reliability in real-world policy evaluation remains underexplored. We construct a benchmark of 40 empirical policy evaluation cases drawn from economics and social science, each grounded in peer-reviewed evidence and classified by intuitiveness -- whether the empirical finding aligns with (obvious), is unclear relative to (ambiguous), or contradicts (counter-intuitive) common prior expectations. We evaluate four frontier LLMs across five prompting strategies with 2,400 experimental trials and analyze the results using mixed-effects logistic regression. Our findings reveal three key results: (1) a chain-of-thought (CoT) paradox, where chain-of-thought prompting dramatically improves performance on obvious cases but this benefit is nearly eliminated on counter-intuitive ones (interaction OR = 0.053, p < 0.001); (2) intuitiveness as the dominant factor, explaining more variance than model choice or prompting strategy (ICC = 0.537); and (3) a knowledge-reasoning dissociation, where citation-based familiarity is unrelated to accuracy (p = 0.53), suggesting models possess relevant knowledge but fail to reason with it when findings contradict intuition. We frame these results through the lens of dual-process theory (System 1 vs. System 2) and argue that current LLMs' "slow thinking" may be little more than "slow talking" -- they produce the form of deliberative reasoning without the substance.
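For readers less familiar with the reported statistics, here is a minimal back-of-envelope sketch of how the two headline numbers relate to the underlying model quantities, assuming a standard random-intercept logistic formulation with the usual latent-scale ICC (ICC = σ²_between / (σ²_between + π²/3)). The numbers below are illustrative conversions of the paper's reported values, not computations on its data:

```python
import math

# Residual variance implied by the logistic link on the latent scale
# (a standard assumption for ICCs in logistic mixed models).
LOGISTIC_RESIDUAL_VAR = math.pi ** 2 / 3  # ≈ 3.29

def icc_from_variance(var_between: float) -> float:
    """Latent-scale intraclass correlation for a random-intercept logit model."""
    return var_between / (var_between + LOGISTIC_RESIDUAL_VAR)

def variance_from_icc(icc: float) -> float:
    """Invert the ICC formula to recover the random-intercept variance."""
    return icc * LOGISTIC_RESIDUAL_VAR / (1 - icc)

# ICC = 0.537 across cases implies a case-level random-intercept
# variance larger than the logistic residual variance itself:
var_case = variance_from_icc(0.537)
print(f"implied case-level variance: {var_case:.2f}")      # ≈ 3.82

# An interaction odds ratio of 0.053 corresponds to this log-odds coefficient:
beta_interaction = math.log(0.053)
print(f"interaction coefficient (log-odds): {beta_interaction:.2f}")  # ≈ -2.94
```

The conversion makes the magnitudes concrete: a case-level variance of roughly 3.8 on the latent scale means which case is being evaluated shifts the log-odds of a correct answer more than the logistic residual noise does, and OR = 0.053 means CoT's benefit is cut by about e^2.94 ≈ 19× in odds terms on counter-intuitive cases.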