When Does LLM Self-Correction Help? A Control-Theoretic Markov Diagnostic and Verify-First Intervention

arXiv cs.AI / 4/27/2026


Key Points

  • The paper models LLM iterative self-correction as a cybernetic feedback loop and uses a two-state Markov framework over {Correct, Incorrect} to decide when to iterate versus stop.
  • It proposes a deployment diagnostic based on an ECR/EIR stability condition (iterate only when ECR/EIR > Acc/(1 - Acc)), interpreting EIR as a stability margin and prompting as lightweight controller design; a minimal sketch of the decision rule follows this list.
  • Experiments across 7 models and 3 datasets (GSM8K, MATH, StrategyQA) identify a sharp near-zero EIR threshold (≤0.5%) that separates beneficial self-correction from harmful refinement.
  • A verify-first intervention via prompt ablation provides causal evidence that the threshold is actionable through prompting alone: on GPT-4o-mini it cuts EIR from 2% to 0% and flips a -6.2 pp degradation into a +0.2 pp gain, while making little difference for models already below the threshold.
  • The authors argue that self-correction should be treated as a control decision rather than a default agent behavior, governed by measurable error dynamics and weighed against the cost of stopping or further refinement.
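
As a concrete illustration of the diagnostic, here is a minimal Python sketch of the iterate/stop rule. It assumes ECR and EIR are the empirical Incorrect-to-Correct and Correct-to-Incorrect transition rates of the two-state chain (consistent with the abstract's reading of EIR as a stability margin); the function and field names are illustrative, not from the paper.

```python
# Minimal sketch of the iterate/stop diagnostic, assuming:
#   ECR = P(Incorrect -> Correct)  (error-correction rate)
#   EIR = P(Correct -> Incorrect)  (error-introduction rate)
# estimated as empirical transition rates on a small probe set.

from dataclasses import dataclass


@dataclass
class Diagnostic:
    acc: float  # baseline accuracy on the probe set
    ecr: float  # fraction of incorrect answers fixed by one refinement pass
    eir: float  # fraction of correct answers broken by one refinement pass


def estimate_diagnostic(before: list[bool], after: list[bool]) -> Diagnostic:
    """Estimate Acc, ECR, EIR from paired correctness labels
    (before/after one self-correction pass) on a probe set."""
    n_correct = sum(before)
    n_incorrect = len(before) - n_correct
    fixed = sum(1 for b, a in zip(before, after) if not b and a)
    broken = sum(1 for b, a in zip(before, after) if b and not a)
    return Diagnostic(
        acc=n_correct / len(before),
        ecr=fixed / n_incorrect if n_incorrect else 0.0,
        eir=broken / n_correct if n_correct else 0.0,
    )


def should_iterate(d: Diagnostic) -> bool:
    """Iterate only when ECR/EIR > Acc/(1 - Acc), i.e. when the expected
    one-step accuracy change (1 - Acc)*ECR - Acc*EIR is positive."""
    if d.acc >= 1.0:
        return False  # nothing left to fix
    if d.eir == 0.0:
        return d.ecr > 0.0  # no stability risk: iterate if anything gets fixed
    return d.ecr / d.eir > d.acc / (1.0 - d.acc)
```

For instance, at a hypothetical baseline Acc = 0.9, the odds ratio Acc/(1 - Acc) is 9, so an EIR of 2% (the pre-intervention GPT-4o-mini figure) would require ECR > 18% to justify another pass; driving EIR to 0% makes any positive ECR sufficient.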

Abstract

Iterative self-correction is widely used in agentic LLM systems, but it remains unclear when repeated refinement helps and when it hurts. We frame self-correction as a cybernetic feedback loop in which the same language model serves as both controller and plant, and use a two-state Markov model over {Correct, Incorrect} to operationalize a simple deployment diagnostic: iterate only when ECR/EIR > Acc/(1 - Acc). In this view, EIR functions as a stability margin and prompting functions as lightweight controller design. Across 7 models and 3 datasets (GSM8K, MATH, StrategyQA), we find a sharp near-zero EIR threshold (≤ 0.5%) separating beneficial from harmful self-correction. Only o3-mini (+3.4 pp, EIR = 0%), Claude Opus 4.6 (+0.6 pp, EIR ≈ 0.2%), and o4-mini (±0 pp) remain non-degrading; GPT-5 degrades by -1.8 pp. A verify-first prompt ablation provides causal evidence that this threshold is actionable through prompting alone: on GPT-4o-mini it reduces EIR from 2% to 0% and turns a -6.2 pp degradation into a +0.2 pp gain (paired McNemar test, p < 10^-4), while producing little change on already-sub-threshold models. ASC further illustrates the stopping trade-off: it halts harmful refinement but incurs a 3.8 pp confidence-elicitation cost. Overall, we argue that self-correction should be treated not as a default behavior but as a control decision governed by measurable error dynamics.
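
The stability condition drops out of the one-step accuracy update of the two-state Markov model. A short reconstruction (our notation, chosen to match the condition as stated in the abstract):

```latex
% One refinement pass: correct answers survive with probability (1 - EIR),
% incorrect answers are repaired with probability ECR.
\mathrm{Acc}_{t+1} = \mathrm{Acc}_t \,(1 - \mathrm{EIR}) + (1 - \mathrm{Acc}_t)\,\mathrm{ECR}

% Iterating helps exactly when the expected one-step change is positive:
\mathrm{Acc}_{t+1} - \mathrm{Acc}_t
  = (1 - \mathrm{Acc}_t)\,\mathrm{ECR} - \mathrm{Acc}_t\,\mathrm{EIR} > 0
\iff \frac{\mathrm{ECR}}{\mathrm{EIR}} > \frac{\mathrm{Acc}_t}{1 - \mathrm{Acc}_t}.
```

Under this reading, the penalty term Acc_t * EIR explains why near-zero EIR acts as the stability margin: at high accuracy the odds ratio Acc/(1 - Acc) is large, so even a small EIR demands an implausibly high ECR for iteration to pay off.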