Scaling Reasoning Hop Exposes Weaknesses: Demystifying and Improving Hop Generalization in Large Language Models

arXiv cs.CL / 5/4/2026

💬 Opinion · Models & Research

Key Points

  • The paper studies why large language models (LLMs) experience sharp performance drops when the required number of reasoning “hops” exceeds what they saw during training, even though the reasoning algorithm is unchanged.
  • It finds that errors are not evenly spread across generated tokens; instead, they cluster at specific token positions tied to a few critical error types.
  • The authors identify “erroneous processing heads” (ep heads): attention heads that tip the model’s internal competition by amplifying incorrect reasoning trajectories while suppressing correct ones.
  • Removing specific ep heads at inference time often restores correct predictions, motivating a lightweight test-time correction method that dynamically deactivates ep heads during reasoning (see the sketch after this list).
  • Experiments across multiple tasks and LLMs show the proposed test-time correction consistently improves reasoning hop generalization, suggesting a practical path to more robust multi-step reasoning.
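
The paper’s procedure for locating ep heads is not reproduced here, but the intervention itself, zeroing one attention head’s contribution at inference time, is straightforward to prototype. Below is a minimal sketch assuming a LLaMA-style checkpoint served through Hugging Face transformers; the model name and the (layer, head) indices are illustrative placeholders, not values from the paper.

```python
# Sketch: ablate a single attention head at inference time by zeroing its
# slice of the attention output before o_proj mixes the heads together.
# Model name and (layer, head) indices below are hypothetical.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-hf"  # placeholder model
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

def make_head_ablation_hook(head_idx: int, head_dim: int):
    """Forward pre-hook on o_proj: zero the columns belonging to one head."""
    def pre_hook(module, args):
        attn_out = args[0].clone()  # (batch, seq, num_heads * head_dim)
        attn_out[..., head_idx * head_dim:(head_idx + 1) * head_dim] = 0.0
        return (attn_out,)          # returned tuple replaces the hook's input
    return pre_hook

layer_idx, head_idx = 12, 5  # hypothetical ep-head location
head_dim = model.config.hidden_size // model.config.num_attention_heads
o_proj = model.model.layers[layer_idx].self_attn.o_proj
handle = o_proj.register_forward_pre_hook(make_head_ablation_hook(head_idx, head_dim))

prompt = "Q: A multi-hop question with more hops than seen in training.\nA:"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))

handle.remove()  # detach the hook to restore the unmodified model
```

Hooking the input of o_proj (rather than the attention module’s output) matters: after o_proj the per-head contributions are already mixed, so an individual head can no longer be zeroed cleanly.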

Abstract

Chain-of-thought (CoT) reasoning has become the standard paradigm for enabling Large Language Models (LLMs) to solve complex problems. However, recent studies reveal a sharp performance drop in reasoning hop generalization scenarios, where the required number of reasoning steps exceeds the training distribution while the underlying algorithm remains unchanged. The internal mechanisms driving this failure remain poorly understood. In this work, we conduct a systematic study on tasks from multiple domains, and find that errors concentrate at token positions of a few critical error types, rather than being uniformly distributed. Closer inspection reveals that these token-level erroneous predictions stem from internal competition mechanisms: certain attention heads, termed erroneous processing heads (ep heads), tip the balance by amplifying incorrect reasoning trajectories while suppressing correct ones. Notably, removing individual ep heads during inference can often restore correct predictions. Motivated by these insights, we propose test-time correction of reasoning, a lightweight intervention method that dynamically identifies and deactivates ep heads in the reasoning process. Extensive experiments across different tasks and LLMs show that it consistently improves reasoning hop generalization, highlighting both its effectiveness and potential.
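
To make the error-concentration finding concrete, the measurement can be sketched as follows: align each generated chain with its reference, record where the first divergence occurs, and histogram those positions. The alignment below is a naive position-wise match on toy token lists; the paper’s alignment and error typing are more involved.

```python
# Sketch: histogram the token position of the first divergence between
# generated reasoning chains and references. The data below is a toy
# stand-in, not from the paper; real chains would be tokenized outputs.
from collections import Counter

def first_error_positions(predictions, references):
    """Count, per token position, how often generation first diverges there."""
    positions = Counter()
    for pred, ref in zip(predictions, references):
        for i, (p, r) in enumerate(zip(pred, ref)):
            if p != r:
                positions[i] += 1
                break  # later tokens cascade from the first error
    return positions

preds = [["hop1", "hop2", "WRONG", "hop4"],
         ["hop1", "hop2", "WRONG", "hop4"],
         ["hop1", "hop2", "hop3", "hop4"]]
refs = [["hop1", "hop2", "hop3", "hop4"]] * 3

print(first_error_positions(preds, refs))  # Counter({2: 2}): errors cluster at position 2
```

A sharply peaked histogram, as in this toy output, is the non-uniform error pattern the paper reports; a uniform spread would instead suggest generic degradation rather than a few critical failure points.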