Mitigating Shortcut Reasoning in Language Models: A Gradient-Aware Training Approach

arXiv cs.CL / 3/24/2026

Key Points

  • The paper argues that large language models can solve reasoning tasks using shortcut strategies like surface-pattern matching and memorization instead of true logical inference.
  • It introduces Shortcut-Aware Reasoning Training (SART), a gradient-aware training framework that detects shortcut-promoting samples using a ShortcutScore metric built from gradient misalignment with validation objectives and answer-token concentration (a hypothetical sketch of this scoring follows the list).
  • SART mitigates shortcut reliance by modifying training dynamics, most notably via gradient surgery, which reduces the influence of gradients from detected shortcut samples (one plausible reading is sketched after the abstract).
  • On controlled reasoning benchmarks, SART reports significant gains over the strongest baseline: +16.5% accuracy, +40.2% robustness, and improved generalization under distribution shifts.
  • The authors provide accompanying code for reproducing and applying the approach via the linked GitHub repository.
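
The two detection signals named above map onto standard autograd operations. The sketch below is a hypothetical reading, not the authors' released code: the function names, inputs, and the equal weighting of the two terms are assumptions; only the ingredients themselves (gradient misalignment with a validation objective, answer-token loss concentration) come from the paper.

```python
import torch
import torch.nn.functional as F

def grad_vector(loss: torch.Tensor, params) -> torch.Tensor:
    """Flatten d(loss)/d(params) into a single 1-D vector."""
    grads = torch.autograd.grad(loss, params, retain_graph=True, allow_unused=True)
    return torch.cat([g.reshape(-1) for g in grads if g is not None])

def shortcut_score(sample_loss, answer_loss, val_loss, params) -> float:
    """Hypothetical ShortcutScore: high when a training sample's gradient
    points away from the validation objective and its loss mass is
    concentrated on the answer tokens (memorization-style fitting)."""
    g_sample = grad_vector(sample_loss, params)
    g_val = grad_vector(val_loss, params)
    # (a) gradient misalignment with the validation objective, in [0, 2]
    misalignment = 1.0 - F.cosine_similarity(g_sample, g_val, dim=0)
    # (b) answer-token concentration: share of the sample loss carried by
    #     answer tokens alone (answer_loss <= sample_loss by construction)
    concentration = answer_loss.detach() / sample_loss.detach().clamp_min(1e-8)
    return (misalignment + concentration).item()  # equal weighting is an assumption
```

Samples ranking highest on such a score would then be candidates for down-weighting or gradient surgery during training.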

Abstract

Large language models exhibit strong reasoning capabilities, yet often rely on shortcuts such as surface pattern matching and answer memorization rather than genuine logical inference. We propose Shortcut-Aware Reasoning Training (SART), a gradient-aware framework that detects and mitigates shortcut-promoting samples via ShortcutScore and gradient surgery. Our method identifies shortcut signals through gradient misalignment with validation objectives and answer-token concentration, and modifies training dynamics accordingly. Experiments on controlled reasoning benchmarks show that SART achieves +16.5% accuracy and +40.2% robustness over the strongest baseline, significantly improving generalization under distribution shifts. Code is available at: https://github.com/fuyanjie/short-cut-aware-data-centric-reasoning.
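
The abstract does not spell out the gradient-surgery rule, but a plausible reading is a PCGrad-style projection (Yu et al., 2020): when a flagged sample's gradient conflicts with the validation gradient, remove the conflicting component before the optimizer step. The minimal sketch below works under that assumption and is illustrative, not the paper's API.

```python
import torch

def project_out_conflict(g_sample: torch.Tensor, g_val: torch.Tensor) -> torch.Tensor:
    """PCGrad-style gradient surgery: if a flagged sample's gradient opposes
    the validation gradient (negative dot product), subtract its projection
    onto the validation direction, keeping only the non-conflicting part."""
    dot = torch.dot(g_sample, g_val)
    if dot < 0:  # conflict: this sample would push against the validation objective
        g_sample = g_sample - (dot / (g_val.norm() ** 2 + 1e-12)) * g_val
    return g_sample
```

In a training loop, the adjusted vector would be unflattened and written back into each parameter's `.grad` before `optimizer.step()`, so detected shortcut samples can no longer pull parameters away from the validation objective.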