AtManRL: Towards Faithful Reasoning via Differentiable Attention Saliency

arXiv cs.CL · April 20, 2026


Key Points

  • The paper introduces AtManRL, a reinforcement-learning method aimed at making LLM chain-of-thought (CoT) reasoning more faithful to what drives the final answer.
  • AtManRL trains an additive, differentiable attention mask to pinpoint which CoT tokens are crucial for correct predictions, producing a saliency-based reward signal.
  • The saliency reward is combined with outcome (correctness) rewards using the GRPO (Group Relative Policy Optimization) framework to jointly optimize accuracy and interpretability.
  • Experiments on GSM8K and MMLU using Llama-3.2-3B-Instruct show that the method can identify influential reasoning tokens and help train more transparent reasoning models.
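The additive mask described above can be illustrated with a minimal sketch. This is not the paper's implementation, only the general mechanism it builds on: a learnable per-token offset is added to the attention logits before the softmax, so driving a token's offset strongly negative ablates it in a differentiable way, and the learned offsets indicate which CoT tokens the answer actually depends on. The function names and toy values here are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def masked_attention(scores, mask_logits):
    # Additive mask: shift each CoT token's attention logits before the
    # softmax. A large negative entry suppresses that token; a near-zero
    # entry leaves its attention weight essentially intact.
    return softmax(scores + mask_logits[None, :])

# Toy example: one query position attending over four CoT tokens.
scores = np.array([[2.0, 0.5, 1.0, 0.1]])

baseline_mask = np.zeros(4)       # no intervention
ablation_mask = np.zeros(4)
ablation_mask[0] = -1e9           # differentiably "knock out" token 0

base = masked_attention(scores, baseline_mask)
abl = masked_attention(scores, ablation_mask)
# Token 0's attention probability drops to ~0 under the ablation mask,
# and its mass redistributes over the remaining tokens.
```

Because the mask enters the computation additively before the softmax, gradients flow through it, which is what lets a saliency signal be learned rather than estimated by discrete token-dropping.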

Abstract

Large language models (LLMs) increasingly rely on chain-of-thought (CoT) reasoning to solve complex tasks. Yet ensuring that the reasoning trace both contributes to and faithfully reflects the processes underlying the model's final answer, rather than merely accompanying it, remains challenging. We introduce AtManRL, a method that leverages differentiable attention manipulation to learn more faithful reasoning through reinforcement learning. By training an additive attention mask that identifies tokens in the CoT crucial for producing correct answers, we derive a saliency reward signal that encourages the model to generate reasoning traces that genuinely influence its final predictions. We integrate this saliency reward with outcome-based rewards within the GRPO framework to jointly optimize for correctness and interpretability. Experiments on GSM8K and MMLU with Llama-3.2-3B-Instruct demonstrate that our approach can identify influential reasoning tokens and enable training more transparent reasoning models.
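The reward combination in the abstract can be sketched in GRPO's group-relative style: each sampled completion gets a total reward (correctness plus a weighted saliency term), and advantages are the group-normalized totals. The weighting scheme, the coefficient `lam`, and the function name `grpo_advantages` are assumptions for illustration, not the paper's exact formulation.

```python
def grpo_advantages(outcome_rewards, saliency_rewards, lam=0.5):
    """Group-relative advantages for one group of sampled completions.

    Hypothetical combination: total reward = correctness + lam * saliency.
    GRPO normalizes rewards within the group (subtract mean, divide by std)
    instead of using a learned value baseline.
    """
    totals = [o + lam * s for o, s in zip(outcome_rewards, saliency_rewards)]
    mean = sum(totals) / len(totals)
    var = sum((t - mean) ** 2 for t in totals) / len(totals)
    std = var ** 0.5 or 1.0  # guard against a zero-variance group
    return [(t - mean) / std for t in totals]

# Four sampled completions: two correct (outcome 1.0), two incorrect,
# with differing saliency scores for their reasoning traces.
adv = grpo_advantages([1.0, 0.0, 1.0, 0.0], [0.8, 0.2, 0.4, 0.1])
```

Normalizing within the group means a completion is rewarded for being both more correct and more faithfully reasoned than its siblings, which is how the two objectives are jointly optimized without a separate critic.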