TAMTRL: Teacher-Aligned Reward Reshaping for Multi-Turn Reinforcement Learning in Long-Context Compression

arXiv cs.CL · March 24, 2026


Key Points

  • The paper addresses a temporal credit-assignment problem in multi-turn reinforcement learning for long-context compression, where supervision is only available at the final outcome rather than at each memory update step.
  • It proposes TAMTRL, which reshapes rewards by using relevant documents as teacher signals aligned to each turn of the model input, providing fine-grained per-turn learning signals.
  • TAMTRL assigns rewards via normalized probabilities in a self-supervised manner, aiming to reduce both computational overhead and estimation noise seen in prior methods like LLM-as-a-judge or process reward models.
  • Experiments across multiple model sizes and seven long-context benchmarks show TAMTRL consistently outperforming strong baselines, supporting its effectiveness for long-context processing.
  • The authors release their code in a public repository for reproducing and extending the approach.
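
The reward-reshaping idea in the third bullet can be sketched as follows. This is a minimal illustration under stated assumptions: the per-turn teacher-alignment scores, the softmax normalization, and the function name `reshape_rewards` are assumptions for exposition, not the paper's exact formulation.

```python
import math

def reshape_rewards(final_reward, teacher_scores, temperature=1.0):
    """Distribute a single outcome-level reward across turns.

    teacher_scores: one alignment score per turn (e.g. the log-probability
    of the relevant 'teacher' document given the memory state after that
    turn's update). The exact scoring function is an assumption here.
    """
    # Softmax-normalize the scores so the per-turn shares sum to 1
    # (subtracting the max for numerical stability).
    scaled = [s / temperature for s in teacher_scores]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    weights = [e / z for e in exps]
    # Each turn receives a fraction of the final reward proportional to
    # how well its memory update aligned with the teacher signal,
    # giving a fine-grained per-turn learning signal.
    return [final_reward * w for w in weights]
```

In this sketch, the normalization ensures the reshaped per-turn rewards sum exactly to the original outcome reward, so the total return is preserved while credit is redistributed toward turns whose memory updates better match the teacher documents.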

Abstract

The rapid progress of large language models (LLMs) has led to remarkable performance gains across a wide range of tasks. However, when handling long documents that exceed the model's context window limit, the entire context cannot be processed in a single pass, making chunk-wise processing necessary. This requires multiple turns to read different chunks and update memory. Yet supervision is typically provided only by the final outcome, which makes it difficult to evaluate the quality of memory updates at each turn in the multi-turn training setting. This introduces a temporal credit assignment challenge. Existing approaches, such as LLM-as-a-judge or process reward models, incur substantial computational overhead and suffer from estimation noise. To better address the credit assignment problem in multi-turn memory training, we propose Teacher-Aligned Reward Reshaping for Multi-Turn Reinforcement Learning (TAMTRL). TAMTRL leverages relevant documents as teacher signals by aligning them with each turn of model input and assigns rewards through normalized probabilities in a self-supervised manner. This provides fine-grained learning signals for each memory update and improves long-context processing. Experiments with multiple models of varying scales across seven long-context benchmarks show that TAMTRL consistently outperforms strong baselines, demonstrating its effectiveness. Our code is available at https://anonymous.4open.science/r/TAMTRL-F1F8.