Learning from the Right Rollouts: Data Attribution for PPO-based LLM Post-Training

arXiv cs.LG / 4/3/2026


Key Points

  • The paper argues that standard PPO post-training can be harmed by noisy or unfaithful episodes in the rollout buffer, which weakens optimization signals and slows training.
  • It introduces Influence-Guided PPO (I-PPO), which uses gradient-based influence scoring to remove episodes whose gradients are anti-aligned with a validation gradient.
  • The filtering is designed to reduce unfaithful chain-of-thought (CoT) reasoning while improving overall model quality.
  • Experiments reported in the study show I-PPO outperforms both SFT and PPO baselines, and the episode filtering functions as an intrinsic early-stopping mechanism that accelerates training.

Abstract

Traditional RL algorithms like Proximal Policy Optimization (PPO) typically train on the entire rollout buffer, operating under the assumption that all generated episodes provide a beneficial optimization signal. However, these episodes frequently contain noisy or unfaithful reasoning, which can degrade model performance and slow down training. In this paper, we propose **Influence-Guided PPO (I-PPO)**, a novel framework that integrates data attribution into the RL post-training loop. By calculating an influence score for each episode using a gradient-based approximation, I-PPO identifies and eliminates episodes that are anti-aligned with a validation gradient. Our experiments demonstrate that I-PPO consistently outperforms SFT and PPO baselines. We show that our filtering process acts as an intrinsic early stopping mechanism, accelerating training while effectively reducing unfaithful CoT reasoning.
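The core filtering idea in the abstract can be sketched in a few lines: score each episode by the alignment between its (flattened) policy gradient and a validation-loss gradient, then drop episodes with negative scores before the PPO update. This is a minimal illustrative sketch under the assumption that influence is approximated by a simple gradient dot product; the function names and toy data are hypothetical, not the paper's implementation.

```python
import numpy as np

def influence_scores(episode_grads: np.ndarray, val_grad: np.ndarray) -> np.ndarray:
    """Approximate per-episode influence as the dot product between each
    episode's flattened policy gradient and the validation gradient."""
    return episode_grads @ val_grad

def filter_rollouts(episodes, episode_grads, val_grad):
    """Keep only episodes whose gradients are aligned (non-negative
    influence) with the validation gradient; drop anti-aligned ones."""
    scores = influence_scores(episode_grads, val_grad)
    kept = [ep for ep, s in zip(episodes, scores) if s >= 0.0]
    return kept, scores

# Toy example: three episodes with 2-dimensional flattened gradients.
episode_grads = np.array([
    [1.0, 0.5],    # aligned with val_grad -> kept
    [-1.0, -0.2],  # anti-aligned -> filtered out
    [0.3, 0.0],    # weakly aligned -> kept
])
val_grad = np.array([1.0, 1.0])

kept, scores = filter_rollouts(["ep0", "ep1", "ep2"], episode_grads, val_grad)
# kept contains only the episodes whose influence score is non-negative.
```

In a real PPO loop these gradients would be per-episode gradients of the policy loss, and the filtering step would run each iteration before the policy update, so the buffer passed to the optimizer shrinks as more rollouts become anti-aligned.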