Why Does Reinforcement Learning Generalize? A Feature-Level Mechanistic Study of Post-Training in Large Language Models

arXiv cs.CL / 4/29/2026

Key Points

  • The paper investigates why reinforcement learning (RL) post-training can improve large language model (LLM) reasoning across domains, while supervised fine-tuning (SFT) often causes forgetting of general capabilities.
  • Using a controlled experimental setup (RL- and SFT-tuned models trained from the same base model on identical data) and a feature-level mechanistic analysis, the authors align internal activations across models in a shared feature space to track how features change during post-training (a toy version of this comparison is sketched after this list).
  • The results show SFT rapidly creates many specialized features that stabilize early, whereas RL makes more restrained, continuously evolving feature changes that largely preserve the base model’s representations.
  • For cases where RL succeeds but the base model fails, the authors identify a compact, task-agnostic set of features that mediates generalization, and causal experiments (disabling or amplifying those features, sketched in miniature after the abstract) confirm their direct role.
  • An accompanying interpretability methodology and released code enable others to probe and manipulate feature-level mechanisms behind RL generalization (https://github.com/danshi777/RL-generalization).
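
To make the shared-feature-space comparison above concrete, here is a minimal, hypothetical sketch: it assumes a single SAE-style linear encoder serves as the shared feature space, substitutes random tensors for real residual-stream activations collected on identical prompts, and invents all dimensions, thresholds, and names (`feature_activation_freq`, `drift`, etc.); the repository linked above contains the authors' actual pipeline.

```python
# Hedged sketch: compare base-, SFT-, and RL-model activations in one shared
# feature space and measure how far each tuned model's feature usage drifts
# from the base model. Everything here is a stand-in, not the paper's code.
import torch

def feature_activation_freq(encode, acts, threshold=0.0):
    """Fraction of tokens on which each feature fires above `threshold`."""
    feats = encode(acts)                      # (n_tokens, d_features)
    return (feats > threshold).float().mean(dim=0)

torch.manual_seed(0)
d_model, d_features, n_tokens = 512, 4096, 1000   # hypothetical sizes

# One shared SAE-style encoder, so all three models' activations land in the
# same feature space and their features can be compared directly.
enc = torch.nn.Linear(d_model, d_features)
encode = lambda x: torch.relu(enc(x))

# Random stand-ins for residual-stream activations on identical prompts.
acts = {name: torch.randn(n_tokens, d_model) for name in ("base", "sft", "rl")}
freq = {name: feature_activation_freq(encode, a) for name, a in acts.items()}

for tuned in ("sft", "rl"):
    drift = (freq[tuned] - freq["base"]).abs()          # per-feature change
    newly_active = ((freq[tuned] > 0.01) & (freq["base"] < 1e-4)).sum().item()
    print(f"{tuned}: mean activation-frequency drift {drift.mean().item():.4f}, "
          f"newly active features {newly_active}")
```

Under this framing, the paper's finding would show up as SFT producing many newly active features that stabilize early, while RL's drift from the base model stays small but keeps evolving through training.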

Abstract

Reinforcement learning (RL)-based post-training often improves the reasoning performance of large language models (LLMs) beyond the training domain, while supervised fine-tuning (SFT) frequently leads to forgetting of general capabilities. However, the mechanisms underlying this contrast remain unclear. To bridge this gap, we present a feature-level mechanistic analysis methodology to probe RL generalization using a controlled experimental setup, where RL- and SFT-tuned models are trained from the same base model on identical data. Leveraging our interpretability framework, we align internal activations across models within a shared feature space and analyze how features evolve during post-training. We find that SFT rapidly introduces many highly specialized features that stabilize early in training, whereas RL induces more restrained and continually evolving feature changes that largely preserve the base model's representations. Focusing on samples where RL succeeds but the base model fails, we identify a compact, task-agnostic set of features that directly mediates generalization across diverse tasks. Feature-level interventions confirm their causal role: disabling these features significantly degrades the RL-tuned models' generalization performance, while amplifying them improves the base model's performance. The code is available at https://github.com/danshi777/RL-generalization.
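
The disabling/amplifying interventions described at the end of the abstract can likewise be sketched in miniature. This is a hedged illustration, not the released code: the `SparseAutoencoder` class, the `intervene` helper, the feature indices, and all dimensions are assumptions, and in a real model the patched activation would be written back into the residual stream (e.g., via a forward hook) rather than printed.

```python
# Hypothetical sketch: rescale a chosen set of features inside a hidden state
# (scale=0.0 disables them, scale>1.0 amplifies them). Names and shapes are
# assumptions; the linked repository holds the real implementation.
import torch

class SparseAutoencoder(torch.nn.Module):
    """Minimal SAE: features = ReLU(W_enc x + b); reconstruction = W_dec f."""
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        self.enc = torch.nn.Linear(d_model, d_features)
        self.dec = torch.nn.Linear(d_features, d_model, bias=False)

    def encode(self, x):
        return torch.relu(self.enc(x))

    def decode(self, f):
        return self.dec(f)

def intervene(sae, hidden, feature_ids, scale):
    """Rescale the selected features and patch only the resulting change back
    into the hidden state, leaving the SAE's reconstruction error untouched."""
    f = sae.encode(hidden)
    f_mod = f.clone()
    f_mod[..., feature_ids] *= scale
    return hidden + sae.decode(f_mod) - sae.decode(f)

torch.manual_seed(0)
sae = SparseAutoencoder(d_model=512, d_features=4096)
hidden = torch.randn(1, 8, 512)           # (batch, seq, d_model) activation
mediating_feats = [12, 305, 2048]         # hypothetical generalization features

disabled = intervene(sae, hidden, mediating_feats, scale=0.0)   # ablation
amplified = intervene(sae, hidden, mediating_feats, scale=4.0)  # amplification
print((disabled - hidden).norm().item(), (amplified - hidden).norm().item())
```

In the paper's causal experiments, the analogous ablation degrades the RL-tuned model's generalization and the analogous amplification improves the base model, which is the evidence for these features' mediating role.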