LongAct: Harnessing Intrinsic Activation Patterns for Long-Context Reinforcement Learning

arXiv cs.LG / 4/17/2026


Key Points

  • The paper reports that, when LLMs process long contexts, query and key vectors show high-magnitude activation patterns that appear important for effective long-context reasoning.
  • It introduces LongAct, a long-context reinforcement learning strategy that replaces uniform parameter updates with saliency-guided sparse updates targeting weights tied to these salient activations.
  • LongAct delivers about an 8% improvement on LongBench v2 and improves generalization on the RULER benchmark.
  • The approach is described as broadly compatible, yielding performance gains across multiple RL algorithms (including GRPO and DAPO), with ablation studies supporting the importance of the salient features.
  • The work reframes long-context RL training by leveraging intrinsic representation characteristics rather than relying primarily on reward engineering or data synthesis.

Abstract

Reinforcement Learning (RL) has emerged as a critical driver for enhancing the reasoning capabilities of Large Language Models (LLMs). While recent advancements have focused on reward engineering or data synthesis, few studies exploit the model's intrinsic representation characteristics to guide the training process. In this paper, we first observe the presence of high-magnitude activations within the query and key vectors when processing long contexts. Drawing inspiration from model quantization -- which establishes the criticality of such high-magnitude activations -- and the insight that long-context reasoning inherently exhibits a sparse structure, we hypothesize that these weights serve as the pivotal drivers for effective model optimization. Based on this insight, we propose LongAct, a strategy that shifts from uniform to saliency-guided sparse updates. By selectively updating only the weights associated with these significant activations, LongAct achieves an approximate 8% improvement on LongBench v2 and enhances generalization on the RULER benchmark. Furthermore, our method exhibits remarkable universality, consistently boosting performance across diverse RL algorithms such as GRPO and DAPO. Extensive ablation studies suggest that focusing on these salient features is key to unlocking long-context potential.
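The core mechanism the abstract describes — identify high-magnitude activation dimensions in the query/key vectors, then restrict parameter updates to the weights tied to those dimensions — can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's implementation: the function names, the mean-absolute-activation saliency score, and the top-fraction threshold are all assumptions made for the sketch.

```python
import numpy as np

def saliency_mask(activations: np.ndarray, top_frac: float = 0.05) -> np.ndarray:
    """Boolean mask over hidden dimensions whose activations are 'salient'.

    `activations` is a (tokens, hidden_dim) array of query/key activations
    collected on long-context inputs. Saliency here is approximated as the
    mean absolute activation per dimension (an assumption of this sketch).
    """
    scores = np.abs(activations).mean(axis=0)        # per-dimension saliency
    k = max(1, int(top_frac * scores.size))          # keep the top fraction
    top = np.argsort(scores)[-k:]
    mask = np.zeros(scores.size, dtype=bool)
    mask[top] = True
    return mask

def sparse_update(weight: np.ndarray, grad: np.ndarray,
                  lr: float, mask: np.ndarray) -> np.ndarray:
    """Saliency-guided sparse update: apply the gradient step only to the
    weight rows tied to salient activation dimensions; all other weights
    are left untouched (in contrast to a uniform dense update)."""
    update = np.where(mask[:, None], lr * grad, 0.0)
    return weight - update
```

In a full RL pipeline the mask would instead be applied to the gradients of the query/key projection matrices before the optimizer step, so that GRPO, DAPO, or any other policy-gradient algorithm only moves the weights associated with the salient activations.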