ResRL: Boosting LLM Reasoning via Negative Sample Projection Residual Reinforcement Learning

arXiv cs.LG / 5/4/2026


Key Points

  • The paper introduces ResRL, a new reinforcement learning method for LLM reasoning that improves performance without sacrificing generation diversity.
  • It argues that prior approaches like Negative Sample Reinforcement (NSR) can distort shared semantic distributions between positive and negative responses, and proposes a way to decouple them.
  • It theoretically links Lazy Likelihood Displacement (LLD) to negative–positive gradient interference, yielding a single-forward-pass proxy that guides conservative advantage reweighting.
  • Practically, ResRL projects negative-token hidden states onto an SVD-based low-rank positive subspace and uses projection residuals to modulate negative gradients.
  • Across twelve benchmarks covering math, code, agent tasks, and function calling, ResRL outperforms strong baselines on average and beats NSR in math reasoning by 9.4% (Avg@16) and 7.0% (Pass@128).

Abstract

Reinforcement Learning with Verifiable Rewards (RLVR) enhances the reasoning of Large Language Models (LLMs) but usually exhibits limited generation diversity due to the over-incentivization of positive rewards. Although methods like Negative Sample Reinforcement (NSR) mitigate this issue by upweighting the penalty from negative samples, they may suppress the semantic distributions shared between positive and negative responses. To boost reasoning ability without losing diversity, this paper proposes negative sample projection Residual Reinforcement Learning (ResRL), which decouples similar semantic distributions among positive and negative responses. We theoretically link Lazy Likelihood Displacement (LLD) to negative–positive head-gradient interference and derive a single-forward proxy that upper-bounds representation alignment to guide conservative advantage reweighting. ResRL then projects negative-token hidden representations onto an SVD-based low-rank positive subspace and uses projection residuals to modulate negative gradients, improving reasoning while preserving diversity and outperforming strong baselines on average across twelve benchmarks spanning Mathematics, Code, Agent Tasks, and Function Calling. Notably, ResRL surpasses NSR on mathematical reasoning by 9.4% in Avg@16 and 7.0% in Pass@128. Code is available at https://github.com/1229095296/ResRL.git.
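The projection-residual idea can be sketched in a few lines: build a low-rank basis for positive-token hidden states via SVD, then weight each negative token by how much of its hidden state falls outside that subspace, so negative gradients on shared semantics are attenuated. This is a minimal NumPy illustration of that mechanism, not the authors' implementation; all function names, shapes, and the rank choice are assumptions.

```python
import numpy as np

def positive_subspace(pos_hidden, rank=4):
    """Low-rank basis for positive-token hidden states via SVD.

    pos_hidden: (n_pos_tokens, d) matrix of hidden states from positive
    responses. Returns a (d, rank) orthonormal basis of dominant directions.
    (Illustrative helper; the paper's actual construction may differ.)
    """
    # Right singular vectors span the feature space of the positive tokens.
    _, _, vt = np.linalg.svd(pos_hidden, full_matrices=False)
    return vt[:rank].T  # (d, rank)

def residual_weights(neg_hidden, basis, eps=1e-8):
    """Weight negative tokens by how far they fall OUTSIDE the positive subspace.

    Tokens well-explained by the positive subspace (small residual) get a
    weight near 0, attenuating their negative gradient; tokens far from the
    subspace keep a weight near 1 and a large penalty.
    """
    proj = neg_hidden @ basis @ basis.T        # projection onto the subspace
    residual = neg_hidden - proj               # projection residual
    res_norm = np.linalg.norm(residual, axis=1)
    tok_norm = np.linalg.norm(neg_hidden, axis=1) + eps
    return res_norm / tok_norm                 # in [0, 1], scales the advantage

# Toy usage: vectors lying inside the positive span get weight ~0,
# while random directions in a 16-dim space retain most of their penalty.
rng = np.random.default_rng(0)
pos = rng.normal(size=(64, 16))
basis = positive_subspace(pos, rank=4)
in_span = pos[:4] @ basis @ basis.T            # force a few vectors into the span
w_in = residual_weights(in_span, basis)        # ~0: shared semantics, spared
w_out = residual_weights(rng.normal(size=(4, 16)), basis)  # large: penalized
```

In this sketch the weight would multiply each negative token's advantage before the policy-gradient update; the paper's actual modulation rule and subspace rank are details not specified in this summary.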