ResRL: Boosting LLM Reasoning via Negative Sample Projection Residual Reinforcement Learning
arXiv cs.LG / 5/4/2026
📰 News · Ideas & Deep Analysis · Models & Research
Key Points
- The paper introduces ResRL, a new reinforcement learning method for LLM reasoning that improves performance without sacrificing generation diversity.
- It argues that prior approaches like Negative Sample Reinforcement (NSR) can distort shared semantic distributions between positive and negative responses, and proposes a way to decouple them.
- ResRL is grounded in a theoretical analysis linking Lazy Likelihood Displacement (LLD) to gradient interference, which yields a single-forward-pass proxy for conservative advantage reweighting.
- Practically, ResRL projects negative-token hidden states onto an SVD-based low-rank positive subspace and uses projection residuals to modulate negative gradients.
- Across twelve benchmarks covering math, code, agent tasks, and function calling, ResRL outperforms strong baselines on average and beats NSR in math reasoning by 9.4% (Avg@16) and 7.0% (Pass@128).
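The projection-residual idea above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes positive-token hidden states are stacked into a matrix, builds a low-rank subspace for them via truncated SVD, and scores each negative-token state by how much of it falls outside that subspace. The function name and the specific residual-norm weighting are illustrative assumptions.

```python
import numpy as np

def projection_residual_weights(pos_hidden, neg_hidden, rank):
    """Sketch of SVD-based projection residuals (illustrative, not the paper's code).

    pos_hidden: (n_pos, d) hidden states of positive-response tokens
    neg_hidden: (n_neg, d) hidden states of negative-response tokens
    rank:       dimensionality of the low-rank positive subspace
    """
    # Right-singular vectors of the positive states span the "positive subspace".
    _, _, vt = np.linalg.svd(pos_hidden, full_matrices=False)
    basis = vt[:rank]                       # (rank, d), orthonormal rows

    # Component of each negative state inside vs. outside the subspace.
    proj = neg_hidden @ basis.T @ basis     # part shared with positive semantics
    residual = neg_hidden - proj            # part unique to the negative response

    # One plausible modulation: scale each token's negative gradient by the
    # fraction of its hidden state lying outside the positive subspace, so
    # tokens that overlap positive semantics are penalized less.
    res_norm = np.linalg.norm(residual, axis=-1)
    full_norm = np.linalg.norm(neg_hidden, axis=-1) + 1e-8
    return res_norm / full_norm             # weights in [0, 1] per negative token
```

Under this sketch, a negative token whose hidden state lies entirely inside the positive subspace gets weight 0 (its gradient is suppressed, protecting shared semantics), while one orthogonal to it gets weight 1 (full negative gradient).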