Beyond Distribution Sharpening: The Importance of Task Rewards

arXiv cs.LG / 4/20/2026


Key Points

  • The paper examines whether reinforcement learning with task rewards actually creates new capabilities in frontier models or mainly sharpens the model’s existing output distribution.
  • It provides a first-principles analysis showing that “distribution sharpening” has inherent limitations, with unfavorable optima and fundamentally unstable behavior.
  • The study implements both paradigms using RL as an underlying mechanism to enable a controlled, explicit comparison.
  • Experiments on math datasets with Llama-3.2-3B-Instruct, Qwen2.5-3B-Instruct, and Qwen3-4B-Instruct-2507 find that distribution sharpening produces only limited gains, while task-reward-based training yields much larger improvements and more stable learning.
  • The results support using task-reward signals to turn reasoning models into more capable agents, rather than relying primarily on distribution-sharpening effects.

Abstract

Frontier models have demonstrated exceptional capabilities following the integration of task-reward-based reinforcement learning (RL) into their training pipelines, enabling systems to evolve from pure reasoning models into sophisticated agents. However, debate persists regarding whether RL genuinely instills new skills in a base model or merely sharpens its existing distribution to elicit latent capabilities. To address this dichotomy, we present an explicit comparison between distribution sharpening and task-reward-based learning, utilizing RL as a tool to implement both paradigms. Our analysis reveals the inherent limitations of distribution sharpening, demonstrating from first principles how and why its optima can be unfavorable and the approach fundamentally unstable. Furthermore, our experiments using Llama-3.2-3B-Instruct, Qwen2.5-3B-Instruct, and Qwen3-4B-Instruct-2507 on math datasets confirm that sharpening yields limited gains, whereas incorporating task-based reward signals produces robust performance improvements and stable learning.
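The sharpening-vs-task-reward distinction can be made concrete with a toy sketch (not from the paper; a hypothetical single-state bandit with a softmax policy). "Sharpening" is modeled as a policy-gradient update whose reward is the model's own confidence in its sampled answer, so it reinforces whatever is already most likely; the task-reward variant instead rewards correctness. Expected (rather than sampled) gradients are used to keep the dynamics deterministic:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def train(logits, reward_fn, steps=500, lr=1.0):
    """Expected policy-gradient ascent on a single-state bandit.

    The gradient of E_a~pi [ r(a) ] w.r.t. the logits (with r treated
    as a fixed, stop-gradient reward) is p_i * r_i - p_i * sum_a p_a r_a.
    """
    logits = logits.copy()
    for _ in range(steps):
        p = softmax(logits)
        r = np.array([reward_fn(a, p) for a in range(len(p))])
        grad = p * r - p * (p @ r)   # expected REINFORCE gradient
        logits += lr * grad
    return softmax(logits)

# Toy "model": answer 0 is wrong but initially most likely; answer 2 is correct.
init = np.array([1.0, 0.0, 0.5])
CORRECT = 2

# Sharpening: reward = the policy's own probability of its answer.
sharpen = train(init, lambda a, p: p[a])
# Task reward: reward = 1 only for the correct answer.
task = train(init, lambda a, p: float(a == CORRECT))

print(sharpen.argmax(), task.argmax())  # → 0 2
```

Under the confidence reward, the mode grows monotonically (its probability exceeds the expected reward, so its gradient is positive), so the policy collapses onto the initially most likely but wrong answer; the task reward moves probability mass to the correct answer regardless of where the distribution started. This mirrors, in miniature, the paper's claim that sharpening's optima can be unfavorable while task rewards drive genuine improvement.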