Easy Samples Are All You Need: Self-Evolving LLMs via Data-Efficient Reinforcement Learning

arXiv cs.AI · April 22, 2026


Key Points

  • The paper argues that prior LLM-based reinforcement learning approaches either incur high annotation costs (supervised settings) or fall into failure modes such as model collapse and reward hacking (unsupervised settings), leaving their results unsatisfactory.
  • It proposes EasyRL, a self-evolving LLM framework that simulates human cognitive learning by transferring reliable knowledge from small amounts of easy labeled data and progressively tackling harder unlabeled data.
  • EasyRL starts with a warm-up supervised-RL stage using few-shot labeled examples, then performs divide-and-conquer pseudo-labeling using consistency-based selection for low-uncertainty inputs and reflection-based resolution for medium-uncertainty cases.
  • The method concludes with difficulty-progressive self-training via iterative pseudo-labeling and additional RL to strengthen the model’s reasoning.
  • Experiments on mathematical and scientific benchmarks show that, using only 10% of easy labeled data, EasyRL consistently outperforms state-of-the-art baselines.
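The consistency-based selection step described above can be illustrated with a small sketch: sample several completions per unlabeled question, measure agreement on the final answer, and route the question by its uncertainty bucket. The thresholds, bucket names, and function signature below are illustrative assumptions, not details from the paper.

```python
from collections import Counter

def route_by_consistency(samples, high=0.8, low=0.5):
    """Bucket one unlabeled question by answer agreement across K sampled
    completions. `samples` is a list of final answers extracted from K
    model rollouts; the thresholds are hypothetical, not from the paper."""
    answer, count = Counter(samples).most_common(1)[0]
    agreement = count / len(samples)
    if agreement >= high:          # low uncertainty: accept majority answer
        return "pseudo_label", answer
    if agreement >= low:           # medium uncertainty: send to reflection
        return "reflect", answer
    return "defer", None           # high uncertainty: retry in a later round

# Example: 8 rollouts on one question, 7 of which agree
print(route_by_consistency(["42"] * 7 + ["41"]))   # ('pseudo_label', '42')
```

In this toy version, "defer" corresponds to the hardest cases, which the framework revisits in later self-training rounds once the model has improved.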

Abstract

Previous LLM-based RL studies typically follow either supervised learning with high annotation costs or unsupervised paradigms using voting or entropy-based rewards. However, their performance remains far from satisfactory due to the substantial annotation cost and issues such as model collapse or reward hacking. To address these issues, we introduce a new perspective inspired by cognitive learning theory and propose a novel approach called EasyRL. The core of EasyRL is to simulate the human cognitive acquisition curve by integrating reliable knowledge transfer from easy labeled data with a progressive divide-and-conquer strategy that tackles increasingly difficult unlabeled data. Specifically, we initialize a warm-up model using supervised RL with few-shot labeled data. This is followed by a divide-and-conquer pseudo-labeling strategy on difficult unlabeled data, combining consistency-based selection for low-uncertainty cases and reflection-based resolution for medium-uncertainty cases. Finally, difficulty-progressive self-training with iterative pseudo-labeling and RL further strengthens the model's reasoning capability. EasyRL provides a unified self-evolving framework that facilitates data-efficient post-training of LLMs. Experimental results on mathematical and scientific benchmarks demonstrate that EasyRL, using only 10% of easy labeled data, consistently outperforms state-of-the-art baselines.
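The overall pipeline (warm-up, pseudo-labeling, difficulty-progressive self-training) can be sketched as a simple control loop. Everything here is a hypothetical skeleton under assumed interfaces: `pseudo_label_fn` and `rl_update_fn` are stand-ins for the paper's labeling and RL procedures, not its actual API.

```python
def self_evolve(model, labeled_easy, unlabeled_by_difficulty,
                pseudo_label_fn, rl_update_fn):
    """Toy skeleton of a difficulty-progressive self-training loop.
    All callable arguments are hypothetical stand-ins:
    - pseudo_label_fn(model, batch) -> (accepted, deferred) splits a batch
      into pseudo-labeled items and items too uncertain to label yet
    - rl_update_fn(model, data) -> model runs one RL training round
    """
    pool = list(labeled_easy)                  # warm-up data with gold labels
    model = rl_update_fn(model, pool)          # supervised-RL warm-up stage
    deferred = []
    for tier in unlabeled_by_difficulty:       # easiest tier first
        accepted, deferred = pseudo_label_fn(model, deferred + tier)
        pool.extend(accepted)                  # grow pool with pseudo-labels
        model = rl_update_fn(model, pool)      # RL on the enlarged pool
    return model
```

The key design point this sketch captures is that items the model cannot yet label confidently are deferred rather than discarded, so harder examples are revisited only after the model has been strengthened on easier ones.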