AI Navigate

DeReason: A Difficulty-Aware Curriculum Improves Decoupled SFT-then-RL Training for General Reasoning

arXiv cs.CL / 3/13/2026


Key Points

  • DeReason introduces a difficulty-aware data decoupling strategy that splits training data into reasoning-intensive and non-reasoning-intensive subsets using LLM-based scoring to tailor SFT and RL training.
  • The paper finds that applying RL directly to base models is sample-inefficient for general STEM and often outperformed by SFT on moderate-quality responses, but that sequential SFT followed by RL can yield additional gains.
  • By assigning broad, non-reasoning-intensive problems to SFT to build foundational knowledge and reserving difficult problems for RL, DeReason achieves better performance than SFT-only, RL-only, or randomly split baselines.
  • Extensive experiments on general STEM and mathematical benchmarks demonstrate the effectiveness and generality of this decoupled curriculum as a practical post-training recipe for enhancing general reasoning in LLMs.
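The core of the recipe in the points above is a one-shot split of the training pool by estimated reasoning difficulty. A minimal sketch is below; the `Problem` dataclass, the `reasoning_score` field (assumed precomputed by an LLM judge in [0, 1]), and the `decouple` function are illustrative names, not from the paper:

```python
from dataclasses import dataclass

@dataclass
class Problem:
    prompt: str
    reasoning_score: float  # hypothetical LLM-judge difficulty score in [0, 1]

def decouple(problems, threshold=0.5):
    """Split a training pool into a broad, non-reasoning-intensive subset
    (routed to SFT) and a reasoning-intensive subset (reserved for RL)."""
    sft_pool = [p for p in problems if p.reasoning_score < threshold]
    rl_pool = [p for p in problems if p.reasoning_score >= threshold]
    return sft_pool, rl_pool
```

In this sketch the threshold is a fixed cut on a scalar score; the paper's actual scoring and partitioning criteria may be richer than a single threshold.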

Abstract

Reinforcement Learning with Verifiable Rewards (RLVR) has emerged as a powerful paradigm for eliciting reasoning capabilities in large language models, particularly in mathematics and coding. While recent efforts have extended this paradigm to broader general scientific (STEM) domains, the complex interplay between supervised fine-tuning (SFT) and RL in these contexts remains underexplored. In this paper, we conduct controlled experiments revealing a critical challenge: for general STEM domains, RL applied directly to base models is highly sample-inefficient and is consistently surpassed by SFT on moderate-quality responses. Yet sequential SFT followed by RL can further improve performance, suggesting that the two stages play complementary roles and that how training data is allocated between them matters. We therefore propose DeReason, a difficulty-based data decoupling strategy for general reasoning. DeReason partitions training data into reasoning-intensive and non-reasoning-intensive subsets, with reasoning intensity estimated via LLM-based scoring. It allocates broad-coverage, non-reasoning-intensive problems to SFT to establish foundational domain knowledge, and reserves a focused subset of difficult problems for RL to cultivate complex reasoning. Extensive experiments on general STEM and mathematical benchmarks demonstrate that this principled decoupling significantly outperforms SFT-only, RL-only, and random-split baselines for sequential SFT and RL. Our work provides a systematic study of the interplay between SFT and RL for general reasoning, offering a highly effective and generalized post-training recipe.
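The two-stage curriculum the abstract describes (SFT on the broad easy subset, then RL on the reserved hard subset) can be sketched as a pipeline. The training functions here are placeholder stubs standing in for real SFT and RLVR loops; all names are illustrative, not from the paper:

```python
def run_sft(model, sft_pool):
    # Placeholder for supervised fine-tuning on broad-coverage,
    # non-reasoning-intensive problems (builds foundational knowledge).
    return model + ["sft"]

def run_rl(model, rl_pool):
    # Placeholder for RL with verifiable rewards on the reserved
    # reasoning-intensive problems (cultivates complex reasoning).
    return model + ["rl"]

def decoupled_curriculum(base_model, sft_pool, rl_pool):
    """Sequential SFT-then-RL training on difficulty-decoupled data."""
    model = run_sft(base_model, sft_pool)   # Stage 1: SFT
    return run_rl(model, rl_pool)           # Stage 2: RL
```

The key design choice the paper argues for is that the two stages see *different* data split by difficulty, rather than a random split of the same pool across both stages.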