Self-Distillation Zero: Self-Revision Turns Binary Rewards into Dense Supervision

arXiv cs.CL / 4/15/2026


Key Points

  • The paper proposes Self-Distillation Zero (SD-Zero), which converts sparse binary rewards from verifiable tasks into dense token-level supervision without needing an external teacher or high-quality demonstrations.
  • SD-Zero uses a single model in two roles—a Generator that produces an initial answer and a Reviser that conditions on the Generator’s response plus its binary reward to produce an improved response.
  • It then performs on-policy self-distillation to transfer the Reviser’s token distributions back into the Generator, effectively training the model to localize and correct key tokens based on reward.
  • Experiments on math and code reasoning benchmarks (using Qwen3-4B-Instruct and Olmo-3-7B-Instruct) show at least a 10% improvement over base models and better results than baselines like RFT, GRPO, and SDFT under the same training sample budget.
  • Ablations highlight two distinctive behaviors: token-level self-localization of which response tokens to revise, and iterative self-evolution via regular teacher synchronization.
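The two-role loop described above can be sketched with a toy, self-contained example. This is an illustrative sketch only: the `verify` check, the revision lookup, and the dict-based "model" are stand-ins for a real LLM, verifier, and training update, and all function names here are hypothetical, not from the paper.

```python
# Toy sketch of one SD-Zero step. In the real method the "model" is an LLM
# playing both roles and the final step is a distillation update, not a
# dictionary overwrite; everything below is a simplified assumption.

def generate(model, question):
    """Generator role: produce an initial answer."""
    return model.get(question, "unknown")

def verify(question, answer, gold):
    """Verifiable task: sparse binary reward (1 = correct, 0 = incorrect)."""
    return 1 if answer == gold[question] else 0

def revise(model, question, answer, reward):
    """Reviser role: same model, conditioned on (question, answer, reward)."""
    if reward == 1:
        return answer  # keep an already-correct response
    return model.get(("revise", question, answer), answer)

def sd_zero_step(model, question, gold):
    answer = generate(model, question)                  # Generator
    reward = verify(question, answer, gold)             # binary reward
    revised = revise(model, question, answer, reward)   # Reviser
    # On-policy self-distillation: pull the generator toward the reviser's
    # behavior (modeled here as simply overwriting the toy model's entry).
    model[question] = revised
    return revised
```

The key structural point the sketch preserves is that a single model is queried twice, and the second (reward-conditioned) pass supplies the supervision for the first.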

Abstract

Current post-training methods in verifiable settings fall into two categories. Reinforcement learning with verifiable rewards (RLVR) relies on binary rewards, which are broadly applicable and powerful but provide only sparse supervision during training. Distillation provides dense token-level supervision, typically obtained from an external teacher or from high-quality demonstrations; such supervision can be costly to collect or simply unavailable. We propose Self-Distillation Zero (SD-Zero), a method that is substantially more training sample-efficient than RL and requires neither an external teacher nor high-quality demonstrations. SD-Zero trains a single model to play two roles: a Generator, which produces an initial response, and a Reviser, which conditions on that response and its binary reward to produce an improved response. We then perform on-policy self-distillation to distill the reviser into the generator, using the reviser's token distributions conditioned on the generator's response and its reward as supervision. In effect, SD-Zero trains the model to transform binary rewards into dense token-level self-supervision. On math and code reasoning benchmarks with Qwen3-4B-Instruct and Olmo-3-7B-Instruct, SD-Zero improves performance by at least 10% over the base models and outperforms strong baselines, including Rejection Fine-Tuning (RFT), GRPO, and Self-Distillation Fine-Tuning (SDFT), under the same question set and training sample budget. Extensive ablation studies reveal two novel characteristics of the algorithm: (a) token-level self-localization, where the reviser identifies the key tokens in the generator's response that need revision based on the reward, and (b) iterative self-evolution, where improvements in revision ability are distilled back into generation performance through regular teacher synchronization.
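The distillation step, which uses the reviser's token distributions as dense supervision for the generator, can be illustrated as a per-token divergence averaged over the response. The forward KL direction and plain-Python probability lists below are assumptions for illustration; the paper's actual objective may use a different divergence or weighting.

```python
import math

def token_kl(p_teacher, p_student):
    """Forward KL D(teacher || student) at one token position over the vocab."""
    return sum(p * math.log(p / q) for p, q in zip(p_teacher, p_student) if p > 0)

def self_distill_loss(teacher_dists, student_dists):
    """Mean per-token KL: reviser (teacher) distributions supervise the
    generator (student) at every position of the response."""
    kls = [token_kl(t, s) for t, s in zip(teacher_dists, student_dists)]
    return sum(kls) / len(kls)

# Two token positions over a 3-symbol toy vocabulary.
reviser   = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]
generator = [[0.5, 0.3, 0.2], [0.2, 0.6, 0.2]]
loss = self_distill_loss(reviser, generator)
```

Because supervision is a full distribution at every token rather than a single scalar reward per response, minimizing this loss gives the dense token-level signal the abstract contrasts with sparse RLVR rewards.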