Why Does Self-Distillation (Sometimes) Degrade the Reasoning Capability of LLMs?

arXiv cs.CL · March 26, 2026


Key Points

  • The paper investigates why self-distillation, which often improves LLM performance, can instead degrade mathematical reasoning, producing shorter responses with worse overall accuracy.
  • The authors attribute the degradation to the suppression of epistemic verbalization, i.e., the model’s expression of uncertainty during reasoning.
  • Experiments show that conditioning the teacher on richer information reduces uncertainty expression, which helps fast in-domain optimization with limited task coverage but harms out-of-distribution (OOD) generalization.
  • Across several models (Qwen3-8B, DeepSeek-Distill-Qwen-7B, and Olmo3-7B-Instruct), the study reports performance drops of up to 40%.
  • The findings emphasize that robust reasoning requires preserving appropriate levels of uncertainty, not just reinforcing correct answer traces during post-training.
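The paper's central quantity, how much a model verbalizes uncertainty while reasoning, can be approximated by counting hedging phrases in a trace. A minimal sketch follows; the marker lexicon is our illustrative assumption, not the paper's published method:

```python
import re

# Hypothetical hedging markers; the paper does not specify its exact
# lexicon, so this list is an illustrative assumption.
HEDGE_MARKERS = [
    "wait", "maybe", "not sure", "let me double-check",
    "alternatively", "hmm", "i might be wrong",
]

def hedge_rate(trace: str) -> float:
    """Return hedging markers per 100 words in a reasoning trace."""
    words = trace.split()
    if not words:
        return 0.0
    text = trace.lower()
    hits = sum(len(re.findall(re.escape(m), text)) for m in HEDGE_MARKERS)
    return 100.0 * hits / len(words)
```

Comparing this rate before and after self-distillation on held-out problems would surface the suppression effect the authors describe.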

Abstract

Self-distillation has emerged as an effective post-training paradigm for LLMs, often improving performance while shortening reasoning traces. However, in mathematical reasoning, we find that it can reduce response length while degrading performance. We trace this degradation to the suppression of epistemic verbalization, the model's expression of uncertainty during reasoning. Through controlled experiments varying conditioning context richness and task coverage, we show that conditioning the teacher on rich information suppresses uncertainty expression, enabling rapid in-domain optimization with limited task coverage but harming OOD performance, where unseen problems benefit from expressing uncertainty and adjusting accordingly. Across Qwen3-8B, DeepSeek-Distill-Qwen-7B, and Olmo3-7B-Instruct, we observe performance drops of up to 40%. Our findings highlight that exposing appropriate levels of uncertainty is crucial for robust reasoning and underscore the importance of optimizing reasoning behavior beyond merely reinforcing correct answer traces.