Why Does Self-Distillation (Sometimes) Degrade the Reasoning Capability of LLMs?
arXiv cs.CL / 3/26/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper investigates why self-distillation, which typically improves LLM performance, can sometimes degrade mathematical reasoning; degraded models produce shorter responses and lower overall accuracy.
- The authors attribute the degradation to the suppression of epistemic verbalization, i.e., the model’s expression of uncertainty during reasoning.
- Experiments show that conditioning the teacher on richer information reduces its expression of uncertainty, which speeds in-domain optimization when task coverage is limited but harms out-of-distribution (OOD) generalization (see the sketch after this list).
- Across several models (Qwen3-8B, DeepSeek-Distill-Qwen-7B, and Olmo3-7B-Instruct), the study reports performance drops of up to 40%.
- The findings emphasize that robust reasoning requires preserving appropriate levels of uncertainty, not just reinforcing correct answer traces during post-training.
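To make the setup concrete, below is a minimal sketch of the kind of self-distillation pipeline the key points describe: the same model acts as teacher and student, the teacher is conditioned on richer information (here, the gold answer as a hint), and a crude phrase counter serves as a proxy for epistemic verbalization. The model name, hint format, sampling parameters, and hedge-phrase list are illustrative assumptions, not the paper's exact protocol.

```python
# Sketch only: assumes a Hugging Face causal LM; prompts, sampling settings,
# and the hedge list are illustrative, not the paper's actual setup.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

MODEL_NAME = "Qwen/Qwen3-8B"  # one of the models studied; any causal LM works here

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.bfloat16, device_map="auto"
)

def generate(prompt: str, max_new_tokens: int = 512) -> str:
    """Sample one reasoning trace from the model."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(
        **inputs, max_new_tokens=max_new_tokens, do_sample=True, temperature=0.7
    )
    # Return only the newly generated tokens, not the prompt.
    return tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

# Teacher: the SAME model, conditioned on richer information (the gold answer).
# Student: later fine-tuned on these traces, but prompted with the question alone.
def teacher_prompt(question: str, gold_answer: str) -> str:
    return (
        f"Question: {question}\n"
        f"(The correct answer is {gold_answer}.)\n"
        "Explain the reasoning step by step:"
    )

def student_prompt(question: str) -> str:
    return f"Question: {question}\nExplain the reasoning step by step:"

# Crude proxy for epistemic verbalization: count uncertainty phrases in a trace.
HEDGES = ["maybe", "perhaps", "i'm not sure", "let me double-check", "wait", "alternatively"]

def hedge_count(trace: str) -> int:
    t = trace.lower()
    return sum(t.count(h) for h in HEDGES)

question, gold = "What is 17 * 24?", "408"
teacher_trace = generate(teacher_prompt(question, gold))
baseline_trace = generate(student_prompt(question))
print("hedges (teacher conditioned on the answer):", hedge_count(teacher_trace))
print("hedges (unconditioned baseline):           ", hedge_count(baseline_trace))
```

Under the paper's account, the answer-conditioned teacher tends to produce traces with fewer such hedges; fine-tuning the student on those traces then suppresses its own uncertainty verbalization, which is the mechanism linked to the OOD degradation.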