Explain in Your Own Words: Improving Reasoning via Token-Selective Dual Knowledge Distillation
arXiv cs.CL / 3/17/2026
📰 News · Ideas & Deep Analysis · Models & Research
Key Points
- Token-Selective Dual Knowledge Distillation (TSD-KD) is proposed to focus distillation on tokens important for reasoning and to let the student explain its reasoning in its own words.
- The method combines indirect feedback via preference ranking with direct distillation via selective distribution matching, where tokens are selected according to the relative confidence of teacher and student (see the sketch after this list).
- An entropy regularization term is added to maintain the student’s confidence during distillation.
- Experiments show state-of-the-art performance on 10 challenging reasoning benchmarks, with accuracy gains of up to 54.4% over baselines; in some cases the student surpasses its teacher by up to 20.3%.
- The authors provide the source code at the linked GitHub repository.
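To make the selective distribution matching and entropy regularization concrete, here is a minimal PyTorch sketch. The function name `tsd_kd_loss`, the selection rule (distilling only where the teacher's top-token confidence exceeds the student's by a factor `beta`), and the hyperparameters `beta` and `lam` are illustrative assumptions, not the paper's exact formulation; the preference-ranking (indirect feedback) component is omitted.

```python
import torch
import torch.nn.functional as F

def tsd_kd_loss(teacher_logits, student_logits, beta=1.0, lam=0.1):
    """Sketch of a token-selective distillation loss (assumed form).

    teacher_logits, student_logits: (seq_len, vocab) tensors over the
    same reasoning trace. All names and hyperparameters are illustrative.
    """
    t_probs = F.softmax(teacher_logits, dim=-1)
    s_log_probs = F.log_softmax(student_logits, dim=-1)
    s_probs = s_log_probs.exp()

    # Per-token confidence: probability each model assigns to its own
    # top prediction. Tokens where the teacher is markedly more
    # confident than the student are selected for direct matching
    # (assumed selection rule).
    t_conf = t_probs.max(dim=-1).values
    s_conf = s_probs.max(dim=-1).values
    mask = (t_conf > beta * s_conf).float()

    # Direct distillation: per-token KL(teacher || student),
    # averaged over the selected tokens only.
    kl = F.kl_div(s_log_probs, t_probs, reduction="none").sum(dim=-1)
    distill = (mask * kl).sum() / mask.sum().clamp(min=1.0)

    # Entropy regularization: penalizing high entropy keeps the
    # student's distributions sharp, preserving its confidence
    # during distillation.
    entropy = -(s_probs * s_log_probs).sum(dim=-1).mean()
    return distill + lam * entropy
```

In this reading, the confidence mask is what makes the loss token-selective: tokens the student already handles confidently are left to its own phrasing, while the positive entropy term keeps distillation from flattening the student's distributions.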