Reliability-Gated Multi-Teacher Distillation for Low-Resource Abstractive Summarization

arXiv cs.CL / 4/6/2026


Key Points

  • The paper proposes reliability-aware multi-teacher knowledge distillation for low-resource abstractive summarization, introducing EWAD (entropy-weighted, agreement-aware routing of supervision) and CPDP (capacity-proportional divergence preservation) to better combine teacher and gold supervision.
  • Experiments on Bangla datasets and multiple BanglaT5/Qwen2.5 settings find that logit-level KD yields the most consistent gains, while more complex distillation can improve semantic similarity for short summaries but harm longer outputs.
  • Cross-lingual pseudo-label KD across 10 languages is reported to retain 71–122% of teacher ROUGE-L performance while achieving 3.2× compression, indicating efficient student learning.
  • Human-validated multi-judge LLM evaluation suggests that single-judge pipelines can introduce calibration bias, motivating more robust evaluation protocols for summarization quality.

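The key points describe EWAD as routing supervision per token between teacher distillation and gold labels according to inter-teacher agreement. The sketch below illustrates one plausible reading of that idea; the function names, the use of averaged-teacher entropy as the agreement signal, and the linear gate are our illustrative assumptions, not the paper's exact formulation.

```python
import math

def entropy(p):
    """Shannon entropy (nats) of a discrete distribution."""
    return -sum(q * math.log(q) for q in p if q > 0)

def ewad_token_loss(teacher_dists, gold_idx, student_dist):
    """Blend distillation and gold cross-entropy for a single token.

    teacher_dists: list of per-teacher probability vectors over the vocab.
    gold_idx: index of the gold reference token.
    student_dist: student's probability vector for this position.
    """
    k = len(teacher_dists)
    v = len(teacher_dists[0])
    # Average the teacher distributions for this token.
    avg = [sum(t[i] for t in teacher_dists) / k for i in range(v)]
    # High entropy of the averaged distribution = low inter-teacher agreement.
    gate = 1.0 - entropy(avg) / math.log(v)  # -> 1 when teachers agree sharply
    # Distillation term: student cross-entropy against the averaged teachers.
    kd = -sum(avg[i] * math.log(student_dist[i]) for i in range(v) if avg[i] > 0)
    # Gold term: standard cross-entropy on the reference token.
    ce = -math.log(student_dist[gold_idx])
    # Route supervision: trust teachers where they agree, gold where they do not.
    return gate * kd + (1.0 - gate) * ce
```

When the teachers concentrate mass on the same token, the gate approaches 1 and the teacher signal dominates; when they disagree, the loss falls back toward gold supervision, which matches the routing behavior the key points attribute to EWAD.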
Abstract

We study multi-teacher knowledge distillation for low-resource abstractive summarization from a reliability-aware perspective. We introduce EWAD (Entropy-Weighted Agreement-Aware Distillation), a token-level mechanism that routes supervision between teacher distillation and gold supervision based on inter-teacher agreement, and CPDP (Capacity-Proportional Divergence Preservation), a geometric constraint on the student's position relative to heterogeneous teachers. Across two Bangla datasets, 13 BanglaT5 ablations, and eight Qwen2.5 experiments, we find that logit-level KD provides the most reliable gains, while more complex distillation improves semantic similarity for short summaries but degrades longer outputs. Cross-lingual pseudo-label KD across ten languages retains 71–122% of teacher ROUGE-L at 3.2× compression. A human-validated multi-judge LLM evaluation further reveals calibration bias in single-judge pipelines. Overall, our results show that reliability-aware distillation helps characterize when multi-teacher supervision improves summarization and when data scaling outweighs loss engineering.
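The abstract singles out logit-level KD as the most reliably beneficial variant. For reference, here is a minimal temperature-scaled logit-matching loss in the standard Hinton-style form; this is a generic sketch of logit-level distillation, not the paper's exact objective, and the temperature value is an illustrative assumption.

```python
import math

def softmax(logits, t=1.0):
    """Numerically stable temperature-scaled softmax."""
    m = max(logits)
    exps = [math.exp((z - m) / t) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def logit_kd_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened logits for one token."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    # The conventional T^2 scaling keeps gradient magnitudes comparable
    # to a hard-label cross-entropy term when the two are mixed.
    return temperature ** 2 * kl
```

In practice this term is summed over sequence positions and combined with gold cross-entropy; its simplicity, relative to agreement routing or geometric constraints, is consistent with the paper's finding that plain logit-level KD gives the most consistent gains.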