Residual-as-Teacher: Mitigating Bias Propagation in Student--Teacher Estimation

arXiv stat.ML / 3/27/2026


Key Points

  • The paper studies student–teacher statistical estimation and argues that the standard approach of student soft matching (SM), in which the student is trained to mimic the teacher's outputs, can propagate the teacher's systematic bias into the student.
  • It proposes "residual-as-teacher" (RaT), in which the teacher is used to estimate residuals in the student's predictions rather than being matched directly (a toy sketch contrasting the two schemes follows this list).
  • The authors show theoretically that RaT can emulate a proximal gradient-style optimization process, and they establish non-asymptotic excess risk bounds together with convergence guarantees for an iterative student–teacher scheme.
  • For kernel-based student–teacher pairs, RaT is proven to achieve the minimax-optimal rate, while SM incurs a constant prediction error regardless of sample size.
  • Experiments on synthetic data and ImageNette classification under covariate shift support the theory, indicating that RaT mitigates bias propagation in practical settings.
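
The toy sketch below is illustrative only and does not reproduce the paper's exact RaT update. It contrasts (i) soft matching, where the student regresses on a biased teacher's outputs, with (ii) one plausible reading of residual-as-teacher, where a teacher-side fit explains the current student's residual and the student is refit on the corrected targets. All helper names (fit_ridge, teacher, the number of iterations) are hypothetical choices for this sketch.

```python
# Illustrative sketch only; the paper's exact algorithm is not reproduced here.
import numpy as np

rng = np.random.default_rng(0)

def fit_ridge(X, y, lam=1e-2):
    """Plain ridge regression; stands in for an arbitrary student/teacher fit."""
    d = X.shape[1]
    w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
    return lambda Z: Z @ w

# Toy data: labels follow a linear model; the pre-trained teacher carries a constant bias.
n, d = 500, 5
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
y = X @ w_star + 0.1 * rng.normal(size=n)
teacher = lambda Z: Z @ w_star + 0.5  # systematically biased teacher

# (i) Soft matching (SM): the student copies the teacher's outputs, bias included.
student_sm = fit_ridge(X, teacher(X))

# (ii) Residual-as-teacher-style iteration (one plausible reading): a teacher-side
# fit models the residual y - student(X); the student is then refit on its own
# predictions plus that residual estimate, a proximal-gradient-flavoured correction.
student_rat = fit_ridge(X, y)
for _ in range(5):
    residual_fit = fit_ridge(X, y - student_rat(X))
    student_rat = fit_ridge(X, student_rat(X) + residual_fit(X))

# Compare prediction error against the noiseless ground truth on fresh inputs.
X_test = rng.normal(size=(1000, d))
y_test = X_test @ w_star
print("SM  test MSE:", np.mean((student_sm(X_test) - y_test) ** 2))
print("RaT test MSE:", np.mean((student_rat(X_test) - y_test) ** 2))
```

In this toy, the SM student inherits the teacher's offset no matter how large n is, while the residual-correcting loop fits the labels directly, which is the qualitative behaviour the key points describe.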

Abstract

We study statistical estimation in a student--teacher setting, where predictions from a pre-trained teacher are used to guide a student model. A standard approach is to train the student to directly match the teacher's outputs, which we refer to as student soft matching (SM). This approach directly propagates any systematic bias or mis-specification present in the teacher, thereby degrading the student's predictions. We propose and analyze an alternative scheme, residual-as-teacher (RaT), in which the teacher is used to estimate residuals in the student's predictions. Our analysis shows how the student can thereby emulate a proximal gradient scheme for solving an oracle optimization problem, which provably reduces the effect of teacher bias. For general student--teacher pairs, we establish non-asymptotic excess risk bounds for any RaT fixed point, along with convergence guarantees for the iterative student--teacher scheme. For kernel-based student--teacher pairs, we prove a sharp separation: the RaT method achieves the minimax-optimal rate, while the SM method incurs constant prediction error for any sample size. Experiments on both synthetic data and ImageNette classification under covariate shift corroborate our theoretical findings.
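
For readers unfamiliar with the scheme the abstract says RaT emulates, the standard proximal-gradient iterate is shown below. This is only the generic textbook form; the paper's specific oracle objective g, regularizer h, and step size η are not reproduced here and stand in as placeholders.

```latex
% Generic proximal-gradient iterate (placeholders g, h, \eta; not the paper's exact objective):
f^{(t+1)} \;=\; \operatorname{prox}_{\eta h}\!\bigl(f^{(t)} - \eta \,\nabla g\bigl(f^{(t)}\bigr)\bigr),
\qquad
\operatorname{prox}_{\eta h}(v) \;=\; \arg\min_{u}\;\Bigl\{\tfrac{1}{2}\,\|u - v\|^{2} + \eta\, h(u)\Bigr\}.
```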