AI Navigate

Task-Specific Knowledge Distillation via Intermediate Probes

arXiv cs.AI / March 16, 2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The proposed method distills knowledge by training lightweight probes on frozen teacher hidden states and using the probes' predictions as supervision for the student, instead of the teacher's output logits (see the sketch after this list).
  • Probes on intermediate representations provide cleaner labels than the teacher's own outputs, effectively denoising the distillation signal by bypassing the brittle vocabulary projection and answer-token selection.
  • The approach yields consistent improvements across four reasoning benchmarks (AQuA-RAT, ARC Easy/Challenge, and MMLU), with gains most pronounced when data is scarce.
  • It requires no architectural changes to either student or teacher, is architecture-agnostic, and adds minimal compute since probe training is cheap and teacher representations can be cached.
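
The probe-training stage can be pictured with a short sketch. This is a minimal illustration under assumed details (a multiple-choice task, a single linear probe over one intermediate layer's final-token hidden state; `cached_states`, `hidden_dim`, and `num_choices` are hypothetical names, not the paper's code):

```python
import torch
import torch.nn as nn

hidden_dim, num_choices = 4096, 4  # assumed sizes, e.g. four answer options

# A lightweight probe: one linear layer mapping a frozen teacher hidden
# state (final-token activation at some intermediate layer) to answer logits.
probe = nn.Linear(hidden_dim, num_choices)
optimizer = torch.optim.AdamW(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_probe(cached_states: torch.Tensor, labels: torch.Tensor, epochs: int = 5):
    """cached_states: (N, hidden_dim) teacher activations, precomputed once
    and cached; labels: (N,) gold answer indices. The teacher is never updated."""
    for _ in range(epochs):
        logits = probe(cached_states)   # probe predictions over answer choices
        loss = loss_fn(logits, labels)
        optimizer.zero_grad()
        loss.backward()                 # gradients reach only the probe's weights
        optimizer.step()
    return probe
```

Because the teacher's hidden states are computed once and reused, this stage amounts to training a single linear layer, which is where the "minimal compute" claim comes from.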

Abstract

Knowledge distillation from large language models (LLMs) assumes that the teacher's output distribution is a high-quality training signal. On reasoning tasks, this assumption is frequently violated. A model's intermediate representations may encode the correct answer, yet this information is lost or distorted through the vocabulary projection, where prompt formatting and answer-token choices create brittle, noisy outputs. We introduce a distillation framework that bypasses this bottleneck by training lightweight probes on frozen teacher hidden states and using their predictions, rather than the teacher's output logits, as supervision for student training. This simple change yields consistent improvements across four reasoning benchmarks (AQuA-RAT, ARC Easy/Challenge, and MMLU), with gains most pronounced under limited data. Probes trained on intermediate representations provide cleaner labels than the teacher's own outputs, effectively denoising the distillation signal. The framework requires no architectural changes to student or teacher, is architecture-agnostic, and adds minimal compute, since probe training is cheap and teacher representations can be cached. By exploiting internal representations, it enables practitioners to extract more value from large teacher models without additional training data or architectural complexity.
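
On the student side, the probe's predictions simply replace the teacher's output logits as the distillation target. A hedged sketch of what that loss could look like, assuming a standard soft-target/hard-label mixture (the temperature, weighting, and tensor names are illustrative assumptions, not values from the paper):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor, probe_logits: torch.Tensor,
                      labels: torch.Tensor, temperature: float = 2.0,
                      alpha: float = 0.5) -> torch.Tensor:
    """student_logits, probe_logits: (batch, num_choices) answer-choice scores;
    labels: (batch,) gold answers. Mixes soft probe targets with hard labels.
    Temperature and alpha are assumed hyperparameters, not from the paper."""
    soft_targets = F.softmax(probe_logits / temperature, dim=-1)
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        soft_targets,
        reduction="batchmean",
    ) * temperature ** 2  # standard scaling to keep gradient magnitudes stable
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss
```

Softening the probe logits with a temperature is the usual distillation choice; the only departure from standard logit distillation is where the soft targets come from.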