MTA: Multi-Granular Trajectory Alignment for Large Language Model Distillation

arXiv cs.CL / 5/5/2026

📰 News · Models & Research

Key Points

  • The paper proposes Multi-Granular Trajectory Alignment (MTA) to improve knowledge distillation by aligning how teacher and student representations evolve across Transformer depth, not just at fixed layers or token-level outputs.
  • MTA uses a layer-adaptive scheme: lower layers are aligned at the word level to preserve lexical information, while higher layers are aligned over phrase-level spans to capture compositional semantics (see the pooling sketch after this list).
  • It introduces a Dynamic Structural Alignment loss that matches the relative geometric structure among semantic units within each layer, aiming to transfer the teacher's internal relational knowledge more effectively (a minimal sketch of such a loss follows the pooling example below).
  • An additional Hidden Representation Alignment loss directly aligns selected teacher and student layers; experiments report consistent gains over prior distillation baselines, with ablation studies validating each component.
  • The method is motivated by the observation that Transformer representations become more abstract with depth and by linguistic theories that higher-level meaning is built compositionally from lower-level units.
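
To make the layer-adaptive granularity concrete, here is a minimal PyTorch sketch of pooling token hidden states into per-unit vectors. The function name, span format, and the choice of mean pooling are illustrative assumptions; the paper may aggregate spans differently.

```python
import torch

def pool_units(hidden, spans):
    """Mean-pool token hidden states into one vector per semantic unit.

    hidden: (seq_len, dim) hidden states from one Transformer layer.
    spans:  (start, end) token-index pairs. At lower layers these would be
            single words; at higher layers, phrase-level spans such as noun
            or verb phrases. Mean pooling is an illustrative assumption,
            not necessarily the paper's aggregation operator.
    """
    return torch.stack([hidden[start:end].mean(dim=0) for start, end in spans])
```

Applying this with word spans on lower layers and phrase spans on higher layers yields the per-layer unit matrices that the alignment losses operate on.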

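The Dynamic Structural Alignment loss is described as matching the relative geometric structure among semantic units. One common way to realize that idea is to compare pairwise cosine-similarity matrices between teacher and student units; the sketch below assumes that formulation, with cosine similarity and MSE as illustrative choices rather than the paper's stated ones.

```python
import torch.nn.functional as F

def dsa_loss(student_units, teacher_units):
    """Match relative geometry among semantic units within one layer.

    student_units: (n_units, d_student); teacher_units: (n_units, d_teacher).
    Comparing pairwise cosine-similarity matrices sidesteps the hidden-size
    mismatch between student and teacher, since both matrices are
    (n_units, n_units). Cosine similarity and MSE are assumptions here.
    """
    s = F.normalize(student_units, dim=-1)
    t = F.normalize(teacher_units, dim=-1)
    # Teacher geometry is detached: it serves as a fixed target.
    return F.mse_loss(s @ s.T, (t @ t.T).detach())
```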
Abstract

Knowledge distillation is a key technique for compressing large language models (LLMs), but most existing methods align representations at fixed layers or token-level outputs, ignoring how representations evolve across depth. As a result, the student is only weakly guided to capture the teacher's internal relational structure during distillation, which limits knowledge transfer. To address this limitation, we propose Multi-Granular Trajectory Alignment (MTA), a framework that aligns teacher and student representations along their layer-wise transformation trajectory. MTA adopts a layer-adaptive strategy: lower layers are aligned at the word level to preserve lexical information, while higher layers operate on phrase-level spans (e.g., noun and verb phrases) to capture compositional semantics. We instantiate this idea through a Dynamic Structural Alignment loss that matches the relative geometry among semantic units within each layer. This design is motivated by empirical findings that Transformer representations become increasingly abstract with depth, and is also consistent with linguistic views in which higher-level meaning emerges through the composition of lower-level lexical units. We further incorporate a Hidden Representation Alignment loss to directly align selected teacher-student layers. Experiments show that MTA consistently outperforms state-of-the-art baselines on standard benchmarks, with ablations confirming the contribution of each component.
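
The abstract does not spell out the Hidden Representation Alignment loss. A standard formulation projects a selected student layer into the teacher's hidden size and penalizes the distance; the sketch below assumes that setup, and the linear projection and MSE distance are illustrative choices, not confirmed details of the paper.

```python
import torch.nn as nn
import torch.nn.functional as F

class HiddenAlign(nn.Module):
    """Align one selected student layer to one selected teacher layer.

    A linear projection bridges the hidden-size gap; the paper may select
    layer pairs and map between them differently.
    """
    def __init__(self, d_student, d_teacher):
        super().__init__()
        self.proj = nn.Linear(d_student, d_teacher)

    def forward(self, h_student, h_teacher):
        # Teacher states are detached so only the student (and the
        # projection) receive gradients.
        return F.mse_loss(self.proj(h_student), h_teacher.detach())
```

A plausible overall objective then combines the usual distillation loss with the two alignment terms, e.g. `loss = kd_loss + alpha * dsa + beta * hra`, where `alpha` and `beta` are hypothetical weights not taken from the paper.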