LEAP: Layer-wise Exit-Aware Pretraining for Efficient Transformer Inference

arXiv cs.LG / 5/5/2026


Key Points

  • The paper shows that common transformer efficiency methods—layer-aligned distillation and convergence-based early exit—are systematically incompatible under standard deployment conditions, making early-exit ineffective on distilled models.
  • It introduces LEAP (Layer-wise Exit-Aware Pretraining), which adds an auxiliary training constraint to reconcile distillation with early-exit behavior without any architectural changes; a minimal sketch of this constraint follows the list.
  • Experiments indicate that LEAP-MiniLM achieves a measurable wall-clock speedup (1.61× on an NVIDIA L4, batch=1) and enables early exit for 91.9% of samples by layer 7, whereas standard distilled models yield zero effective speedup.
  • The work validates performance on sentence similarity (STS-B) and retrieval (BEIR) tasks, and provides practical deployment guidance such as latency measurements and decision thresholds.

Abstract

Layer-aligned distillation and convergence-based early exit represent two predominant computational efficiency paradigms for transformer inference, yet we establish that they exhibit systematic incompatibility under standard deployment conditions. Distillation objectives that align intermediate student layers to teacher representations suppress the representational convergence that early-exit mechanisms exploit, rendering such mechanisms ineffective on distilled models. We introduce LEAP (Layer-wise Exit-Aware Pretraining), an auxiliary training objective that reconciles this incompatibility. LEAP requires no architectural modifications; it augments standard distillation with a single constraint ensuring that intermediate layers approximate final-layer representations. LEAP-MiniLM achieves a 1.61× measured wall-clock speedup (batch=1, NVIDIA L4) at θ = 0.95, with 91.9% of samples exiting by layer 7 and a 1.80× theoretical layer reduction, where standard distilled models achieve zero effective speedup. We validate across sentence similarity (STS-B: 0.760 ± 0.006) and retrieval benchmarks (BEIR), providing operational guidance including latency measurements, decision thresholds, and deployment criteria.