Multi-Token Prediction via Self-Distillation

arXiv cs.CL / 4/27/2026

💬 Opinion · Developer Stack & Infrastructure · Models & Research

Key Points

  • The paper proposes converting a pretrained autoregressive LLM from a single-token predictor into a fast multi-token predictor using a simple online self-distillation objective (a minimal sketch of one possible formulation follows this list).
  • Unlike speculative decoding, the method avoids training auxiliary speculator models and does not require complex multi-component inference pipelines.
  • The resulting multi-token model keeps the exact same implementation as the original pretrained checkpoint, enabling straightforward deployment.
  • Experiments show decoding more than 3× faster with less than a 5% accuracy drop on GSM8K, relative to single-token decoding with the same checkpoint.

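The summary above does not spell out the objective, so the following is a hedged sketch of how an online self-distillation loss for multi-token prediction could look in PyTorch. The frozen pretrained checkpoint acts as the teacher, and the student (initialized from the same checkpoint) is assumed to emit k next-token distributions per position in a single forward pass, with output shape (B, T, k, V). Every name and interface here (multi_token_distillation_loss, the k-head output shape, the KL formulation) is an illustrative assumption, not the paper's actual method.

```python
# Hypothetical sketch of an online self-distillation loss for multi-token
# prediction. The k-head student interface and the KL-based objective are
# assumptions about how such a loss could be implemented, not the paper's
# published formulation.
import torch
import torch.nn.functional as F

def multi_token_distillation_loss(student, teacher, input_ids, k=4, tau=1.0):
    """Distill k future-token distributions from a frozen single-token teacher.

    student: assumed to return logits of shape (B, T, k, V), i.e. k
             next-token distributions per position from one forward pass.
    teacher: frozen copy of the pretrained checkpoint returning (B, T, V).
    """
    with torch.no_grad():
        # The teacher runs as a normal next-token predictor on the true
        # prefix; its distribution at position t+j becomes the target for
        # the student's j-th prediction head at position t.
        teacher_logits = teacher(input_ids)                 # (B, T, V)
        teacher_logp = F.log_softmax(teacher_logits / tau, dim=-1)

    student_logits = student(input_ids)                     # (B, T, k, V)
    student_logp = F.log_softmax(student_logits / tau, dim=-1)

    T = input_ids.shape[1]
    loss = 0.0
    for j in range(k):
        # Head j at position t predicts token t+1+j, so align it with the
        # teacher's next-token distribution at position t+j.
        valid = T - 1 - j           # number of positions with a valid target
        if valid <= 0:
            break
        s = student_logp[:, :valid, j, :]                   # (B, valid, V)
        t = teacher_logp[:, j:j + valid, :]                 # (B, valid, V)
        loss = loss + F.kl_div(s, t, log_target=True, reduction="batchmean")
    return loss / k
```

Because the teacher's targets are generated online from the same weights the student started from, no auxiliary speculator model or external labels are needed, which is consistent with the deployment story in the key points.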
Abstract

Existing techniques for accelerating language model inference, such as speculative decoding, require training auxiliary speculator models and building and deploying complex inference pipelines. We consider a new approach for converting a pretrained autoregressive language model from a slow single next-token prediction model into a fast standalone multi-token prediction model using a simple online distillation objective. The final model retains exactly the same implementation as the pretrained initial checkpoint and is deployable without the addition of any auxiliary verifier or other specialized inference code. Our method produces models that decode more than 3× faster with a <5% drop in accuracy on GSM8K relative to the single-token decoding performance of the same checkpoint.
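To make the speedup claim concrete, here is a hedged sketch of what greedy decoding with such a model could look like, reusing the assumed (B, T, k, V) output shape from the training sketch above. With k=4, each forward pass commits up to four tokens, so the loop runs roughly 4× fewer passes than single-token decoding, which is one way a >3× wall-clock speedup could arise. The function name and interface are illustrative assumptions.

```python
# Hypothetical greedy decoding loop for a multi-token model: one forward
# pass emits k tokens at once, so far fewer passes are needed than in
# standard one-token-at-a-time decoding. The output shape is an assumption
# carried over from the training sketch, not the paper's actual interface.
import torch

@torch.no_grad()
def greedy_decode_multi(student, input_ids, max_new_tokens=128, k=4):
    prompt_len = input_ids.shape[1]
    out = input_ids                                     # (1, T) prompt
    while out.shape[1] < prompt_len + max_new_tokens:
        logits = student(out)                           # (1, T, k, V)
        # Take all k next-token predictions from the final position at once.
        next_tokens = logits[:, -1].argmax(dim=-1)      # (1, k)
        out = torch.cat([out, next_tokens], dim=1)
    # The last pass may overshoot by up to k-1 tokens; trim to the budget.
    return out[:, : prompt_len + max_new_tokens]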