Token Distillation: Attention-aware Input Embeddings For New Tokens

arXiv cs.CL / March 16, 2026

Key Points

  • The paper identifies the limitations of static vocabularies in language models and the high cost of adding new tokens through retraining or extra modules.
  • It introduces Token Distillation, a method to learn high-quality input embeddings for new tokens by distilling representations from the original tokenization.
  • The approach enables rapid initialization of new embeddings and reduces training time while maintaining strong performance across open-weight models.
  • Experimental results show Token Distillation outperforms strong baselines across a wide range of models, indicating practical benefits for adapting NLP systems.

Abstract

Current language models rely on static vocabularies determined at pretraining time, which can lead to decreased performance and increased computational cost for domains underrepresented in the original vocabulary. New tokens can be added to solve this problem, when coupled with a good initialization for their new embeddings. However, existing embedding initialization methods require expensive further training or pretraining of additional modules. In this paper, we propose Token Distillation and show that by distilling representations obtained using the original tokenization, we can quickly learn high-quality input embeddings for new tokens. Experimental results with a wide range of open-weight models show that Token Distillation outperforms even strong baselines.
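To make the idea concrete, here is a minimal sketch of the distillation setup the abstract describes: a new token's input embedding is initialized from the subword embeddings of its original tokenization and then refined by gradient descent so that the model's representation of the single new token matches its representation of the original multi-token sequence. The toy encoder, the mean-pooling of the teacher representation, and the MSE loss are all illustrative assumptions, not the paper's exact objective.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # embedding dimension (toy size)

# Frozen embeddings of the two existing subword tokens that the new
# word is split into under the original vocabulary (assumed example).
subword_embs = rng.normal(size=(2, d))

# Toy frozen "encoder": tanh of a linear map, standing in for the
# model's representation function (an assumption for this sketch).
W = rng.normal(size=(d, d)) / np.sqrt(d)
encode = lambda e: np.tanh(e @ W)

# Teacher target: the model's representations of the original
# tokenization, mean-pooled (pooling choice is an assumption).
target = encode(subword_embs).mean(axis=0)

# Student: one new input embedding, initialized as the subword mean.
new_emb = subword_embs.mean(axis=0)

def mse(e):
    return float(np.sum((encode(e) - target) ** 2))

init_loss = mse(new_emb)

# Distill: gradient descent on the MSE between the new token's
# representation and the teacher target; only new_emb is updated.
lr = 0.05
for _ in range(2000):
    h = np.tanh(new_emb @ W)
    # d/d(new_emb) of ||tanh(eW) - target||^2
    grad = 2 * ((h - target) * (1 - h ** 2)) @ W.T
    new_emb -= lr * grad

final_loss = mse(new_emb)
```

Because the teacher targets come from the model's own representations of the original tokenization, no labeled data or auxiliary module is needed, which is what makes the initialization cheap relative to further pretraining.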