Token Distillation: Attention-aware Input Embeddings For New Tokens
arXiv cs.CL · March 16, 2026
📰 News · Models & Research
Key Points
- The paper identifies two limitations of current language models: vocabularies are fixed at pretraining time, and adding new tokens afterwards typically requires costly retraining or extra modules.
- It introduces Token Distillation, a method that learns high-quality input embeddings for new tokens by distilling the representations the model already produces for their original tokenization (a rough sketch appears after this list).
- The approach initializes new embeddings rapidly and reduces training time while maintaining strong performance.
- Experimental results show Token Distillation outperforming strong baselines across a wide range of open-weight models, indicating practical benefits for adapting existing NLP systems to new vocabularies.
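The key points describe the method only at a high level. The snippet below is a minimal sketch of the general idea, not the paper's implementation: it learns an input embedding for one new token by matching the model's hidden state at that token to the hidden state produced by the word's original subword spelling. The model name (gpt2), the example word, the layer and position used as the target, the mean-of-subwords initialization, and the plain MSE loss are all illustrative assumptions; in particular, the paper's attention-aware objective is replaced here by simple hidden-state matching.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: small illustrative model; any causal LM with the same API would do
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model.eval()
for p in model.parameters():        # freeze the model; only the new embedding is trained
    p.requires_grad_(False)

prefix, new_word, suffix = "The patient underwent", " electroencephalography", " during the study."

# --- Teacher pass: the word in its original multi-token spelling -------------
teacher_ids = tokenizer(prefix + new_word + suffix, return_tensors="pt").input_ids
with torch.no_grad():
    teacher_out = model(teacher_ids, output_hidden_states=True)
prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
word_ids = tokenizer(new_word).input_ids
# Distillation target: last-layer hidden state at the word's final subword
# (assumption: the paper may target other layers or positions).
target_pos = prefix_ids.shape[1] + len(word_ids) - 1
target_hidden = teacher_out.hidden_states[-1][0, target_pos]

# --- Student: one learnable embedding row standing in for the new token ------
emb = model.get_input_embeddings()
new_emb = torch.nn.Parameter(emb.weight[word_ids].mean(dim=0).clone())  # mean-of-subwords init
suffix_ids = tokenizer(suffix, return_tensors="pt").input_ids
new_pos = prefix_ids.shape[1]       # the new token replaces all subwords at this position

optimizer = torch.optim.Adam([new_emb], lr=1e-3)
for step in range(200):             # assumption: step count and learning rate are illustrative
    optimizer.zero_grad()
    # Frozen embeddings for the context, the learnable row for the new token.
    inputs_embeds = torch.cat(
        [emb(prefix_ids), new_emb.view(1, 1, -1), emb(suffix_ids)], dim=1
    )
    out = model(inputs_embeds=inputs_embeds, output_hidden_states=True)
    student_hidden = out.hidden_states[-1][0, new_pos]
    loss = torch.nn.functional.mse_loss(student_hidden, target_hidden)
    loss.backward()
    optimizer.step()

print("final distillation loss:", loss.item())
# The learned vector could then be written into an enlarged embedding matrix,
# e.g. after tokenizer.add_tokens(...) and model.resize_token_embeddings(...).
```

Because only a single embedding vector is optimized while the rest of the model stays frozen, a procedure of this shape is cheap compared with resuming full training, which is consistent with the reduced-training-time claim above.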
Related Articles
Co-Activation Pattern Detection for Prompt Injection: A Mechanistic Interpretability Approach Using Sparse Autoencoders
Reddit r/LocalLLaMA

How to Train Custom Language Models: Fine-Tuning vs Training From Scratch (2026)
Dev.to

KoboldCpp 1.110 - 3 YR Anniversary Edition, native music gen, qwen3tts voice cloning and more
Reddit r/LocalLLaMA

Qwen3.5 Knowledge density and performance
Reddit r/LocalLLaMA

I think I made the best general use System Prompt for Qwen 3.5 (OpenWebUI + Web search)
Reddit r/LocalLLaMA