From Where Words Come: Efficient Regularization of Code Tokenizers Through Source Attribution

arXiv cs.CL / 4/16/2026


Key Points

  • The paper argues that code-tokenizer quality strongly affects LLM efficiency and safety, including defenses against jailbreaks and reductions in hallucination risk.
  • It identifies a key problem in tokenizer training: imbalanced repository and language diversity can produce many unused or under-trained tokens, while repetitive, source-specific tokens are often unusable at inference time.
  • The proposed solution, Source-Attributed BPE (SA-BPE), modifies the BPE training objective and introduces merge skipping to regularize training and reduce overfitting to specific sources.
  • The authors claim SA-BPE substantially lowers the number of under-trained tokens while keeping the same inference procedure as standard BPE, making it suitable for production deployment.

Abstract

The efficiency and safety of Large Language Models (LLMs) depend, among other factors, on the quality of tokenization. A good tokenizer not only improves inference speed and language understanding but also provides extra defense against jailbreak attacks and lowers the risk of hallucinations. In this work, we investigate the efficiency of code tokenization, in particular from the perspective of data-source diversity. We demonstrate that code tokenizers are prone to producing unused, and thus under-trained, tokens due to the imbalance in repository and language diversity in the training data, as well as the dominance of source-specific, repetitive tokens that are often unusable at inference time. By modifying the BPE objective and introducing merge skipping, we implement several techniques, collectively named Source-Attributed BPE (SA-BPE), that regularize BPE training and minimize overfitting, substantially reducing the number of under-trained tokens while keeping the inference procedure identical to that of regular BPE. This provides an effective tool suitable for production use.
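The merge-skipping idea can be sketched in a few lines of code. The toy sketch below is an illustration only, not the paper's actual SA-BPE objective: it ranks candidate pairs by raw frequency, tracks which data source each pair occurs in, and skips any merge supported by fewer than a threshold number of distinct sources. The function names (`sa_bpe_train`, `merge_pair`) and the `min_sources` criterion are assumptions made for this example.

```python
from collections import Counter


def pair_counts_by_source(corpus):
    """Count adjacent token pairs and record which sources each pair occurs in.

    corpus: list of (source_id, token_sequence) tuples.
    """
    counts = Counter()   # pair -> total frequency across all sources
    sources = {}         # pair -> set of source ids the pair appears in
    for src, seq in corpus:
        for pair in zip(seq, seq[1:]):
            counts[pair] += 1
            sources.setdefault(pair, set()).add(src)
    return counts, sources


def merge_pair(seq, pair):
    """Greedily replace every occurrence of `pair` in `seq` with the merged token."""
    out, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
            out.append(seq[i] + seq[i + 1])
            i += 2
        else:
            out.append(seq[i])
            i += 1
    return out


def sa_bpe_train(corpus, num_merges, min_sources=2):
    """Toy source-attributed BPE: skip merges seen in too few distinct sources.

    This is an illustrative stand-in for SA-BPE's merge skipping; the real
    training objective in the paper may differ.
    """
    merges = []
    corpus = [(src, list(seq)) for src, seq in corpus]
    for _ in range(num_merges):
        counts, sources = pair_counts_by_source(corpus)
        # Pick the most frequent pair that is attributed to enough sources.
        for pair, _ in counts.most_common():
            if len(sources[pair]) >= min_sources:
                best = pair
                break
        else:
            break  # every remaining candidate is too source-specific
        merges.append(best)
        corpus = [(src, merge_pair(seq, best)) for src, seq in corpus]
    return merges
```

With `min_sources=1` this degenerates to plain frequency-ranked BPE; raising the threshold prunes merges that only ever occur inside a single repository, which is the kind of source-specific, repetitive token the paper identifies as wasted vocabulary.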