Decoupling the Benefits of Subword Tokenization for Language Model Training via Byte-level Simulation

arXiv cs.CL / 5/1/2026


Key Points

  • The paper investigates how subword tokenization affects both training efficiency and model performance by isolating its contributions in a controlled byte-level pretraining setup.
  • It tests specific hypotheses about several factors, including sample throughput, vocabulary scaling, and the linguistic prior for where subword boundaries should occur.
  • Experiments show that subword models can outperform raw byte models, and the authors attribute this advantage chiefly to higher training throughput (see the sketch after this list).
  • The study also finds that incorporating subword boundaries, whether as explicit priors or as inductive biases, is important for better performance.
  • The findings provide guidance for improving the pretraining of future byte-level and subword-based language models.
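
To make the throughput point concrete, here is a minimal sketch (ours, not the paper's code): under a fixed per-step token budget, a subword tokenizer that compresses text into fewer units lets each training step consume proportionally more raw text than a byte-level model. The ~4 bytes-per-token compression ratio below is an illustrative assumption, not a measurement from the paper.

```python
# Illustrative only: compare how much raw text a fixed token budget covers
# under byte-level vs. subword tokenization.

def units_per_text(text: str, bytes_per_unit: float) -> float:
    """Approximate sequence length as raw byte count divided by the
    tokenizer's average compression ratio (bytes per unit)."""
    return len(text.encode("utf-8")) / bytes_per_unit

text = "Subword tokenization compresses text into fewer training units."
byte_len = units_per_text(text, bytes_per_unit=1.0)     # byte-level: 1 byte per unit
subword_len = units_per_text(text, bytes_per_unit=4.0)  # assumed BPE-like compression

print(f"byte-level units: {byte_len:.0f}")
print(f"subword units:    {subword_len:.0f}")
# With the same per-step token budget, the subword model sees ~4x more raw
# text per training step than the byte-level model.
print(f"text throughput ratio: {byte_len / subword_len:.1f}x")
```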

Abstract

Subword tokenization is an essential part of modern large language models (LLMs), yet its specific contributions to training efficiency and model performance remain poorly understood. In this work, we decouple the effects of subword tokenization by isolating them within a controlled byte-level pretraining pipeline. We formulate and test hypotheses across various dimensions, including sample throughput, vocabulary scaling, and the linguistic prior of subword boundaries. By simulating these effects in a byte-level setting, we refine our understanding of why subword models outperform raw byte models and offer insights to improve the pretraining of future byte-level and subword models. Specifically, our experiments highlight the critical role of increased training throughput and the integration of subword boundaries as either explicit priors or inductive biases.
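
To illustrate what "subword boundaries as an explicit prior" could look like in a byte-level pipeline, here is a minimal sketch under our own assumptions (the `toy_segment` whitespace segmenter is a hypothetical stand-in for a real tokenizer, and the indicator-feature design is ours, not the authors' method): the raw byte stream is paired with a 0/1 signal marking the bytes where a reference tokenizer would begin a new subword.

```python
# Illustrative only: expose subword boundaries to a byte-level model as an
# auxiliary 0/1 feature aligned with the byte sequence.

from typing import List, Tuple

def toy_segment(text: str) -> List[str]:
    """Hypothetical segmenter: whitespace split, standing in for BPE output."""
    return text.split()

def bytes_with_boundary_prior(text: str) -> Tuple[List[int], List[int]]:
    """Return (byte ids, boundary indicators) aligned position by position."""
    byte_ids: List[int] = []
    boundaries: List[int] = []
    for piece in toy_segment(text):
        piece_bytes = piece.encode("utf-8")
        for i, b in enumerate(piece_bytes):
            byte_ids.append(b)
            boundaries.append(1 if i == 0 else 0)  # 1 marks a subword-initial byte
    return byte_ids, boundaries

ids, prior = bytes_with_boundary_prior("byte level model")
print(ids[:6])    # first byte ids: [98, 121, 116, 101, 108, 101]
print(prior[:6])  # boundary prior: [1, 0, 0, 0, 1, 0]
```

The same boundary signal could instead be baked in as an inductive bias, for example by pooling byte representations within each segment, rather than fed as an explicit input feature.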