KazByte: Adapting Qwen models to Kazakh via Byte-level Adapter

arXiv cs.CL / March 31, 2026


Key Points

  • The paper argues that mainstream LLM tokenizers impose a “tokenizer tax” on Kazakh: they inflate token counts, shrink the effective context window, and weaken the model's grip on Kazakh morphology.
  • It proposes “ByteKaz,” which bypasses the tokenizer by sending raw bytes through a small trainable adapter to interface with a frozen Qwen2.5-7B model.
  • After training the byte-level adapter, the method freezes the adapter and fine-tunes only Qwen’s attention layers on Kazakh data to adapt the model more efficiently.
  • The authors’ hypothesis is that this two-stage approach (interface learning then attention adaptation) can match or outperform the original Qwen2.5-7B on standard Kazakh benchmarks.
  • This arXiv version primarily documents the ByteKaz architecture and training protocol, with empirical validation reported as ongoing.
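The two-stage protocol in the key points can be sketched as a parameter-freezing schedule. This is a minimal illustration, not the paper's implementation: the group names (`byte_adapter`, `qwen_attention`, and so on) are hypothetical stand-ins for the real model components.

```python
# Hypothetical sketch of the two-stage training protocol described above.
# Parameter group names are illustrative, not taken from the paper's code.

def trainable_groups(stage: str) -> set[str]:
    """Return which parameter groups receive gradients in each stage."""
    if stage == "interface":
        # Stage 1: train only the byte-level adapter; the Qwen2.5-7B
        # backbone stays entirely frozen.
        return {"byte_adapter"}
    if stage == "adaptation":
        # Stage 2: freeze the adapter and fine-tune only the attention
        # layers of Qwen on Kazakh data.
        return {"qwen_attention"}
    raise ValueError(f"unknown stage: {stage}")
```

The key property is that the two stages touch disjoint parameter sets: the interface is learned first against a fixed backbone, then held fixed while the backbone's attention layers adapt.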

Abstract

Large language models fragment Kazakh text into many more tokens than equivalent English text, because their tokenizers were built for high-resource languages. This tokenizer tax inflates compute, shortens the effective context window, and weakens the model's grip on Kazakh morphology. We propose to bypass the tokenizer entirely by feeding raw bytes through a small adapter that learns to speak the internal language of a frozen Qwen2.5-7B. Once the adapter is trained, we freeze it and fine-tune only the attention layers of Qwen on Kazakh text. Our central hypothesis is that this two-stage process -- first teach the interface, then adapt the model -- should match or exceed the accuracy of the original Qwen2.5-7B on standard Kazakh benchmarks. This report describes the ByteKaz architecture and training protocol. Empirical validation is ongoing; this version stakes the design and hypotheses for the record.
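The byte-level front end described in the abstract can be illustrated with plain UTF-8. This is a sketch of the general idea only: whether ByteKaz adds offsets or special tokens on top of the 256 raw byte values is an assumption here, not something the abstract states.

```python
# Illustrative only: a byte-level input space needs no subword vocabulary,
# so Kazakh text cannot be fragmented by a tokenizer built for English.

def to_byte_ids(text: str) -> list[int]:
    """Map text to its raw UTF-8 byte values (0..255)."""
    return list(text.encode("utf-8"))

kazakh = to_byte_ids("сәлем")   # "hello" in Kazakh: 5 Cyrillic characters
english = to_byte_ids("hello")  # 5 ASCII characters

# Cyrillic characters cost 2 bytes each in UTF-8, so the Kazakh sequence is
# 10 ids to English's 5 -- but every id fits a fixed 256-entry vocabulary,
# with no language-dependent subword fragmentation.
```

A byte sequence is longer than a subword sequence for the same text, which is presumably part of what the adapter must compress before interfacing with the frozen Qwen backbone.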
