SozKZ: Training Efficient Small Language Models for Kazakh from Scratch

arXiv cs.CL / March 24, 2026


Key Points

  • The paper introduces SozKZ, a family of Llama-architecture small language models (50M–600M parameters) trained from scratch specifically on Kazakh using a dedicated 50K BPE tokenizer optimized for the language’s agglutinative morphology.
  • SozKZ is trained on 9 billion Kazakh tokens and is evaluated on three Kazakh benchmarks (cultural multiple-choice QA, reading comprehension, and topic classification) against five multilingual baselines of up to 3B parameters.
  • The 600M model reaches 30.3% accuracy on Kazakh cultural QA, closely approaching Llama-3.2-1B (32.0%) despite being much smaller, and achieves 25.5% on topic classification, outperforming all evaluated multilingual models of up to 2B parameters.
  • The authors report consistent scaling behavior across 50M to 600M parameters, with cultural QA accuracy improving from 22.8% to 30.3%, indicating that additional scaling may further help.
  • All model weights and the tokenizer are released under open licenses, positioning the approach as a computationally efficient path for low-resource language technology.
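The dedicated 50K BPE tokenizer is central to the approach: BPE learns subword merges from corpus statistics, so a tokenizer trained only on Kazakh can discover stems and suffixes that a multilingual tokenizer splinters into characters. The toy sketch below (not the paper's tokenizer; the corpus and word forms are invented for illustration) shows the core BPE training loop on agglutinative-style word forms sharing one stem:

```python
from collections import Counter

def train_bpe(corpus, num_merges):
    """Learn BPE merges from a list of words (toy sketch, stdlib only)."""
    # Represent each word as a tuple of symbols plus an end-of-word marker.
    vocab = Counter(tuple(word) + ("</w>",) for word in corpus)
    merges = []
    for _ in range(num_merges):
        # Count adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for word, freq in vocab.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Apply the winning merge to every word in the vocabulary.
        new_vocab = Counter()
        for word, freq in vocab.items():
            merged, i = [], 0
            while i < len(word):
                if i < len(word) - 1 and (word[i], word[i + 1]) == best:
                    merged.append(word[i] + word[i + 1])
                    i += 2
                else:
                    merged.append(word[i])
                    i += 1
            new_vocab[tuple(merged)] += freq
        vocab = new_vocab
    return merges

# Hypothetical agglutinative forms sharing the stem "kitap" ("book"):
# plural, locative, and ablative suffixes stacked on one stem.
corpus = ["kitap", "kitaptar", "kitaptarda", "kitaptardan"] * 5
merges = train_bpe(corpus, num_merges=10)
```

Because every word shares the stem, the earliest merges reconstruct pieces of it, which is exactly why a language-specific tokenizer yields shorter, morphology-aligned token sequences; SozKZ scales the same idea to a 50K vocabulary over 9B tokens.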

Abstract

Kazakh, a Turkic language spoken by over 22 million people, remains underserved by existing multilingual language models, which allocate minimal capacity to low-resource languages and employ tokenizers ill-suited to agglutinative morphology. We present SozKZ, a family of Llama-architecture language models (50M-600M parameters) trained entirely from scratch on 9 billion tokens of Kazakh text with a dedicated 50K BPE tokenizer. We evaluate all models on three Kazakh benchmarks -- multiple-choice cultural QA, reading comprehension (Belebele), and topic classification (SIB-200) -- alongside five multilingual baselines ranging from 500M to 3B parameters. Our 600M model achieves 30.3% accuracy on Kazakh cultural QA, approaching the 32.0% of Llama-3.2-1B (2x larger), and 25.5% on SIB-200 topic classification, surpassing all evaluated multilingual models up to 2B parameters. We observe consistent scaling from 50M to 600M, with MC QA accuracy rising from 22.8% to 30.3%, suggesting that further scaling remains beneficial. These results demonstrate that small, dedicated models trained from scratch with a language-appropriate tokenizer offer a viable path for low-resource language technology, achieving competitive performance at a fraction of the computational cost. All models and the tokenizer are released under open licenses.