Diffutron: A Masked Diffusion Language Model for the Turkish Language

arXiv cs.CL · March 24, 2026


Key Points

  • The paper introduces Diffutron, a masked diffusion language model tailored to Turkish, aiming to address the gap in using masked diffusion approaches for morphologically rich languages.
  • Diffutron is built with a resource-efficient pipeline: LoRA-based continual pre-training of a multilingual encoder on a large-scale corpus.
  • To make the model generative, the authors use a progressive instruction-tuning strategy that adapts the model in stages using general then task-specific instruction sets.
  • Benchmark experiments show that, even with a compact model size, Diffutron delivers competitive results versus much larger multi-billion-parameter autoregressive baselines.
  • Overall, the work argues that combining masked diffusion modeling with multi-stage tuning is effective for non-autoregressive text generation in Turkish.
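The non-autoregressive decoding the key points describe can be illustrated with a toy loop: start from a fully masked sequence and reveal a fraction of positions per step instead of generating left to right. This is a minimal sketch of the general masked-diffusion sampling idea, not the paper's implementation; `toy_denoiser` is a hypothetical stand-in for the trained model, and real MDLMs typically unmask by prediction confidence rather than at random.

```python
import random

MASK = "[MASK]"

def toy_denoiser(tokens):
    """Hypothetical stand-in for the trained model: proposes a filler
    token for every masked position. A real MDLM scores the vocabulary."""
    return [f"word{i}" if t == MASK else t for i, t in enumerate(tokens)]

def generate(length, steps=4, seed=0):
    """Iteratively unmask a fully masked sequence over a fixed number of
    steps -- the core non-autoregressive decoding loop of an MDLM."""
    rng = random.Random(seed)
    tokens = [MASK] * length
    for step in range(steps):
        preds = toy_denoiser(tokens)
        masked = [i for i, t in enumerate(tokens) if t == MASK]
        # reveal an evenly sized chunk each step (confidence-based in practice)
        k = max(1, len(masked) // (steps - step))
        for i in rng.sample(masked, min(k, len(masked))):
            tokens[i] = preds[i]
    return tokens

print(generate(8))
```

Because every step can fill multiple positions in parallel, the number of model calls is the step count, not the sequence length, which is the usual efficiency argument for diffusion-style decoding.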

Abstract

Masked Diffusion Language Models (MDLMs) have emerged as a compelling non-autoregressive alternative to standard large language models; however, their application to morphologically rich languages remains limited. In this paper, we introduce Diffutron, a masked diffusion language model specifically designed for Turkish. Our approach leverages a resource-efficient training pipeline, starting with LoRA-based continual pre-training of a multilingual encoder on a large-scale corpus. To enable generative capabilities, we employ a progressive instruction-tuning strategy, sequentially adapting the model on general and task-specific instruction sets. Experimental results across comprehensive benchmarks demonstrate that, despite its compact size, our model achieves competitive performance compared to existing multi-billion-parameter baselines. These findings validate the effectiveness of masked diffusion modeling combined with multi-stage tuning for non-autoregressive text generation in Turkish.
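The "resource-efficient" claim rests on LoRA: instead of updating the full encoder weights during continual pre-training, only a low-rank update is trained. A minimal NumPy sketch of a LoRA-augmented linear layer, under the assumption of the standard formulation (frozen weight W plus trainable A·B scaled by alpha/rank, with B zero-initialized so the layer starts identical to the pretrained one); the paper itself would use standard PEFT tooling inside the encoder, not this toy class.

```python
import numpy as np

class LoRALinear:
    """Frozen dense layer plus a trainable low-rank update:
    y = x @ (W + scale * A @ B).T  -- a toy illustration of LoRA."""

    def __init__(self, d_in, d_out, rank=8, alpha=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(d_out, d_in))          # frozen pretrained weight
        self.A = rng.normal(size=(d_out, rank)) * 0.01   # trainable down-projection
        self.B = np.zeros((rank, d_in))                  # trainable; zero-init => no change at start
        self.scale = alpha / rank

    def __call__(self, x):
        return x @ (self.W + self.scale * self.A @ self.B).T

    def trainable_fraction(self):
        """Trainable parameters (A, B) as a fraction of the frozen weight."""
        return (self.A.size + self.B.size) / self.W.size

layer = LoRALinear(d_in=768, d_out=768)
```

With d = 768 and rank 8, the adapter trains roughly 2% of the parameters of the dense weight it modifies, which is why continual pre-training of a multilingual encoder becomes feasible on a modest budget.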