OmniVoice: Towards Omnilingual Zero-Shot Text-to-Speech with Diffusion Language Models

arXiv cs.CL / 4/3/2026


Key Points

  • OmniVoice is a large multilingual, zero-shot text-to-speech (TTS) model designed to cover 600+ languages using a diffusion-style discrete non-autoregressive architecture.
  • Instead of a two-stage text-to-semantic-to-acoustic pipeline, it maps input text directly to multi-codebook acoustic tokens, avoiding the performance bottlenecks of complex two-stage setups.
  • Training efficiency is improved by a full-codebook random masking strategy, and intelligibility by initializing the model from a pre-trained LLM.
  • Trained on a 581k-hour multilingual dataset curated entirely from open-source data, OmniVoice reports state-of-the-art results across Chinese, English, and multilingual benchmarks.
  • The authors provide the code and pre-trained models publicly on GitHub, enabling researchers and developers to evaluate and build upon the approach.
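The paper does not spell out the masking procedure in this summary, but a common reading of "full-codebook random masking" in discrete diffusion-style NAR models is that a random subset of frames is masked jointly across all codebooks, and the model learns to predict the masked tokens. A minimal sketch under that assumption (the function name, `MASK_ID` placeholder, and token layout are illustrative, not from the paper):

```python
import random

MASK_ID = -1  # illustrative placeholder id for the [MASK] token


def full_codebook_random_mask(tokens, mask_ratio, rng):
    """Mask the same randomly chosen frames across *all* codebooks.

    tokens: list of C codebook sequences, each a list of T token ids
            (one id per acoustic frame).
    Returns (masked, frames): the masked copy of `tokens`, and the
    sorted frame indices the model would be trained to predict.
    """
    num_frames = len(tokens[0])
    k = max(1, round(mask_ratio * num_frames))
    frames = sorted(rng.sample(range(num_frames), k))
    masked = [list(seq) for seq in tokens]  # copy; leave input intact
    for codebook in masked:
        for t in frames:
            codebook[t] = MASK_ID
    return masked, frames


# Example: 2 codebooks, 4 frames, mask half the frames.
rng = random.Random(0)
tokens = [[11, 12, 13, 14],
          [21, 22, 23, 24]]
masked, frames = full_codebook_random_mask(tokens, 0.5, rng)
```

Masking entire frames across every codebook at once (rather than masking each codebook independently) keeps the prediction target coherent: the model reconstructs all residual-quantizer levels of a frame together, which is one plausible reason such a strategy simplifies training.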

Abstract

We present OmniVoice, a massive multilingual zero-shot text-to-speech (TTS) model that scales to over 600 languages. At its core is a novel diffusion language model-style discrete non-autoregressive (NAR) architecture. Unlike conventional discrete NAR models that suffer from performance bottlenecks in complex two-stage (text-to-semantic-to-acoustic) pipelines, OmniVoice directly maps text to multi-codebook acoustic tokens. This simplified approach is facilitated by two key technical innovations: (1) a full-codebook random masking strategy for efficient training, and (2) initialization from a pre-trained LLM to ensure superior intelligibility. By leveraging a 581k-hour multilingual dataset curated entirely from open-source data, OmniVoice achieves the broadest language coverage to date and delivers state-of-the-art performance across Chinese, English, and diverse multilingual benchmarks. Our code and pre-trained models are publicly available at https://github.com/k2-fsa/OmniVoice.