Scaling Properties of Continuous Diffusion Spoken Language Models

arXiv cs.CL / 4/28/2026


Key Points

  • The paper evaluates speech-only spoken language models (SLMs) and asks whether continuous diffusion (CD) SLMs can be more practical than discrete autoregressive (AR) SLMs for matching text-model performance.
  • It introduces the phoneme Jensen-Shannon divergence (pJSD) metric to quantify linguistic quality, and finds that CD SLMs follow scaling laws for validation loss and pJSD similar to AR models.
  • The study shows that the optimal token-to-parameter ratio decreases as compute scales, but the loss can become insensitive to dataset and model size under certain regimes, suggesting potential for faster inference.
  • Scaling CD SLMs up to 16B parameters with tens of millions of hours of conversational data enables emotive, prosodic, multi-speaker, multilingual speech, while long-form coherence remains a major unsolved problem.
  • Overall, the work provides guidance on how to scale diffusion-based SLMs and highlights both promising efficiency signals and remaining challenges for long-duration dialogue generation.

Abstract

Speech-only spoken language models (SLMs) lag behind text and text-speech models in performance, with recent discrete autoregressive (AR) SLMs indicating significant computational and data demands to match text models. Since discretizing continuous speech for AR creates bottlenecks, we explore whether continuous diffusion (CD) SLMs are more viable. To quantify SLMs' linguistic quality, we introduce the phoneme Jensen-Shannon divergence (pJSD) metric. Our analysis reveals that CD SLMs, mirroring AR behavior, exhibit scaling laws for validation loss and pJSD, and show optimal token-to-parameter ratios decreasing as compute scales. However, for pJSD, loss becomes insensitive to the choice of data and model sizes, showing potential for fast inference. Scaling CD SLMs to 16B parameters with tens of millions of hours of conversational data enables generation of emotive, prosodic, multi-speaker, multilingual speech, though achieving long-form coherence remains a significant challenge.
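
The abstract describes pJSD only at a high level. As a rough illustration, the Jensen-Shannon divergence between two phoneme distributions can be computed as below; the unigram simplification, function names, and toy phoneme sequences are assumptions for illustration, not the paper's exact formulation:

```python
from collections import Counter
from math import log2

def jsd(p, q):
    """Jensen-Shannon divergence between two discrete distributions.

    Base-2 logs, so the result lies in [0, 1]; 0 means identical
    distributions, larger values mean less overlap.
    """
    keys = set(p) | set(q)
    # Mixture distribution M = (P + Q) / 2
    m = {k: 0.5 * (p.get(k, 0.0) + q.get(k, 0.0)) for k in keys}

    def kl(a, b):
        # KL(A || B); terms with a[k] == 0 contribute 0 by convention
        return sum(a[k] * log2(a[k] / b[k]) for k in a if a[k] > 0)

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def phoneme_dist(phonemes):
    """Turn a phoneme sequence into a unigram frequency distribution."""
    counts = Counter(phonemes)
    total = sum(counts.values())
    return {ph: c / total for ph, c in counts.items()}

# Toy comparison: phonemes transcribed from generated vs. reference speech
gen = phoneme_dist(["AH", "B", "AH", "T", "S"])
ref = phoneme_dist(["AH", "B", "AH", "T", "AH"])
score = jsd(gen, ref)  # lower = generated speech closer to reference
```

A lower score indicates the generated speech's phoneme statistics track the reference data more closely, which is why the metric can serve as a proxy for linguistic quality in scaling-law fits.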