LoASR-Bench: Evaluating Large Speech Language Models on Low-Resource Automatic Speech Recognition Across Language Families

arXiv cs.CL / 3/23/2026


Key Points

  • LoASR-Bench is a comprehensive benchmark for evaluating the low-resource automatic speech recognition (ASR) performance of the latest SpeechLMs, covering 25 languages from 9 language families in both Latin and non-Latin scripts.
  • The work argues that existing benchmarks predominantly target high-resource languages, leaving SpeechLM behavior on low-resource languages poorly understood.
  • Spanning multiple families and scripts enables cross-linguistic and cross-script assessment of how well SpeechLM-based ASR generalizes beyond high-resource languages.
  • Experimental results show that current SpeechLMs still fall short on real-world low-resource languages, pointing to directions for more robust multilingual ASR.
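
ASR benchmarks of this kind are typically scored with word error rate (WER), or character error rate (CER) for scripts without whitespace word boundaries; the paper's exact metric choice is not stated here, so the following is a minimal sketch of the standard definitions rather than LoASR-Bench's own scoring code:

```python
# Hedged sketch: standard WER/CER scoring via Levenshtein edit distance.
# Metric names and formulas are the conventional ASR ones, not taken
# from the LoASR-Bench paper itself.
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences (single-row DP)."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,          # deletion
                dp[j - 1] + 1,      # insertion
                prev + (r != h),    # substitution (0 cost on match)
            )
    return dp[-1]

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    return edit_distance(ref, hyp) / max(len(ref), 1)

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate, better suited to non-Latin scripts."""
    return edit_distance(list(reference), list(hypothesis)) / max(len(reference), 1)
```

For example, `wer("the cat sat", "the cat sag")` scores one substitution out of three reference words, i.e. roughly 0.33, while CER on the same pair is much lower because only one character differs.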

Abstract

Large language models (LLMs) have driven substantial advances in speech language models (SpeechLMs), yielding strong performance in automatic speech recognition (ASR) under high-resource conditions. However, existing benchmarks predominantly focus on high-resource languages, leaving the ASR behavior of SpeechLMs on low-resource languages insufficiently understood. This gap is critical: practical ASR systems must reliably support low-resource languages and generalize across diverse language families, and it directly hinders the deployment of SpeechLM-based ASR in real-world multilingual scenarios. To address this problem, we propose **LoASR-Bench**, a comprehensive benchmark designed to evaluate **lo**w-resource **a**utomatic **s**peech **r**ecognition (**ASR**) with the latest SpeechLMs across diverse language families. LoASR-Bench comprises 25 languages from 9 language families, featuring both Latin and non-Latin scripts, and enables cross-linguistic and cross-script assessment of the ASR performance of current SpeechLMs. Experimental results highlight the limitations of the latest SpeechLMs in handling real-world low-resource languages.