Benchmarking Multilingual Speech Models on Pashto: Zero-Shot ASR, Script Failure, and Cross-Domain Evaluation

arXiv cs.CL / 4/7/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper presents the first reproducible, multi-model benchmarking of multilingual ASR on public Pashto datasets, using both the FLEURS Pashto test set and a filtered Common Voice 24 subset.
  • In zero-shot ASR, Whisper and other multilingual models show very high error rates overall (e.g., Whisper medium collapsing to 461% WER on Common Voice 24), with SeamlessM4T achieving the best reported zero-shot result at 39.7% WER on Common Voice 24.
  • Script failure is highlighted via a language-identification audit: no Whisper model produces Pashto-script text in more than 0.8% of utterances, while MMS-1B, SeamlessM4T, and OmniASR each exceed 93% Pashto-script fidelity, demonstrating that WER alone can miss critical failure modes.
  • Cross-domain testing of fine-tuned Pashto ASR models shows substantial out-of-distribution degradation (published ~14% WER rising to 32.5–59%), while one augmented approach achieves 35.1% WER on both domains, with no observed cross-domain degradation.
  • Character-class error analysis points to Pashto-specific phonemes (retroflex series and lateral fricatives) as major contributors to error, and the authors propose structural impediments plus ordered research priorities for cumulative progress.

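The 461% figure is worth pausing on: WER is edit distance (substitutions + deletions + insertions) divided by the number of reference words, so a looping decoder that emits many spurious tokens can push WER far above 100%. A minimal sketch (a hypothetical helper, not the paper's evaluation code) makes this concrete:

```python
# Minimal word error rate (WER) sketch. Because insertions are counted
# against the reference length, WER is unbounded above 100%.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Standard Levenshtein edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)

# A "looping" hypothesis: 2 reference words, 10 hypothesis words,
# 8 insertions -> WER = 8/2 = 4.0, i.e. 400%.
print(wer("hello world",
          "hello hello hello hello hello hello hello hello hello world"))
```

In production one would typically use an established implementation (e.g. the `jiwer` package), but the arithmetic above is why a collapsed decoder can report a WER of several hundred percent.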
Abstract

Pashto is spoken by approximately 60--80 million people but has no published benchmarks for multilingual automatic speech recognition (ASR) on any shared public test set. This paper reports the first reproducible multi-model evaluation on public Pashto data, covering zero-shot ASR, script-level failure, and cross-domain evaluation of fine-tuned models. For zero-shot ASR, ten models (all seven Whisper sizes, MMS-1B, SeamlessM4T-v2-large, and OmniASR-CTC-300M) are evaluated on the FLEURS Pashto test set and a filtered Common Voice 24 subset; zero-shot Whisper WER ranges from 90% to 297%, with the medium model collapsing to 461% on Common Voice 24, consistent with decoder looping. SeamlessM4T achieves 39.7% WER on Common Voice 24 (the best zero-shot result reported to date, as of submission); MMS-1B achieves 43.8% on FLEURS. For script failure, a language-identification audit shows that no Whisper model produces Pashto-script output in more than 0.8% of utterances, while MMS-1B, SeamlessM4T, and OmniASR each exceed 93% Pashto-script fidelity; WER alone does not reveal this failure, since a model generating Arabic-script output on Pashto audio has not achieved ASR in any interpretable sense. For cross-domain evaluation, five fine-tuned Pashto ASR models are evaluated on both test sets: published WER figures of 14% degrade to 32.5--59% on out-of-distribution sets, while one augmented model achieves 35.1% on both sets with zero cross-domain degradation. Character-class error stratification confirms that Pashto-unique phonemes (the retroflex series and lateral fricatives) account for disproportionate error mass. All evaluations cover read speech only. Five structural impediments to cumulative progress are identified, and five ordered research priorities are proposed.
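The script audit described above works because Pashto extends the Arabic script with letters that standard Arabic (and Persian) orthography lacks, including the retroflex series. As a rough illustration only (a crude heuristic, not the paper's actual language-identification method), one could flag transcripts that contain Pashto-specific letters:

```python
# Hypothetical heuristic for a script-level audit (NOT the paper's method):
# treat a transcript as Pashto-script only if it contains at least one
# letter unique to the Pashto extension of the Arabic script, e.g. the
# retroflexes ټ ډ ړ ڼ or the fricatives ښ ږ.
PASHTO_ONLY = set("ټځڅډړږښګڼۍې")

def looks_pashto(text: str) -> bool:
    return any(ch in PASHTO_ONLY for ch in text)

print(looks_pashto("ښار"))   # Pashto word containing ښ -> True
print(looks_pashto("سلام"))  # only shared Arabic/Persian letters -> False
```

A heuristic like this has obvious gaps (short utterances may contain no Pashto-unique letter), which is presumably why the paper uses a proper language-identification step; but it captures the core observation that Arabic-script output without any Pashto-specific letters is not a Pashto transcription.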