Benchmarking Multilingual Speech Models on Pashto: Zero-Shot ASR, Script Failure, and Cross-Domain Evaluation
arXiv cs.CL / 4/7/2026
Key Points
- The paper presents the first reproducible, multi-model benchmarking of multilingual ASR on public Pashto datasets, using both the FLEURS Pashto test set and a filtered Common Voice 24 subset.
- In zero-shot ASR, Whisper and other multilingual models show very high error rates overall (e.g., Whisper medium collapsing to 461% WER on Common Voice 24), with SeamlessM4T achieving the best reported zero-shot result at 39.7% WER on Common Voice 24.
- Script failure is highlighted via a language-identification audit: Whisper outputs Pashto-script text in <0.8% of utterances, while MMS-1B, SeamlessM4T, and OmniASR exceed 93% Pashto-script fidelity, demonstrating that WER alone can miss critical failure modes.
- Cross-domain testing of fine-tuned Pashto ASR models shows substantial out-of-distribution degradation (published ~14% WER rising to 32.5–59%), while one augmented approach achieves matched performance across both domains (35.1% WER on both) with no observed cross-domain degradation.
- Character-class error analysis points to Pashto-specific phonemes (the retroflex series and lateral fricatives) as major contributors to error; the authors identify structural impediments and propose an ordered set of research priorities for cumulative progress.
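The WER figures above can exceed 100% because WER divides the total of substitutions, deletions, and insertions by the reference length, so a hypothesis that hallucinates many extra words (as Whisper medium apparently does here) inflates the score without bound. A minimal sketch of the standard word-level edit-distance computation (not the paper's evaluation code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length.
    Counts substitutions, deletions, and insertions; can exceed 1.0 when
    the hypothesis hallucinates many extra words."""
    ref, hyp = reference.split(), hypothesis.split()
    # prev[j] = edit distance between the first i-1 reference words
    # and the first j hypothesis words (rolling DP row).
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            cost = 0 if r == h else 1
            curr.append(min(prev[j] + 1,        # deletion
                            curr[j - 1] + 1,    # insertion
                            prev[j - 1] + cost  # substitution / match
                            ))
        prev = curr
    return prev[-1] / max(len(ref), 1)

# A one-word reference with four inserted words yields 400% WER:
print(wer("a", "a b c d e"))  # → 4.0
```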
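The script-fidelity audit can be approximated with a simple Unicode-range check, since Pashto is written in the Arabic script (including retroflex letters such as ټ, ډ, ړ, ږ, ښ, ڼ, which live in the main Arabic block). This is a hedged sketch of such a heuristic, not the paper's language-identification method; the block list and 0.5 majority threshold are illustrative assumptions:

```python
# Unicode blocks covering Pashto's Arabic-script letters (assumed set).
ARABIC_RANGES = [
    (0x0600, 0x06FF),  # Arabic
    (0x0750, 0x077F),  # Arabic Supplement
    (0x08A0, 0x08FF),  # Arabic Extended-A
]

def arabic_script_ratio(text: str) -> float:
    """Fraction of alphabetic characters drawn from Arabic-script blocks."""
    letters = [c for c in text if c.isalpha()]
    if not letters:
        return 0.0
    cps = (ord(c) for c in letters)
    hits = sum(any(lo <= cp <= hi for lo, hi in ARABIC_RANGES) for cp in cps)
    return hits / len(letters)

def script_fidelity(hypotheses, threshold=0.5):
    """Share of ASR hypotheses written predominantly in Arabic script
    (illustrative majority threshold)."""
    flagged = [h for h in hypotheses if arabic_script_ratio(h) > threshold]
    return len(flagged) / len(hypotheses)

# One Pashto-script output, one Latin-script output → fidelity 0.5:
print(script_fidelity(["زه ښوونکی یم", "I am a teacher"]))  # → 0.5
```

A model like Whisper that transliterates or code-switches into Latin script would score near 0 on this audit even with a plausible-looking WER, which is the failure mode the paper's <0.8% figure captures.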