PARSA-Bench: A Comprehensive Persian Audio-Language Model Benchmark
arXiv cs.CL · March 17, 2026
Key Points
- PARSA-Bench is introduced as the first benchmark to evaluate large audio-language models on Persian language and culture, with 16 tasks and over 8,000 samples across speech understanding, paralinguistic analysis, and cultural audio understanding.
- Ten tasks are newly introduced, including poetry meter and style detection, traditional Persian music understanding, and code-switching detection, expanding evaluation beyond existing benchmarks.
- The study finds that text-only baselines outperform audio-language models, suggesting current systems rely more on transcription than on the audio signal itself.
- Culturally grounded tasks reveal distinct failure modes, such as near-random Vazn (poetic meter) detection across model scales.
- The dataset is publicly available on HuggingFace.
