KoALa-Bench: Evaluating Large Audio Language Models on Korean Speech Understanding and Faithfulness

arXiv cs.CL · April 23, 2026

📰 News · Signals & Early Trends · Models & Research

Key Points

  • The paper introduces KoALa-Bench, a new benchmark focused on evaluating large audio language models (LALMs) for Korean speech understanding and faithfulness.
  • KoALa-Bench comprises six tasks: four targeting core understanding (ASR, speech translation, speech question answering, and speech instruction following) and two targeting faithfulness, i.e., whether models genuinely use and reflect the speech modality.
  • The benchmark incorporates Korea-specific elements by including listening-style questions based on the Korean CSAT (college scholastic ability test) and content from Korean cultural domains.
  • The authors run extensive experiments across six LALMs, covering both white-box and black-box models, and release the benchmark, evaluation code, and a public leaderboard.
  • All KoALa-Bench resources are available at https://ksbench.github.io/Korean-Benchmark/, helping to fill the gap in LALM evaluation benchmarks for non-English languages.

Abstract

Recent advances in large audio language models (LALMs) have enabled multilingual speech understanding. However, benchmarks for evaluating LALMs remain scarce for non-English languages, with Korean being one such underexplored case. In this paper, we introduce KoALa-Bench, a comprehensive benchmark for evaluating Korean speech understanding and speech faithfulness of LALMs. In particular, KoALa-Bench comprises six tasks. Four tasks evaluate fundamental speech understanding capabilities, including automatic speech recognition, speech translation, speech question answering, and speech instruction following, while the remaining two tasks evaluate speech faithfulness, motivated by our observation that several LALMs often fail to fully leverage the speech modality. Furthermore, to reflect Korea-specific knowledge, our benchmark incorporates listening questions from the Korean college scholastic ability test as well as content covering Korean cultural domains. We conduct extensive experiments across six models, including both white-box and black-box ones. Our benchmark, evaluation code, and leaderboard are publicly available at https://ksbench.github.io/Korean-Benchmark/.