Words at Play: Benchmarking Audio Pun Understanding in Large Audio-Language Models

arXiv cs.CL / 3/20/2026

Key Points

  • APUN-Bench is introduced as the first benchmark specifically for assessing large audio-language models on understanding spoken puns.
  • The benchmark includes 4,434 audio samples annotated for pun recognition, pun location, and pun meaning inference.
  • The paper evaluates 10 state-of-the-art LALMs and finds substantial gaps in recognizing, localizing, and interpreting audio puns.
  • It identifies challenges such as positional biases in pun location and errors in meaning inference, offering actionable guidance for advancing humor-aware audio intelligence.

Abstract

Puns are a classic linguistic phenomenon that exploits polysemy and phonetic ambiguity to generate humour, posing unique challenges for natural language understanding. Beyond text and images, audio plays a central role in human communication, yet datasets and systematic resources for spoken puns remain scarce, leaving this crucial modality largely underexplored. In this paper, we present APUN-Bench, the first benchmark dedicated to evaluating large audio-language models (LALMs) on audio pun understanding. Our benchmark contains 4,434 audio samples annotated across three stages: pun recognition, pun word location, and pun meaning inference. We conduct a deep analysis of APUN-Bench by systematically evaluating 10 state-of-the-art LALMs, uncovering substantial performance gaps in recognizing, localizing, and interpreting audio puns. This analysis reveals key challenges, such as positional biases in audio pun location and error patterns in meaning inference, offering actionable insights for advancing humour-aware audio intelligence.
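The three annotation stages described above could, in principle, be scored independently per sample. The sketch below is a hypothetical illustration of such a three-stage evaluation loop; the `PunSample` fields, the model's output format, and the accuracy metrics are all assumptions for illustration, not APUN-Bench's actual API or protocol.

```python
# Hypothetical sketch of a three-stage pun evaluation (recognition,
# word location, meaning inference). All names and the model interface
# are illustrative assumptions, not taken from the APUN-Bench paper.
from dataclasses import dataclass
from typing import Optional


@dataclass
class PunSample:
    audio_path: str
    has_pun: bool              # stage 1 label: pun recognition
    pun_word: Optional[str]    # stage 2 label: pun word location
    meanings: list             # stage 3 label: intended double meanings


def evaluate(model, samples):
    """Score a model on the three stages; location and inference
    are only scored on samples that actually contain a pun."""
    correct = {"recognition": 0, "location": 0, "inference": 0}
    for s in samples:
        pred = model(s.audio_path)  # assumed to return a dict of predictions
        correct["recognition"] += pred["has_pun"] == s.has_pun
        if s.has_pun:
            correct["location"] += pred.get("pun_word") == s.pun_word
            correct["inference"] += set(pred.get("meanings", [])) == set(s.meanings)
    n_pun = sum(s.has_pun for s in samples)
    return {
        "recognition_acc": correct["recognition"] / len(samples),
        "location_acc": correct["location"] / max(n_pun, 1),
        "inference_acc": correct["inference"] / max(n_pun, 1),
    }
```

Scoring location and inference only on pun-bearing samples keeps those metrics from being inflated by trivially correct "no pun" cases; whether the benchmark itself conditions the later stages this way is an assumption here.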