Daily-Omni: Towards Audio-Visual Reasoning with Temporal Alignment across Modalities

arXiv cs.CL / 3/12/2026

Key Points

  • Daily-Omni is a new multiple-choice audio-visual QA benchmark featuring 684 real-world videos and 1,197 questions that require cross-modal temporal reasoning across audio and video.
  • The authors build the benchmark with a semi-automatic pipeline covering annotation, cross-modal consistency refinement, temporal alignment elicitation, and text-only leakage filtering, followed by human verification, which keeps construction scalable (a sketch of these stages follows this list).
  • They evaluate 24 foundation models across 37 model–modality settings (Audio+Video / Audio-only / Video-only / Text-only) and provide a training-free modular baseline that composes off-the-shelf unimodal models as a diagnostic reference.
  • Results show that many end-to-end multimodal LLMs struggle on alignment-critical questions, highlighting robust cross-modal temporal alignment as an open challenge for multimodal AI.
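
The pipeline lends itself to a simple filter chain. Below is a minimal, runnable Python sketch of the stage order under stated assumptions: every function here (annotate, consistent_across_modalities, needs_temporal_alignment, leaks_in_text_only) is a hypothetical stand-in, since the paper names the stages but this digest does not describe their actual implementations.

```python
# Hypothetical sketch of the semi-automatic QA construction pipeline.
# Stage names follow the paper; all component logic is a stand-in.

from dataclasses import dataclass, field


@dataclass
class Candidate:
    """One candidate multiple-choice QA item for a video."""
    video_id: str
    question: str
    choices: list[str]
    answer_idx: int
    audio_events: list[str] = field(default_factory=list)
    visual_events: list[str] = field(default_factory=list)


def annotate(video_id: str) -> Candidate:
    """Stage 1 (stand-in): draft a QA item from per-modality annotations."""
    return Candidate(
        video_id=video_id,
        question="What happens right after the doorbell rings?",
        choices=["A dog barks", "A phone rings", "Someone claps", "Music starts"],
        answer_idx=0,
        audio_events=["doorbell@3.2s", "bark@4.1s"],
        visual_events=["dog turns head@4.0s"],
    )


def consistent_across_modalities(c: Candidate) -> bool:
    """Stage 2 (stand-in): keep items whose audio and visual event lists
    both cover the facts the answer depends on."""
    return bool(c.audio_events) and bool(c.visual_events)


def needs_temporal_alignment(c: Candidate) -> bool:
    """Stage 3 (stand-in): keep items whose answer hinges on event timing,
    so neither modality alone suffices."""
    return any("@" in e for e in c.audio_events + c.visual_events)


def leaks_in_text_only(c: Candidate, text_only_llm) -> bool:
    """Stage 4: discard items a text-only LLM answers correctly from the
    question and choices alone (no audio, no video)."""
    return text_only_llm(c.question, c.choices) == c.answer_idx


def build_benchmark(video_ids, text_only_llm, human_ok=lambda c: True):
    kept = []
    for vid in video_ids:
        c = annotate(vid)
        if not consistent_across_modalities(c):
            continue
        if not needs_temporal_alignment(c):
            continue
        if leaks_in_text_only(c, text_only_llm):
            continue  # answerable without watching or listening: drop it
        if human_ok(c):  # Stage 5: human verification
            kept.append(c)
    return kept


if __name__ == "__main__":
    # A guessing LLM stub that always picks choice 0, so the demo item "leaks"
    # and the final benchmark is empty.
    always_first = lambda question, choices: 0
    print(len(build_benchmark(["demo_video"], always_first)))  # -> 0
```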

Abstract

Recent Multimodal Large Language Models (MLLMs) achieve promising performance on visual and audio benchmarks independently. However, the ability of these models to process cross-modal information synchronously remains largely unexplored. We introduce Daily-Omni, a multiple-choice Audio-Visual QA benchmark featuring 684 real-world videos and 1,197 questions spanning 6 task families that explicitly require cross-modal temporal reasoning. To support scalable benchmark construction, we develop a semi-automatic pipeline for annotation, cross-modal consistency refinement, temporal alignment elicitation, and text-only leakage filtering, followed by human verification. We further provide a diagnostic evaluation suite and extensively evaluate 24 foundation models under 37 model–modality settings (Audio+Video / Audio-only / Video-only / Text-only). Finally, we include a training-free modular baseline that composes off-the-shelf unimodal models, serving as a diagnostic reference and illustrating how explicit temporal alignment signals affect performance. Results indicate that many end-to-end MLLMs still struggle on alignment-critical questions, suggesting that robust cross-modal temporal alignment remains an important open challenge.
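
To make the modular-baseline idea concrete, here is a minimal sketch in the same spirit: unimodal describers emit timestamped events, the events are merged into one chronological transcript (the explicit temporal alignment signal), and a text-only LLM answers the multiple-choice question. The component functions and the use_audio/use_video flags are assumptions for illustration, not the paper's published interface; the flags mirror the Audio+Video / Audio-only / Video-only / Text-only settings.

```python
# Hypothetical training-free modular baseline: compose off-the-shelf
# unimodal models via a shared, time-aligned text transcript.

def audio_captioner(video_path: str) -> list[tuple[float, str]]:
    """Stand-in for an ASR / audio-event model returning (time, event)."""
    return [(3.2, "a doorbell rings"), (4.1, "a dog barks")]


def video_captioner(video_path: str) -> list[tuple[float, str]]:
    """Stand-in for a visual captioner returning (time, event)."""
    return [(4.0, "the dog turns its head toward the door")]


def answer(question: str, choices: list[str], llm, video_path: str,
           use_audio: bool = True, use_video: bool = True) -> int:
    """Build a time-aligned transcript and let a text LLM pick a choice.
    The use_* flags reproduce the four modality settings."""
    events = []
    if use_audio:
        events += audio_captioner(video_path)
    if use_video:
        events += video_captioner(video_path)
    events.sort(key=lambda e: e[0])  # explicit temporal alignment signal
    context = "\n".join(f"[{t:5.1f}s] {desc}" for t, desc in events)
    prompt = f"{context}\n\nQ: {question}\n" + "\n".join(
        f"{i}. {c}" for i, c in enumerate(choices))
    return llm(prompt)


if __name__ == "__main__":
    dummy_llm = lambda prompt: 0  # stand-in for any text-only LLM
    idx = answer("What happens right after the doorbell rings?",
                 ["A dog barks", "Music starts"], dummy_llm, "demo.mp4")
    print(idx)
```

Sorting the merged events by timestamp is the only place alignment enters this sketch; dropping the sort turns the same components into an alignment-free variant, which is one way such a baseline can probe how much the explicit alignment signal contributes.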