AI Navigate

MMOU: A Massive Multi-Task Omni Understanding and Reasoning Benchmark for Long and Complex Real-World Videos

arXiv cs.CL / 3/17/2026


Key Points

  • MMOU introduces a large-scale benchmark (15,000 questions and 9,038 real-world videos) to evaluate multimodal understanding and reasoning across visual, audio, and textual signals in long-form content.
  • The benchmark spans 13 skill categories that require integrating evidence across modalities and time, with questions manually annotated over multiple rounds by professional annotators to ensure high quality and reasoning fidelity.
  • Evaluation across 20+ models shows substantial performance gaps, with the best closed-source model at 64.2% accuracy and the top open-source model at 46.8%, highlighting the difficulty of long-form omni-modal reasoning.
  • The analysis identifies systematic failure modes and provides actionable insights into where current models break, outlining directions for future research and model improvements.

Abstract

Multimodal Large Language Models (MLLMs) have shown strong performance in visual and audio understanding when evaluated in isolation. However, their ability to jointly reason over omni-modal (visual, audio, and textual) signals in long and complex videos remains largely unexplored. We introduce MMOU, a new benchmark designed to systematically evaluate multimodal understanding and reasoning under these challenging, real-world conditions. MMOU consists of 15,000 carefully curated questions paired with 9,038 web-collected videos of varying length, spanning diverse domains and exhibiting rich, tightly coupled audio-visual content. The benchmark covers 13 fundamental skill categories, all of which require integrating evidence across modalities and time. All questions are manually annotated over multiple rounds by professional annotators, ensuring high quality and reasoning fidelity. We evaluate 20+ state-of-the-art open-source and proprietary multimodal models on MMOU. The results expose substantial performance gaps: the best closed-source model achieves only 64.2% accuracy, while the strongest open-source model reaches just 46.8%. Our results highlight the challenges of long-form omni-modal understanding, revealing that current models frequently fail to apply even fundamental skills in long videos. Through detailed analysis, we further identify systematic failure modes and provide insights into where and why current models break.
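
To make the reported numbers concrete: the abstract does not specify MMOU's release format or evaluation protocol, but accuracy benchmarks of this kind are typically scored with a harness like the minimal Python sketch below. The JSONL schema, the field names (video, skill, question, options, answer), and the file name mmou_questions.jsonl are all assumptions for illustration, and predict is a stub standing in for a real omni-modal model call.

```python
import json
from collections import defaultdict

# Hypothetical record schema -- MMOU's actual release format is not
# described in the abstract; these field names are illustrative only:
# {"video": "clips/000123.mp4", "skill": "temporal_reasoning",
#  "question": "...", "options": ["A) ...", "B) ...", ...], "answer": "B"}

def predict(video_path: str, question: str, options: list[str]) -> str:
    """Placeholder for a real omni-modal model call.

    A real harness would decode the video's frames and audio track,
    build a prompt from the question and options, and parse the chosen
    option letter out of the model's response.
    """
    return "A"  # trivial baseline: always pick the first option

def evaluate(items: list[dict]) -> dict[str, float]:
    """Compute overall and per-skill accuracy over benchmark items."""
    correct: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    for item in items:
        pred = predict(item["video"], item["question"], item["options"])
        total[item["skill"]] += 1
        if pred == item["answer"]:
            correct[item["skill"]] += 1
    scores = {skill: correct[skill] / total[skill] for skill in total}
    scores["overall"] = sum(correct.values()) / sum(total.values())
    return scores

if __name__ == "__main__":
    # Hypothetical file name; one JSON object per line.
    with open("mmou_questions.jsonl") as f:
        items = [json.loads(line) for line in f]
    for skill, acc in sorted(evaluate(items).items()):
        print(f"{skill:>24}: {acc:.1%}")
```

Under this kind of protocol, the headline figures read directly as the "overall" score: the best closed-source model answers 64.2% of the 15,000 questions correctly, and the per-skill breakdown is what surfaces the systematic failure modes the authors analyze.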