M$^3$-VQA: A Benchmark for Multimodal, Multi-Entity, Multi-Hop Visual Question Answering

arXiv cs.CV / 4/29/2026


Key Points

  • The paper introduces M$^3$-VQA, a new benchmark for evaluating multimodal large language models (MLLMs) on fine-grained, multi-entity visual question answering and multi-hop reasoning.
  • Unlike prior VQA datasets that emphasize coarse categories and single-entity questions, M$^3$-VQA includes diverse questions spanning multiple distinct entities sourced from both images and text.
  • The benchmark requires sequential and parallel multi-hop reasoning over multiple documents and provides traceable, detailed evidence via a curated multimodal knowledge base.
  • Experiments with 16 leading MLLMs show major gaps in knowledge acquisition and reasoning: performance is low without external knowledge, but improves substantially with gold evidence.
  • Retrieval results indicate that reasoning-aware agentic retrieval outperforms heuristic retrieval methods, suggesting that structured reasoning is critical for complex multimodal understanding; a minimal sketch of this retrieval pattern follows the list.
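
To make the agentic-versus-heuristic distinction concrete, the sketch below shows one plausible reasoning-aware retrieval loop. It is an illustration only: `llm`, `retriever`, their methods, and the prompt strings are hypothetical stand-ins, not the benchmark's actual code or API.

```python
# Sketch of a reasoning-aware agentic retrieval loop for multi-hop VQA.
# `llm`, `retriever`, and the prompts are hypothetical stand-ins; none of
# these names come from the M^3-VQA codebase.

def agentic_answer(question, image, llm, retriever, max_hops=4):
    """Decompose a multi-hop question, retrieving evidence one hop at a time."""
    evidence = []
    for _ in range(max_hops):
        # The "reasoning-aware" step: ask the model which fact it still needs.
        sub_question = llm.generate(
            f"Question: {question}\nEvidence so far: {evidence}\n"
            "State the single sub-question to answer next, or reply DONE "
            "if the evidence already suffices."
        )
        if sub_question.strip() == "DONE":
            break
        # Retrieve for this hop only, rather than one query for the whole question.
        evidence.extend(retriever.search(sub_question, image=image, top_k=3))
    # Compose the final answer from the accumulated per-hop evidence.
    return llm.generate(
        f"Question: {question}\nEvidence: {evidence}\nAnswer concisely."
    )
```

A heuristic baseline would instead issue a single `retriever.search(question)` call up front; the loop lets intermediate reasoning shape each query, which is the behavior the benchmark's retrieval results favor.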

Abstract

We present M$^3$-VQA, a novel knowledge-based Visual Question Answering (VQA) benchmark, to enhance the evaluation of multimodal large language models (MLLMs) in fine-grained multimodal entity understanding and complex multi-hop reasoning. Unlike existing VQA datasets that focus on coarse-grained categories and simple reasoning over single entities, M$^3$-VQA introduces diverse multi-entity questions involving multiple distinct entities from both visual and textual sources. It requires models to perform both sequential and parallel multi-hop reasoning across multiple documents, supported by traceable, detailed evidence and a curated multimodal knowledge base. We evaluate 16 leading MLLMs under three settings: without external knowledge, with gold evidence, and with retrieval-augmented input. The results reveal significant challenges for MLLMs in knowledge acquisition and reasoning: models perform poorly without external information but improve markedly when provided with precise evidence. Furthermore, reasoning-aware agentic retrieval surpasses heuristic methods, highlighting the importance of structured reasoning for complex multimodal understanding. M$^3$-VQA thus presents a more challenging evaluation for advancing the multimodal reasoning capabilities of MLLMs. Our code and dataset are available at https://github.com/CASIA-IVA-Lab/M3VQA.
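
To make the abstract's three evaluation settings concrete, here is a minimal sketch of how such a protocol could be wired up. The `model` and `retriever` interfaces, the per-example field names, and the exact-match scoring are all assumptions for illustration; the actual harness lives in the linked repository.

```python
# Minimal sketch of the three evaluation settings from the abstract.
# The `model`/`retriever` interfaces, example field names, and exact-match
# scoring are assumptions for illustration, not the benchmark's API.

def evaluate(model, dataset, setting, retriever=None):
    """Score a model under one of the three knowledge settings."""
    correct = 0
    for ex in dataset:  # ex: dict with "image", "question", "answer", "gold_evidence"
        if setting == "closed_book":      # no external knowledge
            context = []
        elif setting == "gold_evidence":  # oracle evidence from the curated KB
            context = ex["gold_evidence"]
        elif setting == "rag":            # retrieval-augmented input
            context = retriever.search(ex["question"], image=ex["image"], top_k=5)
        else:
            raise ValueError(f"unknown setting: {setting}")
        prediction = model.answer(ex["image"], ex["question"], context=context)
        correct += int(prediction.strip().lower() == ex["answer"].strip().lower())
    return correct / len(dataset)
```

Comparing the same model's score across the three settings isolates where it fails: a large closed-book-to-gold-evidence gap points to missing knowledge, while a large gold-to-retrieved gap points to retrieval quality.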