Do MLLMs Understand Pointing? Benchmarking and Enhancing Referential Reasoning in Egocentric Vision

arXiv cs.CV / April 24, 2026


Key Points

  • The paper finds that even advanced multimodal LLMs often struggle to ground the spatial meaning of pointing in egocentric (first-person) vision, instead relying on misleading cues like object proximity or saliency.
  • It introduces EgoPoint-Bench, a new QA benchmark of over 11k simulated and real-world samples designed to measure and improve referential reasoning about pointing, spanning five evaluation dimensions and three levels of referential complexity (see the evaluation sketch after this list).
  • Experiments show that both proprietary and open-source state-of-the-art models have difficulty with egocentric pointing tasks.
  • Fine-tuning models on the authors’ synthetic data leads to substantial accuracy improvements and strong sim-to-real generalization, supporting the value of spatially aware supervision.
  • The work argues that better spatial grounding is key for building precise egocentric AI assistants and provides a scalable evaluation path for future systems.
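
To make the benchmark's structure concrete, here is a minimal Python sketch of how a pointing QA benchmark of this shape could be scored. The `Sample` schema, its field names, and the `query_mllm` stub are illustrative assumptions, not EgoPoint-Bench's actual data format or evaluation harness.

```python
# Hypothetical scoring loop for a pointing QA benchmark like EgoPoint-Bench.
# The sample schema and the model stub are assumptions for illustration only.
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Sample:
    image_path: str     # egocentric frame containing the pointing gesture
    question: str       # e.g. "Which object is the user pointing at?"
    choices: list[str]  # multiple-choice options
    answer: str         # ground-truth choice
    dimension: str      # one of the five evaluation dimensions
    complexity: str     # one of the three referential-complexity levels


def query_mllm(sample: Sample) -> str:
    """Placeholder for a call to a multimodal LLM; must return one choice."""
    raise NotImplementedError("plug in your model or API client here")


def evaluate(samples: list[Sample]) -> dict[str, float]:
    """Accuracy overall and broken down by dimension and complexity level."""
    correct: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    for s in samples:
        pred = query_mllm(s)
        for key in ("overall", f"dim:{s.dimension}", f"lvl:{s.complexity}"):
            total[key] += 1
            correct[key] += int(pred == s.answer)
    return {k: correct[k] / total[k] for k in total}
```

Reporting accuracy per dimension and per complexity level, rather than a single aggregate number, is what lets a benchmark like this separate genuine pointing comprehension from shortcuts such as picking the nearest or most salient object.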

Abstract

Egocentric AI agents, such as smart glasses, rely on pointing gestures to resolve referential ambiguities in natural language commands. However, despite advancements in Multimodal Large Language Models (MLLMs), current systems often fail to precisely ground the spatial semantics of pointing. Instead, they rely on spurious correlations with visual proximity or object saliency, a phenomenon we term "Referential Hallucination." To address this gap, we introduce EgoPoint-Bench, a comprehensive question-answering benchmark designed to evaluate and enhance multimodal pointing reasoning in egocentric views. Comprising over 11k high-fidelity simulated and real-world samples, the benchmark spans five evaluation dimensions and three levels of referential complexity. Extensive experiments demonstrate that while state-of-the-art proprietary and open-source models struggle with egocentric pointing, models fine-tuned on our synthetic data achieve significant performance gains and robust sim-to-real generalization. This work highlights the importance of spatially aware supervision and offers a scalable path toward precise egocentric AI assistants. Project page: https://guyyyug.github.io/EgoPoint-Bench/
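
To illustrate what "spatially aware supervision" might look like in practice, the following is a hypothetical synthetic fine-tuning record in a common image-conversation JSONL format. All paths, field names, and the rationale text are assumptions for illustration, not the paper's released data schema.

```python
# Hypothetical instruction-tuning record for pointing-aware supervision, in a
# common image-conversation JSONL format. Every field and value here is an
# illustrative assumption, not the paper's actual data format.
import json

record = {
    "image": "sim/kitchen_0423/frame_000118.png",  # simulated egocentric frame
    "conversations": [
        {
            "from": "human",
            "value": "<image>\nThe user is pointing. Which object are they "
                     "referring to: the red mug, the kettle, or the sponge?",
        },
        {
            # The target rationale grounds the answer in the pointing
            # direction, not in object saliency or proximity to the hand.
            "from": "assistant",
            "value": "Extending the index finger's direction past the hand, "
                     "the ray lands on the kettle, so the user means the "
                     "kettle.",
        },
    ],
}

with open("egopoint_sft.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")
```

The point of writing the assistant turn this way is that it reasons along the pointing direction rather than toward the nearest or most salient object, which is precisely the shortcut behavior the paper terms "Referential Hallucination."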
