AI Navigate

See, Symbolize, Act: Grounding VLMs with Spatial Representations for Better Gameplay

arXiv cs.AI / March 13, 2026


Key Points

  • The paper evaluates three state-of-the-art VLMs across Atari games, VizDoom, and AI2-THOR, comparing frame-only, frame with self-extracted symbols, frame with ground-truth symbols, and symbol-only pipelines.
  • It finds that symbolic grounding helps all models when the symbolic information is accurate, improving grounding and action selection in interactive environments.
  • When symbols are extracted by the model, performance becomes dependent on model capability and scene complexity, highlighting symbol extraction reliability as a bottleneck.
  • The study concludes that perception quality is a central bottleneck for VLM-based agents and calls for improving symbol extraction robustness to enable better gameplay.
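The four evaluated pipelines differ only in what the agent is shown at each step. A minimal sketch of how such input conditions might be assembled is below; all names (`Observation`, `build_prompt`, `extract_symbols_with_vlm`) are illustrative assumptions, not identifiers from the paper:

```python
# Hypothetical sketch of the four input conditions compared in the paper.
# Names and structures here are illustrative, not taken from the paper's code.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Observation:
    frame: Optional[str]     # stand-in for image data (e.g., a base64 string)
    symbols: Optional[dict]  # ground-truth symbolic scene description

def extract_symbols_with_vlm(frame: str) -> dict:
    # Placeholder for the VLM's own symbol extraction; in the paper,
    # the reliability of this step is the key bottleneck.
    return {"source": "self-extracted", "note": "possibly noisy"}

def build_prompt(obs: Observation, mode: str) -> dict:
    """Assemble the VLM input under one of the four evaluated conditions."""
    if mode == "frame_only":
        return {"image": obs.frame}
    if mode == "frame_plus_self_symbols":
        # Model first extracts symbols from the frame, then sees both.
        return {"image": obs.frame, "symbols": extract_symbols_with_vlm(obs.frame)}
    if mode == "frame_plus_gt_symbols":
        return {"image": obs.frame, "symbols": obs.symbols}
    if mode == "symbols_only":
        return {"symbols": obs.symbols}
    raise ValueError(f"unknown mode: {mode}")
```

Comparing these conditions isolates the effect of symbol accuracy: the gap between the ground-truth and self-extracted variants measures how much extraction noise costs.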

Abstract

Vision-Language Models (VLMs) excel at describing visual scenes, yet struggle to translate perception into precise, grounded actions. We investigate whether providing VLMs with both the visual frame and the symbolic representation of the scene can improve their performance in interactive environments. We evaluate three state-of-the-art VLMs across Atari games, VizDoom, and AI2-THOR, comparing frame-only, frame with self-extracted symbols, frame with ground-truth symbols, and symbol-only pipelines. Our results indicate that all models benefit when the symbolic information is accurate. However, when VLMs extract symbols themselves, performance becomes dependent on model capability and scene complexity. We further investigate how accurately VLMs can extract symbolic information from visual inputs and how noise in these symbols affects decision-making and gameplay performance. Our findings reveal that symbolic grounding is beneficial in VLMs only when symbol extraction is reliable, and highlight perception quality as a central bottleneck for future VLM-based agents.