Using Machine Mental Imagery for Representing Common Ground in Situated Dialogue

arXiv cs.CL / 4/24/2026


Key Points

  • The paper addresses a key weakness in situated dialogue: conversational agents often fail to maintain persistent shared context, leading to “representational blur” where distinct entities become indistinguishable in text.
  • It proposes an “active visual scaffolding” framework that incrementally turns dialogue state into a persistent visual history, retrievable later to generate more grounded responses.
  • Experiments on the IndiRef benchmark show that incremental externalization improves performance over full-dialog reasoning, and visual scaffolding further reduces representational blur and forces more concrete scene commitments.
  • The authors find that text still performs better for non-depictable information, and the best results come from a hybrid multimodal setup combining visual depictive and textual propositional representations.

Abstract

Situated dialogue requires speakers to maintain a reliable representation of shared context rather than reasoning only over isolated utterances. Current conversational agents often struggle with this requirement, especially when the common ground must be preserved beyond the immediate context window. In such settings, fine-grained distinctions are frequently compressed into purely textual representations, leading to a critical failure mode we call “representational blur”, in which similar but distinct entities collapse into interchangeable descriptions. This semantic flattening creates an illusion of grounding, where agents appear locally coherent but fail to track shared context persistently over time. Inspired by the role of mental imagery in human reasoning, and based on the increased availability of multimodal models, we explore whether conversational agents can be given an analogous ability to construct depictive intermediate representations during dialogue to address these limitations. Thus, we introduce an active visual scaffolding framework that incrementally converts dialogue state into a persistent visual history that can later be retrieved for grounded response generation. Evaluation on the IndiRef benchmark shows that incremental externalization itself improves over full-dialog reasoning, while visual scaffolding provides additional gains by reducing representational blur and enforcing concrete scene commitments. At the same time, textual representations remain advantageous for non-depictable information, and a hybrid multimodal setting yields the best overall performance. Together, these findings suggest that conversational agents benefit from an explicitly multimodal representation of common ground that integrates depictive and propositional information.
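The paper does not include implementation details here, but the loop the abstract describes can be sketched: at each turn, the dialogue state is externalized into a persistent per-turn record (textual plus a rendered depiction), which is later retrieved for grounded response generation. The following minimal Python sketch is illustrative only; every name (`VisualScaffold`, `depict`, `retrieve`, etc.) is a hypothetical stand-in, and the image-generation and retrieval models are stubbed with strings.

```python
from dataclasses import dataclass, field

@dataclass
class ScaffoldEntry:
    turn: int
    text_state: str   # propositional (textual) record of the turn
    depiction: str    # stand-in for a rendered image of the scene

@dataclass
class VisualScaffold:
    """Hypothetical incremental common-ground store: one entry per turn."""
    history: list = field(default_factory=list)

    def depict(self, text_state: str) -> str:
        # Stub: a real system would call an image-generation model here,
        # forcing a concrete scene commitment for this turn.
        return f"<image of: {text_state}>"

    def update(self, turn: int, utterance: str) -> None:
        # Incremental externalization: commit this turn's state now,
        # instead of re-reasoning over the full dialogue later.
        self.history.append(
            ScaffoldEntry(turn, utterance, self.depict(utterance))
        )

    def retrieve(self, query: str) -> list:
        # Stub retrieval: substring match over the persistent history.
        # A real system would use multimodal similarity search.
        return [e for e in self.history if query in e.text_state]

scaffold = VisualScaffold()
scaffold.update(1, "a red mug on the left shelf")
scaffold.update(2, "a blue mug next to the red mug")
matches = scaffold.retrieve("red mug")
```

Keeping both `text_state` and `depiction` in each entry mirrors the hybrid setup the paper favors: the depiction anchors visually distinguishable entities against representational blur, while the text field remains available for non-depictable information.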