Point What You Mean: Visually Grounded Instruction Policy

arXiv cs.RO / 3/25/2026

Key Points

  • The paper proposes Point-VLA, a plug-and-play policy for Vision-Language-Action models that augments language instructions with explicit visual grounding cues (e.g., bounding boxes) to improve object referring in cluttered or out-of-distribution scenes (a minimal illustration of such a grounded instruction follows this list).
  • It addresses referential ambiguity that persists in text-only instruction VLA setups by enabling pixel-level object localization for more precise, object-level embodied control.
  • The authors introduce an automatic, low-human-effort data annotation pipeline to scale visually grounded datasets efficiently.
  • Across diverse real-world referring tasks, Point-VLA delivers consistently stronger performance than text-only instruction VLAs, with robust generalization to unseen-object scenarios.

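The paper treats the grounding cue as part of the instruction rather than as a separate perception module. As an illustration only, the sketch below shows one way such an input could be packaged: the names (GroundedInstruction, build_policy_input) and the choice to draw the box directly onto the observation are assumptions made for this example, not the paper's actual interface.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

import numpy as np


@dataclass
class GroundedInstruction:
    """A language instruction paired with an explicit visual grounding cue."""
    text: str                                        # e.g. "pick up the mug"
    image: np.ndarray                                # RGB observation, H x W x 3 (uint8)
    box: Optional[Tuple[int, int, int, int]] = None  # (x1, y1, x2, y2) in pixels


def overlay_box(image: np.ndarray, box: Tuple[int, int, int, int]) -> np.ndarray:
    """Draw the referring box onto the observation so the cue is visible to the policy."""
    x1, y1, x2, y2 = box
    out = image.copy()
    out[y1:y2, x1:x1 + 2] = (255, 0, 0)   # left edge
    out[y1:y2, x2 - 2:x2] = (255, 0, 0)   # right edge
    out[y1:y1 + 2, x1:x2] = (255, 0, 0)   # top edge
    out[y2 - 2:y2, x1:x2] = (255, 0, 0)   # bottom edge
    return out


def build_policy_input(inst: GroundedInstruction) -> dict:
    """Package the text plus the (optionally box-annotated) image for a VLA policy."""
    image = inst.image if inst.box is None else overlay_box(inst.image, inst.box)
    return {"instruction": inst.text, "observation": image}
```

A downstream VLA policy would consume the returned dict exactly as it would a text-only instruction, which is one way a grounding cue can remain plug-and-play.
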
Abstract

Vision-Language-Action (VLA) models align vision and language with embodied control, but their object referring ability remains limited when relying solely on text prompts, especially in cluttered or out-of-distribution (OOD) scenes. In this study, we introduce Point-VLA, a plug-and-play policy that augments language instructions with explicit visual cues (e.g., bounding boxes) to resolve referential ambiguity and enable precise object-level grounding. To efficiently scale visually grounded datasets, we further develop an automatic data annotation pipeline requiring minimal human effort. We evaluate Point-VLA on diverse real-world referring tasks and observe consistently stronger performance than text-only instruction VLAs, particularly in cluttered or unseen-object scenarios, along with robust generalization. These results demonstrate that Point-VLA effectively resolves object referring ambiguity through pixel-level visual grounding, achieving more generalizable embodied control.
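
The abstract mentions an automatic, low-human-effort annotation pipeline but gives no implementation details. The sketch below is a plausible outline under the assumption that an off-the-shelf open-vocabulary detector supplies candidate boxes for each demonstration frame; annotate_episode, the detector callable, and the coverage threshold are hypothetical names for this illustration, not the paper's pipeline.

```python
from typing import Callable, Dict, List, Tuple

import numpy as np

Box = Tuple[int, int, int, int]  # (x1, y1, x2, y2) in pixels


def annotate_episode(
    frames: List[np.ndarray],
    object_phrase: str,
    detector: Callable[[np.ndarray, str], List[Box]],
    min_coverage: float = 0.8,
) -> Dict[int, Box]:
    """Attach a referring box to each frame of a recorded demonstration.

    `detector` is any open-vocabulary detector mapping (image, phrase) -> boxes,
    assumed to return candidates ranked by confidence. Episodes where the target
    object is detected in too few frames are flagged for manual review, keeping
    the human effort limited to spot checks.
    """
    annotations: Dict[int, Box] = {}
    for t, frame in enumerate(frames):
        boxes = detector(frame, object_phrase)
        if boxes:
            annotations[t] = boxes[0]  # keep the top-ranked box for this frame

    coverage = len(annotations) / max(len(frames), 1)
    if coverage < min_coverage:
        print(f"Coverage {coverage:.0%} below threshold; queue episode for review.")
    return annotations
```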