SpatialPoint: Spatial-aware Point Prediction for Embodied Localization

arXiv cs.AI / 3/31/2026


Key Points

  • The paper introduces “embodied localization,” defined as predicting executable 3D points from visual observations plus language instructions for embodied agents acting in 3D space.
  • It distinguishes two target types for the task—touchable (surface-grounded) 3D points for physical interaction and air (free-space) 3D points for placement, navigation, and geometric/directional constraints.
  • SpatialPoint is proposed as a spatial-aware vision-language framework that explicitly integrates structured depth into a VLM and outputs camera-frame 3D coordinates rather than relying on implicit geometric reconstruction from RGB.
  • The authors build a large 2.6M-sample RGB-D dataset with QA pairs covering both touchable and air points for training and evaluation.
  • Experiments and real-robot deployment across grasping, object placement, and mobile navigation show that incorporating depth into VLMs significantly improves embodied localization performance.
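The framework's output of "camera-frame 3D coordinates" from RGB-D input suggests the standard pinhole back-projection relating a pixel and its depth to a 3D point. The paper does not give its prediction-head details, so the function below is only an illustrative sketch of that geometric relationship; the function name and the intrinsics values are hypothetical, not from the paper.

```python
def backproject(u: float, v: float, z: float,
                fx: float, fy: float, cx: float, cy: float):
    """Map a pixel (u, v) with depth z (meters) to a camera-frame
    3D point using the pinhole camera model with intrinsics
    (fx, fy) focal lengths and (cx, cy) principal point."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

# Example: the principal-point pixel at 1 m depth lies on the optical axis.
point = backproject(u=320, v=240, z=1.0, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
# point == (0.0, 0.0, 1.0)
```

A touchable point would take z directly from the depth map at (u, v), while an air point's depth must come from the model itself, since free space has no observed surface reading, which is one way to see why explicit depth conditioning matters for this task.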

Abstract

Embodied intelligence fundamentally requires the capability to determine where to act in 3D space. We formalize this requirement as embodied localization -- the problem of predicting executable 3D points conditioned on visual observations and language instructions. We instantiate embodied localization with two complementary target types: touchable points, surface-grounded 3D points enabling direct physical interaction, and air points, free-space 3D points specifying placement and navigation goals, directional constraints, or geometric relations. Embodied localization is inherently a problem of embodied 3D spatial reasoning -- yet most existing vision-language systems rely predominantly on RGB inputs, necessitating implicit geometric reconstruction that limits cross-scene generalization, despite the widespread adoption of RGB-D sensors in robotics. To address this gap, we propose SpatialPoint, a carefully designed spatial-aware vision-language framework that integrates structured depth into a vision-language model (VLM) and generates camera-frame 3D coordinates. We construct a 2.6M-sample RGB-D dataset covering QA pairs for both touchable and air points for training and evaluation. Extensive experiments demonstrate that incorporating depth into VLMs significantly improves embodied localization performance. We further validate SpatialPoint through real-robot deployment across three representative tasks: language-guided robotic arm grasping at specified locations, object placement to target destinations, and mobile robot navigation to goal positions.