SG-VLA: Learning Spatially-Grounded Vision-Language-Action Models for Mobile Manipulation

arXiv cs.RO / 3/25/2026


Key Points

  • The paper proposes SG-VLA, a vision-language-action learning framework aimed at improving robotic performance in complex household settings where standard imitation learning falls short.
  • SG-VLA enhances spatial grounding by using multi-view RGB, depth cues, and short temporal history to capture both global scene layout and local manipulation context for mobile manipulation.
  • It targets a challenging 13-dimensional continuous action space covering coordinated base motion, arm articulation, and gripper control.
  • The method improves representation quality via auxiliary-task co-training with decoders that reconstruct interpretable intermediate signals such as robot pose, joint states, grasp affordances, relative object pose, and segmentation masks (a minimal sketch of this setup follows this list).
  • On home rearrangement benchmarks spanning picking, placing, opening, and closing, SG-VLA delivers consistent gains over direct imitation learning, suggesting a scalable path toward more general-purpose domestic robots.
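
As a rough illustration of the two bullets above on the 13-dimensional action space and the auxiliary decoders, the sketch below wires a shared feature into an action head plus per-signal decoder heads, and sums an imitation loss with weighted auxiliary reconstruction terms. All module names, feature widths, decoder output sizes, and loss weights are assumptions made for illustration; the paper's actual architecture and loss formulation are not specified here.

```python
# Hypothetical sketch of auxiliary-task co-training: a shared backbone feature
# feeds a 13-D action head plus auxiliary decoders for robot pose, joint
# states, grasp affordance, relative object pose, and segmentation.
# Dimensions and weights below are illustrative assumptions.
import torch
import torch.nn as nn

FEAT_DIM = 512      # assumed width of the shared vision-language feature
ACTION_DIM = 13     # base motion + arm articulation + gripper (per the paper)

class SGVLASketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Stand-in for the fused vision-language backbone.
        self.backbone = nn.Sequential(nn.LazyLinear(FEAT_DIM), nn.ReLU())
        # Main imitation head over the 13-D continuous action space.
        self.action_head = nn.Linear(FEAT_DIM, ACTION_DIM)
        # Auxiliary decoders reconstructing interpretable intermediate signals.
        self.aux_heads = nn.ModuleDict({
            "robot_pose": nn.Linear(FEAT_DIM, 3),         # assumed planar base pose (x, y, yaw)
            "joint_states": nn.Linear(FEAT_DIM, 7),       # assumed 7-DoF arm
            "grasp_affordance": nn.Linear(FEAT_DIM, 1),   # graspability score
            "object_rel_pose": nn.Linear(FEAT_DIM, 6),    # target-object relative pose
            "segmentation": nn.Linear(FEAT_DIM, 32 * 32), # coarse mask logits
        })

    def forward(self, obs_feat):
        z = self.backbone(obs_feat)
        action = self.action_head(z)
        aux = {name: head(z) for name, head in self.aux_heads.items()}
        return action, aux

def co_training_loss(action_pred, action_gt, aux_pred, aux_gt, aux_weight=0.1):
    """Imitation loss plus weighted auxiliary reconstruction terms."""
    loss = nn.functional.mse_loss(action_pred, action_gt)
    for name, pred in aux_pred.items():
        if name == "segmentation":
            term = nn.functional.binary_cross_entropy_with_logits(pred, aux_gt[name])
        else:
            term = nn.functional.mse_loss(pred, aux_gt[name])
        loss = loss + aux_weight * term
    return loss
```

In this sketch the auxiliary heads act purely as dense supervision at training time; at deployment only the 13-D output of the action head would be used to drive the robot.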

Abstract

Vision-Language-Action (VLA) models show promise for robotic control, yet performance in complex household environments remains sub-optimal. Mobile manipulation requires reasoning about global scene layout, fine-grained geometry, and high-dimensional continuous actions, making standard imitation learning insufficient. We introduce a framework for learning spatially-grounded VLA models that strengthens perception and representation through auxiliary task co-training and multi-modal input enhancement. Our method addresses the challenge of controlling a 13-dimensional action space involving coordinated base motion, arm articulation, and gripper actuation. To enrich spatial understanding, the model incorporates multi-view RGB observations, depth cues, and short temporal history, providing perspectives of both global scene structure and local manipulation context. To improve representation quality, we co-train auxiliary decoders that reconstruct interpretable intermediate signals - including global robot position, joint configurations, grasp affordances, target-object relative pose, and segmentation masks - from shared visual-language features. These objectives provide dense supervision that encourages the backbone to develop spatially grounded, manipulation-aware latent representations. Through extensive evaluation on home rearrangement tasks, our approach achieves consistent improvements across picking, placing, opening, and closing operations, substantially outperforming direct imitation learning. Our findings suggest that spatial grounding through auxiliary and multi-modal learning provides a strong direction for scaling VLA models toward general-purpose domestic robots.
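
For concreteness, the following sketch shows one plausible way to assemble the multi-view RGB, depth, and short temporal-history inputs mentioned in the abstract into a single observation tensor. The camera count, image resolution, and history length are illustrative assumptions, not values reported by the paper.

```python
# Hypothetical assembly of the multi-modal observation: multi-view RGB,
# depth, and a short temporal history. Camera count, resolution, and
# history length are illustrative assumptions.
import torch

NUM_VIEWS = 2   # e.g. a head camera (global layout) and a wrist camera (local context)
HISTORY = 4     # number of past timesteps kept
H, W = 224, 224

def build_observation(rgb_history, depth_history):
    """
    rgb_history:   list of HISTORY tensors, each (NUM_VIEWS, 3, H, W)
    depth_history: list of HISTORY tensors, each (NUM_VIEWS, 1, H, W)
    Returns a tensor of shape (HISTORY, NUM_VIEWS, 4, H, W) where RGB and
    depth are concatenated along the channel dimension.
    """
    frames = [torch.cat([rgb, depth], dim=1)   # (NUM_VIEWS, 4, H, W)
              for rgb, depth in zip(rgb_history, depth_history)]
    return torch.stack(frames, dim=0)          # (HISTORY, NUM_VIEWS, 4, H, W)

# Example with random data standing in for real camera streams.
rgb_hist = [torch.rand(NUM_VIEWS, 3, H, W) for _ in range(HISTORY)]
depth_hist = [torch.rand(NUM_VIEWS, 1, H, W) for _ in range(HISTORY)]
obs = build_observation(rgb_hist, depth_hist)
print(obs.shape)  # torch.Size([4, 2, 4, 224, 224])
```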