XEmbodied: A Foundation Model with Enhanced Geometric and Physical Cues for Large-Scale Embodied Environments

arXiv cs.RO / April 21, 2026

Key Points

  • XEmbodied is a cloud-side foundation model for Vision-Language-Action (VLA) systems; it targets a gap in current pipelines, whose generic vision-language models lack geometric reasoning and domain semantics because they are pretrained only on 2D image-text data.
  • The approach builds in intrinsic 3D geometric awareness through a structured 3D Adapter and injects physical cues (such as occupancy grids and 3D boxes) via an Efficient Image-Embodied Adapter that compresses them into context tokens (see the sketch after this list).
  • Rather than treating geometry as an auxiliary input, XEmbodied distills physical signals into the model’s own representation to improve embodied understanding.
  • Training combines a progressive domain curriculum with reinforcement learning post-training to preserve general capabilities while strengthening domain performance.
  • The paper reports strong results across 18 public benchmarks, with gains in spatial reasoning, traffic semantics, embodied affordances, and out-of-distribution generalization for large-scale scenario mining and embodied VQA.
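
As a concrete illustration of the adapter design described above, here is a minimal PyTorch sketch of the two ideas: a 3D adapter that projects geometric features into the language model's token space, and an image-embodied adapter that compresses many physical-cue features (occupancy cells, 3D box embeddings) into a small, fixed set of context tokens via cross-attention. All module names, shapes, and the cross-attention design are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only; not the paper's implementation.
import torch
import torch.nn as nn


class Geometric3DAdapter(nn.Module):
    """Projects per-voxel/point geometric features into the LLM token space."""

    def __init__(self, geo_dim: int, llm_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(geo_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, geo_feats: torch.Tensor) -> torch.Tensor:
        # geo_feats: (B, N_geo, geo_dim) -> geometry tokens (B, N_geo, llm_dim)
        return self.proj(geo_feats)


class ImageEmbodiedAdapter(nn.Module):
    """Compresses many physical-cue features into a few context tokens."""

    def __init__(self, cue_dim: int, llm_dim: int, num_ctx_tokens: int = 16):
        super().__init__()
        # Learned queries attend over the projected cues and absorb them
        # into a fixed-size set of context tokens.
        self.ctx_queries = nn.Parameter(torch.randn(num_ctx_tokens, llm_dim))
        self.cue_proj = nn.Linear(cue_dim, llm_dim)
        self.cross_attn = nn.MultiheadAttention(llm_dim, num_heads=8, batch_first=True)

    def forward(self, cues: torch.Tensor) -> torch.Tensor:
        # cues: (B, N_cues, cue_dim), e.g. flattened occupancy cells + box embeddings
        kv = self.cue_proj(cues)
        queries = self.ctx_queries.unsqueeze(0).expand(cues.size(0), -1, -1)
        ctx, _ = self.cross_attn(queries, kv, kv)
        return ctx  # (B, num_ctx_tokens, llm_dim)


if __name__ == "__main__":
    geo = Geometric3DAdapter(geo_dim=256, llm_dim=1024)
    phys = ImageEmbodiedAdapter(cue_dim=128, llm_dim=1024)
    geo_tokens = geo(torch.randn(2, 512, 256))    # 512 geometry features
    ctx_tokens = phys(torch.randn(2, 2048, 128))  # 2048 raw physical cues
    # Both token sets would be concatenated with image/text tokens for the LLM.
    print(geo_tokens.shape, ctx_tokens.shape)     # (2, 512, 1024), (2, 16, 1024)
```

One reason a compression step like this is plausible: distilling thousands of raw cue features into a fixed handful of context tokens keeps the LLM's sequence length bounded, which matters when occupancy grids contribute many cells per frame.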

Abstract

Vision-Language-Action (VLA) models drive next-generation autonomous systems, but training them requires scalable, high-quality annotations from complex environments. Current cloud pipelines rely on generic vision-language models (VLMs) that lack geometric reasoning and domain semantics due to their 2D image-text pretraining. To address this mismatch, we propose XEmbodied, a cloud-side foundation model that endows VLMs with intrinsic 3D geometric awareness and the ability to interact with physical cues (e.g., occupancy grids, 3D boxes). Instead of treating geometry as auxiliary input, XEmbodied integrates geometric representations via a structured 3D Adapter and distills physical signals into context tokens using an Efficient Image-Embodied Adapter. Through a progressive domain curriculum and reinforcement learning post-training, XEmbodied preserves general capabilities while demonstrating robust performance across 18 public benchmarks. It significantly improves spatial reasoning, traffic semantics, embodied affordances, and out-of-distribution generalization for large-scale scenario mining and embodied VQA.
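
The training recipe the abstract names (a progressive domain curriculum followed by RL post-training) can be pictured with the hypothetical schedule below. The stage names, mixing ratios, and the commented-out update calls are invented for illustration; the paper's published schedule may differ.

```python
# Hypothetical sketch: supervised stages whose data mix shifts progressively
# from general image-text data toward embodied 3D data, then RL post-training.
import random

# (stage name, probability of drawing a general image-text sample)
SUPERVISED_STAGES = [
    ("adapter_warmup",  0.9),  # mostly general data while the adapters align
    ("domain_transfer", 0.5),  # balanced mix to inject domain semantics
    ("embodied_focus",  0.2),  # geometry- and cue-heavy batches dominate
]


def sample_batch(p_general, general_data, embodied_data, batch_size=8):
    """Compose one batch according to the current stage's mixing ratio."""
    return [
        random.choice(general_data) if random.random() < p_general
        else random.choice(embodied_data)
        for _ in range(batch_size)
    ]


def run_curriculum(general_data, embodied_data, steps_per_stage=1000):
    for name, p_general in SUPERVISED_STAGES:
        for _ in range(steps_per_stage):
            batch = sample_batch(p_general, general_data, embodied_data)
            # supervised_update(model, batch)  # standard next-token loss here
    # RL post-training would follow: score model outputs with a task reward
    # (e.g., VQA correctness) and update the policy.
    # rl_post_train(model, reward_fn)
```

The point of a staged mix like this is to add domain skill without catastrophic forgetting of general vision-language ability, which is what "preserves general capabilities" refers to.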