Robotic Manipulation is Vision-to-Geometry Mapping ($f(v) \rightarrow G$): Vision-Geometry Backbones over Language and Video Models

arXiv cs.RO / 4/15/2026


Key Points

  • The paper reframes robotic manipulation as a core vision-to-geometry mapping problem (formalized in the sketch after this list), arguing that effective generalizable control should align with 3D geometric properties rather than relying primarily on semantic language or 2D video priors.
  • It introduces the Vision-Geometry-Action (VGA) model, which generates actions by conditioning on pretrained native 3D world representations instead of using conventional vision-language or video backbones.
  • To improve geometric consistency, VGA adds a Progressive Volumetric Modulation module and uses a joint training strategy tailored to the vision-to-geometry objective.
  • Experiments on simulation benchmarks show VGA outperforms strong VLA baselines (including π0.5 and GeoVLA) on precise manipulation tasks.
  • The model also demonstrates strong zero-shot generalization to unseen viewpoints in real-world deployments, indicating the proposed 3D backbone approach may better support transferable physical intelligence.
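
One way to read this framing, with notation that is ours rather than the paper's, is as a policy factored through an explicit geometric state: a vision-to-geometry map extracts 3D quantities such as object poses, contact points, and spatial relations, and the policy conditions on that geometry instead of on language tokens or predicted pixels:

$$
f : \mathcal{V} \rightarrow \mathcal{G}, \qquad G = f(v), \qquad a \sim \pi(a \mid G)
$$

Here $v$ is the visual observation, $G$ the geometric scene state, and $a$ the low-level action; this factorization is an illustrative reading of the vision-to-geometry objective, not a definition taken from the paper.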

Abstract

At its core, robotic manipulation is a problem of vision-to-geometry mapping ($f(v) \rightarrow G$). Physical actions are fundamentally defined by geometric properties such as 3D positions and spatial relationships. Consequently, we argue that the foundation for generalizable robotic control should be a vision-geometry backbone, rather than the widely adopted vision-language or video models. Conventional vision-language-action (VLA) and video-predictive models rely on backbones pretrained on large-scale 2D image-text or temporal pixel data. While effective, their representations are largely shaped by semantic concepts or 2D priors, which do not intrinsically align with the precise 3D geometric nature required for physical manipulation. Driven by this insight, we propose the Vision-Geometry-Action (VGA) model, which directly conditions action generation on pretrained native 3D representations. Specifically, VGA replaces conventional language or video backbones with a pretrained 3D world model, establishing a seamless vision-to-geometry mapping that translates visual inputs directly into physical actions. To further enhance geometric consistency, we introduce a Progressive Volumetric Modulation module and adopt a joint training strategy. Extensive experiments validate the effectiveness of our approach. In simulation benchmarks, VGA outperforms top-tier VLA baselines, including $\pi_{0.5}$ and GeoVLA, demonstrating its superiority in precise manipulation. More importantly, VGA exhibits remarkable zero-shot generalization to unseen viewpoints in real-world deployments, consistently outperforming $\pi_{0.5}$. These results highlight that operating on native 3D representations, rather than translating through language or 2D video priors, is a highly promising direction for achieving generalizable physical intelligence.
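
To make the architectural claim concrete, below is a minimal runnable sketch of how action generation could be conditioned on a pretrained native 3D backbone, with a FiLM-style stand-in for the Progressive Volumetric Modulation module. Every name, dimension, and mechanism here (ProgressiveVolumetricModulation, VGAPolicy, the scale-and-shift modulation, the 7-DoF action head) is an assumption made for illustration; the paper's actual VGA architecture is not reproduced.

```python
# Hypothetical sketch of a VGA-style policy: vision -> 3D geometry -> action.
# Module names and the FiLM-style modulation are illustrative assumptions,
# not the paper's actual design.
import torch
import torch.nn as nn


class ProgressiveVolumetricModulation(nn.Module):
    """Modulate action features with geometric features over several stages.

    Stand-in for the paper's Progressive Volumetric Modulation module,
    implemented here as repeated FiLM (scale-and-shift) conditioning.
    """

    def __init__(self, geo_dim: int, act_dim: int, num_stages: int = 3):
        super().__init__()
        self.stages = nn.ModuleList(
            [nn.Linear(geo_dim, 2 * act_dim) for _ in range(num_stages)]
        )

    def forward(self, act_feat: torch.Tensor, geo_feat: torch.Tensor) -> torch.Tensor:
        # geo_feat: (B, geo_dim) pooled features from the pretrained 3D backbone
        # act_feat: (B, act_dim) intermediate action features
        for stage in self.stages:
            scale, shift = stage(geo_feat).chunk(2, dim=-1)
            act_feat = act_feat * (1 + scale) + shift
        return act_feat


class VGAPolicy(nn.Module):
    """Minimal vision-to-geometry-to-action policy: f(v) -> G -> a."""

    def __init__(self, backbone_3d: nn.Module, geo_dim: int, act_dim: int, action_dim: int = 7):
        super().__init__()
        self.backbone_3d = backbone_3d              # pretrained native 3D world model
        self.proj = nn.Linear(geo_dim, act_dim)
        self.pvm = ProgressiveVolumetricModulation(geo_dim, act_dim)
        self.head = nn.Linear(act_dim, action_dim)  # e.g., 6-DoF end-effector delta + gripper

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        geo_feat = self.backbone_3d(images)         # G: geometric representation of the scene
        act_feat = self.proj(geo_feat)
        act_feat = self.pvm(act_feat, geo_feat)     # condition action features on 3D geometry
        return self.head(act_feat)                  # predicted action


# Toy usage with a dummy encoder standing in for the pretrained 3D world model:
# backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256))
# policy = VGAPolicy(backbone, geo_dim=256, act_dim=128)
# actions = policy(torch.randn(2, 3, 64, 64))  # -> shape (2, 7)
```

The design point the sketch tries to capture is the one argued in the abstract: the conditioning signal for the action head is a geometric representation produced by a 3D backbone, not language tokens or predicted video frames; a real implementation would use the paper's actual world model, action decoder, and joint training recipe.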