3D-Mix for VLA: A Plug-and-Play Module for Integrating VGGT-based 3D Information into Vision-Language-Action Models

arXiv cs.RO / 3/26/2026


Key Points

  • VLA (Vision-Language-Action) models are trained predominantly on 2D data, so their spatial intelligence is limited and they tend to lack the 3D perception required for manipulation tasks.
  • The authors compare nine schemes for fusing VGGT-based 3D information into VLAs on standard benchmarks; "semantic-conditioned gated fusion", which dynamically balances the contributions of 2D semantic and 3D geometric features according to task context, performs best.
  • Building on this finding, they propose 3D-Mix, a plug-and-play module that injects VGGT-based 3D information into diverse VLA architectures (GR00T-style / π-style) without modifying the existing MLLM or action expert.
  • Evaluated on SIMPLER and LIBERO across multiple MLLM series (2B to 8B, nine variants in total), 3D-Mix is reported to deliver consistent gains, averaging +7.0% on the out-of-domain (OOD) SIMPLER benchmark.

Abstract

Vision-Language-Action (VLA) models leverage Multimodal Large Language Models (MLLMs) for robotic control, but recent studies reveal that MLLMs exhibit limited spatial intelligence because they are trained predominantly on 2D data, resulting in inadequate 3D perception for manipulation tasks. While recent approaches incorporate specialized 3D vision models such as VGGT to enhance spatial understanding, they employ diverse integration mechanisms without systematic investigation, leaving the optimal fusion strategy unclear. We conduct a comprehensive pilot study comparing nine VGGT integration schemes on standardized benchmarks and find that semantic-conditioned gated fusion, which adaptively balances 2D semantic and 3D geometric features based on task context, achieves the strongest performance. Building on this finding, we present 3D-Mix, a plug-and-play module that integrates into diverse VLA architectures (GR00T-style and π-style) without modifying existing MLLM or action expert components. Experiments across six MLLM series (nine model variants, 2B–8B parameters) on SIMPLER and LIBERO show that 3D-Mix delivers consistent performance gains, averaging +7.0% on the out-of-domain (OOD) SIMPLER benchmark across all nine GR00T-style variants, establishing a principled approach for enhancing spatial intelligence in VLA systems.
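
The abstract does not spell out the gating formulation, but a minimal PyTorch sketch makes the idea concrete: a gate computed from the 2D semantic tokens decides, per token and per channel, how much VGGT geometry to mix into the visual stream. All module names, shapes, and the sigmoid-gate design below are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class SemanticConditionedGatedFusion(nn.Module):
    """Hypothetical sketch of semantic-conditioned gated fusion:
    blend 2D semantic tokens (from the MLLM's vision tower) with
    3D geometric tokens (from VGGT) using a gate conditioned on
    the semantic features. Shapes and dims are assumptions."""

    def __init__(self, sem_dim: int, geo_dim: int, fused_dim: int):
        super().__init__()
        self.proj_sem = nn.Linear(sem_dim, fused_dim)  # project 2D semantic features
        self.proj_geo = nn.Linear(geo_dim, fused_dim)  # project VGGT 3D features
        # Gate network conditioned on the semantic (task-context) tokens.
        self.gate = nn.Sequential(
            nn.Linear(fused_dim, fused_dim),
            nn.Sigmoid(),
        )

    def forward(self, sem_tokens: torch.Tensor, geo_tokens: torch.Tensor) -> torch.Tensor:
        # sem_tokens: (B, N, sem_dim); geo_tokens: (B, N, geo_dim),
        # assumed already aligned to the same token grid.
        s = self.proj_sem(sem_tokens)
        g = self.proj_geo(geo_tokens)
        alpha = self.gate(s)                  # per-token, per-channel weight in [0, 1]
        return alpha * g + (1.0 - alpha) * s  # adaptively mix 3D geometry into 2D semantics

# Usage: fused tokens stand in for the visual tokens fed to the action
# expert; the MLLM and action-expert weights themselves stay untouched.
fusion = SemanticConditionedGatedFusion(sem_dim=1024, geo_dim=768, fused_dim=1024)
sem = torch.randn(2, 256, 1024)
geo = torch.randn(2, 256, 768)
fused = fusion(sem, geo)  # (2, 256, 1024)
```

Because the gate is conditioned only on the semantic stream, such a module can sit between a frozen vision tower and a frozen action expert, which is one way to read the paper's plug-and-play claim.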