AI Navigate

GST-VLA: Structured Gaussian Spatial Tokens for 3D Depth-Aware Vision-Language-Action Models

arXiv cs.CV / 3/11/2026

Ideas & Deep Analysis · Models & Research

Key Points

  • GST-VLA introduces the Gaussian Spatial Tokenizer (GST), which converts depth and semantic patch features into structured 3D Gaussian primitives with explicit geometric parameters, yielding a richer spatial representation than conventional 2D patch tokens (see the sketch after this list).
  • The model uses 3D Depth-Aware Chain-of-Thought (DA-CoT) reasoning to supervise intermediate spatial tasks such as 3D object grounding, grasp affordance geometry, metric distances, and spatial waypoints, improving both interpretability and spatial reasoning.
  • A 300M-parameter flow-matching action expert decodes 7-DoF action chunks through dual cross-attention, conditioned on both the vision-language model's hidden states and the DA-CoT spatial thoughts (see the sketch after the abstract).
  • Training with a combined flow, CoT, and depth loss across three progressive stages yields significant performance gains, reaching state-of-the-art results on the LIBERO and SimplerEnv benchmarks.
  • Ablation studies confirm that each component and training stage of GST-VLA independently and synergistically contributes to improvements, especially on precision-demanding vision-language-action tasks.
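
To make the tokenizer concrete, here is a minimal PyTorch sketch of a GST-style head: learned queries pool frozen semantic and depth patch features into a fixed budget of Gaussian primitives, each emitting the mean, log-scale, and opacity parameters described in the abstract below. The additive feature fusion, dimensions, and all module names are assumptions for illustration, not the paper's implementation; the abstract specifies only the output parameterization and the spatial attention pooling.

```python
import torch
import torch.nn as nn

class GaussianSpatialTokenizer(nn.Module):
    """Pools frozen depth + semantic patch features into N_g Gaussian primitives.
    Hypothetical sketch; not the paper's architecture."""

    def __init__(self, feat_dim: int = 768, n_gaussians: int = 128):
        super().__init__()
        # Learned queries let attention concentrate the fixed token budget
        # on geometrically salient regions instead of pooling uniformly.
        self.queries = nn.Parameter(torch.randn(n_gaussians, feat_dim) * 0.02)
        self.pool = nn.MultiheadAttention(feat_dim, num_heads=8, batch_first=True)
        # Per-primitive geometric parameter heads.
        self.mu_head = nn.Linear(feat_dim, 3)       # metric residual mean, mu in R^3
        self.logsig_head = nn.Linear(feat_dim, 3)   # log-scale covariance, log sigma in R^3
        self.alpha_head = nn.Linear(feat_dim, 1)    # opacity logit, squashed to (0, 1)

    def forward(self, sem_feats: torch.Tensor, depth_feats: torch.Tensor):
        # sem_feats, depth_feats: (B, P, feat_dim) patch features from frozen
        # encoders; additive fusion here is an assumption, not the paper's choice.
        tokens = sem_feats + depth_feats
        q = self.queries.unsqueeze(0).expand(tokens.size(0), -1, -1)
        pooled, _ = self.pool(q, tokens, tokens)        # spatial attention pooling
        mu = self.mu_head(pooled)                       # (B, N_g, 3)
        log_sigma = self.logsig_head(pooled)            # (B, N_g, 3)
        alpha = torch.sigmoid(self.alpha_head(pooled))  # (B, N_g, 1)
        return mu, log_sigma, alpha
```

The sigmoid keeps opacity in (0, 1) so it can serve as per-primitive geometric confidence, and the log-scale parameterization keeps the diagonal covariance positive without explicit constraints.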


arXiv:2603.09079 (cs)
[Submitted on 10 Mar 2026]

Title: GST-VLA: Structured Gaussian Spatial Tokens for 3D Depth-Aware Vision-Language-Action Models

Authors: Md Selim Sarowar and 2 other authors
Abstract: Vision-language-action (VLA) models encode visual observations as 2D patch tokens with no intrinsic geometric structure. We introduce GST-VLA with two contributions. First, the Gaussian Spatial Tokenizer (GST) converts frozen dense depth and frozen semantic patch features into $N_g{=}128$ anisotropic 3D Gaussian primitives, each parameterized by a metric residual mean $\mu \in \mathbb{R}^3$, log-scale covariance $\log \sigma \in \mathbb{R}^3$, and learned opacity $\alpha \in (0,1)$. The covariance eigenstructure encodes local surface orientation, and opacity provides per-primitive geometric confidence, both inaccessible from scalar depth. Spatial attention pooling with learned queries concentrates the fixed token budget on geometrically salient regions rather than distributing it uniformly. Second, 3D Depth-Aware Chain-of-Thought (DA-CoT) reasoning supervises four structured intermediate spatial thoughts, covering 3D object grounding, grasp affordance contact geometry, pairwise metric distances, and coarse SE(3) waypoints, as explicit generation targets in the training loss. A cross-attention sublayer at every VLM transformer block provides direct access to the raw 256-primitive Gaussian field during DA-CoT generation. A 300M-parameter flow-matching action expert with mixture-of-experts feedforward sublayers decodes 7-DoF delta action chunks via conditional ODE integration, conditioned on both VLM hidden states and DA-CoT outputs through dual cross-attention. Trained with composite $\mathcal{L}_\mathrm{flow} + \mathcal{L}_\mathrm{CoT} + \mathcal{L}_\mathrm{depth}$ across three progressive stages, GST-VLA achieves 96.4% on LIBERO (+2.0%) and 80.2% on SimplerEnv (+5.4%). Ablations isolate the contribution of each GST component, each DA-CoT thought, and each training stage, confirming independent and synergistic gains concentrated on precision-demanding tasks.
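
The action expert described above lends itself to a short sketch. Below is a hedged PyTorch illustration of flow-matching training and conditional Euler ODE decoding with dual cross-attention over VLM hidden states and encoded DA-CoT thoughts. The abstract fixes only the 300M scale, the mixture-of-experts feedforward sublayers (omitted here for brevity), the 7-DoF delta action chunks, and conditional ODE integration; chunk length, widths, step count, and every name below are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualCrossAttentionExpert(nn.Module):
    """Predicts a velocity field over 7-DoF action chunks, conditioned on
    VLM hidden states and DA-CoT outputs via two cross-attention paths.
    Hypothetical sketch; MoE feedforward sublayers omitted."""

    def __init__(self, dim: int = 512, dof: int = 7):
        super().__init__()
        self.embed = nn.Linear(dof, dim)
        self.time_embed = nn.Linear(1, dim)
        self.xattn_vlm = nn.MultiheadAttention(dim, 8, batch_first=True)
        self.xattn_cot = nn.MultiheadAttention(dim, 8, batch_first=True)
        self.out = nn.Linear(dim, dof)

    def forward(self, x_t, t, vlm_states, cot_states):
        # x_t: (B, chunk, 7); vlm_states / cot_states assumed pre-projected to dim.
        h = self.embed(x_t)
        h = h + self.time_embed(t.view(-1, 1, 1).expand(-1, x_t.size(1), 1))
        h = h + self.xattn_vlm(h, vlm_states, vlm_states)[0]  # condition on VLM states
        h = h + self.xattn_cot(h, cot_states, cot_states)[0]  # condition on spatial thoughts
        return self.out(h)                                    # predicted velocity

def flow_matching_loss(expert, actions, vlm_states, cot_states):
    # actions: (B, chunk, 7) ground-truth delta action chunk.
    noise = torch.randn_like(actions)
    t = torch.rand(actions.size(0), device=actions.device)
    x_t = (1 - t.view(-1, 1, 1)) * noise + t.view(-1, 1, 1) * actions
    v_target = actions - noise                  # straight-line velocity target
    v_pred = expert(x_t, t, vlm_states, cot_states)
    return F.mse_loss(v_pred, v_target)

@torch.no_grad()
def decode_actions(expert, vlm_states, cot_states, chunk=8, dof=7, steps=10):
    # Euler integration of the learned ODE from noise to an action chunk.
    x = torch.randn(vlm_states.size(0), chunk, dof, device=vlm_states.device)
    for i in range(steps):
        t = torch.full((x.size(0),), i / steps, device=x.device)
        x = x + expert(x, t, vlm_states, cot_states) / steps
    return x
```

In the paper's composite objective, this flow term would be summed with the CoT generation and depth losses ($\mathcal{L}_\mathrm{flow} + \mathcal{L}_\mathrm{CoT} + \mathcal{L}_\mathrm{depth}$); the abstract does not give relative weights or the per-stage schedule beyond naming three progressive stages.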
Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Robotics (cs.RO)
Cite as: arXiv:2603.09079 [cs.CV]
  (or arXiv:2603.09079v1 [cs.CV] for this version)
  https://doi.org/10.48550/arXiv.2603.09079

Submission history

From: Md Selim Sarowar
[v1] Tue, 10 Mar 2026 01:39:38 UTC (6,833 KB)