Stable Language Guidance for Vision-Language-Action Models

arXiv cs.RO · April 21, 2026


Key Points

  • Vision-Language-Action (VLA) robotic models can fail under small linguistic changes due to a “modality collapse,” where strong visual priors drown out sparse language signals and the agent overfits to exact phrasing.
  • The paper introduces Residual Semantic Steering (RSS), which probabilistically separates physical affordance from semantic execution to make actions follow intent rather than wording artifacts.
  • RSS adds two components: Monte Carlo Syntactic Integration to approximate a better semantic posterior using LLM-driven distributional expansion, and Residual Affordance Steering to subtract visual affordance influence during decoding.
  • Theoretical analysis claims RSS increases mutual information between action and intent while suppressing visual distractors, and experiments show state-of-the-art robustness on multiple manipulation benchmarks, including adversarial linguistic perturbations.
  • The authors release the code for RSS on GitHub, enabling direct reproduction and further testing.
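The first component, Monte Carlo Syntactic Integration, can be sketched as averaging the policy's action distribution over LLM-generated paraphrases of the instruction, so the estimate reflects semantic intent rather than one phrasing. The interface below (`policy`, `toy_policy`, the paraphrase list) is hypothetical; the paper's exact estimator may differ.

```python
import numpy as np

def mc_syntactic_integration(policy, observation, paraphrases):
    """Monte Carlo estimate of the semantic posterior p(action | obs, intent):
    average the action distribution over paraphrases of the instruction.
    Sketch only -- the paper's estimator may weight or sample differently."""
    probs = [policy(observation, p) for p in paraphrases]
    return np.mean(probs, axis=0)

def toy_policy(observation, instruction):
    """Stand-in phrasing-sensitive policy over 3 discrete actions,
    illustrating the brittleness the paper describes."""
    if "pick" in instruction:
        return np.array([0.7, 0.2, 0.1])
    return np.array([0.3, 0.5, 0.2])

paraphrases = [
    "pick up the red block",
    "grasp the red block",
    "lift the red cube",
]
posterior = mc_syntactic_integration(toy_policy, None, paraphrases)
# The averaged posterior is still a valid distribution and is less
# sensitive to any single phrasing than any one forward pass.
```

Averaging over paraphrases is a standard Monte Carlo smoothing move; the paper's "distributional expansion" presumably controls how the LLM generates these samples.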

Abstract

Vision-Language-Action (VLA) models have demonstrated impressive capabilities in generalized robotic control; however, they remain notoriously brittle to linguistic perturbations. We identify a critical "modality collapse" phenomenon where strong visual priors overwhelm sparse linguistic signals, causing agents to overfit to specific instruction phrasings while ignoring the underlying semantic intent. To address this, we propose Residual Semantic Steering (RSS), a probabilistic framework that disentangles physical affordance from semantic execution. RSS introduces two theoretical innovations: (1) Monte Carlo Syntactic Integration, which approximates the true semantic posterior via dense, LLM-driven distributional expansion, and (2) Residual Affordance Steering, a dual-stream decoding mechanism that explicitly isolates the causal influence of language by subtracting the visual affordance prior. Theoretical analysis suggests that RSS effectively maximizes the mutual information between action and intent while suppressing visual distractors. Empirical results across diverse manipulation benchmarks demonstrate that RSS achieves state-of-the-art robustness, maintaining performance even under adversarial linguistic perturbations. We release our code at https://github.com/Doo-mon/RSS.
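The second component, Residual Affordance Steering, can be read as a contrastive dual-stream decode: run the model once with vision and language and once with vision alone, then subtract the vision-only (affordance) logits so that what remains is driven by the instruction. The logit-space form and the `alpha` scaling below are assumptions, analogous to classifier-free guidance, not the paper's verified mechanism.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def residual_affordance_steering(full_logits, vision_only_logits, alpha=1.0):
    """Dual-stream decoding sketch: subtract the vision-only affordance
    logits from the full vision+language logits, isolating the residual
    contribution of language. `alpha` (subtraction strength) is a
    hypothetical knob; the paper may parameterize this differently."""
    steered = full_logits - alpha * vision_only_logits
    return softmax(steered)

# Toy example: the visual prior alone favors action 0 (e.g. the most
# "graspable" object), but the language stream points at action 1.
full_logits = np.array([2.0, 1.0, 0.5])        # vision + language pass
vision_only_logits = np.array([1.5, 0.2, 0.4])  # vision-only pass
steered = residual_affordance_steering(full_logits, vision_only_logits)
```

With the subtraction applied, the residual logits are [0.5, 0.8, 0.1], so the steered distribution prefers the language-driven action 1 even though the raw full-pass logits favored action 0.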