Vision-Language-Action Model, Robustness, Multi-modal Learning, Robot Manipulation

arXiv cs.RO / 4/14/2026


Key Points

  • The paper finds that Vision-Language-Action (VLA) models, despite high embodied-task performance, are brittle when visual corruption and language noise occur together, causing harmful distribution shifts.
  • It introduces STRONG-VLA, a decoupled fine-tuning method that first learns robustness via a curriculum of multimodal perturbations and then re-aligns to clean task data to restore fidelity.
  • STRONG-VLA is evaluated with a new multimodal robustness benchmark covering 28 perturbation types tied to realistic sensor noise, occlusion, and instruction corruption.
  • Experiments on the LIBERO benchmark show consistent improvements: on OpenVLA, reported gains reach +12.60% under seen perturbations and +7.77% under unseen ones, with strong cross-architecture generalization to OpenVLA-OFT and pi0.
  • Real-robot tests on an AIRBOT platform further show that the approach improves practical embodied control under multimodal disturbances.
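The joint visual-plus-language corruption at a curriculum-controlled difficulty could be sketched as follows. This is a hypothetical illustration only: the perturbation functions, noise models, and `severity` scaling here are assumptions for exposition, not the paper's actual 28 perturbation types.

```python
import random

def perturb_image(pixels, severity):
    """Add zero-mean Gaussian noise scaled by curriculum severity (0 = clean).

    Stand-in for the paper's visual corruptions (sensor noise, occlusion, ...).
    """
    return [p + random.gauss(0.0, 10.0 * severity) for p in pixels]

def perturb_instruction(tokens, severity):
    """Randomly drop instruction tokens with probability growing with severity.

    Stand-in for the paper's linguistic/instruction corruptions.
    """
    return [t for t in tokens if random.random() > 0.1 * severity]

def perturb_sample(pixels, tokens, severity):
    # Visual and linguistic corruption are applied jointly, mirroring the
    # multimodal setting in which both modalities shift at the same time.
    return perturb_image(pixels, severity), perturb_instruction(tokens, severity)
```

At `severity=0.0` the sample passes through clean, so a single difficulty knob spans both the Stage I curriculum and the Stage II clean re-alignment data.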

Abstract

Despite their strong performance in embodied tasks, recent Vision-Language-Action (VLA) models remain highly fragile under multimodal perturbations, where visual corruption and linguistic noise jointly induce distribution shifts that degrade task-level execution. Existing robustness approaches typically rely on joint training with perturbed data, treating robustness as a static objective, which leads to conflicting optimization between robustness and task fidelity. In this work, we propose STRONG-VLA, a decoupled fine-tuning framework that explicitly separates robustness acquisition from task-aligned refinement. In Stage I, the model is exposed to a curriculum of multimodal perturbations with increasing difficulty, enabling progressive robustness learning under controlled distribution shifts. In Stage II, the model is re-aligned with clean task distributions to recover execution fidelity while preserving robustness. We further establish a comprehensive benchmark with 28 perturbation types spanning both textual and visual modalities, grounded in realistic sources of sensor noise, occlusion, and instruction corruption. Extensive experiments on the LIBERO benchmark show that STRONG-VLA consistently improves task success rates across multiple VLA architectures. On OpenVLA, our method achieves gains of up to 12.60% under seen perturbations and 7.77% under unseen perturbations. Notably, similar or larger improvements are observed on OpenVLA-OFT (+14.48% / +13.81%) and pi0 (+16.49% / +5.58%), demonstrating strong cross-architecture generalization. Real-world experiments on an AIRBOT robotic platform further validate its practical effectiveness. These results highlight the importance of decoupled optimization for multimodal robustness and establish STRONG-VLA as a simple yet principled framework for robust embodied control.
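The decoupled two-stage schedule the abstract describes might look like the sketch below. The step counts, the linear severity ramp, and the `train_step`/`sample` interfaces are assumptions introduced for illustration; the stubs stand in for a real VLA model and dataset.

```python
class StubModel:
    """Placeholder for a VLA policy; counts training steps taken."""
    def __init__(self):
        self.steps = 0

    def train_step(self, batch):
        self.steps += 1  # a real model would compute a loss and update here

class StubData:
    """Placeholder dataset yielding (optionally perturbed) batches."""
    def sample(self, perturb, severity):
        return {"perturb": perturb, "severity": severity}

def stage_one(model, data, steps, max_severity=1.0):
    """Stage I: curriculum of multimodal perturbations, easy to hard."""
    schedule = []
    for step in range(steps):
        # Linear ramp from clean (0.0) up to max_severity; the paper's
        # actual difficulty schedule is not specified here.
        severity = max_severity * step / max(steps - 1, 1)
        model.train_step(data.sample(perturb=True, severity=severity))
        schedule.append(severity)
    return schedule

def stage_two(model, data, steps):
    """Stage II: re-align on clean task data to restore execution fidelity."""
    for _ in range(steps):
        model.train_step(data.sample(perturb=False, severity=0.0))
```

The point of the decoupling is visible in the structure: robustness is acquired under shifting distributions in `stage_one`, and only afterwards does `stage_two` optimize against the clean task distribution, avoiding the conflicting joint objective the abstract criticizes.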