Modular Sensory Stream for Integrating Physical Feedback in Vision-Language-Action Models

arXiv cs.RO · April 28, 2026

Key Points

  • The paper introduces MoSS, a modular sensory-stream framework for Vision-Language-Action (VLA) models that can ingest multiple heterogeneous physical signals rather than only a single modality.
  • MoSS uses decoupled modality streams and joint cross-modal self-attention to fuse different physical signals into a unified action prediction stream.
  • To add new sensory modalities without destabilizing performance, it applies a two-stage training approach that initially freezes pretrained VLA parameters.
  • It also adds an auxiliary objective to predict future physical signals, aiming to better model contact-interaction dynamics.
  • Experiments on real-world tasks show MoSS improves VLA performance by jointly leveraging diverse signals such as tactile and torque, producing synergistic gains.
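The fusion idea in the second bullet can be sketched in a few lines: each modality keeps its own decoupled token stream, and the streams are concatenated so a single self-attention pass lets action tokens attend to tactile and torque tokens (and vice versa). This is a minimal NumPy illustration, not the paper's implementation; the single-head attention, the stream names, and all shapes are assumptions for the sketch.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the attention scores.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def joint_self_attention(streams, Wq, Wk, Wv):
    """Concatenate per-modality token streams and run one self-attention
    pass over the joint sequence, so every stream can attend to every
    other (joint cross-modal self-attention, in sketch form)."""
    x = np.concatenate(streams, axis=0)            # (T_total, d)
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))  # (T_total, T_total)
    return attn @ v                                 # fused joint tokens

rng = np.random.default_rng(0)
d = 8
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
action  = rng.standard_normal((4, d))   # action-stream tokens (hypothetical sizes)
tactile = rng.standard_normal((6, d))   # tactile-stream tokens
torque  = rng.standard_normal((2, d))   # torque-stream tokens
fused = joint_self_attention([action, tactile, torque], Wq, Wk, Wv)
print(fused.shape)  # (12, 8)
```

In a real model each stream would first pass through its own encoder; the point of the sketch is only that fusion happens over the concatenated sequence rather than pairwise between modalities.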

Abstract

Humans understand and interact with the real world by relying on diverse physical feedback beyond visual perception. Motivated by this, recent approaches attempt to incorporate physical sensory signals into Vision-Language-Action models (VLAs). However, they typically focus on a single type of physical signal, failing to capture the heterogeneous and complementary nature of real-world interactions. In this paper, we propose MoSS, a modular sensory stream framework that adapts VLAs to leverage multiple sensory signals for action prediction. Specifically, we introduce decoupled modality streams that integrate heterogeneous physical signals into the action stream via joint cross-modal self-attention. To enable stable incorporation of new modalities, we adopt a two-stage training scheme that freezes pretrained VLA parameters in the early stage. Furthermore, to better capture contact interaction dynamics, we incorporate an auxiliary task that predicts future physical signals. Through extensive real-world experiments, we demonstrate that MoSS successfully augments VLAs to leverage diverse physical signals (i.e., tactile and torque), integrating multiple signals to achieve synergistic performance gains.
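The training recipe described above has two parts: a two-stage schedule that keeps the pretrained VLA frozen while the new sensory streams warm up, and an auxiliary loss for predicting future physical signals. The sketch below shows the control flow only; the parameter names, the module split, and the `aux_weight` value are assumptions, not details taken from the paper.

```python
# Toy parameter registry: which modules came from the pretrained VLA
# versus the newly added sensory streams (names are hypothetical).
params = {
    "vla.backbone":            {"pretrained": True},
    "vla.action_head":         {"pretrained": True},
    "stream.tactile_encoder":  {"pretrained": False},
    "stream.torque_encoder":   {"pretrained": False},
    "stream.future_predictor": {"pretrained": False},
}

def trainable(stage):
    """Stage 1 updates only the new sensory-stream modules, leaving the
    pretrained VLA frozen for stability; stage 2 fine-tunes everything."""
    if stage == 1:
        return sorted(n for n, p in params.items() if not p["pretrained"])
    return sorted(params)

def total_loss(action_loss, future_signal_loss, aux_weight=0.1):
    """Action-prediction loss plus the auxiliary future physical-signal
    prediction term meant to capture contact-interaction dynamics."""
    return action_loss + aux_weight * future_signal_loss

print(trainable(1))  # only the stream.* modules
print(trainable(2))  # all parameters
print(total_loss(1.0, 0.5))
```

Freezing the backbone in stage 1 means the randomly initialized streams cannot push large, noisy gradients through the pretrained weights before they have learned anything useful, which is the stability argument the abstract makes.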