VTAM: Video-Tactile-Action Models for Complex Physical Interaction Beyond VLAs

arXiv cs.RO / 3/25/2026


Key Points

  • The paper argues that existing video-action/world models struggle in contact-rich manipulation because key interaction states (e.g., force modulation and contact transitions) are only partially observable from vision.
  • It introduces VTAM, a multimodal world modeling framework that augments a pretrained video transformer with tactile streams using lightweight modality-transfer finetuning (see the sketch after this list).
  • VTAM is designed to learn cross-modal representations efficiently without requiring tactile-language paired data or separately pretrained tactile models.
  • To improve stability during multimodal fusion, the method adds a tactile regularization loss that encourages balanced cross-modal attention and prevents visual latent dominance.
  • Experiments report an average 90% success rate on contact-rich tasks and an 80% improvement over the pi 0.5 baseline on high-fidelity force-awareness scenarios like potato chip pick-and-place.
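As a rough illustration of what lightweight modality-transfer finetuning could look like, the PyTorch sketch below freezes a pretrained video transformer and trains only a small tactile projection plus a gated cross-attention adapter that injects tactile tokens into the visual stream. The module and function names here (`TactileAdapter`, `tactile_proj`, `build_trainable_params`) are illustrative placeholders under assumed interfaces, not the paper's actual architecture or API.

```python
import torch
import torch.nn as nn

class TactileAdapter(nn.Module):
    """Lightweight cross-attention adapter: visual tokens attend to tactile tokens.

    Only the adapter (and a small tactile encoder) would be trained; the pretrained
    video transformer stays frozen, in the spirit of modality-transfer finetuning.
    """
    def __init__(self, dim: int, tactile_dim: int, num_heads: int = 4):
        super().__init__()
        self.tactile_proj = nn.Linear(tactile_dim, dim)   # map tactile features to token width
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.gate = nn.Parameter(torch.zeros(1))          # zero-init gate: starts as identity

    def forward(self, visual_tokens: torch.Tensor, tactile_feats: torch.Tensor) -> torch.Tensor:
        tactile_tokens = self.tactile_proj(tactile_feats)
        fused, _ = self.cross_attn(query=self.norm(visual_tokens),
                                   key=tactile_tokens, value=tactile_tokens)
        return visual_tokens + torch.tanh(self.gate) * fused


def build_trainable_params(video_transformer, tactile_encoder, adapters):
    """Freeze the pretrained video backbone; collect only adapter/encoder parameters.

    `video_transformer`, `tactile_encoder`, and `adapters` are hypothetical stand-ins
    for whatever modules the actual method uses.
    """
    for p in video_transformer.parameters():
        p.requires_grad = False
    return [p for m in (tactile_encoder, *adapters) for p in m.parameters()]
```

The zero-initialized gate keeps the frozen backbone's behavior unchanged at the start of finetuning, a common trick for stabilizing adapter training when a new modality is bolted onto a pretrained model.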

Abstract

Video-Action Models (VAMs) have emerged as a promising framework for embodied intelligence, learning implicit world dynamics from raw video streams to produce temporally consistent action predictions. Although such models demonstrate strong performance on long-horizon tasks through visual reasoning, they remain limited in contact-rich scenarios where critical interaction states are only partially observable from vision alone. In particular, fine-grained force modulation and contact transitions are not reliably encoded in visual tokens, leading to unstable or imprecise behaviors. To bridge this gap, we introduce the Video-Tactile Action Model (VTAM), a multimodal world modeling framework that incorporates tactile perception as a complementary grounding signal. VTAM augments a pretrained video transformer with tactile streams via lightweight modality-transfer finetuning, enabling efficient cross-modal representation learning without tactile-language paired data or independent tactile pretraining. To stabilize multimodal fusion, we introduce a tactile regularization loss that enforces balanced cross-modal attention, preventing visual latent dominance in the action model. VTAM demonstrates superior performance in contact-rich manipulation, maintaining a robust success rate of 90 percent on average. In challenging scenarios such as potato chip pick-and-place requiring high-fidelity force awareness, VTAM outperforms the pi 0.5 baseline by 80 percent. Our findings demonstrate that integrating tactile feedback is essential for correcting visual estimation errors in world action models, providing a scalable approach to physically grounded embodied foundation models.
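The abstract describes the tactile regularization loss only at a high level. One way to picture "balanced cross-modal attention" is a penalty on how much attention mass the action queries place on tactile versus visual tokens, so visual latents cannot dominate the fusion layer. The sketch below is an assumed surrogate formulation; `tactile_balance_loss`, the target share, and the weighting term `lambda_tac` are hypothetical and not taken from the paper.

```python
import torch

def tactile_balance_loss(attn_weights: torch.Tensor,
                         num_visual: int,
                         target_tactile_share: float = 0.5) -> torch.Tensor:
    """Encourage queries to spread attention across visual and tactile tokens.

    attn_weights: (batch, heads, num_queries, num_visual + num_tactile) attention
    probabilities from a fusion layer. Penalizes the squared gap between the mass
    placed on tactile tokens and a target share, so visual tokens cannot dominate.
    Illustrative surrogate only, not the paper's exact formulation.
    """
    tactile_mass = attn_weights[..., num_visual:].sum(dim=-1)  # mass on tactile tokens
    return ((tactile_mass - target_tactile_share) ** 2).mean()


# Example usage: add the balance term to the usual action-prediction objective.
# total_loss = action_loss + lambda_tac * tactile_balance_loss(attn, num_visual=256)
```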