PVI: Plug-in Visual Injection for Vision-Language-Action Models

arXiv cs.CV / 3/16/2026

Key Points

  • PVI is a lightweight, encoder-agnostic plug-in module that attaches to a pretrained vision-language-action policy and injects auxiliary visual representations via zero-initialized residual pathways, preserving pretrained behavior with only single-stage fine-tuning.
  • The study finds that temporal video features (V-JEPA2) outperform static image features (DINOv2), with the largest gains on multi-phase tasks that require state tracking and coordination.
  • PVI delivers consistent gains over both the base policy and a range of competitive alternative injection strategies, demonstrating its effectiveness relative to existing approaches.
  • Real-robot experiments on long-horizon bimanual cloth folding validate PVI's practicality beyond simulation and its potential for real-world robotics applications.

Abstract

Vision-language-action (VLA) architectures that pair a pretrained vision-language model (VLM) with a flow-matching action expert have emerged as a strong paradigm for language-conditioned manipulation. Yet the VLM, optimized for semantic abstraction and typically conditioned on static visual observations, tends to attenuate fine-grained geometric cues and often lacks explicit temporal evidence for the action expert. Prior work mitigates this by injecting auxiliary visual features, but existing approaches either focus on static spatial representations or require substantial architectural modifications to accommodate temporal inputs, leaving temporal information underexplored. We propose Plug-in Visual Injection (PVI), a lightweight, encoder-agnostic module that attaches to a pretrained action expert and injects auxiliary visual representations via zero-initialized residual pathways, preserving pretrained behavior with only single-stage fine-tuning. Using PVI, we obtain consistent gains over the base policy and a range of competitive alternative injection strategies, and our controlled study shows that temporal video features (V-JEPA2) outperform strong static image features (DINOv2), with the largest gains on multi-phase tasks requiring state tracking and coordination. Real-robot experiments on long-horizon bimanual cloth folding further demonstrate the practicality of PVI beyond simulation.
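To illustrate the core idea of a zero-initialized residual pathway, here is a minimal NumPy sketch. It is not the authors' implementation: the function name, dimensions, and the choice of a single linear projection are assumptions for illustration. The key property it demonstrates is the one the abstract relies on: because the final layer of the injection path starts at zero, the module's output equals the pretrained hidden state at initialization, so fine-tuning can begin without disturbing pretrained behavior.

```python
import numpy as np

def zero_init_injector(aux_dim, hidden_dim, rng):
    """Hypothetical PVI-style injection path (illustrative only).

    Auxiliary visual features are projected into the hidden space,
    then passed through a zero-initialized output layer and added
    residually to the action expert's hidden state.
    """
    # Projection for auxiliary features (randomly initialized).
    W_proj = rng.standard_normal((aux_dim, hidden_dim)) * 0.02
    # Zero-initialized output layer: the residual path contributes
    # exactly nothing at initialization, so the pretrained policy's
    # behavior is preserved before any fine-tuning.
    W_out = np.zeros((hidden_dim, hidden_dim))
    b_out = np.zeros(hidden_dim)

    def forward(hidden, aux):
        # hidden: (tokens, hidden_dim), aux: (tokens, aux_dim)
        return hidden + (aux @ W_proj) @ W_out + b_out

    return forward

# At initialization the injected pathway is an identity on the hidden
# state; gradients through W_out then let the auxiliary features
# (e.g. V-JEPA2 or DINOv2 embeddings) gradually influence the policy.
rng = np.random.default_rng(0)
inject = zero_init_injector(aux_dim=32, hidden_dim=16, rng=rng)
hidden = rng.standard_normal((4, 16))
aux = rng.standard_normal((4, 32))
out = inject(hidden, aux)
assert np.allclose(out, hidden)  # unchanged at init
```

The zero-initialization trick here is the same general pattern used elsewhere for grafting new conditioning branches onto pretrained networks: the new path is trained from a state in which it is a no-op, which avoids the distribution shift that a randomly initialized residual would cause.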