VLA-IAP: Training-Free Visual Token Pruning via Interaction Alignment for Vision-Language-Action Models

arXiv cs.CV / 3/25/2026


Key Points

  • The paper introduces VLA-IAP, a training-free visual token pruning method for Vision-Language-Action (VLA) models that targets inference cost as visual context length grows.
  • It argues that existing pruning techniques overlook the central role of continuous physical interaction in VLA tasks, and as a result can prune visually sparse yet structurally critical regions, causing unstable early-phase behavior.
  • VLA-IAP uses an Interaction-First paradigm with a geometric prior to preserve structural anchors and a dynamic schedule that adjusts pruning intensity based on semantic–motion alignment.
  • Experiments on the LIBERO benchmark report a 97.8% success rate and a 1.25× speedup, with up to 1.54× speedup while keeping performance comparable to the unpruned backbone.
  • The method generalizes across multiple model architectures, three simulation environments, and a real robot platform, suggesting practical deployment potential on resource-constrained devices.
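The two mechanisms above, a geometric prior that protects structural anchors and a conservative-to-aggressive schedule driven by semantic–motion alignment, can be illustrated with a minimal sketch. All names and the exact scheduling rule here are hypothetical, chosen only to show the idea; the paper's actual scoring and scheduling may differ.

```python
import numpy as np

def interaction_aligned_prune(saliency, anchor_mask, alignment, max_prune=0.5):
    """Illustrative token selection (not the paper's exact rule).

    saliency:    (N,) semantic saliency score per visual token
    anchor_mask: (N,) bool, True for tokens under the geometric prior
                 (structural anchors that must survive pruning)
    alignment:   scalar in [0, 1]; low early in a task (uncertain),
                 high once interaction with the target is locked
    Returns sorted indices of the tokens to keep.
    """
    n = saliency.shape[0]
    # Dynamic schedule: prune little when alignment is low (early phase),
    # up to max_prune of the tokens once alignment is high.
    keep_frac = 1.0 - max_prune * alignment
    n_keep = max(int(round(keep_frac * n)), int(anchor_mask.sum()))
    # Geometric prior: anchors get an infinite score, so they are never pruned.
    score = np.where(anchor_mask, np.inf, saliency)
    return np.sort(np.argsort(-score)[:n_keep])
```

With `alignment = 0` the function keeps every token (conservative early phase); as alignment approaches 1 it keeps only the anchors plus the most salient remaining tokens (aggressive late phase).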

Abstract

Vision-Language-Action (VLA) models have rapidly advanced embodied intelligence, enabling robots to execute complex, instruction-driven tasks. However, as model capacity and visual context length grow, the inference cost of VLA systems becomes a major bottleneck for real-world deployment on resource-constrained platforms. Existing visual token pruning methods mainly rely on semantic saliency or simple temporal cues, overlooking continuous physical interaction, a fundamental property of VLA tasks. Consequently, current approaches often prune visually sparse yet structurally critical regions that support manipulation, leading to unstable behavior during early task phases. To overcome this, we propose a shift toward an explicit Interaction-First paradigm. Our proposed **training-free** method, VLA-IAP (Interaction-Aligned Pruning), introduces a geometric prior mechanism to preserve structural anchors and a dynamic scheduling strategy that adapts pruning intensity based on semantic–motion alignment. This enables a conservative-to-aggressive transition, ensuring robustness during early uncertainty and efficiency once interaction is locked. Extensive experiments show that VLA-IAP achieves a **97.8% success rate** with a **1.25× speedup** on the LIBERO benchmark, and up to **1.54× speedup** while maintaining performance **comparable to the unpruned backbone**. Moreover, the method demonstrates superior and consistent performance across multiple model architectures and three different simulation environments, as well as a real robot platform, validating its strong generalization capability and practical applicability. Our project website is: [VLA-IAP.com](https://chengjt1999.github.io/VLA-IAP.github.io/).