VLA-InfoEntropy: A Training-Free Vision-Attention Information Entropy Approach for Vision-Language-Action Models Inference Acceleration and Success

arXiv cs.CV / 4/8/2026


Key Points

  • The paper proposes “VLA-InfoEntropy,” a training-free inference acceleration method for Vision-Language-Action (VLA) models that targets computational overhead from jointly processing visual, linguistic, and action inputs.
  • It introduces two entropy-based signals—image entropy over visual tokens to find texture/structure-rich regions, and attention entropy over task-relevant text tokens to identify semantically important attention patterns.
  • By combining these entropy metrics with timestep information, the method uses a dynamic transition strategy to shift model focus from broad visual features to attention-guided local informative regions over time.
  • The authors report that VLA-InfoEntropy reduces inference parameters, improves inference speed, and achieves better performance than existing approaches through extensive experiments.
  • Overall, the work frames entropy as a practical guide for reducing redundancy while preserving task-critical multimodal content at inference time.
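The two signals above can be sketched concretely. The following is a minimal illustration of Shannon entropy applied per visual patch (image entropy) and per attention distribution (attention entropy); the histogram bin count and patch shapes are assumptions for demonstration, not details from the paper.

```python
import numpy as np

def image_entropy(patch, bins=16):
    """Shannon entropy of a grayscale patch's intensity histogram.
    Texture- or structure-rich patches yield higher entropy."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return float(-(p * np.log2(p)).sum())

def attention_entropy(attn_row):
    """Shannon entropy of one token's (softmaxed) attention scores
    over task-relevant text tokens."""
    p = attn_row / attn_row.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Toy demo: a uniform patch carries no texture, a noisy patch does.
rng = np.random.default_rng(0)
flat = np.full((16, 16), 0.5)
textured = rng.random((16, 16))
print(image_entropy(flat))                                # 0.0
print(image_entropy(flat) < image_entropy(textured))      # True
```

In this reading, high image entropy flags patches worth keeping for their visual content, while attention entropy characterizes how focused or diffuse a token's relevance to the language instruction is.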

Abstract

Vision-Language-Action (VLA) models integrate visual perception, language understanding, and action decision-making for cross-modal semantic alignment, exhibiting broad application potential. However, the joint processing of high-dimensional visual features, complex linguistic inputs, and continuous action sequences incurs significant computational overhead and low inference efficiency, thereby hindering real-time deployment and reliability. To address this issue, we use image entropy to quantify the grayscale distribution characteristics of each visual token and introduce attention entropy to capture the distribution of attention scores over task-related text. Image entropy identifies texture-rich or structurally informative regions, while attention entropy pinpoints semantically relevant tokens. Combined with timestep information, these metrics enable a dynamic transition strategy that shifts the model's focus from global visual features to attention-guided local informative regions. The resulting VLA-InfoEntropy method thus integrates spatial, semantic, and temporal cues to reduce redundancy while preserving critical content. Extensive experiments show that our method reduces inference parameters, accelerates inference, and outperforms existing approaches.
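The dynamic transition the abstract describes could plausibly take the form of a timestep-dependent blend of the two entropy scores, used to rank and prune visual tokens. The linear schedule and top-k selection below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def token_scores(img_ent, attn_ent, t, T):
    """Hypothetical blend: early timesteps weight global image
    entropy, later timesteps weight attention entropy."""
    alpha = t / max(T - 1, 1)  # ramps 0 -> 1 over the episode
    return (1 - alpha) * img_ent + alpha * attn_ent

def keep_topk(scores, k):
    """Indices of the k highest-scoring visual tokens to retain."""
    return np.argsort(scores)[::-1][:k]

# Toy per-token entropies for four visual tokens.
img_ent = np.array([3.2, 0.5, 2.8, 1.1])
attn_ent = np.array([0.4, 2.9, 1.0, 3.1])
print(keep_topk(token_scores(img_ent, attn_ent, t=0, T=10), k=2))  # [0 2]: image-driven
print(keep_topk(token_scores(img_ent, attn_ent, t=9, T=10), k=2))  # [3 1]: attention-driven
```

Pruning low-scoring tokens before the transformer layers is what would reduce the computation performed at inference time, which matches the paper's stated goal of removing redundancy while keeping task-critical content.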