Beyond Attention Magnitude: Leveraging Inter-layer Rank Consistency for Efficient Vision-Language-Action Models

arXiv cs.CL / 3/27/2026


Key Points

  • The paper argues that using attention magnitude alone for vision-language-action (VLA) token reduction is unreliable because “high-attention” tokens are task-dependent and can hurt policy performance.
  • It proposes TIES (Tau-guided Inter-layer Efficient Selection), a dynamic token selection method that uses inter-layer ranking consistency while balancing it with attention magnitude.
  • TIES performs selection robustly without additional training by exploiting agreement in token ranking across layers.
  • Experiments on the CogACT + SIMPLER benchmarks show a 6% improvement in average success rate alongside a 78% reduction in token usage.
  • The method demonstrates strong generalization across different decoders and benchmarks, suggesting it can be broadly applied to improve VLA inference efficiency.
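The selection idea in the points above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (not the paper's implementation): it scores each visual token by combining its mean attention magnitude with how consistently its rank agrees across layers, using rank variance as a simple stand-in for the paper's tau-guided consistency measure, and a fixed `alpha` in place of the adaptive balancing.

```python
import numpy as np

def ties_like_scores(attn_layers, alpha=0.5):
    """Hypothetical sketch of TIES-style token scoring.

    attn_layers: array of shape (L, N) with attention magnitudes for
    N visual tokens across L layers. Returns one score per token that
    blends magnitude with inter-layer ranking consistency.
    """
    L, N = attn_layers.shape
    # Per-layer token ranks (0 = lowest attention in that layer).
    ranks = attn_layers.argsort(axis=1).argsort(axis=1).astype(float)
    # Consistency proxy: tokens whose rank varies little across layers
    # are ranked consistently, so low std -> high consistency score.
    rank_std = ranks.std(axis=0)
    if rank_std.max() > 0:
        consistency = 1.0 - rank_std / rank_std.max()
    else:
        consistency = np.ones(N)
    # Normalized mean attention magnitude.
    mean_attn = attn_layers.mean(axis=0)
    magnitude = mean_attn / mean_attn.max()
    # Fixed blend; the paper instead adapts this balance dynamically.
    return alpha * magnitude + (1.0 - alpha) * consistency

def select_tokens(attn_layers, keep_ratio=0.22):
    """Keep the top-scoring fraction of tokens (0.22 mirrors the
    reported 78% token reduction)."""
    scores = ties_like_scores(attn_layers)
    k = max(1, int(round(keep_ratio * attn_layers.shape[1])))
    return np.argsort(scores)[-k:]
```

Because both selection signals come from attention statistics that the model already computes at inference, a scheme like this needs no extra training, which matches the training-free property claimed above.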

Abstract

Vision-Language-Action (VLA) models excel in robotic manipulation but suffer from significant inference latency due to processing dense visual tokens. Existing token reduction methods predominantly rely on attention magnitude as a static selection criterion. In this work, we challenge this assumption, revealing that high-attention tokens are task-dependent and can even degrade policy performance. To address this, we introduce TIES (Tau-guided Inter-layer Efficient Selection), a dynamic framework guided by inter-layer token ranking consistency. By adaptively balancing attention magnitude with ranking consistency, TIES ensures robust token selection without requiring additional training. On the CogACT + SIMPLER benchmark, TIES improves average success rates by 6% while reducing token usage by 78%, and demonstrates strong generalization across diverse decoders and benchmarks.