FocusVLA: Focused Visual Utilization for Vision-Language-Action Models

arXiv cs.RO / 3/31/2026


Key Points

  • Experimentally validates that action generation in VLA (Vision-Language-Action) models is heavily constrained by three bottlenecks rooted in how visual information is *used*: overlooked visual details, attention scattered by an excess of visual tokens, and noise from task-irrelevant information.
  • Shows that performance is governed primarily by how visual information is utilized, rather than by the quality of the visual representations themselves.
  • The proposed FocusVLA suppresses shortcut pathways with Modality Cascaded Attention to steer attention toward task-relevant regions, and uses Focus Attention to dynamically select task-relevant patches, controlling both the amount of visual information and its influence.
  • On simulated and real-world robotics benchmarks, it achieves dexterous manipulation while simultaneously improving performance and accelerating training convergence across diverse tasks.

Abstract

Vision-Language-Action (VLA) models improve action generation by conditioning policies on rich vision-language information. However, current auto-regressive policies are constrained by three bottlenecks: (1) architectural bias drives models to overlook visual details, (2) an excessive number of visual tokens makes it difficult to focus attention on the correct regions, and (3) task-irrelevant visual information introduces substantial noise, which together severely impair action quality. In this paper, we investigate how to effectively utilize different visual representations for action generation. To this end, we first empirically validate the above issues and show that VLA performance is primarily limited by how visual information is utilized, rather than by the quality of visual representations. Based on these insights, we introduce FocusVLA, a novel paradigm that directs the model's attention to task-relevant visual regions to effectively bridge vision to action. Specifically, we first propose Modality Cascaded Attention to eliminate shortcut pathways, thereby compelling VLA models to rely on task-relevant visual details for action generation. Furthermore, we propose Focus Attention, which dynamically selects task-relevant visual patches to control information quantity while explicitly modulating their influence to suppress task-irrelevant noise. Extensive experiments on both simulated and real-world robotic benchmarks demonstrate that FocusVLA not only effectively leverages visual details to perform dexterous manipulations, but also substantially improves performance and accelerates convergence across a variety of tasks.
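The abstract gives no implementation details, but the core idea of Focus Attention — dynamically selecting task-relevant visual patches (controlling information quantity) and reweighting their influence (suppressing noise) — can be illustrated with a minimal sketch. Everything below is a hypothetical construction, not the paper's actual architecture: the function name `focus_attention`, the dot-product relevance score, and the top-k + softmax scheme are all assumptions for illustration.

```python
import numpy as np

def focus_attention(action_query, visual_patches, k=4):
    """Hypothetical sketch of the Focus Attention idea:
    score visual patches against an action-token query, keep only the
    top-k (controls information quantity), and softmax-reweight the
    survivors (modulates influence to suppress irrelevant noise)."""
    # Relevance score of each patch: dot product with the query.
    scores = visual_patches @ action_query            # (num_patches,)
    # Dynamically select the k most task-relevant patches.
    topk_idx = np.argsort(scores)[-k:]
    selected = visual_patches[topk_idx]               # (k, dim)
    # Softmax over the selected scores sets each patch's influence.
    w = np.exp(scores[topk_idx] - scores[topk_idx].max())
    w /= w.sum()
    # Focused visual context: weighted sum of the selected patches.
    return w @ selected                               # (dim,)

rng = np.random.default_rng(0)
patches = rng.normal(size=(16, 8))   # 16 visual patch embeddings
query = rng.normal(size=8)           # action-token query vector
context = focus_attention(query, patches, k=4)
print(context.shape)  # (8,)
```

The design point the sketch captures is that selection (hard top-k) and influence modulation (soft weights) are two separate levers, matching the abstract's claim that FocusVLA controls "information quantity" and "influence" independently.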