AVA-VLA: Improving Vision-Language-Action models with Active Visual Attention

arXiv cs.RO / 4/13/2026


Key Points

  • The paper argues that existing Vision-Language-Action (VLA) models often treat each visual frame independently, which mismatches real robotic control that is partially observable and depends on prior interactions.
  • It proposes AVA-VLA, reformulating VLA policy learning from a POMDP perspective and using a recurrent internal state to approximate the agent’s belief over task history.
  • The method introduces Active Visual Attention (AVA), which adaptively reweights visual tokens based on both the instruction and the execution history to emphasize temporally relevant regions.
  • Experiments report state-of-the-art results on robotic benchmarks such as LIBERO and CALVIN, along with effective transfer to real-world dual-arm manipulation tasks.
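The Active Visual Attention idea in the bullets above can be illustrated with a short sketch: each visual token receives a gate computed from the instruction embedding and a recurrent history state, so temporally relevant regions are emphasized. This is a hypothetical minimal implementation under assumed shapes and module names, not the paper's actual code.

```python
import torch
import torch.nn as nn

class ActiveVisualAttention(nn.Module):
    """Hypothetical AVA-style sketch: visual tokens are reweighted by a
    scalar gate computed from the token itself, the instruction embedding,
    and a recurrent history state. Names and shapes are illustrative."""

    def __init__(self, d_model: int):
        super().__init__()
        # token (D) + instruction (D) + history (D) -> one gate per token
        self.score = nn.Linear(3 * d_model, 1)

    def forward(self, visual_tokens, instruction, history):
        # visual_tokens: (B, N, D); instruction, history: (B, D)
        B, N, D = visual_tokens.shape
        ctx = torch.cat([instruction, history], dim=-1)   # (B, 2D)
        ctx = ctx.unsqueeze(1).expand(B, N, 2 * D)        # broadcast to all tokens
        gates = torch.sigmoid(
            self.score(torch.cat([visual_tokens, ctx], dim=-1))
        )                                                 # (B, N, 1) in [0, 1]
        return visual_tokens * gates                      # reweighted tokens
```

Because the gate depends on the history state as well as the instruction, the same frame can be attended differently at different stages of a task, which is the key departure from history-agnostic VLA encoders.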

Abstract

Vision-Language-Action (VLA) models have recently shown remarkable progress in embodied tasks, but most methods process visual observations independently at each timestep. This history-agnostic design treats robot manipulation as a Markov Decision Process, even though real-world robotic control is inherently partially observable and requires reasoning over past interactions. To address this mismatch, we reformulate VLA policy learning from a Partially Observable Markov Decision Process perspective and propose AVA-VLA, a framework that conditions action generation on a recurrent state that serves as a neural approximation to the agent's belief over task history. Built on this recurrent state, we introduce Active Visual Attention (AVA), which dynamically reweights visual tokens in the current observation to focus on regions most relevant given both the instruction and execution history. Extensive experiments show that AVA-VLA achieves state-of-the-art performance on standard robotic benchmarks, including LIBERO and CALVIN, and transfers effectively to real-world dual-arm manipulation tasks. These results demonstrate the effectiveness of temporally grounded active visual processing for improving VLA performance in robotic sequential decision-making. The project page is available at https://liauto-dsr.github.io/AVA-VLA-Page.
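The POMDP reformulation described in the abstract, where a recurrent state approximates the belief over task history and the action is conditioned on it rather than on the current frame alone, can be sketched as follows. The module names, dimensions, and the choice of a GRU cell are assumptions for illustration, not details from the paper.

```python
import torch
import torch.nn as nn

class RecurrentBeliefPolicy(nn.Module):
    """Minimal POMDP-style sketch: a GRU cell carries a belief state across
    timesteps, and the action head conditions on that belief rather than on
    the current observation alone. Illustrative only."""

    def __init__(self, obs_dim: int, belief_dim: int, act_dim: int):
        super().__init__()
        self.cell = nn.GRUCell(obs_dim, belief_dim)  # belief update b_t = f(o_t, b_{t-1})
        self.head = nn.Linear(belief_dim, act_dim)   # action from belief, not raw frame

    def step(self, obs, belief):
        belief = self.cell(obs, belief)          # fold the new observation into the belief
        return self.head(belief), belief         # action plus updated belief for next step
```

Rolling `step` forward over a trajectory threads the belief through time, so two identical observations at different points in a task can yield different actions, which is exactly the property a Markovian, frame-independent policy lacks.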