GazeVLA: Learning Human Intention for Robotic Manipulation

arXiv cs.RO / 4/27/2026

📰 News · Models & Research

Key Points

  • GazeVLA proposes bridging the “embodiment gap” between humans and robots by using human intention as an intermediate representation for robotic manipulation.
  • The method models intention from gaze, treating it as an observable signal that naturally precedes physical actions and can be transferred to robot behavior.
  • GazeVLA is pretrained on a large-scale egocentric human dataset to learn intention and its relationship with actions, then fine-tuned with a small set of robot and human data.
  • During inference, it uses a Chain-of-Thought-style process that sequentially predicts intention before executing actions (see the sketch after this list).
  • Evaluations in simulation and the real world, spanning long-horizon, fine-grained, few-shot, and robustness benchmarks, show consistent gains over strong baselines and state-of-the-art performance.
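
The sketch below illustrates the intention-then-action inference pattern described in the key points: predict a gaze-based intention first, then condition the action on it. All class and function names are hypothetical stand-ins (the paper does not publish this interface), and the models are toy placeholders.

```python
# Minimal sketch of intention-first inference, assuming an RGB observation
# as an HxWx3 array. Names and shapes are illustrative assumptions.
import numpy as np

class ToyIntentionModel:
    """Stand-in for the gaze/intention head: predicts a 2D fixation point."""
    def predict_gaze(self, image: np.ndarray, instruction: str) -> np.ndarray:
        h, w, _ = image.shape
        # Placeholder: return the image centre; a real model would regress
        # a gaze target from the egocentric observation and the instruction.
        return np.array([w / 2.0, h / 2.0])

class ToyActionModel:
    """Stand-in for the action head, conditioned on the predicted intention."""
    def predict_action(self, image, instruction, gaze_xy) -> np.ndarray:
        # Placeholder 7-DoF action (e.g. end-effector delta pose + gripper).
        return np.zeros(7)

def cot_step(image, instruction, intention_model, action_model):
    """Chain-of-Thought-style step: predict intention first, then the action."""
    gaze_xy = intention_model.predict_gaze(image, instruction)            # reason
    action = action_model.predict_action(image, instruction, gaze_xy)    # act
    return gaze_xy, action

if __name__ == "__main__":
    obs = np.zeros((224, 224, 3), dtype=np.uint8)
    gaze, act = cot_step(obs, "pick up the red mug",
                         ToyIntentionModel(), ToyActionModel())
    print("predicted gaze:", gaze, "action:", act)
```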

Abstract

Embodied foundation models have achieved significant breakthroughs in robotic manipulation, yet they still depend heavily on large-scale robot demonstrations. Although recent works have explored leveraging human data to alleviate this dependency, effectively extracting transferable knowledge remains challenging due to the inherent embodiment gap between humans and robots. We argue that the intention underlying human actions can serve as a powerful intermediate representation for bridging this gap. In this paper, we introduce a novel framework that explicitly learns and transfers human intention to facilitate robotic manipulation. Specifically, we model intention through gaze, as it naturally precedes physical actions and serves as an observable proxy for human intent. Our model is first pretrained on a large-scale egocentric human dataset to capture human intention and its synergy with action, followed by fine-tuning on a small set of robot and human data. During inference, the model adopts a Chain-of-Thought reasoning paradigm, sequentially predicting intention before executing the action. Extensive evaluations in simulation and real-world settings, across long-horizon and fine-grained tasks, and under few-shot and robustness benchmarks, show that our method consistently outperforms strong baselines, generalizes better, and achieves state-of-the-art performance.
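
To make the two-stage recipe in the abstract concrete, here is a hedged sketch: an intention (gaze) head is pretrained on large-scale egocentric human data, then the model is fine-tuned on a small mix of robot and human data with action supervision. The network, losses, dataset sizes, and schedule below are illustrative assumptions, not the paper's exact setup.

```python
# Toy two-stage training sketch (assumed structure, not the published method).
import torch
import torch.nn as nn

class TinyGazeVLA(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        self.backbone = nn.Linear(3 * 32 * 32, feat_dim)   # toy visual encoder
        self.gaze_head = nn.Linear(feat_dim, 2)             # intention: 2D gaze point
        self.action_head = nn.Linear(feat_dim + 2, 7)       # action conditioned on gaze

    def forward(self, img):
        feat = torch.relu(self.backbone(img.flatten(1)))
        gaze = self.gaze_head(feat)
        action = self.action_head(torch.cat([feat, gaze], dim=-1))
        return gaze, action

def train_stage(model, make_batch, use_action_loss, steps, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        img, gaze_gt, action_gt = make_batch()               # one synthetic batch
        gaze, action = model(img)
        loss = nn.functional.mse_loss(gaze, gaze_gt)          # intention supervision
        if use_action_loss:                                    # fine-tuning adds actions
            loss = loss + nn.functional.mse_loss(action, action_gt)
        opt.zero_grad()
        loss.backward()
        opt.step()

def fake_batch():
    # Stand-in for real egocentric/robot data loaders.
    return torch.rand(8, 3, 32, 32), torch.rand(8, 2), torch.rand(8, 7)

model = TinyGazeVLA()
train_stage(model, fake_batch, use_action_loss=False, steps=100)  # stage 1: human gaze pretraining
train_stage(model, fake_batch, use_action_loss=True, steps=20)    # stage 2: small robot + human fine-tune
```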