AI Navigate

Recurrent Reasoning with Vision-Language Models for Estimating Long-Horizon Embodied Task Progress

arXiv cs.CV / 3/19/2026


Key Points

  • The paper introduces the Recurrent Reasoning Vision-Language Model (R^2VLM), which processes local video snippets with a growing Chain of Thought to estimate long-horizon task progress.
  • R^2VLM addresses the computational cost of long video processing while preserving key reasoning capabilities through a recurrent framework that maintains global context.
  • The model is trained on large-scale, automatically generated datasets derived from ALFRED and Ego4D, and achieves state-of-the-art performance in long-horizon progress estimation, with benefits for downstream tasks like progress-enhanced policy learning, RL reward modeling, and proactive assistance.
  • The authors provide publicly available models and benchmarks on HuggingFace for broader use and evaluation.

Abstract

Accurately estimating task progress is critical for embodied agents to plan and execute long-horizon, multi-step tasks. Despite promising advances, existing methods based on Vision-Language Models (VLMs) primarily leverage their video understanding capabilities while neglecting their potential for complex reasoning. Furthermore, processing long video trajectories with VLMs is computationally prohibitive for real-world deployment. To address these challenges, we propose the Recurrent Reasoning Vision-Language Model (R^2VLM). Our model features a recurrent reasoning framework that processes local video snippets iteratively, maintaining global context through an evolving Chain of Thought (CoT). This CoT explicitly records task decomposition, key steps, and their completion status, enabling the model to reason about complex temporal dependencies. The design avoids the high cost of processing long videos while preserving essential reasoning capabilities. We train R^2VLM on large-scale, automatically generated datasets from ALFRED and Ego4D. Extensive experiments on progress estimation and downstream applications, including progress-enhanced policy learning, reward modeling for reinforcement learning, and proactive assistance, show that R^2VLM delivers strong performance and generalization, establishing a new state of the art in long-horizon task progress estimation. The models and benchmarks are publicly available at https://huggingface.co/collections/zhangyuelin/r2vlm.
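The recurrent loop the abstract describes can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: the names (`ChainOfThought`, `vlm_step`, `estimate_progress`) are invented here, and the VLM call is replaced by a toy matcher that checks whether a subtask's label appears in a snippet's detected events. What it shows is the control flow: each iteration sees only a local snippet plus the evolving CoT, which carries the task decomposition and completion status forward as the sole global context.

```python
# Hypothetical sketch of a recurrent progress-estimation loop.
# All names are illustrative; the real model replaces vlm_step with a
# VLM forward pass over the snippet frames and the serialized CoT.

from dataclasses import dataclass, field


@dataclass
class ChainOfThought:
    subtasks: list                                  # task decomposition
    completed: set = field(default_factory=set)     # indices of finished steps

    def progress(self) -> float:
        """Fraction of decomposed subtasks marked complete."""
        return len(self.completed) / len(self.subtasks) if self.subtasks else 0.0


def vlm_step(snippet: set, cot: ChainOfThought) -> ChainOfThought:
    """Stand-in for one VLM call: mark any subtask whose label appears
    among the snippet's detected events as completed."""
    for i, sub in enumerate(cot.subtasks):
        if sub in snippet:
            cot.completed.add(i)
    return cot


def estimate_progress(snippets: list, subtasks: list) -> list:
    """Iterate over local snippets, carrying the CoT as the only
    global context; return a per-snippet progress estimate."""
    cot = ChainOfThought(subtasks=subtasks)
    history = []
    for snip in snippets:
        cot = vlm_step(snip, cot)          # update CoT from local evidence
        history.append(cot.progress())     # emit current progress estimate
    return history


# Toy trajectory: "make coffee" decomposed into three steps; the third
# snippet contains no relevant event, so progress plateaus there.
snippets = [{"grab mug"}, {"pour water"}, set(), {"press brew"}]
progress = estimate_progress(snippets, ["grab mug", "pour water", "press brew"])
print(progress)  # [0.333..., 0.666..., 0.666..., 1.0]
```

The key property mirrored here is that memory cost stays bounded: the loop never re-reads earlier snippets, only the compact CoT state, which is how the paper's framework sidesteps full-length video processing.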