SOLE-R1: Video-Language Reasoning as the Sole Reward for On-Robot Reinforcement Learning

arXiv cs.RO / 3/31/2026


Key Points

  • The paper introduces SOLE-R1, a video-language reasoning model designed to act as the sole reward signal for online reinforcement learning from raw video and a natural-language goal.
  • SOLE-R1 generates per-timestep spatiotemporal chain-of-thought reasoning and dense task-progress estimates intended to prevent policies from exploiting evaluator perceptual errors under partial observability and distribution shift.
  • Training relies on a large-scale pipeline that creates temporally grounded reasoning traces aligned with continuous progress supervision, then uses a hybrid approach combining supervised fine-tuning with RL driven by verifiable rewards.
  • Experiments across multiple simulation environments and a real-robot setting show zero-shot online RL from random initialization on 24 unseen manipulation tasks, without ground-truth rewards, demonstrations, or task-specific tuning.
  • Reported results show substantial gains over strong existing vision-language reward models, including GPT-5 and Gemini-3-Pro, along with markedly greater robustness to reward hacking.

Abstract

Vision-language models (VLMs) have shown impressive capabilities across diverse tasks, motivating efforts to leverage these models to supervise robot learning. However, when used as evaluators in reinforcement learning (RL), today's strongest models often fail under partial observability and distribution shift, enabling policies to exploit perceptual errors rather than solve the task. To address this limitation, we introduce SOLE-R1 (Self-Observing LEarner), a video-language reasoning model explicitly designed to serve as the sole reward signal for online RL. Given only raw video observations and a natural-language goal, SOLE-R1 performs per-timestep spatiotemporal chain-of-thought (CoT) reasoning and produces dense estimates of task progress that can be used directly as rewards. To train SOLE-R1, we develop a large-scale video trajectory and reasoning synthesis pipeline that generates temporally grounded CoT traces aligned with continuous progress supervision. This data is combined with foundational spatial and multi-frame temporal reasoning, and used to train the model with a hybrid framework that couples supervised fine-tuning with RL from verifiable rewards. Across four different simulation environments and a real-robot setting, SOLE-R1 enables zero-shot online RL from random initialization: robots learn previously unseen manipulation tasks without ground-truth rewards, success indicators, demonstrations, or task-specific tuning. SOLE-R1 succeeds on 24 unseen tasks and substantially outperforms strong vision-language rewarders, including GPT-5 and Gemini-3-Pro, while exhibiting markedly greater robustness to reward hacking.
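The core mechanism the abstract describes, using a video-language model's dense task-progress estimate as the only reward for online RL, can be illustrated with a minimal sketch. The paper does not publish implementation details, so everything below is an assumption: `progress_estimate` is a hypothetical stand-in for SOLE-R1's per-timestep progress output, and the reward is formed as the change in estimated progress between timesteps (a potential-style shaping; the paper's exact reward formulation may differ).

```python
def progress_estimate(frames, goal):
    # Hypothetical stand-in for SOLE-R1: a real model would run
    # spatiotemporal chain-of-thought over the raw video frames and the
    # natural-language goal, returning estimated task progress in [0, 1].
    # Here we fake it with a simple monotone function of episode length.
    return min(1.0, 0.1 * len(frames))

def rollout_with_vlm_reward(env_step, goal, horizon=10):
    """Collect one episode where the only reward signal is the change in
    the evaluator's dense progress estimate (no ground-truth reward,
    no success indicator, no demonstrations)."""
    frames, rewards = [], []
    prev_progress = 0.0
    for t in range(horizon):
        frames.append(env_step(t))               # raw video observation
        p = progress_estimate(frames, goal)      # dense progress in [0, 1]
        rewards.append(p - prev_progress)        # reward = progress delta
        prev_progress = p
    return rewards

rewards = rollout_with_vlm_reward(lambda t: f"frame_{t}",
                                  "stack the red block on the blue block")
```

With a delta-of-progress reward, the episode return telescopes to the final progress estimate, which is one reason dense progress signals are attractive for online RL compared with sparse success flags.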