Generalizable Dense Reward for Long-Horizon Robotic Tasks

arXiv cs.RO / 4/2/2026


Key Points

  • The paper argues that robotic foundation policies trained mainly with imitation learning often fail on long-horizon tasks due to distribution shift and compounding errors, and that RL finetuning typically requires manual reward engineering to generalize across tasks.
  • It introduces VLLR (Generalizable Dense Reward for Long-Horizon Robotic Tasks), which combines an extrinsic dense reward derived from LLM/VLM progress recognition with an intrinsic reward based on the policy’s self-certainty to guide learning step-by-step.
  • VLLR uses LLMs to decompose tasks into verifiable subtasks and VLMs to estimate progress, enabling a value-function initialization via a brief warm-up phase that avoids the high inference cost of dense reward computation throughout training.
  • Ablation results show that VLM-based value initialization mainly improves task completion efficiency, while self-certainty most strongly boosts success rates, especially on out-of-distribution tasks.
  • On the CHORES benchmark (mobile manipulation and navigation), VLLR reports up to 56% absolute success-rate gains over the pretrained policy, up to 5% over existing RL finetuning on in-distribution tasks, and up to 10% gains on out-of-distribution tasks without manual reward engineering.
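The reward design described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `self_certainty` formulation (negative entropy of the policy's action distribution) and the weighting scheme are assumptions, and `vlm_progress` values would come from a VLM progress recognizer in practice.

```python
import numpy as np

def self_certainty(action_logits):
    """Intrinsic reward: the policy's confidence in its own action
    distribution, measured here as negative entropy (a common proxy;
    the paper's exact formulation may differ)."""
    probs = np.exp(action_logits - action_logits.max())
    probs /= probs.sum()
    return float(np.sum(probs * np.log(probs + 1e-12)))  # equals -entropy

def dense_reward(vlm_progress_t, vlm_progress_prev, action_logits, beta=0.1):
    """Per-step reward: extrinsic progress delta (from a VLM-style
    progress estimate in [0, 1]) plus a weighted intrinsic
    self-certainty term. `beta` is a hypothetical mixing weight."""
    extrinsic = vlm_progress_t - vlm_progress_prev  # progress made this step
    intrinsic = self_certainty(action_logits)
    return extrinsic + beta * intrinsic
```

A confident (peaked) action distribution yields a higher intrinsic term than a uniform one, so the policy is nudged toward states where it acts decisively, while the extrinsic term rewards measurable task progress.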

Abstract

Existing robotic foundation policies are trained primarily via large-scale imitation learning. While such models demonstrate strong capabilities, they often struggle with long-horizon tasks due to distribution shift and error accumulation. Reinforcement learning (RL) can finetune these models, but it does not work well across diverse tasks without manual reward engineering. We propose VLLR, a dense reward framework combining (1) an extrinsic reward from Large Language Models (LLMs) and Vision-Language Models (VLMs) for task progress recognition, and (2) an intrinsic reward based on policy self-certainty. VLLR uses LLMs to decompose tasks into verifiable subtasks and VLMs to estimate progress, initializing the value function during a brief warm-up phase and thereby avoiding prohibitive inference cost during full training; self-certainty then provides per-step intrinsic guidance throughout PPO finetuning. Ablation studies reveal complementary benefits: VLM-based value initialization primarily improves task completion efficiency, while self-certainty primarily enhances success rates, particularly on out-of-distribution tasks. On the CHORES benchmark covering mobile manipulation and navigation, VLLR achieves up to 56% absolute success-rate gains over the pretrained policy, up to 5% gains over state-of-the-art RL finetuning methods on in-distribution tasks, and up to 10% gains on out-of-distribution tasks, all without manual reward engineering. Additional visualizations can be found at https://silongyong.github.io/vllr_project_page/
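The warm-up idea in the abstract, fitting the value function to VLM progress scores once, so expensive VLM queries are not needed throughout RL training, can be sketched with a toy linear value head. The linear parameterization, learning rate, and epoch count are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def warm_up_value_head(states, vlm_progress, lr=0.1, epochs=200):
    """Warm-up phase sketch: regress a linear value head onto VLM
    progress scores collected once, so that PPO finetuning can proceed
    afterwards without further VLM inference calls.

    states:       (n, d) array of state features (hypothetical encoding)
    vlm_progress: (n,) array of progress scores from the VLM estimator
    """
    w = np.zeros(states.shape[1])
    for _ in range(epochs):
        preds = states @ w
        # Mean-squared-error gradient with respect to the weights
        grad = states.T @ (preds - vlm_progress) / len(states)
        w -= lr * grad
    return w
```

After this one-off fit, `states @ w` serves as the initialized value estimate during PPO, replacing repeated (and costly) VLM progress queries.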