Reinforcement Learning with Foundation Priors: Let the Embodied Agent Efficiently Learn on Its Own

arXiv cs.RO / 4/24/2026

💬 Opinion · Developer Stack & Infrastructure · Models & Research

Key Points

  • The paper proposes Reinforcement Learning with Foundation Priors (RLFP) to make robotic manipulation reinforcement learning more practical by leveraging foundation models for guidance and feedback rather than relying on heavy manual reward engineering.
  • It introduces the Foundation-guided Actor-Critic (FAC) algorithm that helps embodied agents explore more efficiently using automatically generated/assisted reward functions.
  • The framework is designed to be sample-efficient, to require minimal yet effective reward engineering, and to be robust even when foundation-model priors are noisy.
  • Experiments show strong results: on real robots, FAC reaches an average 86% success rate after one hour of real-time learning across five dexterous tasks, and in simulation (Meta-World) it achieves 100% success in 7 of 8 tasks with under 100k frames.
  • The authors provide visualizations and code via the project website to support further research and adoption.

Abstract

Reinforcement learning (RL) is a promising approach for solving robotic manipulation tasks. However, it is challenging to apply RL algorithms directly in the real world. For one thing, RL is data-intensive and typically requires millions of interactions with environments, which is impractical in real scenarios. For another, designing reward functions manually requires heavy engineering effort. To address these issues, we leverage foundation models in this paper. We propose Reinforcement Learning with Foundation Priors (RLFP) to utilize guidance and feedback from policy, value, and success-reward foundation models. Within this framework, we introduce the Foundation-guided Actor-Critic (FAC) algorithm, which enables embodied agents to explore more efficiently with automatic reward functions. The benefits of our framework are threefold: (1) sample efficient; (2) minimal and effective reward engineering; (3) agnostic to foundation model forms and robust to noisy priors. Our method achieves remarkable performance in various manipulation tasks, both on real robots and in simulation. Across 5 dexterous tasks with real robots, FAC achieves an average success rate of 86% after one hour of real-time learning. Across 8 tasks in the simulated Meta-World, FAC achieves 100% success rates in 7 of 8 tasks with fewer than 100k frames (about 1 hour of training), outperforming baseline methods with manually designed rewards even at 1M frames. We believe the RLFP framework can enable future robots to explore and learn autonomously in the physical world for more tasks. Visualizations and code are available at https://yewr.github.io/rlfp.
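To make the idea of "guidance and feedback from policy, value, and success-reward foundation models" concrete, here is a minimal sketch of how such priors could replace manual reward engineering in an agent's training loop. All three prior functions below (`policy_prior`, `value_prior`, `success_reward`) are hypothetical stand-ins, not the paper's actual models, and the shaping/blending formulas are one common construction (potential-based shaping plus action blending), not necessarily FAC's exact mechanism.

```python
import numpy as np

# Hypothetical stubs for the three foundation priors named in the abstract.
# A real system would back these with large pretrained models.

def policy_prior(obs):
    """Coarse action suggestion from a (hypothetical) policy foundation model."""
    return np.tanh(obs[:2])  # placeholder 2-D action

def value_prior(obs):
    """Scalar task-progress estimate from a (hypothetical) value foundation model.

    Placeholder: treats distance to the origin as negative progress."""
    return float(-np.linalg.norm(obs[:2]))

def success_reward(obs):
    """Sparse 0/1 success signal from a (hypothetical) success-detection model."""
    return 1.0 if np.linalg.norm(obs[:2]) < 0.05 else 0.0

def shaped_reward(obs, next_obs, gamma=0.99):
    """Automatic reward: sparse success signal plus potential-based shaping
    derived from the value prior, so no reward function is hand-designed."""
    return (success_reward(next_obs)
            + gamma * value_prior(next_obs) - value_prior(obs))

def guided_action(actor_action, obs, beta=0.3):
    """Bias exploration toward the policy prior's suggestion by blending it
    with the learned actor's action (one simple way to inject guidance)."""
    return (1.0 - beta) * np.asarray(actor_action) + beta * policy_prior(obs)
```

Because the shaping term is potential-based (a discounted difference of the same value function), noise in the value prior changes how quickly the agent learns but not which policies are optimal, which is one way a framework like this can stay robust to imperfect foundation-model priors.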