ExpertGen: Scalable Sim-to-Real Expert Policy Learning from Imperfect Behavior Priors

arXiv cs.RO / 4/22/2026


Key Points

  • ExpertGen is a simulation-first framework for learning robust, generalizable robotics behavior cloning policies without collecting prohibitively expensive real-world expert demonstrations.
  • It initializes a diffusion-policy behavior prior from imperfect demonstrations, which can be generated by large language models or collected from humans, before applying reinforcement learning to improve task success.
  • The RL stage optimizes the diffusion model’s initial noise while keeping the pretrained diffusion policy frozen, constraining exploration to remain within safe, human-like behavior manifolds.
  • Experiments on manipulation benchmarks show ExpertGen reaches high-quality expert policies with sparse rewards and no reward engineering, including strong performance on industrial assembly and long-horizon manipulation.
  • For sim-to-real, ExpertGen state-based policies are distilled into visuomotor policies using DAgger and deployed on real robotic hardware successfully.
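The noise-steering idea above can be illustrated with a toy stand-in: keep a "diffusion policy" frozen and search only over its initial noise to satisfy a sparse reward. This is a minimal sketch, not the paper's actual algorithm; the `frozen_policy` map, the 2-D action space, the goal, and the cross-entropy search are all illustrative assumptions standing in for the pretrained denoising network and the RL stage.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen "diffusion policy": in ExpertGen this is a pretrained
# denoising network; here it is a toy fixed map from initial noise z to an
# action, and it is never updated during the search.
def frozen_policy(z):
    return np.tanh(z)

# Sparse task reward: 1 only when the produced action lands near the goal.
goal = np.array([0.5, -0.3])
def sparse_reward(action):
    return float(np.linalg.norm(action - goal) < 0.4)

# Steer the frozen prior by optimizing the *initial noise* z with a simple
# cross-entropy method (a stand-in for the paper's RL stage).
mu, sigma = np.zeros(2), np.ones(2)
best_z, best_r = mu.copy(), 0.0
for _ in range(50):
    zs = mu + sigma * rng.normal(size=(64, 2))   # candidate initial noises
    rewards = np.array([sparse_reward(frozen_policy(z)) for z in zs])
    if rewards.max() > best_r:
        best_r = rewards.max()
        best_z = zs[np.argmax(rewards)].copy()
    if rewards.max() == 0:
        continue                                  # no signal yet: keep exploring
    elites = zs[rewards > 0]                      # noises that succeeded
    mu = elites.mean(axis=0)                      # refit the noise distribution
    sigma = elites.std(axis=0) + 0.05             # small floor keeps exploration alive
```

Because only the noise distribution is updated, every sampled action stays an output of the frozen prior, which is how exploration remains confined to the behavior manifold the prior was trained on.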

Abstract

Learning generalizable and robust behavior cloning policies requires large volumes of high-quality robotics data. While human demonstrations (e.g., through teleoperation) serve as the standard source for expert behaviors, acquiring such data at scale in the real world is prohibitively expensive. This paper introduces ExpertGen, a framework that automates expert policy learning in simulation to enable scalable sim-to-real transfer. ExpertGen first initializes a behavior prior using a diffusion policy trained on imperfect demonstrations, which may be synthesized by large language models or provided by humans. Reinforcement learning is then used to steer this prior toward high task success by optimizing the diffusion model's initial noise while keeping the original policy frozen. Freezing the pretrained diffusion policy regularizes exploration to remain within safe, human-like behavior manifolds, while also enabling effective learning with only sparse rewards. Empirical evaluations on challenging manipulation benchmarks demonstrate that ExpertGen reliably produces high-quality expert policies with no reward engineering. On industrial assembly tasks, ExpertGen achieves a 90.5% overall success rate, while on long-horizon manipulation tasks it attains 85% overall success, outperforming all baseline methods. The resulting policies exhibit dexterous control and remain robust across diverse initial configurations and failure states. To validate sim-to-real transfer, the learned state-based expert policies are further distilled into visuomotor policies via DAgger and successfully deployed on real robotic hardware.
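The DAgger distillation step mentioned at the end of the abstract follows a standard loop: roll out the student, query the expert for action labels on the states the student actually visits, aggregate, and refit. The sketch below is a deliberately minimal illustration with assumed stand-ins: a 1-D state, an "expert" policy `a* = -x`, a linear student `a = w*x`, and trivial dynamics `x' = x + a`; in ExpertGen the expert is the learned state-based policy and the student is a visuomotor policy.

```python
import numpy as np

def expert(x):
    # Frozen state-based expert (assumed form): drive the state to zero.
    return -x

def rollout(w, x0=2.0, steps=20):
    """Roll out the linear student a = w*x and record the states it visits."""
    xs, x = [], x0
    for _ in range(steps):
        xs.append(x)
        x = x + w * x  # toy dynamics: x' = x + a
    return np.array(xs)

w = 0.5                      # badly initialized student
states = np.array([2.0])     # aggregated dataset of visited states
labels = np.array([expert(2.0)])
for _ in range(10):
    xs = rollout(w)                            # visit states under the student
    states = np.concatenate([states, xs])      # DAgger: aggregate, don't replace
    labels = np.concatenate([labels, expert(xs)])
    # Least-squares fit of a = w*x on the aggregated state-action dataset.
    w = float(states @ labels / (states @ states))
```

The key design point DAgger captures is that labels are collected on the student's own state distribution, so distillation errors do not compound the way they can with plain behavior cloning on expert-only trajectories.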