Embodied-R1: Reinforced Embodied Reasoning for General Robotic Manipulation

arXiv cs.RO / 4/7/2026

Key Points

  • The paper targets the “seeing-to-doing gap” in embodied AI by introducing “pointing” as a shared, embodiment-agnostic intermediate representation between vision-language understanding and robot action primitives (a minimal interface sketch follows this list).
  • It presents Embodied-R1, a 3B vision-language model trained specifically for embodied reasoning and pointing, alongside a new large dataset, Embodied-Points-200K, built from multiple embodied and general visual reasoning sources.
  • The model is trained with a two-stage reinforced fine-tuning (RFT) curriculum that uses a specialized multi-task reward design to improve embodied pointing behaviors (a speculative reward sketch follows the Abstract).
  • Embodied-R1 achieves state-of-the-art results on 11 embodied spatial and pointing benchmarks and shows strong zero-shot generalization, reaching a 56.2% success rate in SIMPLEREnv and 87.5% across eight real-world XArm tasks without any task-specific fine-tuning.
  • The model also maintains robustness under diverse visual disturbances, suggesting that the pointing-centric representation, combined with reinforced fine-tuning, offers a generalizable way to bridge perception and action in robotics.
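
To make the pointing idea concrete, here is a minimal sketch of what such an interface could look like: the VLM emits only 2D image points (or short point traces), and a thin embodiment-specific adapter converts them into action primitives via depth lookup and camera calibration. The names here (PointingOutput, to_reach_primitive, etc.) are illustrative assumptions, not the paper's actual API.

```python
from dataclasses import dataclass, field
from typing import List, Tuple
import numpy as np

@dataclass
class PointingOutput:
    """What the VLM emits for one (image, instruction) pair: pixels only."""
    target_point: Tuple[int, int]  # (u, v) pixel to act on
    trace: List[Tuple[int, int]] = field(default_factory=list)  # optional waypoints

def pixel_to_world(uv, depth_m, K, T_world_cam):
    """Back-project a pixel to a 3D world point (standard pinhole model)."""
    u, v = uv
    x = (u - K[0, 2]) * depth_m / K[0, 0]
    y = (v - K[1, 2]) * depth_m / K[1, 1]
    p_cam = np.array([x, y, depth_m, 1.0])
    return (T_world_cam @ p_cam)[:3]

def to_reach_primitive(out, depth_m, K, T_world_cam):
    """Embodiment-specific adapter: a new robot needs a new adapter, not a new VLM."""
    return {"primitive": "reach",
            "target_xyz": pixel_to_world(out.target_point, depth_m, K, T_world_cam)}
```

The design point is that everything robot-specific (calibration, kinematics, primitive vocabulary) lives behind the adapter, which is what makes the pointing representation embodiment-agnostic.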

Abstract

Generalization in embodied AI is hindered by the "seeing-to-doing gap," which stems from data scarcity and embodiment heterogeneity. To address this, we pioneer "pointing" as a unified, embodiment-agnostic intermediate representation, defining four core embodied pointing abilities that bridge high-level vision-language comprehension with low-level action primitives. We introduce Embodied-R1, a 3B Vision-Language Model (VLM) specifically designed for embodied reasoning and pointing. We use a wide range of embodied and general visual reasoning datasets as sources to construct a large-scale dataset, Embodied-Points-200K, which supports key embodied pointing capabilities. We then train Embodied-R1 using a two-stage Reinforced Fine-tuning (RFT) curriculum with a specialized multi-task reward design. Embodied-R1 achieves state-of-the-art performance on 11 embodied spatial and pointing benchmarks. Critically, it demonstrates robust zero-shot generalization by achieving a 56.2% success rate in the SIMPLEREnv and 87.5% across 8 real-world XArm tasks without any task-specific fine-tuning, representing a 62% improvement over strong baselines. Furthermore, the model exhibits high robustness against diverse visual disturbances. Our work shows that a pointing-centric representation, combined with an RFT training paradigm, offers an effective and generalizable pathway to closing the perception-action gap in robotics.
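
The abstract only names a "specialized multi-task reward design," so the following is a speculative sketch of what a multi-task pointing reward could look like inside an RFT loop: a shared format term that checks the model actually emitted parseable coordinates, plus a task-dependent accuracy term (e.g., whether the predicted point lands inside a ground-truth object mask, or a distance-based score for traces). All function names and weightings here are assumptions, not the paper's implementation.

```python
import re
import numpy as np

POINT_RE = re.compile(r"\((\d+),\s*(\d+)\)")  # e.g. "(212, 348)"

def parse_point(text):
    """Extract the first (u, v) point from the model's text output, if any."""
    m = POINT_RE.search(text)
    return (int(m.group(1)), int(m.group(2))) if m else None

def format_reward(text):
    """1.0 if the output contains a parseable point, else 0.0."""
    return 1.0 if parse_point(text) is not None else 0.0

def point_in_mask_reward(text, gt_mask):
    """1.0 if the predicted pixel falls inside the ground-truth object mask."""
    pt = parse_point(text)
    if pt is None:
        return 0.0
    u, v = pt
    h, w = gt_mask.shape
    return float(0 <= v < h and 0 <= u < w and bool(gt_mask[v, u]))

def distance_reward(text, gt_point, tol_px=50.0):
    """Decays smoothly with pixel distance to a reference point (for traces etc.)."""
    pt = parse_point(text)
    if pt is None:
        return 0.0
    d = np.hypot(pt[0] - gt_point[0], pt[1] - gt_point[1])
    return float(np.exp(-d / tol_px))

def multitask_reward(task, text, gt, w_format=0.1):
    """Combine a shared format term with a task-specific accuracy term."""
    accuracy = {"grounding": point_in_mask_reward,
                "trace": distance_reward}[task](text, gt)
    return w_format * format_reward(text) + (1 - w_format) * accuracy
```

A reward of this shape is verifiable (no learned reward model needed), which is one plausible reason pointing tasks lend themselves to reinforced fine-tuning.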