Latent Action Diffusion for Cross-Embodiment Manipulation

arXiv cs.RO / 3/23/2026

Key Points

  • The paper introduces diffusion policies learned in a latent action space to unify diverse end-effector actions across embodiments.
  • It trains encoders with a contrastive loss to create a semantically aligned latent action space for anthropomorphic hands, a human hand, and a parallel jaw gripper.
  • By co-training across end-effectors in this latent space, a single policy can control multiple robots and achieve up to 25.3% higher manipulation success.
  • The approach reduces data collection needs for new robot morphologies and accelerates generalization across embodiments, enabling scalable multi-robot learning.
  • It offers a new method to unify action spaces across robot setups and facilitate data sharing.
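The contrastive alignment described above can be sketched in a few lines. The snippet below is an illustrative toy, not the paper's implementation: the action dimensions, the linear encoders, and the symmetric InfoNCE loss are all assumptions standing in for the learned encoders that map each end-effector's raw actions into a shared latent space, where paired actions (the same manipulation performed by different embodiments) are pulled together and unpaired ones pushed apart.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical action dimensions: a 16-DoF hand and a 1-DoF gripper (illustrative).
HAND_DIM, GRIPPER_DIM, LATENT_DIM = 16, 1, 8

# Random linear maps standing in for the paper's trained encoders.
W_hand = rng.normal(size=(HAND_DIM, LATENT_DIM))
W_grip = rng.normal(size=(GRIPPER_DIM, LATENT_DIM))

def encode(actions, W):
    """Project raw end-effector actions into the shared latent space, L2-normalized."""
    z = actions @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def info_nce(z_a, z_b, temperature=0.1):
    """InfoNCE loss: matching rows are positives, all other rows are negatives."""
    logits = z_a @ z_b.T / temperature            # (B, B) pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # -log p(positive) averaged over batch

# A toy batch of paired actions (e.g., the same grasp recorded on both end-effectors).
batch = 32
hand_actions = rng.normal(size=(batch, HAND_DIM))
grip_actions = rng.normal(size=(batch, GRIPPER_DIM))

loss = info_nce(encode(hand_actions, W_hand), encode(grip_actions, W_grip))
print(f"contrastive loss: {loss:.3f}")
```

In training, the gradient of this loss would update both encoders so that semantically equivalent actions from different embodiments land near each other in the latent space, which is what lets a single downstream policy treat them interchangeably.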

Abstract

End-to-end learning is emerging as a powerful paradigm for robotic manipulation, but its effectiveness is limited by data scarcity and the heterogeneity of action spaces across robot embodiments. In particular, diverse action spaces across different end-effectors create barriers for cross-embodiment learning and skill transfer. We address this challenge through diffusion policies learned in a latent action space that unifies diverse end-effector actions. We first show that we can learn a semantically aligned latent action space for anthropomorphic robotic hands, a human hand, and a parallel jaw gripper using encoders trained with a contrastive loss. Second, we show that by using our proposed latent action space for co-training on manipulation data from different end-effectors, we can utilize a single policy for multi-robot control and obtain up to 25.3% improved manipulation success rates, indicating successful skill transfer despite a significant embodiment gap. Our approach using latent cross-embodiment policies presents a new method to unify different action spaces across embodiments, enabling efficient multi-robot control and data sharing across robot setups. This unified representation significantly reduces the need for extensive data collection for each new robot morphology, accelerates generalization across embodiments, and ultimately facilitates more scalable and efficient robotic learning.
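The abstract's "diffusion policies learned in a latent action space" can be made concrete with a minimal DDPM-style sketch. Everything below is an assumption for illustration (the linear noise schedule, the timestep count, the latent dimension are not from the paper): the policy would generate a latent action by iteratively denoising noise, and each embodiment's decoder would then map that latent back to its own action space. Here we only show the forward noising process and the closed-form reconstruction that a perfect noise predictor would enable.

```python
import numpy as np

rng = np.random.default_rng(1)
LATENT_DIM, T = 8, 50

# Illustrative linear beta schedule (a common DDPM default, not the paper's choice).
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)  # cumulative product used in the closed-form q(z_t | z_0)

def q_sample(z0, t, eps):
    """Forward process: noise a clean latent action z0 to timestep t in one shot."""
    return np.sqrt(alpha_bars[t]) * z0 + np.sqrt(1.0 - alpha_bars[t]) * eps

# A clean latent action (in practice, produced by encoding a demonstrated action).
z0 = rng.normal(size=(LATENT_DIM,))
eps = rng.normal(size=(LATENT_DIM,))
zt = q_sample(z0, T - 1, eps)

# A trained policy predicts eps from (zt, t, observation); here we use the true eps
# as an oracle to show that the noising equation inverts exactly.
z0_hat = (zt - np.sqrt(1.0 - alpha_bars[T - 1]) * eps) / np.sqrt(alpha_bars[T - 1])
print("max reconstruction error:", np.abs(z0_hat - z0).max())
```

Co-training then amounts to running this denoising objective on latent actions encoded from every end-effector's data, so demonstrations from a human hand or a gripper all supervise the same policy.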