DockAnywhere: Data-Efficient Visuomotor Policy Learning for Mobile Manipulation via Novel Demonstration Generation

arXiv cs.RO / 4/17/2026


Key Points

  • DockAnywhere introduces a data-efficient demonstration generation framework to improve viewpoint generalization for mobile manipulation when docking points vary in real environments.
  • It lifts a single demonstration into many feasible docking configurations by separating docking-dependent base motions from contact-rich manipulation skills that are invariant across viewpoints.
  • The method samples docking proposals under feasibility constraints and generates the corresponding trajectories via structure-preserving augmentation (a rough sketch of the sampling step follows this list).
  • It synthesizes consistent visual observations across viewpoints by using 3D point-cloud representations and point-level spatial editing to align observations with actions.
  • Experiments on ManiSkill and real-world platforms show substantially higher policy success rates and strong generalization to novel viewpoints from docking points not seen during training.
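
As a rough illustration of the sampling step referenced above, the sketch below rejection-samples planar base poses (x, y, yaw) on an annulus around the target object and keeps only those passing a feasibility check. The function name, the annulus radii, and the `is_feasible` predicate are hypothetical stand-ins; the paper's actual feasibility constraints are not reproduced here.

```python
import numpy as np

def sample_docking_proposals(obj_xy, n_samples=64, r_min=0.4, r_max=0.8,
                             is_feasible=lambda pose: True):
    """Rejection-sample candidate base docking poses around an object.

    Poses are drawn on an annulus whose radii keep the object within arm
    reach, oriented so the base faces the object, and kept only when a
    caller-supplied predicate (e.g., a collision check) accepts them.
    """
    proposals = []
    while len(proposals) < n_samples:
        theta = np.random.uniform(0.0, 2.0 * np.pi)     # approach direction
        r = np.random.uniform(r_min, r_max)             # standoff distance
        x = obj_xy[0] + r * np.cos(theta)
        y = obj_xy[1] + r * np.sin(theta)
        yaw = np.arctan2(obj_xy[1] - y, obj_xy[0] - x)  # face the object
        pose = np.array([x, y, yaw])
        if is_feasible(pose):                           # placeholder constraint
            proposals.append(pose)
    return np.stack(proposals)
```

Each accepted pose then seeds one augmented demonstration: the docking-dependent base-motion segment is regenerated for the new pose, while the viewpoint-invariant manipulation segment is reused.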

Abstract

Mobile manipulation is a fundamental capability that enables robots to operate in expansive environments such as homes and factories. Most existing approaches follow a two-stage paradigm in which the robot first navigates to a docking point and then performs fixed-base manipulation with powerful visuomotor policies. However, real-world mobile manipulation often suffers from a viewpoint generalization problem caused by shifts in docking points. To address this issue, we propose a low-cost demonstration generation framework named DockAnywhere, which improves viewpoint generalization under docking variability by lifting a single demonstration to diverse feasible docking configurations. Specifically, DockAnywhere lifts a trajectory to arbitrary feasible docking points by decoupling docking-dependent base motions from contact-rich manipulation skills that remain invariant across viewpoints. Docking proposals are sampled under feasibility constraints, and the corresponding trajectories are generated via structure-preserving augmentation. Visual observations are synthesized in 3D by representing the robot and objects as point clouds and applying point-level spatial editing, which keeps observations and actions consistent across viewpoints. Extensive experiments on ManiSkill and real-world platforms demonstrate that DockAnywhere substantially improves policy success rates and generalizes readily to novel viewpoints from docking points unseen during training, significantly strengthening the generalization of mobile manipulation policies in real-world deployment.
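
To make the observation-synthesis step concrete, here is a minimal sketch of one plausible form of point-level spatial editing, assuming a planar (SE(2)) base and point-cloud observations expressed in the robot's base frame: moving the docking pose then amounts to remapping every point, and any base-relative action target, by the same rigid transform, which is what keeps observations and actions consistent. The function names and frame conventions are assumptions made here, not the paper's API.

```python
import numpy as np

def base_pose_to_se3(x, y, yaw):
    """Embed a planar base pose (x, y, yaw) as a 4x4 homogeneous transform."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[0, 0], T[0, 1] = c, -s
    T[1, 0], T[1, 1] = s, c
    T[0, 3], T[1, 3] = x, y
    return T

def retarget_points(points, T_world_oldbase, T_world_newbase):
    """Re-express an (N, 3) cloud given in the old base frame in a new base
    frame: p_new = T_new^{-1} @ T_old @ p_old. Applying the same transform
    to base-relative action targets keeps the synthetic observation-action
    pairs consistent."""
    T_new_from_old = np.linalg.inv(T_world_newbase) @ T_world_oldbase
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    return (pts_h @ T_new_from_old.T)[:, :3]

if __name__ == "__main__":
    cloud_a = np.random.rand(1024, 3)               # stand-in observation
    T_a = base_pose_to_se3(0.0, -0.6, np.pi / 2)    # original docking pose
    T_b = base_pose_to_se3(0.5, -0.4, 2.0)          # sampled docking pose
    cloud_b = retarget_points(cloud_a, T_a, T_b)    # synthetic observation
```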