VLBiMan: Vision-Language Anchored One-Shot Demonstration Enables Generalizable Bimanual Robotic Manipulation

arXiv cs.RO / 5/4/2026

Key Points

  • VLBiMan is a vision-language anchored robotic framework that learns generalizable bimanual manipulation skills from a single human demonstration by decomposing tasks into reusable components.
  • The method keeps invariant “primitive” skills as anchors while dynamically adapting the adjustable components through vision-language grounding, avoiding policy retraining when scenes change (a minimal sketch of this decomposition follows this list).
  • It addresses real-world scene ambiguities such as background variation, object repositioning, visual clutter, and external disturbances via semantic parsing and geometric feasibility constraints.
  • Experiments show VLBiMan reduces required demonstrations versus imitation-learning baselines, supports compositional generalization through atomic skill splicing, improves robustness to novel but semantically similar objects, and transfers across different robot embodiments without retraining.
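
To make the anchor/adjustable split concrete, here is a minimal Python sketch of one way such a decomposition could be represented. `Anchor`, `Skill`, and `ground_object` are hypothetical names for illustration, not the paper's API; the grounding call stands in for whatever open-vocabulary perception module the system actually uses.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class Anchor:
    """Invariant motion primitive extracted from the single demonstration,
    stored as end-effector waypoints relative to the task object's frame."""
    name: str
    relative_waypoints: list[np.ndarray]  # 4x4 homogeneous transforms


@dataclass
class Skill:
    anchor: Anchor
    target_query: str  # language handle for the object to ground, e.g. "the hammer"


def ground_object(observation: dict, query: str) -> np.ndarray:
    """Placeholder for vision-language grounding: locate `query` in the current
    observation and return its pose as a 4x4 matrix. A real system would also
    apply geometric feasibility checks (reachability, collision-freeness)
    before committing to a pose."""
    raise NotImplementedError


def instantiate(skill: Skill, observation: dict) -> list[np.ndarray]:
    """Adjustable part: re-anchor the invariant waypoints to wherever the
    target object sits in *this* scene, so no policy retraining is needed."""
    object_pose = ground_object(observation, skill.target_query)
    return [object_pose @ wp for wp in skill.anchor.relative_waypoints]
```

Under this reading, background changes, repositioned objects, or clutter only perturb the grounding step; the recorded primitive itself is replayed unchanged in the newly resolved frame.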

Abstract

Achieving generalizable bimanual manipulation requires systems that can learn efficiently from minimal human input while adapting to real-world uncertainties and diverse embodiments. Existing approaches face a dilemma: imitation policy learning demands extensive demonstrations to cover task variations, while modular methods often lack flexibility in dynamic scenes. We introduce VLBiMan, a framework that derives reusable skills from a single human example through task-aware decomposition, preserving invariant primitives as anchors while dynamically adapting adjustable components via vision-language grounding. This adaptation mechanism resolves scene ambiguities caused by background changes, object repositioning, or visual clutter without policy retraining, leveraging semantic parsing and geometric feasibility constraints. Moreover, the system inherits human-like hybrid control capabilities, enabling mixed synchronous and asynchronous use of both arms. Extensive experiments validate VLBiMan across tool-use and multi-object tasks, demonstrating: (1) a drastic reduction in demonstration requirements compared to imitation baselines, (2) compositional generalization through atomic skill splicing for long-horizon tasks, (3) robustness to novel but semantically similar objects and external disturbances, and (4) strong cross-embodiment transfer, showing that skills learned from human demonstrations can be instantiated on different robotic platforms without retraining. By bridging human priors with vision-language anchored adaptation, our work takes a step toward practical and versatile dual-arm manipulation in unstructured settings.
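
The abstract's claims about atomic skill splicing and mixed synchronous/asynchronous arm use suggest a scheduler along the following lines. This is a minimal sketch under our own assumptions about the execution model, not the paper's algorithm; `Step`, `execute`, and the stand-in skills are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Step:
    """One slice of a long-horizon plan, spliced from atomic skills."""
    left: Optional[Callable[[], None]] = None   # atomic skill for the left arm
    right: Optional[Callable[[], None]] = None  # atomic skill for the right arm
    synchronous: bool = True                    # True: barrier after this step


def execute(plan: list[Step]) -> None:
    # One single-worker queue per arm keeps each arm's skills strictly
    # ordered, while the two arms can still run concurrently.
    left_arm = ThreadPoolExecutor(max_workers=1)
    right_arm = ThreadPoolExecutor(max_workers=1)
    try:
        for step in plan:
            futures = []
            if step.left:
                futures.append(left_arm.submit(step.left))
            if step.right:
                futures.append(right_arm.submit(step.right))
            if step.synchronous:
                # Barrier: because each arm's queue is serial, waiting on
                # these futures also transitively waits on any earlier
                # asynchronous work queued for the same arm.
                for f in futures:
                    f.result()
            # Asynchronous steps fall through, so the next step is enqueued
            # while this one is still running.
    finally:
        left_arm.shutdown(wait=True)
        right_arm.shutdown(wait=True)


# Splicing example with stand-in skills (real ones would be instantiated
# skills like those sketched under Key Points).
plan = [
    Step(left=lambda: print("left: grasp bowl"),
         right=lambda: print("right: grasp scoop")),      # synchronous
    Step(left=lambda: print("left: hold bowl steady"),
         right=lambda: print("right: scoop and pour"),
         synchronous=False),                              # asynchronous
]
execute(plan)
```

The per-arm serial queues are the key design choice in this sketch: they preserve each arm's skill ordering while letting synchronous steps act as barriers and asynchronous steps overlap, which is one plausible way to realize the hybrid control behavior the abstract describes.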