Learning to Grasp Anything by Playing with Random Toys

arXiv cs.RO / 4/7/2026


Key Points

  • The paper addresses a key limitation in robotic manipulation: grasping policies often fail to generalize to novel objects, reducing real-world usefulness.
  • It proposes that robots can learn broadly generalizable grasping from a small set of “random toys” assembled from four simple shape primitives (spheres, cuboids, cylinders, rings).
  • The authors identify object-centric visual representation—created via a detection pooling mechanism—as the critical driver of robust zero-shot generalization to real-world objects.
  • Across simulation and physical robot experiments, the approach achieves a 67% real-world grasp success rate on the YCB dataset, outperforming state-of-the-art methods that require substantially more in-domain data.
  • The study also analyzes scaling behavior, showing how performance changes with the number/diversity of training toys and the number of demonstrations per toy, and releases code/checkpoints/datasets for reuse.
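The "random toys" idea above is easy to picture in code. The following is a minimal sketch, not the authors' generator: it assumes a toy can be represented as a list of primitive parts, each with a hypothetical `shape`, `scale`, and `offset` field; the actual dataset format and sampling ranges are not specified in this summary.

```python
import random

# The four shape primitives named in the paper.
PRIMITIVES = ["sphere", "cuboid", "cylinder", "ring"]

def random_toy(num_parts=3, seed=None):
    """Sample a toy as a list of primitive parts with random sizes and offsets.

    `scale` and `offset` ranges here are illustrative placeholders, not the
    paper's actual parameters.
    """
    rng = random.Random(seed)
    parts = []
    for _ in range(num_parts):
        parts.append({
            "shape": rng.choice(PRIMITIVES),
            "scale": round(rng.uniform(0.02, 0.08), 3),               # metres
            "offset": [round(rng.uniform(-0.05, 0.05), 3) for _ in range(3)],
        })
    return parts

# Example: a reproducible 4-part toy.
toy = random_toy(num_parts=4, seed=0)
```

Sampling with a fixed seed makes each toy reproducible, which matters for the paper's scaling study: the number and diversity of toys can then be varied in a controlled way.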

Abstract

Robotic manipulation policies often struggle to generalize to novel objects, limiting their real-world utility. In contrast, cognitive science suggests that children develop generalizable dexterous manipulation skills by mastering a small set of simple toys and then applying that knowledge to more complex items. Inspired by this, we study whether similar generalization capabilities can also be achieved by robots. Our results indicate robots can learn generalizable grasping using randomly assembled objects composed of just four shape primitives: spheres, cuboids, cylinders, and rings. We show that training on these "toys" enables robust generalization to real-world objects, yielding strong zero-shot performance. Crucially, we find the key to this generalization is an object-centric visual representation induced by our proposed detection pooling mechanism. Evaluated in both simulation and on physical robots, our model achieves a 67% real-world grasping success rate on the YCB dataset, outperforming state-of-the-art approaches that rely on substantially more in-domain data. We further study how zero-shot generalization performance scales by varying the number and diversity of training toys and the demonstrations per toy. We believe this work offers a promising path to scalable and generalizable learning in robotic manipulation. Demonstration videos, code, checkpoints, and our dataset are available on our project page: https://lego-grasp.github.io/ .
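The "detection pooling" mechanism is not detailed in this summary, but the general pattern of building an object-centric representation from detections can be sketched as follows: pool backbone features inside each detected bounding box so the policy sees one vector per object rather than a scene-level feature map. This is a hedged NumPy illustration of that idea, not the paper's implementation; the function name and box format are assumptions.

```python
import numpy as np

def detection_pool(feature_map, boxes):
    """Average-pool backbone features inside each detected box.

    feature_map: (C, H, W) array of visual features.
    boxes: iterable of (x0, y0, x1, y1) in feature-map coordinates.
    Returns an (N, C) array of per-object embeddings.
    """
    _, H, W = feature_map.shape
    pooled = []
    for (x0, y0, x1, y1) in boxes:
        # Clip the box to the feature grid and keep at least one cell.
        x0 = max(0, int(x0))
        y0 = max(0, int(y0))
        x1 = min(W, max(int(x1), x0 + 1))
        y1 = min(H, max(int(y1), y0 + 1))
        region = feature_map[:, y0:y1, x0:x1]      # (C, h, w) crop
        pooled.append(region.mean(axis=(1, 2)))    # one (C,) vector per object
    return np.stack(pooled)                        # (N, C)

# Example: two detections on an 8x8 feature grid with 16 channels.
feats = np.random.rand(16, 8, 8)
objs = detection_pool(feats, [(0, 0, 4, 4), (4, 4, 8, 8)])
```

Because each embedding depends only on the features inside its box, the representation is largely invariant to background and scene layout, which is one plausible reason an object-centric view transfers zero-shot from simulated toys to real YCB objects.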