Show, Don't Tell: Detecting Novel Objects by Watching Human Videos

arXiv cs.CV / 3/16/2026

Key Points

  • The paper introduces "Show, Don't Tell," a self-supervised approach that trains bespoke object detectors directly from human demonstrations without relying on language descriptions.
  • The system automatically creates a training dataset from the demonstration itself and deploys the resulting detector on the robot to recognize the novel object instances involved in the task.
  • The approach eliminates expensive language-based prompt engineering used by open-set detectors and outperforms state-of-the-art methods for detecting manipulated objects.
  • The authors implement an integrated, real-world robotic system that deploys the paradigm to enable fast adaptation to unseen objects during demonstrations.

Abstract

How can a robot quickly identify and recognize new objects shown to it during a human demonstration? Existing closed-set object detectors frequently fail at this because the objects are out-of-distribution. While open-set detectors (e.g., VLMs) sometimes succeed, they often require expensive and tedious human-in-the-loop prompt engineering to uniquely recognize novel object instances. In this paper, we present a self-supervised system that eliminates the need for tedious language descriptions and expensive prompt engineering by training a bespoke object detector on an automatically created dataset, supervised by the human demonstration itself. In our approach, "Show, Don't Tell," we show the detector the specific objects of interest during the demonstration, rather than telling the detector about these objects via complex language descriptions. By bypassing language altogether, this paradigm enables us to quickly train bespoke detectors tailored to the relevant objects observed in human task demonstrations. We develop an integrated on-robot system to deploy our "Show, Don't Tell" paradigm of automatic dataset creation and novel object-detection on a real-world robot. Empirical results demonstrate that our pipeline significantly outperforms state-of-the-art detection and recognition methods for manipulated objects, leading to improved task completion for the robot.
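The core data flow described in the abstract, using the demonstration itself as supervision to train a per-object detector, can be illustrated with a toy sketch. Everything below is an illustrative assumption, not the paper's actual architecture: the foreground masks, the mean-color "appearance model," and all function names are hypothetical stand-ins for whatever tracking and detection models the real system uses.

```python
# Hypothetical sketch of the "Show, Don't Tell" data flow: demonstration
# frames yield automatic pseudo-labels (bounding boxes) for the manipulated
# object, which supervise a bespoke detector for that object instance.
# The mean-color appearance model is a toy stand-in for a learned detector.
import numpy as np

def pseudo_label(frame, fg_threshold=0.5):
    """Derive a bounding box for the demonstrated object from a
    foreground mask (e.g., one produced by hand/object tracking)."""
    ys, xs = np.where(frame["mask"] > fg_threshold)
    return (xs.min(), ys.min(), xs.max(), ys.max())

def crop(image, box):
    """Extract the labeled region from a demonstration frame."""
    x0, y0, x1, y1 = box
    return image[y0:y1 + 1, x0:x1 + 1]

def train_bespoke_detector(demo_frames):
    """'Show': fit the object's appearance model (here, a mean color)
    on crops cut out by the automatically generated pseudo-labels."""
    crops = [crop(f["image"], pseudo_label(f)) for f in demo_frames]
    return np.mean([c.reshape(-1, 3).mean(axis=0) for c in crops], axis=0)

def detect(appearance, candidate_crops, max_dist=30.0):
    """Deployment: score candidate regions against the learned appearance;
    return the index of the best match, or None if nothing is close."""
    dists = [np.linalg.norm(c.reshape(-1, 3).mean(axis=0) - appearance)
             for c in candidate_crops]
    best = int(np.argmin(dists))
    return best if dists[best] < max_dist else None
```

The point of the sketch is the supervision source: no language prompt appears anywhere; the demonstration frames alone define what the detector should find.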