WildDet3D: Scaling Promptable 3D Detection in the Wild

arXiv cs.CV · April 13, 2026


Key Points

  • The paper introduces WildDet3D, a unified geometry-aware architecture for monocular 3D object detection that supports multiple prompt types (text, point, and box) and can ingest auxiliary depth signals at inference time.
  • It addresses key open-world limitations of prior work by enabling promptable detection across categories rather than being restricted to a single prompt modality.
  • The authors also release WildDet3D-Data, an open 3D detection dataset exceeding 1M images spanning 13.5K categories, built from candidate 3D boxes derived from 2D annotations and filtered via human verification.
  • WildDet3D reportedly sets new state-of-the-art results on multiple benchmarks, including open-world text+box performance on WildDet3D-Bench (22.6/24.8 AP3D) and Omni3D (34.2/36.4 AP3D).
  • Adding depth cues at inference provides large improvements, with an average gain of +20.7 AP across evaluated settings, and strong zero-shot scores on Argoverse 2 and ScanNet.
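The bullets above describe a single detector that accepts text, point, or box prompts and can optionally consume auxiliary depth at inference time. The paper's actual interface is not shown here; the following is a minimal, hypothetical sketch of what such a unified promptable entry point could look like, with all names (`TextPrompt`, `detect`, `depth_at`, etc.) invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Optional, Tuple, Union

# Illustrative prompt types mirroring the three modalities in the
# summary (text, point, box). These are NOT the authors' API.
@dataclass
class TextPrompt:
    query: str                                # e.g. "traffic cone"

@dataclass
class PointPrompt:
    xy: Tuple[float, float]                   # pixel coordinates (u, v)

@dataclass
class BoxPrompt:
    xyxy: Tuple[float, float, float, float]   # 2D box in pixels

Prompt = Union[TextPrompt, PointPrompt, BoxPrompt]

@dataclass
class Box3D:
    center: Tuple[float, float, float]        # camera-frame (x, y, z), meters
    size: Tuple[float, float, float]          # (w, h, l), meters
    yaw: float                                # heading angle, radians

def detect(prompt: Prompt,
           depth_at: Optional[Callable[[float, float], float]] = None) -> Box3D:
    """Toy stand-in for a promptable monocular 3D detector.

    `depth_at` is an optional callable (u, v) -> metric depth. When
    supplied, it overrides the detector's own depth estimate at the
    prompt's reference pixel, mimicking how auxiliary depth cues can
    be injected at inference time.
    """
    # Placeholder monocular prediction (a fixed guess for illustration).
    center = [0.0, 0.0, 10.0]
    if depth_at is not None:
        # Pick a reference pixel from the prompt, if it has one.
        if isinstance(prompt, PointPrompt):
            u, v = prompt.xy
        elif isinstance(prompt, BoxPrompt):
            u = (prompt.xyxy[0] + prompt.xyxy[2]) / 2.0
            v = (prompt.xyxy[1] + prompt.xyxy[3]) / 2.0
        else:
            u, v = 0.0, 0.0
        center[2] = depth_at(u, v)
    return Box3D(tuple(center), (0.5, 0.5, 0.5), 0.0)
```

For example, `detect(PointPrompt((320.0, 240.0)), depth_at=lambda u, v: 4.2)` returns a box whose depth comes from the auxiliary signal rather than the placeholder monocular estimate, which is the rough shape of the "depth cues at inference" mechanism the summary reports.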

Abstract

Understanding objects in 3D from a single image is a cornerstone of spatial intelligence. A key step toward this goal is monocular 3D object detection: recovering the extent, location, and orientation of objects from an input RGB image. To be practical in the open world, such a detector must generalize beyond closed-set categories, support diverse prompt modalities, and leverage geometric cues when available. Progress is hampered by two bottlenecks: existing methods are designed for a single prompt type and lack a mechanism to incorporate additional geometric cues, and current 3D datasets cover only narrow categories in controlled environments, limiting open-world transfer. In this work we address both gaps. First, we introduce WildDet3D, a unified geometry-aware architecture that natively accepts text, point, and box prompts and can incorporate auxiliary depth signals at inference time. Second, we present WildDet3D-Data, the largest open 3D detection dataset to date, constructed by generating candidate 3D boxes from existing 2D annotations and retaining only human-verified ones, yielding over 1M images across 13.5K categories in diverse real-world scenes. WildDet3D establishes a new state-of-the-art across multiple benchmarks and settings. In the open-world setting, it achieves 22.6/24.8 AP3D on our newly introduced WildDet3D-Bench with text and box prompts, respectively. On Omni3D, it reaches 34.2/36.4 AP3D with text and box prompts, respectively. In zero-shot evaluation, it achieves 40.3/48.9 ODS on Argoverse 2 and ScanNet, respectively. Notably, incorporating depth cues at inference time yields substantial additional gains (+20.7 AP on average across settings).
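The abstract says the dataset is built by generating candidate 3D boxes from existing 2D annotations and keeping only human-verified ones. One plausible first step for such candidate generation, sketched below under assumptions the paper does not spell out, is pinhole back-projection of a 2D box with a single per-object depth value; the function name and the crude length-equals-width heuristic are illustrative, not the authors' pipeline.

```python
from typing import Dict, Tuple

def lift_2d_box(xyxy: Tuple[float, float, float, float],
                depth: float,
                fx: float, fy: float,
                cx: float, cy: float) -> Dict[str, Tuple[float, float, float]]:
    """Lift a 2D image box to a coarse 3D box candidate.

    Assumes a pinhole camera with focal lengths (fx, fy) and principal
    point (cx, cy), and a single metric depth for the whole object
    (e.g. from a monocular depth estimator). The candidate would then
    be passed to human verification, per the paper's description.
    """
    x1, y1, x2, y2 = xyxy
    # Back-project the box center at the given depth.
    u, v = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    # Approximate metric extents from pixel extents at that depth.
    w = (x2 - x1) * depth / fx
    h = (y2 - y1) * depth / fy
    # Crude heuristic: assume length ~ width (illustration only).
    return {"center": (X, Y, depth), "size": (w, h, w)}
```

With `fx = fy = 1000`, principal point `(320, 240)`, a 40x40-pixel box centered on the principal point, and a depth of 5 m, this yields a candidate centered at `(0, 0, 5)` with roughly 0.2 m extents, which illustrates why such candidates still need human filtering before inclusion in a dataset.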