Boxer: Robust Lifting of Open-World 2D Bounding Boxes to 3D

arXiv cs.CV / 4/8/2026


Key Points

  • The paper introduces Boxer, a transformer-based algorithm that lifts 2D open-vocabulary detections into static, metric 3D bounding boxes using posed images and optional depth (sparse point cloud or dense depth).
  • BoxerNet forms the core lifting module, taking 2D bounding box proposals and producing 3D boxes that are then refined via multi-view fusion and geometric filtering to yield globally consistent, de-duplicated 3D results.
  • The approach leverages existing 2D open-vocabulary detectors (e.g., DETIC, OWLv2, SAM3) so the main model focuses on 3D lifting, aiming to reduce reliance on costly 3D bounding-box annotation.
  • The method extends a CuTR-style formulation by adding aleatoric uncertainty for more robust regression and supports sparse-depth inputs via median depth patch encoding; training uses over 1.2M unique 3D bounding boxes.
  • Reported results show substantial gains over prior baselines: 0.532 vs. 0.010 mAP against CuTR in egocentric settings without dense depth, and 0.412 vs. 0.250 mAP on CA-1M when dense depth is available.
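The median depth patch encoding mentioned above is described only at a high level; a minimal sketch of the likely idea, assuming a per-patch median over valid sparse-depth samples (the patch size and fill value here are illustrative, not from the paper):

```python
import numpy as np

def median_depth_patches(sparse_depth, patch=16, fill=0.0):
    """Encode a sparse depth map as a grid of per-patch median depths.

    sparse_depth: (H, W) array with 0 where no depth sample exists
    (e.g. from projecting a sparse point cloud into the image).
    Returns an (H // patch, W // patch) grid; patches with no valid
    samples receive `fill`. The median is robust to outlier points,
    which makes it a natural summary for sparse, noisy depth.
    """
    H, W = sparse_depth.shape
    gh, gw = H // patch, W // patch
    out = np.full((gh, gw), fill, dtype=np.float32)
    for i in range(gh):
        for j in range(gw):
            tile = sparse_depth[i * patch:(i + 1) * patch,
                                j * patch:(j + 1) * patch]
            valid = tile[tile > 0]
            if valid.size:
                out[i, j] = np.median(valid)
    return out
```

Such a grid can be fed to a transformer as one depth token per image patch, which lets the same network consume either dense depth or only a handful of sparse samples.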

Abstract

Detecting and localizing objects in space is a fundamental computer vision problem. While much progress has been made on 2D object detection, 3D object localization is far less explored and far from solved, especially for open-world categories. To address this challenge, we propose Boxer, an algorithm that estimates static 3D bounding boxes (3DBBs) from 2D open-vocabulary object detections, posed images, and optional depth, represented either as a sparse point cloud or as dense depth. At its core is BoxerNet, a transformer-based network which lifts 2D bounding box (2DBB) proposals into 3D, followed by multi-view fusion and geometric filtering to produce globally consistent, de-duplicated 3DBBs in metric world space. Boxer leverages the power of existing 2DBB detection algorithms (e.g., DETIC, OWLv2, SAM3) to localize objects in 2D. This allows the main BoxerNet model to focus on lifting to 3D rather than detecting, ultimately reducing the demand for costly annotated 3DBB training data. Extending the CuTR formulation, we incorporate aleatoric uncertainty for robust regression, a median depth patch encoding to support sparse depth inputs, and large-scale training with over 1.2 million unique 3DBBs. BoxerNet outperforms state-of-the-art baselines in open-world 3DBB lifting, including CuTR in egocentric settings without dense depth (0.532 vs. 0.010 mAP) and on CA-1M with dense depth available (0.412 vs. 0.250 mAP).
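The abstract names aleatoric uncertainty as the ingredient for robust regression but does not spell out the loss. A common formulation (in the spirit of heteroscedastic regression à la Kendall and Gal) has the network predict a log-scale alongside each box parameter and minimize a Laplace negative log-likelihood; the sketch below is an assumption about the general mechanism, not the paper's exact loss:

```python
import numpy as np

def aleatoric_l1_loss(pred, target, log_b):
    """Laplace negative log-likelihood for box-parameter regression.

    pred, target: predicted and ground-truth box parameters.
    log_b: per-parameter predicted log-scale. A large log_b divides
    the residual down (soft-pedaling hard or ambiguous boxes) but is
    penalized by the +log_b term, so the network cannot simply
    inflate uncertainty everywhere.
    """
    b = np.exp(log_b)
    return np.mean(np.abs(pred - target) / b + log_b)
```

The net effect is a learned, per-sample down-weighting of noisy supervision, which is one plausible reason the method tolerates imperfect 2D proposals and sparse depth.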