AnchorD: Metric Grounding of Monocular Depth Using Factor Graphs

arXiv cs.RO / 5/5/2026


Key Points

  • The paper introduces AnchorD, a training-free framework to “ground” monocular depth estimates into metric (real-world) scale using factor graph optimization.
  • It uses patch-wise affine alignment to locally anchor monocular depth priors to raw sensor depth, aiming to correct mis-scaling while preserving geometric details and depth discontinuities.
  • The authors report improved depth accuracy across different sensors and domains without requiring any model retraining, making it practical for robotics use cases.
  • To better evaluate on difficult real-world surfaces, they release a benchmark dataset with dense ground-truth depth for non-Lambertian objects using matte spray and multi-camera fusion.
  • The implementation is provided publicly, supporting adoption and reproducibility for researchers and developers working on depth sensing and robotic perception.

Abstract

Dense and accurate depth estimation is essential for robotic manipulation, grasping, and navigation, yet currently available depth sensors are prone to errors on transparent, specular, and other non-Lambertian surfaces. To mitigate these errors, large-scale monocular depth estimation approaches provide strong structural priors, but their predictions can be skewed or mis-scaled in metric units, limiting their direct use in robotics. Thus, in this work, we propose a training-free depth grounding framework that anchors monocular depth estimation priors from a depth foundation model in raw sensor depth through factor graph optimization. Our method performs a patch-wise affine alignment, locally grounding monocular predictions in metric real-world depth while preserving fine-grained geometric structure and discontinuities. To facilitate evaluation in challenging real-world conditions, we introduce a benchmark dataset with dense scene-wide ground-truth depth in the presence of non-Lambertian objects. Ground truth is obtained via matte reflection spray and multi-camera fusion, overcoming the reliance on object-only CAD-based annotations used in prior datasets. Extensive evaluations across diverse sensors and domains demonstrate consistent improvements in depth performance without any (re-)training. We make our implementation publicly available at https://anchord.cs.uni-freiburg.de.
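To make the core idea concrete, here is a minimal, hedged sketch of patch-wise affine alignment: for each image patch, fit a scale and shift so the monocular depth prior matches the valid raw sensor depth in that patch via closed-form least squares. This is not the authors' implementation; the function name, patch size, and fallback behavior are illustrative, and it deliberately omits the factor-graph optimization that couples neighboring patches (without which independent per-patch fits can leave seams at patch borders).

```python
# Illustrative sketch of patch-wise affine depth alignment (not the paper's code).
import numpy as np

def align_patchwise(d_mono, d_sensor, patch=32):
    """Return a metrically grounded depth map.

    d_mono   : relative monocular depth prediction, shape (H, W)
    d_sensor : raw metric sensor depth, 0 where invalid, shape (H, W)
    patch    : square patch size in pixels (illustrative choice)
    """
    out = np.zeros_like(d_mono, dtype=np.float64)
    H, W = d_mono.shape
    for y in range(0, H, patch):
        for x in range(0, W, patch):
            m = d_mono[y:y + patch, x:x + patch]
            s = d_sensor[y:y + patch, x:x + patch]
            valid = s > 0
            if valid.sum() >= 2:  # need at least 2 points for scale + shift
                # Solve min_{a,b} || a * m + b - s ||^2 over valid pixels.
                A = np.stack([m[valid], np.ones(valid.sum())], axis=1)
                (a, b), *_ = np.linalg.lstsq(A, s[valid], rcond=None)
            else:
                a, b = 1.0, 0.0  # too little sensor data: keep the raw prior
            out[y:y + patch, x:x + patch] = a * m + b
    return out
```

In the paper's full method, the per-patch affine parameters would additionally be constrained by smoothness factors in a factor graph, so that texture-poor or sensor-dropout regions (e.g. transparent objects) borrow scale from well-observed neighbors instead of falling back to the raw prior.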