Hierarchical Awareness Adapters with Hybrid Pyramid Feature Fusion for Dense Depth Prediction

arXiv cs.CV / 4/7/2026


Key Points

  • The paper addresses monocular dense depth estimation, targeting scale ambiguity and missing geometric cues when predicting depth maps from single RGB images.
  • It proposes a Swin-Transformer-based multilevel conditional random field (CRF) framework with an adaptive hybrid pyramid feature fusion (HPF) module to capture both short- and long-range dependencies via multi-scale fusion.
  • A hierarchical awareness adapter (HA) is introduced to strengthen cross-level encoder feature interactions using lightweight broadcast modules with learnable dimensional scaling to keep compute low.
  • For pixel-level refinement, the method uses a fully-connected CRF decoder with dynamic scaling attention and a bias learning unit to improve spatial relationship modeling and avoid extreme-value collapse.
  • Experiments on NYU Depth v2, KITTI, and Matterport3D report state-of-the-art results, including Abs Rel 0.088 and RMSE 0.316 on NYU Depth v2, near-perfect threshold accuracy on KITTI, and practical efficiency (194M parameters, ~21ms inference).
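The hybrid pyramid fusion idea in the second bullet can be illustrated with a minimal numpy sketch. This is an assumed simplification, not the paper's implementation: pyramid branches pool the feature map to several coarse grids (long-range context) and biaxial branches average over the height and width axes separately (axis-aligned dependencies); all branches are then fused, here by a plain mean where the paper learns adaptive weights.

```python
import numpy as np

def avg_pool_to(x, out_h, out_w):
    """Adaptive average pooling of a (C, H, W) map to (C, out_h, out_w)."""
    C, H, W = x.shape
    out = np.zeros((C, out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            h0, h1 = i * H // out_h, (i + 1) * H // out_h
            w0, w1 = j * W // out_w, (j + 1) * W // out_w
            out[:, i, j] = x[:, h0:h1, w0:w1].mean(axis=(1, 2))
    return out

def upsample_nearest(x, H, W):
    """Nearest-neighbor upsampling of a (C, h, w) map back to (C, H, W)."""
    C, h, w = x.shape
    rows = np.arange(H) * h // H
    cols = np.arange(W) * w // W
    return x[:, rows][:, :, cols]

def hybrid_pyramid_fusion(x, scales=(1, 2, 4)):
    """Sketch of HPF-style fusion (assumed form): spatial pyramid
    pooling branches plus biaxial (row/column) mean aggregation."""
    C, H, W = x.shape
    branches = [x]
    for s in scales:  # multi-scale spatial pyramid pooling branches
        branches.append(upsample_nearest(avg_pool_to(x, s, s), H, W))
    # biaxial aggregation: mean over each spatial axis, broadcast back
    branches.append(np.broadcast_to(x.mean(axis=1, keepdims=True), x.shape))
    branches.append(np.broadcast_to(x.mean(axis=2, keepdims=True), x.shape))
    return np.stack(branches).mean(axis=0)  # paper fuses adaptively; mean is a placeholder
```

The output keeps the input resolution, so the fused map can feed the decoder directly in place of the raw encoder feature.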

Abstract

Monocular depth estimation from a single RGB image remains a fundamental challenge in computer vision due to inherent scale ambiguity and the absence of explicit geometric cues. Existing approaches typically rely on increasingly complex network architectures to regress depth maps, which escalates training costs and computational overhead without fully exploiting inter-pixel spatial dependencies. We propose a multilevel perceptual conditional random field (CRF) model built upon the Swin Transformer backbone that addresses these limitations through three synergistic innovations: (1) an adaptive hybrid pyramid feature fusion (HPF) strategy that captures both short-range and long-range dependencies by combining multi-scale spatial pyramid pooling with biaxial feature aggregation, enabling effective integration of global and local contextual information; (2) a hierarchical awareness adapter (HA) that enriches cross-level feature interactions within the encoder through lightweight broadcast modules with learnable dimensional scaling, reducing computational complexity while enhancing representational capacity; and (3) a fully-connected CRF decoder with dynamic scaling attention that models fine-grained pixel-level spatial relationships, incorporating a bias learning unit to prevent extreme-value collapse and ensure stable training. Extensive experiments on NYU Depth v2, KITTI, and MatterPort3D datasets demonstrate that our method achieves state-of-the-art performance, reducing Abs Rel to 0.088 (-7.4\%) and RMSE to 0.316 (-5.4\%) on NYU Depth v2, while attaining near-perfect threshold accuracy (\delta < 1.25^3 \approx 99.8\%) on KITTI with only 194M parameters and 21ms inference time.