Adaptive Depth-converted-Scale Convolution for Self-supervised Monocular Depth Estimation

arXiv cs.CV / 4/10/2026


Key Points

  • The paper addresses self-supervised monocular depth estimation by explicitly handling the ambiguity between object depth and object scale caused by the same object changing apparent size across monocular video frames.
  • It introduces Depth-converted-Scale Convolution (DcSConv), which adaptively selects convolution receptive-field scales based on the depth–scale prior rather than relying on local deformation of convolution filters.
  • The authors further propose Depth-converted-Scale aware Fusion (DcS-F) to adaptively combine DcSConv-enhanced features with conventional convolution features.
  • DcSConv is designed as a plug-and-play module that can be added on top of existing CNN-based depth estimation methods, improving performance on KITTI.
  • Experiments on the KITTI benchmark show up to an 11.6% reduction in SqRel versus baselines, and ablations confirm that both DcSConv and DcS-F contribute to the gains.
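The core idea behind DcSConv — choosing a receptive-field scale from depth via the pinhole prior that apparent size is inversely proportional to depth — can be sketched as a per-pixel selection among pre-computed multi-scale feature branches. This is a minimal NumPy illustration, not the paper's implementation; the function names, the log-depth binning, and the depth range are all hypothetical.

```python
import numpy as np

def depth_to_scale_index(depth, num_scales=3, d_min=1.0, d_max=80.0):
    """Map per-pixel depth to a discrete receptive-field scale index.

    Pinhole intuition: apparent object size is proportional to 1/depth,
    so nearer pixels (small depth) get a larger receptive field (higher
    scale index). The log-depth binning and depth range are hypothetical
    choices for illustration only.
    """
    d = np.clip(depth, d_min, d_max)
    # Normalize log-depth to [0, 1]: 0 = nearest, 1 = farthest.
    t = (np.log(d) - np.log(d_min)) / (np.log(d_max) - np.log(d_min))
    # Nearer pixels -> larger scale index.
    idx = np.floor((1.0 - t) * num_scales).astype(int)
    return np.clip(idx, 0, num_scales - 1)

def dcsconv_select(branch_feats, depth):
    """Pick, per pixel, the feature from the branch whose receptive-field
    scale matches the depth-derived index.

    branch_feats: (S, C, H, W) features from S convolution branches of
    increasing receptive field (e.g. increasing dilation); depth: (H, W).
    Returns a (C, H, W) feature map.
    """
    idx = depth_to_scale_index(depth, num_scales=branch_feats.shape[0])
    # Gather along the scale axis; idx broadcasts over channels.
    return np.take_along_axis(branch_feats, idx[None, None, :, :], axis=0)[0]
```

In the actual module the scale selection would feed a learnable convolution rather than a hard branch lookup, but the depth-to-scale mapping above captures the stated prior.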

Abstract

Self-supervised monocular depth estimation (MDE) has received increasing interest in recent years. The objects in a scene, including their sizes and the relationships among them, are the main cues for recovering scene structure. However, previous works lack explicit handling of an object's changing apparent size as its depth changes. In a monocular video especially, the apparent size of the same object changes continuously, resulting in size–depth ambiguity. To address this problem, we propose a Depth-converted-Scale Convolution (DcSConv) enhanced monocular depth estimation framework that incorporates the prior relationship between object depth and object scale to extract features from appropriate scales of the convolution receptive field. The proposed DcSConv focuses on the adaptive scale of the convolution filter instead of the local deformation of its shape, and it establishes that the scale of the convolution filter matters no less (or, in the evaluated task, even more) than its local deformation. Moreover, a Depth-converted-Scale aware Fusion (DcS-F) module is developed to adaptively fuse DcSConv features with conventional convolution features. Our DcSConv-enhanced framework can be applied on top of existing CNN-based methods as a plug-and-play module that enhances the conventional convolution block. Extensive experiments with different baselines on the KITTI benchmark show that our method achieves the best results, with an improvement of up to 11.6% in terms of SqRel reduction. An ablation study also validates the effectiveness of each proposed module.
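The DcS-F step described above — adaptively fusing DcSConv features with conventional convolution features — is commonly realized as a learned per-pixel gate. The sketch below is an assumption about that design, not the paper's code: the 1x1-style gate, its parameters `w` and `b`, and the sigmoid blending are all hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dcs_fusion(f_dcs, f_conv, w, b):
    """Hypothetical DcS-F sketch: a per-pixel gate predicted from the
    concatenated features blends the DcSConv branch with the conventional
    convolution branch.

    f_dcs, f_conv: (C, H, W) feature maps; w: (2C,) gate weights and
    b: scalar bias, together acting like a 1x1 convolution with a
    single output channel followed by a sigmoid.
    """
    cat = np.concatenate([f_dcs, f_conv], axis=0)      # (2C, H, W)
    gate = sigmoid(np.tensordot(w, cat, axes=1) + b)   # (H, W) in (0, 1)
    # Convex combination per pixel: gate -> DcSConv, 1 - gate -> conv.
    return gate[None] * f_dcs + (1.0 - gate[None]) * f_conv
```

With zero weights and bias the gate is 0.5 everywhere, so the fusion degenerates to a plain average; training would move the gate toward the branch that helps at each pixel.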