Towards Intrinsic-Aware Monocular 3D Object Detection

arXiv cs.CV / 3/31/2026


Key Points

  • The paper addresses monocular 3D object detection’s sensitivity to camera intrinsics and its limited generalization across different camera setups.
  • It proposes MonoIA, a unified intrinsic-aware framework that treats intrinsic changes as perceptual transformations affecting apparent scale, perspective, and geometry.
  • MonoIA uses large language models and vision-language models to produce intrinsic embeddings, then integrates them hierarchically into the detection network via an Intrinsic Adaptation Module to adapt features per camera.
  • The approach reframes intrinsic modeling from numeric conditioning to semantic representation to achieve more consistent 3D detection across cameras.
  • Experiments report new state-of-the-art results on KITTI, Waymo, and nuScenes (e.g., +1.18% on the KITTI leaderboard), with further gains under multi-dataset training (e.g., +4.46% on KITTI Val).

Abstract

Monocular 3D object detection (Mono3D) aims to infer object locations and dimensions in 3D space from a single RGB image. Despite recent progress, existing methods remain highly sensitive to camera intrinsics and struggle to generalize across diverse settings, since intrinsics govern how 3D scenes are projected onto the image plane. We propose MonoIA, a unified intrinsic-aware framework that models and adapts to intrinsic variation through a language-grounded representation. The key insight is that intrinsic variation is not a numeric difference but a perceptual transformation that alters apparent scale, perspective, and spatial geometry. To capture this effect, MonoIA employs large language models and vision-language models to generate intrinsic embeddings that encode the visual and geometric implications of camera parameters. These embeddings are hierarchically integrated into the detection network via an Intrinsic Adaptation Module, allowing the model to modulate its feature representations according to camera-specific configurations and maintain consistent 3D detection across intrinsics. This shifts intrinsic modeling from numeric conditioning to semantic representation, enabling robust and unified perception across cameras. Extensive experiments show that MonoIA achieves new state-of-the-art results on standard benchmarks including KITTI, Waymo, and nuScenes (e.g., +1.18% on the KITTI leaderboard), and further improves performance under multi-dataset training (e.g., +4.46% on KITTI Val).
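The abstract describes the Intrinsic Adaptation Module only at a high level: an intrinsic embedding modulates detection features per camera. As a rough, hedged sketch of what such conditioning could look like, the snippet below uses a FiLM-style scale-and-shift over feature channels. All names here are hypothetical, the embedding is a plain random projection of the four pinhole intrinsics (fx, fy, cx, cy) rather than the paper's LLM/VLM-derived representation, and the real module is hierarchical rather than a single layer.

```python
import numpy as np

def intrinsic_embedding(fx, fy, cx, cy, dim=8, seed=0):
    """Hypothetical stand-in for the paper's language-grounded intrinsic
    embedding: project normalized pinhole intrinsics through a fixed
    random matrix. (MonoIA instead derives this from LLM/VLM text
    embeddings describing the camera's perceptual effect.)"""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((dim, 4))
    params = np.array([fx, fy, cx, cy]) / 1000.0  # crude normalization
    return W @ params

def intrinsic_adaptation(features, embedding):
    """FiLM-style modulation: scale and shift each feature channel with
    values computed from the intrinsic embedding, so the same backbone
    features are adapted per camera. A one-layer guess at what
    'modulate feature representations' could mean."""
    d = features.shape[-1]
    gamma = 1.0 + np.tanh(embedding[:d])  # per-channel scale near 1
    beta = np.tanh(embedding[:d])         # per-channel shift
    return features * gamma + beta

# Two cameras with different intrinsics yield different adapted features
feats = np.ones((2, 8))                            # dummy backbone features
e_kitti = intrinsic_embedding(721.5, 721.5, 609.6, 172.9)
e_nusc = intrinsic_embedding(1266.4, 1266.4, 816.3, 491.5)
adapted_kitti = intrinsic_adaptation(feats, e_kitti)
adapted_nusc = intrinsic_adaptation(feats, e_nusc)
```

The design point this illustrates is only the conditioning mechanism: identical image features are transformed differently depending on which camera produced them, which is the behavior the paper attributes to its Intrinsic Adaptation Module.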