CLRNet: Targetless Extrinsic Calibration for Camera, Lidar and 4D Radar Using Deep Learning

arXiv cs.CV / 3/18/2026

📰 News · Tools & Practical Usage · Models & Research

Key Points

  • CLRNet is a deep-learning-based framework for joint camera–lidar–radar extrinsic calibration, with support for pairwise calibration between any two sensors.
  • It uses equirectangular projection, camera-based depth prediction, additional radar channels, a shared feature space, and a loop-closure loss to improve calibration accuracy.
  • Experiments on View-of-Delft and Dual-Radar datasets show at least a 50% reduction in median translational and rotational errors compared with state-of-the-art methods.
  • The code will be publicly available upon acceptance at the provided GitHub repository.
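One of the ingredients listed above, equirectangular projection, maps a 3D point cloud onto a 2D azimuth/elevation grid so that lidar and radar data can be processed by image-style networks. A minimal sketch of this standard spherical mapping (grid size and axis conventions are assumptions; CLRNet's exact layout is not specified here):

```python
import math

def equirectangular_uv(x, y, z, height=64, width=1024):
    """Map one 3D point to (u, v) pixel coordinates on an equirectangular grid.

    Generic spherical projection sketch: u follows the horizontal angle
    (azimuth), v follows the vertical angle (elevation). The grid resolution
    and angle conventions here are illustrative assumptions.
    """
    r = math.sqrt(x * x + y * y + z * z)
    azimuth = math.atan2(y, x)                       # [-pi, pi], angle around the vertical axis
    elevation = math.asin(z / r) if r > 0 else 0.0   # [-pi/2, pi/2], angle above the horizontal plane
    u = int((azimuth + math.pi) / (2 * math.pi) * (width - 1))
    v = int((math.pi / 2 - elevation) / math.pi * (height - 1))
    return u, v
```

A point straight ahead on the x-axis lands at the horizontal center of the grid; rasterizing every point of a scan this way (storing range per cell) yields the dense 2D representation that convolutional feature extractors expect.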

Abstract

In this paper, we address extrinsic calibration for camera, lidar, and 4D radar sensors. Accurate extrinsic calibration of radar remains a challenge due to the sparsity of its data. We propose CLRNet, a novel, multi-modal end-to-end deep learning (DL) calibration network capable of addressing joint camera-lidar-radar calibration, or pairwise calibration between any two of these sensors. We incorporate equirectangular projection, camera-based depth image prediction, additional radar channels, and leverage lidar with a shared feature space and loop closure loss. In extensive experiments on the View-of-Delft and Dual-Radar datasets, we demonstrate superior calibration accuracy compared to existing state-of-the-art methods, reducing both median translational and rotational calibration errors by at least 50%. Finally, we examine the domain transfer capabilities of the proposed network and baselines when evaluated across datasets. The code will be made publicly available upon acceptance at: https://github.com/tudelft-iv.
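The loop-closure idea mentioned in the abstract rests on a geometric identity: composing the three pairwise extrinsics around the sensor triangle (camera→lidar, lidar→radar, radar→camera) must return to the starting frame, i.e. yield the identity transform. A sketch of the corresponding consistency residual, assuming 4x4 homogeneous transforms (the paper's exact loss formulation is not given here):

```python
import math

def loop_closure_residual(T_cl, T_lr, T_rc):
    """Frobenius distance between the composed loop and the identity.

    T_cl, T_lr, T_rc are 4x4 homogeneous transforms (lists of lists) for
    camera->lidar, lidar->radar, and radar->camera. Consistent pairwise
    extrinsics compose to the identity, so this residual can penalize
    mutually inconsistent predictions. Illustrative sketch only.
    """
    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
                for i in range(4)]

    loop = matmul(matmul(T_cl, T_lr), T_rc)
    identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
    return math.sqrt(sum((loop[i][j] - identity[i][j]) ** 2
                         for i in range(4) for j in range(4)))
```

If all three transforms are consistent the residual is zero; any drift in one pairwise estimate shows up as a nonzero penalty, which is what makes the loop constraint useful as an auxiliary training signal for joint calibration.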