Radar-Informed 3D Multi-Object Tracking under Adverse Conditions

arXiv cs.CV / 4/16/2026


Key Points

  • The paper addresses robustness challenges in 3D multi-object tracking (3D MOT), especially under adverse conditions and as objects get farther away.
  • It critiques common sensor-fusion approaches that treat radar as just another learned feature, noting that radar’s robustness benefits can vanish when the overall network degrades.
  • The authors propose RadarMOT, which explicitly incorporates radar point cloud data to refine state estimation and recover missed detections at long range.
  • Experiments on the MAN-TruckScenes dataset show consistent gains in AMOTA, including +12.7% at long range and +10.3% in adverse weather.
  • The authors announce that code will be released at the provided GitHub link, supporting reproducibility and adoption.

Abstract

The central challenge of 3D multi-object tracking (3D MOT) is achieving robustness in real-world applications, for example under adverse conditions, and maintaining consistency as distance increases. To overcome these challenges, sensor-fusion approaches that combine LiDAR, cameras, and radar have emerged. However, existing multi-modal fusion methods usually treat radar as just another learned feature inside the network; when the overall model degrades in difficult environmental conditions, the robustness advantages that radar could provide are reduced along with it. We propose RadarMOT, a radar-informed 3D MOT framework that explicitly uses radar point cloud data as an additional observation to refine state estimation and recover detector misses at long range. Evaluations on the MAN-TruckScenes dataset show that RadarMOT consistently improves the Average Multi-Object Tracking Accuracy (AMOTA), by an absolute 12.7% at long range and 10.3% in adverse weather. The code will be available at https://github.com/bingxue-xu/radarmot
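The core idea of treating radar as an explicit observation, rather than a learned feature, can be illustrated with a generic Kalman-style update. The sketch below is not the paper's implementation; it is a minimal, hypothetical example in which radar points near a track's predicted position are gated and averaged into a pseudo-measurement that refines the position estimate. All names, the gating radius, and the noise value are illustrative assumptions.

```python
import numpy as np

def radar_refine(track_state, track_cov, radar_points, gate=3.0, radar_noise=1.0):
    """Illustrative Kalman-style update: refine a track's 2D position
    using nearby radar points as an extra observation.

    track_state: [x, y, vx, vy]; track_cov: 4x4 covariance.
    radar_points: (N, 2) array of radar detections in the ego frame.
    Gate radius and noise are hypothetical tuning values.
    """
    H = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0]])   # observe position only
    pos = track_state[:2]
    # Gate radar points by Euclidean distance to the predicted position.
    d = np.linalg.norm(radar_points - pos, axis=1)
    near = radar_points[d < gate]
    if len(near) == 0:
        return track_state, track_cov      # no supporting radar evidence
    z = near.mean(axis=0)                  # pseudo-measurement from radar cluster
    R = np.eye(2) * radar_noise            # assumed measurement noise
    S = H @ track_cov @ H.T + R            # innovation covariance
    K = track_cov @ H.T @ np.linalg.inv(S) # Kalman gain
    new_state = track_state + K @ (z - H @ track_state)
    new_cov = (np.eye(4) - K @ H) @ track_cov
    return new_state, new_cov
```

In this toy setup, a track with no supporting radar points passes through unchanged, while a track with nearby radar returns is pulled toward the radar evidence and its position uncertainty shrinks; a similar gating mechanism could, in principle, flag radar clusters with no associated track as candidate missed detections at long range.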