AI Navigate

CM-Bench: A Comprehensive Cross-Modal Feature Matching Benchmark Bridging Visible and Infrared Images

arXiv cs.CV / 3/16/2026

📰 News · Tools & Practical Usage · Models & Research

Key Points

  • Introduces CM-Bench, a comprehensive benchmark for infrared-visible cross-modal feature matching to standardize evaluation in this area.
  • Surveys and categorizes 30 feature matching methods into sparse, semi-dense, and dense approaches, and evaluates them across tasks such as homography estimation, relative pose estimation, and feature-matching-based geo-localization.
  • Proposes a classification-network-based adaptive preprocessing front-end that automatically selects suitable enhancement strategies before matching.
  • Presents a novel infrared-satellite cross-modal dataset with manually annotated ground-truth correspondences for practical geo-localization, with resources available on GitHub.

Abstract

Infrared-visible (IR-VIS) feature matching plays an essential role in cross-modal visual localization, navigation, and perception. Along with the rapid development of deep learning techniques, a number of representative image matching methods have been proposed. However, cross-modal feature matching remains a challenging task due to the significant appearance differences between modalities. A major gap in cross-modal feature matching research is the absence of standardized benchmarks and evaluation metrics. In this paper, we introduce a comprehensive cross-modal feature matching benchmark, CM-Bench, which encompasses 30 feature matching algorithms across diverse cross-modal datasets. Specifically, state-of-the-art traditional and deep learning-based methods are first summarized and categorized into sparse, semi-dense, and dense methods. These methods are evaluated on different tasks including homography estimation, relative pose estimation, and feature-matching-based geo-localization. In addition, we introduce a classification-network-based adaptive preprocessing front-end that automatically selects suitable enhancement strategies before matching. We also present a novel infrared-satellite cross-modal dataset with manually annotated ground-truth correspondences for practical geo-localization evaluation. The dataset and resources will be available at: https://github.com/SLZ98/CM-Bench.
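Of the evaluation tasks mentioned, homography estimation is commonly scored by the reprojection error of the image corners under the estimated versus ground-truth homography (the metric popularized by benchmarks like HPatches). The sketch below is a generic illustration of that metric, not code from CM-Bench; the function names are hypothetical.

```python
import numpy as np

def apply_homography(H, pts):
    """Apply a 3x3 homography H to an (N, 2) array of 2D points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # lift to homogeneous coords
    proj = pts_h @ H.T
    return proj[:, :2] / proj[:, 2:3]               # divide out the scale

def mean_corner_error(H_est, H_gt, width, height):
    """Mean Euclidean distance between the four image corners warped by
    the estimated homography and by the ground-truth homography."""
    corners = np.array([[0, 0], [width, 0],
                        [width, height], [0, height]], dtype=float)
    diff = apply_homography(H_est, corners) - apply_homography(H_gt, corners)
    return float(np.linalg.norm(diff, axis=1).mean())
```

In practice a matcher's estimated homographies are summarized by the fraction of image pairs whose mean corner error falls below a pixel threshold (e.g. 3, 5, or 10 px), or by the area under that accuracy curve.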