DUALVISION: RGB-Infrared Multimodal Large Language Models for Robust Visual Reasoning

arXiv cs.CV / April 22, 2026


Key Points

  • The paper introduces DUALVISION, a lightweight fusion module that integrates infrared (IR) and RGB information into multimodal large language models (MLLMs) for more robust visual reasoning.
  • DUALVISION uses patch-level localized cross-attention to combine IR-RGB cues efficiently, addressing the fragility of RGB-only MLLMs under degradations like fog, blur, and low light.
  • To enable training and evaluation, the authors release DV-204K, a public dataset of ~25K aligned IR-RGB image pairs with 204K modality-specific QA annotations.
  • They also provide DV-500, a smaller benchmark of 500 IR-RGB image pairs with 500 QA pairs, designed to evaluate cross-modal reasoning.
  • Experiments across both open- and closed-source MLLMs show that DUALVISION improves empirical performance across a wide range of visual degradation conditions.
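The core idea behind patch-level localized cross-attention is that each RGB patch attends only to IR patches in its spatial neighborhood, rather than to every IR patch, which keeps the fusion cheap. The sketch below is an illustrative, dimension-agnostic toy (pure Python, 1-D patch sequence standing in for a 2-D grid); the paper's actual module, window shape, and fusion rule are not specified here, and the function and parameter names are hypothetical.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def localized_cross_attention(rgb_patches, ir_patches, window=1):
    """For each RGB patch i (used as the query), attend only to IR
    patches within `window` positions of i (a 1-D stand-in for a
    spatial neighborhood), then fuse by adding the attended IR
    context back onto the RGB patch (residual-style)."""
    fused = []
    n = len(ir_patches)
    for i, q in enumerate(rgb_patches):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        keys = ir_patches[lo:hi]           # local IR window only
        weights = softmax([dot(q, k) for k in keys])
        ctx = [sum(w * k[d] for w, k in zip(weights, keys))
               for d in range(len(q))]
        fused.append([q[d] + ctx[d] for d in range(len(q))])
    return fused
```

Restricting attention to a local window makes the cost linear in the number of patches (times the window size) instead of quadratic, which is one plausible reason such a module can stay "lightweight" when bolted onto an MLLM's vision tower.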

Abstract

Multimodal large language models (MLLMs) have achieved impressive performance on visual perception and reasoning tasks with RGB imagery, yet they remain fragile under common degradations, such as fog, blur, or low-light conditions. Infrared (IR) imaging, a well-established complement to RGB, offers inherent robustness in these conditions, but its integration into MLLMs remains underexplored. To bridge this gap, we propose DUALVISION, a lightweight fusion module that efficiently incorporates IR-RGB information into MLLMs via patch-level localized cross-attention. To support training and evaluation and to facilitate future research, we also introduce DV-204K, a dataset of ~25K publicly available aligned IR-RGB image pairs with 204K modality-specific QA annotations, and DV-500, a benchmark of 500 IR-RGB image pairs with 500 QA pairs designed for evaluating cross-modal reasoning. Leveraging these datasets, we benchmark both open- and closed-source MLLMs and demonstrate that DUALVISION delivers strong empirical performance under a wide range of visual degradations. Our code and dataset are available at https://abrarmajeedi.github.io/dualvision.