Customized Fusion: A Closed-Loop Dynamic Network for Adaptive Multi-Task-Aware Infrared-Visible Image Fusion

arXiv cs.CV / 4/13/2026


Key Points

  • The paper proposes CLDyN, a closed-loop dynamic network for infrared-visible image fusion that adapts to multiple downstream tasks by using explicit semantic feedback from those tasks.
  • It introduces a Requirement-driven Semantic Compensation (RSC) module that customizes fusion behavior using a Basis Vector Bank (BVB) and an Architecture-Adaptive Semantic Injection (A2SI) block, enabling task-specific semantic compensation without retraining.
  • A reward-penalty strategy is used to train the RSC module based on task performance changes, encouraging beneficial semantic adjustments and discouraging harmful ones.
  • Experiments on M3FD, FMB, and VT5000 show CLDyN preserves high image fusion quality while improving multi-task adaptability.
  • The authors provide an open-source implementation on GitHub (https://github.com/YR0211/CLDyN), supporting reproducibility and further research use.

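The core mechanism in the second key point can be sketched in code. This is a hypothetical, heavily simplified illustration (not the authors' implementation): the Basis Vector Bank is reduced to a small matrix of basis vectors, the task requirement to a query vector, and the Architecture-Adaptive Semantic Injection to a residual add. All names, shapes, and the attention-style selection rule are assumptions for illustration only.

```python
# Hypothetical sketch of a Basis Vector Bank producing a task-conditioned
# semantic compensation vector, injected into fusion features.
# Not the authors' code; shapes and the selection rule are illustrative.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class BasisVectorBank:
    """Holds K basis vectors of dimension D; a task query selects a
    convex combination of them as the semantic compensation."""
    def __init__(self, num_bases=4, dim=8, seed=0):
        rng = np.random.default_rng(seed)
        self.bank = rng.standard_normal((num_bases, dim))

    def compensate(self, task_query):
        # Attention weights: similarity between the query and each basis
        weights = softmax(self.bank @ task_query)   # shape (K,)
        return weights @ self.bank                  # shape (D,)

def inject(fusion_features, compensation, alpha=0.1):
    """'Architecture-adaptive' injection, reduced here to a residual add."""
    return fusion_features + alpha * compensation

bvb = BasisVectorBank()
feats = np.zeros(8)                 # stand-in for fusion-network features
task_query = np.ones(8)             # stand-in for a downstream-task embedding
out = inject(feats, bvb.compensate(task_query))
print(out.shape)                    # (8,)
```

Because the compensation is assembled from a fixed bank at inference time, switching the task query changes the fused output without retraining any fusion weights, which is the property the key point emphasizes.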
Abstract

Infrared-visible image fusion aims to integrate complementary information for robust visual understanding, but existing fusion methods struggle with simultaneously adapting to multiple downstream tasks. To address this issue, we propose a Closed-Loop Dynamic Network (CLDyN) that can adaptively respond to the semantic requirements of diverse downstream tasks for task-customized image fusion. Specifically, CLDyN introduces a closed-loop optimization mechanism that establishes a semantic transmission chain to achieve explicit feedback from downstream tasks to the fusion network through a Requirement-driven Semantic Compensation (RSC) module. The RSC module leverages a Basis Vector Bank (BVB) and an Architecture-Adaptive Semantic Injection (A2SI) block to customize the network architecture according to task requirements, thereby enabling task-specific semantic compensation and allowing the fusion network to actively adapt to diverse tasks without retraining. To promote semantic compensation, a reward-penalty strategy is introduced to reward or penalize the RSC module based on task performance variations. Experiments on the M3FD, FMB, and VT5000 datasets demonstrate that CLDyN not only maintains high fusion quality but also exhibits strong multi-task adaptability. The code is available at https://github.com/YR0211/CLDyN.
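The reward-penalty strategy described above can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's training objective: the sign of the change in downstream-task performance gates whether a compensation update is reinforced or suppressed, and the function names and the loss-scaling rule are invented for illustration.

```python
# Hypothetical sketch of a reward-penalty signal driven by downstream-task
# performance changes. Names and the update rule are illustrative only.
def reward_penalty(perf_before, perf_after, margin=0.0):
    """+1 rewards a beneficial semantic compensation, -1 penalizes a
    harmful one, 0 when the change falls within the margin."""
    delta = perf_after - perf_before
    if delta > margin:
        return 1
    if delta < -margin:
        return -1
    return 0

def scaled_loss(base_loss, signal, weight=0.5):
    # Reward lowers the effective loss on this update; penalty raises it,
    # discouraging compensations that hurt the downstream task.
    return base_loss * (1.0 - weight * signal)

print(reward_penalty(0.61, 0.67))   # 1 (task metric improved -> reward)
print(scaled_loss(2.0, -1))         # 3.0 (penalized update)
```

The design intent mirrored here is the closed loop in the abstract: task performance feeds back into the RSC module's training signal, so beneficial semantic adjustments are encouraged and harmful ones discouraged.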