Combined Dictionary Unfolding Network with Gradient-Adaptive Fidelity for Transferable Multi-Source Fusion
arXiv cs.CV / 5/4/2026
Key Points
- The paper proposes CDNet, a lightweight Combined Dictionary Unfolding Network aimed at efficient multi-source image fusion, especially on resource-constrained edge devices.
- Unlike prior deep unfolding approaches based on alternating minimization, which update each modality separately, CDNet uses a structurally constrained joint unfolding architecture derived from coupled dictionary learning’s unique-common decomposition prior (sketched in the equation after this list).
- CDNet’s core CDBlock uses a block-sparse interaction topology and performs joint, model-derived updates for the common and modality-specific representations, cutting computational and memory overhead; see the first code sketch after this list.
- The authors introduce a compact High- and Low-frequency Image Fidelity loss that enables unsupervised training without ground-truth fused images (see the loss sketch after this list).
- Experiments across four fusion tasks (multi-exposure fusion, infrared-visible fusion, medical image fusion, and infrared-visible fusion evaluated via semantic segmentation) show competitive or better performance, including PSNR gains of 1.23 dB on TNO and 1.59 dB on RoadScene over the second-best method in those settings.
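
For readers unfamiliar with the prior behind the joint architecture, coupled dictionary learning’s unique-common decomposition is conventionally written as below. This is generic notation for the standard model, not necessarily the paper’s exact formulation:

```latex
% Unique-common decomposition: each source image x_k splits into a component
% shared across modalities (common dictionary D_c, code z_c) and a private
% component (unique dictionary D_u^{(k)}, code z_u^{(k)}).
\[
  x_k \approx D_c z_c + D_u^{(k)} z_u^{(k)}, \qquad k = 1, \dots, K
\]
% Unfolding the sparse-coding iterations for (z_c, z_u^{(1)}, \dots, z_u^{(K)})
% jointly, rather than alternating over modalities, gives the joint update
% structure that CDNet's architecture is constrained to follow.
```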
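
The block-sparse interaction topology can be made concrete with a toy PyTorch stage. This is a minimal sketch under my own assumptions (ISTA-style soft-thresholding updates, convolutions standing in for dictionary operators, one encoder shared across unique branches); `JointUnfoldingBlock` and every parameter choice are illustrative, not the paper’s implementation:

```python
import torch
import torch.nn as nn

class JointUnfoldingBlock(nn.Module):
    """One unfolded iteration that jointly refines a common code z_c and
    per-modality unique codes z_us. Block-sparse coupling: z_c is driven by
    residuals from every modality, while each unique code sees only its own."""
    def __init__(self, channels: int):
        super().__init__()
        self.enc_c = nn.Conv2d(channels, channels, 3, padding=1)  # ~ analysis op for D_c
        self.enc_u = nn.Conv2d(channels, channels, 3, padding=1)  # ~ analysis op for D_u (shared here for brevity)
        self.dec = nn.Conv2d(channels, channels, 3, padding=1)    # ~ synthesis dictionary
        self.prox = nn.Softshrink(0.01)  # sparsity-inducing proximal step

    def forward(self, z_c, z_us, xs):
        # Per-modality reconstruction residuals under the current codes.
        res = [x - self.dec(z_c + z_u) for x, z_u in zip(xs, z_us)]
        # Common code: gradient step aggregated over ALL modalities.
        z_c = self.prox(z_c + self.enc_c(sum(res) / len(res)))
        # Unique codes: each updated from its own modality's residual only.
        z_us = [self.prox(z_u + self.enc_u(r)) for z_u, r in zip(z_us, res)]
        return z_c, z_us

# Usage: two source modalities, one unfolded stage.
xs = [torch.randn(1, 16, 64, 64) for _ in range(2)]
block = JointUnfoldingBlock(16)
z_c = torch.zeros_like(xs[0])
z_us = [torch.zeros_like(x) for x in xs]
z_c, z_us = block(z_c, z_us, xs)
```

Because the only cross-modality path runs through z_c, the interaction pattern between codes is block-sparse, which is where the claimed compute and memory savings come from.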
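
The summary does not spell out the High- and Low-frequency Image Fidelity loss, so the following is only a plausible reading: split the fused and source images into low and high bands (here via average pooling, my assumption) and penalize each band against a reference derived from the sources alone, so no ground-truth fused image is needed. The function name `hl_fidelity_loss` and both band targets are hypothetical:

```python
import torch
import torch.nn.functional as F

def hl_fidelity_loss(fused: torch.Tensor, sources: list[torch.Tensor]) -> torch.Tensor:
    """Unsupervised fidelity: low frequencies of the fused image track the
    mean of the sources; high frequencies track the strongest source detail."""
    def low(x):  # crude low-pass: downsample then upsample (assumes H, W divisible by 4)
        return F.interpolate(F.avg_pool2d(x, 4), scale_factor=4.0,
                             mode="bilinear", align_corners=False)
    low_target = low(torch.stack(sources).mean(dim=0))
    high_target = torch.stack([s - low(s) for s in sources]).abs().amax(dim=0)
    low_loss = F.l1_loss(low(fused), low_target)
    high_loss = F.l1_loss((fused - low(fused)).abs(), high_target)
    return low_loss + high_loss

# Usage with the toy block above: loss = hl_fidelity_loss(fused, xs)
```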