Modality-Agnostic Prompt Learning for Multi-Modal Camouflaged Object Detection

arXiv cs.CV · April 15, 2026


Key Points

  • The paper introduces modality-agnostic multi-modal prompt learning for camouflaged object detection (COD), targeting limitations of modality-specific architectures and fusion designs.
  • It proposes a framework that generates unified prompts for the Segment Anything Model (SAM) using interactions between a content domain and a prompt (knowledge) domain, enabling parameter-efficient adaptation to arbitrary auxiliary modalities.
  • A lightweight Mask Refine Module is added to improve coarse segmentation by injecting fine-grained prompt cues for sharper, more accurate camouflaged object boundaries.
  • Experiments across RGB-Depth, RGB-Thermal, and RGB-Polarization benchmarks show improved performance and stronger cross-modal generalization.
  • The work positions SAM prompt-based adaptation as a scalable route to incorporate additional modalities without redesigning the core model for each modality type.

Abstract

Camouflaged Object Detection (COD) aims to segment objects that blend seamlessly into complex backgrounds, with growing interest in exploiting additional visual modalities to enhance robustness through complementary information. However, most existing approaches generally rely on modality-specific architectures or customized fusion strategies, which limit scalability and cross-modal generalization. To address this, we propose a novel framework that generates modality-agnostic multi-modal prompts for the Segment Anything Model (SAM), enabling parameter-efficient adaptation to arbitrary auxiliary modalities and significantly improving overall performance on COD tasks. Specifically, we model multi-modal learning through interactions between a data-driven content domain and a knowledge-driven prompt domain, distilling task-relevant cues into unified prompts for SAM decoding. We further introduce a lightweight Mask Refine Module to calibrate coarse predictions by incorporating fine-grained prompt cues, leading to more accurate camouflaged object boundaries. Extensive experiments on RGB-Depth, RGB-Thermal, and RGB-Polarization benchmarks validate the effectiveness and generalization of our modality-agnostic framework.
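The core idea, distilling cues from a data-driven content domain (RGB plus an arbitrary auxiliary modality) into a knowledge-driven prompt domain whose output feeds SAM's decoder, can be illustrated with a minimal sketch. The paper does not publish this code; the function `unified_prompts` and all shapes below are hypothetical, using a single cross-attention step as a stand-in for the framework's domain interaction:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def unified_prompts(rgb_feats, aux_feats, prompt_tokens):
    """Hypothetical sketch: distill task-relevant cues from the content
    domain (RGB + any auxiliary modality) into a fixed set of
    modality-agnostic prompt tokens via one cross-attention step.
    rgb_feats, aux_feats: (N, D) patch features; prompt_tokens: (K, D)."""
    # Content domain treats both modalities uniformly: just concatenate.
    content = np.concatenate([rgb_feats, aux_feats], axis=0)  # (2N, D)
    d = prompt_tokens.shape[-1]
    # Prompt tokens attend over content features (scaled dot-product).
    attn = softmax(prompt_tokens @ content.T / np.sqrt(d))    # (K, 2N)
    # Residual update yields unified prompts for the SAM decoder.
    return prompt_tokens + attn @ content                     # (K, D)

# The same call works whether the auxiliary features come from
# depth, thermal, or polarization -- no modality-specific branch:
rgb = np.random.randn(16, 64)
aux = np.random.randn(16, 64)      # e.g. depth / thermal / polarization
prompts = np.random.randn(4, 64)   # learnable prompt-domain tokens
out = unified_prompts(rgb, aux, prompts)
print(out.shape)  # (4, 64)
```

The point of the sketch is that the prompt pathway never inspects which modality produced `aux`, which is what makes the adaptation modality-agnostic and parameter-efficient: only the small set of prompt tokens (and any projection layers) would be trained, while SAM itself stays frozen.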