Bi-CamoDiffusion: A Boundary-informed Diffusion Approach for Camouflaged Object Detection

arXiv cs.CV / 3/17/2026

Key Points

  • Bi-CamoDiffusion evolves the CamoDiffusion framework by injecting edge priors into early-stage embeddings through a parameter-free process to sharpen object boundaries.
  • It uses a unified optimization objective that balances spatial accuracy, structural constraints, and uncertainty supervision to capture both global context and boundary transitions.
  • The method achieves sharper delineation of thin structures and protrusions while reducing false positives on CAMO, COD10K, and NC4K benchmarks.
  • Evaluations show Bi-CamoDiffusion consistently outperforms existing state-of-the-art methods across metrics such as S_m, Fβ^w, E_m, and MAE, indicating improved object-background separation and boundary recovery.
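The "parameter-free" edge-prior injection above can be illustrated with a minimal sketch. The paper's exact fusion rule is not given here, so this assumes a common choice: a Sobel-style gradient magnitude as the edge prior, fused into an early-stage embedding by plain element-wise addition (no learned weights). The function names and the `scale` factor are illustrative, not from the paper.

```python
# Hedged sketch of parameter-free edge-prior injection, assuming a
# Sobel gradient magnitude as the edge prior and additive fusion.

def sobel_edge_map(img):
    """Gradient magnitude of a 2-D grayscale image (list of lists)."""
    h, w = len(img), len(img[0])
    edges = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Horizontal and vertical Sobel responses.
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            edges[y][x] = (gx * gx + gy * gy) ** 0.5
    return edges

def inject_edge_prior(embedding, edge_map, scale=1.0):
    """Parameter-free fusion: add the (scaled) edge map to the embedding.

    No weights are learned, so boundary cues sharpen the embedding
    without adding parameters to the network.
    """
    return [[e + scale * g for e, g in zip(emb_row, edge_row)]
            for emb_row, edge_row in zip(embedding, edge_map)]
```

Because the fusion is a fixed arithmetic operation, it adds boundary information at no parameter cost, which is the appeal the key point describes.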

Abstract

Bi-CamoDiffusion is introduced, an evolution of the CamoDiffusion framework for camouflaged object detection. It integrates edge priors into early-stage embeddings via a parameter-free injection process, which enhances boundary sharpness and prevents structural ambiguity. This is governed by a unified optimization objective that balances spatial accuracy, structural constraints, and uncertainty supervision, allowing the model to capture both the object's global context and its intricate boundary transitions. Evaluations across the CAMO, COD10K, and NC4K benchmarks show that Bi-CamoDiffusion surpasses the baseline, delivering sharper delineation of thin structures and protrusions while also minimizing false positives. Moreover, our model consistently outperforms existing state-of-the-art methods across all evaluated metrics, including S_m, F_{\beta}^{w}, E_m, and MAE, demonstrating more precise object-background separation and sharper boundary recovery.
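A unified objective of the kind the abstract describes can be sketched as a weighted sum of three terms. The specific terms and weights below are assumptions, not the paper's formulation: binary cross-entropy as the spatial-accuracy term, a soft Dice loss as a stand-in structural constraint, and mean predictive entropy as a simple uncertainty-supervision proxy.

```python
# Hedged sketch of a unified loss balancing spatial accuracy,
# structural constraints, and uncertainty supervision.
# Terms and weights are illustrative, not from the paper.
import math

def bce(pred, target, eps=1e-7):
    """Mean binary cross-entropy over flattened pixels (spatial accuracy)."""
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for p, t in zip(pred, target)) / len(pred)

def dice_loss(pred, target, eps=1e-7):
    """Soft Dice loss as a stand-in region/structure constraint."""
    inter = sum(p * t for p, t in zip(pred, target))
    return 1.0 - (2 * inter + eps) / (sum(pred) + sum(target) + eps)

def entropy_penalty(pred, eps=1e-7):
    """Mean predictive entropy -- a simple uncertainty-supervision proxy."""
    return sum(-(p * math.log(p + eps) + (1 - p) * math.log(1 - p + eps))
               for p in pred) / len(pred)

def unified_loss(pred, target, w_spatial=1.0, w_struct=1.0, w_unc=0.1):
    """Weighted sum of the three objectives (weights are illustrative)."""
    return (w_spatial * bce(pred, target)
            + w_struct * dice_loss(pred, target)
            + w_unc * entropy_penalty(pred))
```

The pixel-wise term rewards global placement of the mask, the region term penalizes structural mismatch along boundaries, and the entropy term discourages diffuse, uncertain predictions; tuning the three weights trades off global context against boundary fidelity, which is the balance the objective is meant to strike.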