FBCIR: Balancing Cross-Modal Focuses in Composed Image Retrieval

arXiv cs.CV / 3/13/2026

Key Points

  • The paper identifies focus imbalances between visual and textual inputs as a key cause of CIR failures on hard negatives.
  • It introduces FBCIR, a multi-modal focus interpretation method that identifies the visual and textual components most crucial to a model's retrieval decisions.
  • It shows across multiple CIR models that focus imbalances are prevalent, especially under hard negative settings.
  • It proposes a data augmentation workflow to add curated hard negatives to CIR datasets to encourage balanced cross-modal reasoning, improving performance on challenging cases while preserving standard benchmark performance.
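The article does not spell out how FBCIR quantifies a focus imbalance, but the idea in the first two points can be illustrated with a simple ablation probe (a hypothetical stand-in, not the paper's actual algorithm): ablate each modality of the composed query in turn, measure how much the query-target similarity drops, and compare the two drops. The `fuse` function and the contribution formula below are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (NOT the paper's FBCIR method): estimate how much each
# modality contributes to a retrieval decision by zeroing it out and measuring
# the drop in query-target similarity. A large asymmetry between the two
# drops signals a focus imbalance.

rng = np.random.default_rng(0)

def normalize(v):
    return v / np.linalg.norm(v)

def fuse(img_emb, txt_emb):
    # Stand-in for a CIR model's multimodal fusion (here: a simple sum).
    return normalize(img_emb + txt_emb)

def focus_imbalance(img_emb, txt_emb, target_emb):
    """Return (image contribution, text contribution, imbalance in [0, 1])."""
    full = fuse(img_emb, txt_emb) @ target_emb
    no_img = fuse(np.zeros_like(img_emb), txt_emb) @ target_emb
    no_txt = fuse(img_emb, np.zeros_like(txt_emb)) @ target_emb
    c_img = max(full - no_img, 0.0)   # similarity lost without the image
    c_txt = max(full - no_txt, 0.0)   # similarity lost without the text
    total = c_img + c_txt + 1e-8
    return c_img, c_txt, abs(c_img - c_txt) / total

img = normalize(rng.normal(size=64))
txt = normalize(rng.normal(size=64))
target = normalize(img + 0.1 * txt)   # target dominated by visual content

c_img, c_txt, imb = focus_imbalance(img, txt, target)
print(f"image contribution={c_img:.3f}, text contribution={c_txt:.3f}, imbalance={imb:.2f}")
```

Because the synthetic target above mostly matches the query image, the probe reports a high imbalance, which is exactly the failure pattern the paper attributes to models that over-attend to one modality.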

Abstract

Composed image retrieval (CIR) requires multi-modal models to jointly reason over visual content and semantic modifications presented in text-image input pairs. While current CIR models achieve strong performance on common benchmark cases, their accuracy often degrades in more challenging scenarios where negative candidates are semantically aligned with the query image or text. In this paper, we attribute this degradation to focus imbalances, where models disproportionately attend to one modality while neglecting the other. To validate this claim, we propose FBCIR, a multi-modal focus interpretation method that identifies the visual and textual input components most crucial to a model's retrieval decisions. Using FBCIR, we report that focus imbalances are prevalent in existing CIR models, especially under hard negative settings. Building on these analyses, we further propose a CIR data augmentation workflow that enriches existing CIR datasets with curated hard negatives designed to encourage balanced cross-modal reasoning. Extensive experiments across multiple CIR models demonstrate that the proposed augmentation consistently improves performance in challenging cases, while maintaining their capabilities on standard benchmarks. Together, our interpretation method and data augmentation workflow provide a new perspective on CIR model diagnosis and robustness improvements.
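The abstract's augmentation workflow curates hard negatives that align with only one modality of the query. Its exact procedure isn't described in this summary; a plausible embedding-space mining step can be sketched as follows, where `mine_hard_negatives` and the scoring heuristic are hypothetical names and choices, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(1)

def normalize(v, axis=-1):
    return v / np.linalg.norm(v, axis=axis, keepdims=True)

def mine_hard_negatives(img_emb, txt_emb, gallery, target_idx, k=2):
    """For one CIR triplet, pick gallery items aligned strongly with only one
    modality; training against them penalizes single-modality shortcuts.
    (Illustrative heuristic, not the paper's actual workflow.)
    """
    sim_img = gallery @ img_emb          # similarity to the query image alone
    sim_txt = gallery @ txt_emb          # similarity to the edit text alone
    diff_img = sim_img - sim_txt         # high => image-aligned, text-ignoring
    diff_txt = -diff_img                 # high => text-aligned, image-ignoring
    # Exclude the true target from negative candidates.
    diff_img[target_idx] = diff_txt[target_idx] = -np.inf
    # Image-aligned negatives: look like the query image, ignore the edit text.
    img_negs = np.argsort(diff_img)[-k:]
    # Text-aligned negatives: match the edit text, ignore the query image.
    txt_negs = np.argsort(diff_txt)[-k:]
    return img_negs, txt_negs

gallery = normalize(rng.normal(size=(100, 32)))
img = normalize(rng.normal(size=32))
txt = normalize(rng.normal(size=32))
img_negs, txt_negs = mine_hard_negatives(img, txt, gallery, target_idx=0)
print("image-aligned negatives:", img_negs, "text-aligned negatives:", txt_negs)
```

Training with both negative types pushes the model to use both modalities: an image-aligned negative can only be rejected by reading the modification text, and a text-aligned negative only by attending to the query image, which is the balanced cross-modal reasoning the paper aims to encourage.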