Sketch and Text Synergy: Fusing Structural Contours and Descriptive Attributes for Fine-Grained Image Retrieval

arXiv cs.CV / 4/20/2026


Key Points

  • The paper addresses fine-grained image retrieval from either hand-drawn sketches or text by tackling the modality gap between structural contours (sketches) and appearance cues like color/texture (text).
  • It proposes the Sketch and Text Based Image Retrieval (STBIR) framework that fuses sketch-derived structural outlines with text-provided color/texture information to improve retrieval accuracy.
  • STBIR includes three main technical components: a curriculum-learning robustness module for queries of varying quality, a category-knowledge-based feature space optimization module to strengthen representations, and a multi-stage cross-modal alignment mechanism to reduce cross-modal misalignment.
  • The authors also build a fine-grained STBIR benchmark dataset and report extensive experiments showing STBIR significantly outperforms existing state-of-the-art methods.
  • Overall, the work contributes both a new multimodal retrieval approach and a benchmark to support future research on sketch-and-text-based fine-grained image search.
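The fusion idea in the bullets above can be illustrated with a minimal, hypothetical sketch. The paper's actual encoders and fusion operator are not specified here; the toy embeddings, the `fuse` function, and the blending weight `alpha` below are assumptions chosen only to show the principle: a sketch embedding carrying structural dimensions and a text embedding carrying appearance dimensions are L2-normalized, blended into one query vector, and matched against gallery images by cosine similarity.

```python
import math

def l2_normalize(v):
    """Scale a vector to unit length."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def fuse(sketch_emb, text_emb, alpha=0.5):
    """Blend normalized sketch and text embeddings into one query vector.

    `alpha` (hypothetical) weights structure vs. appearance cues."""
    s, t = l2_normalize(sketch_emb), l2_normalize(text_emb)
    return l2_normalize([alpha * a + (1 - alpha) * b for a, b in zip(s, t)])

def rank_gallery(query, gallery):
    """Return gallery indices sorted by descending cosine similarity."""
    sims = [sum(q * g for q, g in zip(query, l2_normalize(img)))
            for img in gallery]
    return sorted(range(len(gallery)), key=lambda i: -sims[i])

# Toy setup: the first two dimensions stand in for shape, the last two
# for color/texture. Only the image matching both cues should win.
sketch_q = [1.0, 0.9, 0.0, 0.0]   # strong structural signal
text_q   = [0.0, 0.0, 1.0, 0.8]   # strong appearance signal
gallery = [
    [1.0, 0.9, 1.0, 0.8],  # matches both shape and appearance
    [1.0, 0.9, 0.0, 0.0],  # shape only
    [0.0, 0.0, 1.0, 0.8],  # appearance only
]
query = fuse(sketch_q, text_q)
print(rank_gallery(query, gallery))  # index 0 ranks first
```

Either cue alone ties with the partial matches; only the fused query separates the image that satisfies both the structural and the appearance constraints, which is the complementarity argument the paper makes.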

Abstract

Fine-grained image retrieval via hand-drawn sketches or textual descriptions remains a critical challenge due to inherent modality gaps. While hand-drawn sketches capture complex structural contours, they lack the color and texture that text effectively provides, even though text omits spatial contours. Motivated by the complementary nature of these modalities, we propose the Sketch and Text Based Image Retrieval (STBIR) framework. By synergizing the rich color and texture cues from text with the structural outlines provided by sketches, STBIR achieves superior fine-grained retrieval performance. First, a curriculum-learning-driven robustness enhancement module is proposed to improve the model's robustness when handling queries of varying quality. Second, we introduce a category-knowledge-based feature space optimization module that significantly boosts the model's representational power. Finally, we design a multi-stage cross-modal feature alignment mechanism to mitigate cross-modal feature misalignment. Furthermore, we curate the fine-grained STBIR benchmark dataset to rigorously validate the efficacy of the proposed framework and to serve as a reference for subsequent research. Extensive experiments demonstrate that the proposed STBIR framework significantly outperforms state-of-the-art methods.
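The abstract does not detail how the curriculum-learning module orders queries, but the generic idea behind curriculum learning can be sketched as follows. Everything here is an assumption for illustration: `quality_fn` is a hypothetical stand-in for whatever query-quality estimate the module uses, and the three-stage split is an arbitrary choice. Training would then proceed stage by stage, from cleaner queries toward noisier ones.

```python
def curriculum_schedule(samples, quality_fn, num_stages=3):
    """Split training samples into stages from highest to lowest quality.

    `quality_fn` scores a query (higher = cleaner/easier); it is a
    hypothetical placeholder, not the paper's actual quality estimate."""
    ordered = sorted(samples, key=quality_fn, reverse=True)
    stage_size = -(-len(ordered) // num_stages)  # ceiling division
    return [ordered[i:i + stage_size]
            for i in range(0, len(ordered), stage_size)]

# Toy queries tagged with a made-up quality score in [0, 1].
queries = [("q1", 0.9), ("q2", 0.2), ("q3", 0.6),
           ("q4", 0.4), ("q5", 0.8), ("q6", 0.1)]
stages = curriculum_schedule(queries, quality_fn=lambda q: q[1])
for i, stage in enumerate(stages):
    print(f"stage {i}: {[name for name, _ in stage]}")
# stage 0: ['q1', 'q5']
# stage 1: ['q3', 'q4']
# stage 2: ['q2', 'q6']
```

The robustness benefit claimed in the paper would come from the model first fitting reliable supervision before being exposed to degraded queries, rather than from this particular scheduling heuristic.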