Diff-SBSR: Learning Multimodal Feature-Enhanced Diffusion Models for Zero-Shot Sketch-Based 3D Shape Retrieval

arXiv cs.CV · April 22, 2026


Key Points

  • The paper proposes Diff-SBSR, the first exploration of text-to-image diffusion models for zero-shot sketch-based 3D shape retrieval (ZS-SBSR), a setting made hard by the absence of category supervision and the extreme sparsity of sketch inputs.
  • It uses a frozen Stable Diffusion backbone to extract multimodal, discriminative features from intermediate U-Net layers for both sketch inputs and rendered 3D views, leveraging diffusion models’ open-vocabulary capability and shape bias.
  • To mitigate sketches’ abstraction and sparsity, as well as their domain gap from natural images, without expensive retraining, the method conditions the frozen diffusion model on CLIP-derived visual features and on enriched textual guidance combining learnable soft prompts with hard descriptions generated by BLIP.
  • It introduces Circle-T loss to adaptively strengthen attraction between positive sketch–3D pairs once negatives are sufficiently separated, improving alignment under sketch noise.
  • Experiments on two public benchmarks show Diff-SBSR consistently outperforms prior state-of-the-art methods for zero-shot sketch-to-3D retrieval.
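The paper's code is not reproduced here, but the tap-and-aggregate pattern behind the frozen-backbone feature extraction can be illustrated schematically. Everything in the sketch below is a hypothetical stand-in: the random `layers` play the role of frozen U-Net blocks, the tap indices and the 12-view mean pooling are illustrative choices, and the real method operates on intermediate layers of Stable Diffusion's U-Net, not on this toy stack:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for frozen U-Net blocks: fixed random projections.
layers = [rng.standard_normal((64, 64)) * 0.1 for _ in range(4)]

def backbone_features(x, tap={1, 2}):
    """Run x through the (frozen) stack, capturing activations at `tap` layers."""
    feats = []
    for i, W in enumerate(layers):
        x = np.tanh(x @ W)          # frozen block forward pass, no gradients needed
        if i in tap:
            feats.append(x)
    # Aggregate the tapped intermediate activations into one descriptor.
    return np.concatenate(feats, axis=-1)

# One sketch descriptor vs. a 3D shape descriptor pooled over rendered views.
sketch_feat = backbone_features(rng.standard_normal(64))
view_feats = [backbone_features(rng.standard_normal(64)) for _ in range(12)]
shape_feat = np.mean(view_feats, axis=0)  # pool the 12 multi-view descriptors

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

score = cosine(sketch_feat, shape_feat)   # retrieval score for this sketch-shape pair
```

In the actual pipeline, retrieval would rank all candidate 3D shapes by this kind of similarity between the sketch descriptor and each shape's pooled multi-view descriptor.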

Abstract

This paper presents the first exploration of text-to-image diffusion models for zero-shot sketch-based 3D shape retrieval (ZS-SBSR). Existing sketch-based 3D shape retrieval methods struggle in zero-shot settings due to the absence of category supervision and the extreme sparsity of sketch inputs. Our key insight is that large-scale pretrained diffusion models inherently exhibit open-vocabulary capability and a strong shape bias, making them well suited to zero-shot visual retrieval. We leverage a frozen Stable Diffusion backbone to extract and aggregate discriminative representations from intermediate U-Net layers for both sketches and rendered 3D views. Diffusion models struggle with sketches because of their extreme abstraction and sparsity, compounded by a significant domain gap from natural images. To address this limitation without costly retraining, we introduce a multimodal feature-enhancement strategy that conditions the frozen diffusion backbone on complementary visual and textual cues from CLIP, explicitly improving its ability to capture semantic context and to concentrate on sketch contours. Specifically, we inject global and local visual features derived from a pretrained CLIP visual encoder, and incorporate enriched textual guidance by combining learnable soft prompts with hard textual descriptions generated by BLIP. Furthermore, we employ the Circle-T loss to dynamically strengthen positive-pair attraction once negative samples are sufficiently separated, thereby adapting to sketch noise and enabling more effective sketch-3D alignment. Extensive experiments on two public benchmarks demonstrate that our method consistently outperforms state-of-the-art approaches in ZS-SBSR.
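The exact Circle-T formulation is not reproduced in this summary. As a rough illustration, the sketch below starts from the standard Circle loss (Sun et al., 2020) that the name suggests it builds on, and adds a hypothetical gate that boosts positive-pair attraction once every negative similarity has dropped below a separation threshold; the paper's published loss may differ in its details:

```python
import numpy as np

def circle_t_loss(sp, sn, m=0.25, gamma=64.0, tau=0.0):
    """Circle-style loss over positive (sp) and negative (sn) cosine similarities.

    The `tau`-gated boost (strengthening positive attraction once negatives are
    separated) is a hypothetical reading of the Circle-T idea, not the paper's
    exact formulation.
    """
    sp, sn = np.asarray(sp, float), np.asarray(sn, float)
    # Standard Circle-loss optima and decision boundaries (Sun et al., 2020).
    op, on = 1.0 + m, -m
    dp, dn = 1.0 - m, m
    ap = np.maximum(op - sp, 0.0)   # adaptive weight: penalize far-from-1 positives
    an = np.maximum(sn - on, 0.0)   # adaptive weight: penalize far-from(-1) negatives
    # Hypothetical "T" gate: once all negatives fall below dn - tau,
    # double the positive term to pull matched sketch-3D pairs closer.
    boost = 2.0 if sn.max() < dn - tau else 1.0
    pos = np.exp(-gamma * boost * ap * (sp - dp)).sum()
    neg = np.exp(gamma * an * (sn - dn)).sum()
    return float(np.log1p(pos * neg))
```

Under this sketch, a well-aligned pair (high positive similarity, low negative similarity) incurs a near-zero loss, while a poorly separated pair is penalized heavily; the gate only kicks in to tighten positives after the negatives are already pushed away.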