Concept-wise Attention for Fine-grained Concept Bottleneck Models

arXiv cs.CV / 4/20/2026

📰 News · Models & Research

Key Points

  • The paper introduces CoAt-CBM, a framework for Concept Bottleneck Models (CBMs) that improves fine-grained image-to-concept alignment beyond prior approaches that rely on CLIP-style image-text alignment.
  • It addresses limitations of existing concept modeling, including pre-training bias issues (such as granularity mismatch or structural priors) and suboptimal learning caused by Binary Cross-Entropy treating concepts independently.
  • CoAt-CBM uses learnable concept-wise visual queries to extract adaptive, concept-specific visual embeddings and then produces concept score vectors for more interpretable predictions.
  • A novel concept-contrastive optimization is proposed to account for the relative importance of concept scores and strengthen alignment between predicted concepts and the underlying image content.
  • Experiments report consistent improvements over state-of-the-art CBM methods, with code planned to be released after acceptance.
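The paper's implementation details are not given in this summary, but the concept-wise visual query mechanism described above can be illustrated with a rough sketch: one learnable query per concept cross-attends over image patch features, and each resulting concept-specific visual embedding is scored against that concept's text embedding. All names, shapes, and the choice of cross-attention with cosine scoring here are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConceptWiseAttention(nn.Module):
    """Hypothetical sketch: learnable concept-wise queries cross-attend
    over patch features to yield one visual embedding per concept, which
    is scored against that concept's (e.g., CLIP) text embedding."""

    def __init__(self, num_concepts: int, dim: int, num_heads: int = 4):
        super().__init__()
        # One learnable visual query per concept (assumed design)
        self.queries = nn.Parameter(torch.randn(num_concepts, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, patch_feats: torch.Tensor, concept_text_emb: torch.Tensor):
        # patch_feats: (B, N_patches, dim); concept_text_emb: (C, dim)
        B = patch_feats.size(0)
        q = self.queries.unsqueeze(0).expand(B, -1, -1)          # (B, C, dim)
        concept_vis, _ = self.attn(q, patch_feats, patch_feats)  # (B, C, dim)
        # Cosine similarity between each concept-wise visual embedding
        # and its concept's text embedding -> concept score vector
        v = F.normalize(concept_vis, dim=-1)
        t = F.normalize(concept_text_emb, dim=-1)
        return (v * t.unsqueeze(0)).sum(-1)                      # (B, C)
```

The key difference from CLIP-style CBMs is that the query (and hence the pooled visual evidence) is adapted per concept, rather than comparing every concept against one global image embedding.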

Abstract

Recently, impressive performance has been achieved in Concept Bottleneck Models (CBMs) by utilizing the image-text alignment learned by large pre-trained vision-language models (i.e., CLIP). However, two key limitations remain in concept modeling. First, existing methods often suffer from pre-training biases, manifested as granularity misalignment or reliance on structural priors. Second, fine-tuning with a Binary Cross-Entropy (BCE) loss treats each concept independently, ignoring mutual exclusivity among concepts and leading to suboptimal alignment. To address these limitations, we propose Concept-wise Attention for Fine-grained Concept Bottleneck Models (CoAt-CBM), a novel framework that achieves adaptive, fine-grained image-concept alignment with high interpretability. Specifically, CoAt-CBM employs learnable concept-wise visual queries to adaptively obtain fine-grained concept-wise visual embeddings, which are then used to produce a concept score vector. A novel concept-contrastive optimization then guides the model to handle the relative importance of the concept scores, enabling concept predictions that faithfully reflect the image content and improving alignment. Extensive experiments demonstrate that CoAt-CBM consistently outperforms state-of-the-art methods. The code will be made available upon acceptance.
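The abstract contrasts the proposed concept-contrastive optimization with independent per-concept BCE but does not spell out the loss. One plausible instantiation (an assumption, not the paper's actual objective) normalizes scores across concepts with a softmax, so each positive concept is contrasted against all other concepts of the same image and relative score ordering is optimized directly:

```python
import torch
import torch.nn.functional as F

def concept_contrastive_loss(scores: torch.Tensor,
                             labels: torch.Tensor,
                             tau: float = 0.07) -> torch.Tensor:
    """Hypothetical sketch: unlike BCE, which scores each concept in
    isolation, a softmax over the concept axis makes positives compete
    with negatives, encoding relative importance among concept scores.

    scores: (B, C) concept score vector; labels: (B, C) in {0, 1}.
    """
    log_p = F.log_softmax(scores / tau, dim=-1)   # normalize across concepts
    n_pos = labels.sum(-1).clamp(min=1)           # positives per image
    loss = -(labels * log_p).sum(-1) / n_pos      # mean log-prob of positives
    return loss.mean()
```

Under this formulation, raising the score of an absent concept increases the loss even if every present concept keeps its score, which is exactly the cross-concept coupling that independent BCE lacks.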