Prototype-Grounded Concept Models for Verifiable Concept Alignment

arXiv cs.LG / April 20, 2026


Key Points

  • Concept Bottleneck Models (CBMs) improve deep learning interpretability by using human-understandable concepts, but they lack a mechanism to confirm that the learned concepts match the intended human meaning.
  • The paper introduces Prototype-Grounded Concept Models (PGCMs), which ground each concept in learned visual prototypes (image parts) that act as explicit evidence.
  • This prototype grounding makes concept semantics directly inspectable and allows targeted human intervention at the prototype level to fix misalignments.
  • Experiments show PGCMs achieve predictive performance comparable to state-of-the-art CBMs while substantially improving transparency, interpretability, and intervenability.

Abstract

Concept Bottleneck Models (CBMs) aim to improve interpretability in deep learning by structuring predictions through human-understandable concepts, but they provide no way to verify whether the learned concepts align with the meaning humans intend, undermining the very interpretability they promise. We introduce Prototype-Grounded Concept Models (PGCMs), which ground concepts in learned visual prototypes: image parts that serve as explicit evidence for the concepts. This grounding enables direct inspection of concept semantics and supports targeted human intervention at the prototype level to correct misalignments. Empirically, PGCMs match the predictive performance of state-of-the-art CBMs while substantially improving transparency, interpretability, and intervenability.
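
For intuition, here is a minimal PyTorch-style sketch of how such a prototype-grounded concept bottleneck could be wired: ProtoPNet-style part prototypes score image patches, prototype activations feed a CBM-style concept layer, and the label head reads only the concepts. The paper does not publish this code; the class name, shapes, cosine-similarity scoring, and the `proto_mask` intervention hook are all assumptions made for illustration.

```python
# Hypothetical sketch of a prototype-grounded concept bottleneck.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PGCMSketch(nn.Module):
    """Backbone -> part prototypes -> concepts -> label (illustrative only)."""

    def __init__(self, backbone, feat_dim, n_prototypes, n_concepts, n_classes):
        super().__init__()
        self.backbone = backbone  # CNN producing a feature map (B, D, H, W)
        # Learned part prototypes, one D-dimensional vector each.
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, feat_dim))
        # Each concept score is a weighted combination of prototype
        # activations, so every concept is tied to inspectable visual evidence.
        self.proto_to_concept = nn.Linear(n_prototypes, n_concepts)
        # The label head reads only the concept bottleneck, as in a CBM.
        self.concept_to_label = nn.Linear(n_concepts, n_classes)

    def forward(self, x, proto_mask=None):
        fmap = self.backbone(x)                    # (B, D, H, W)
        patches = fmap.flatten(2).transpose(1, 2)  # (B, H*W, D)
        # Cosine similarity between every image patch and every prototype.
        p = F.normalize(patches, dim=-1)
        q = F.normalize(self.prototypes, dim=-1)
        sims = p @ q.t()                           # (B, H*W, K)
        # Max over spatial locations: the strongest evidence per prototype.
        proto_act, _ = sims.max(dim=1)             # (B, K)
        # Prototype-level intervention: a human who sees that a prototype's
        # highest-activating patches do not match the intended concept can
        # switch that prototype off (or reweight it).
        if proto_mask is not None:
            proto_act = proto_act * proto_mask
        concepts = torch.sigmoid(self.proto_to_concept(proto_act))
        logits = self.concept_to_label(concepts)
        return logits, concepts, proto_act


# Toy usage: disable one misaligned prototype before predicting.
backbone = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())
model = PGCMSketch(backbone, feat_dim=64, n_prototypes=32,
                   n_concepts=12, n_classes=5)
x = torch.randn(2, 3, 64, 64)
mask = torch.ones(32)
mask[7] = 0.0  # human verdict: prototype 7 encodes the wrong evidence
logits, concepts, proto_act = model(x, proto_mask=mask)
```

Under this reading, inspecting a concept amounts to retrieving the training patches where its prototypes activate most strongly, and an intervention is a targeted edit to those prototype activations rather than to an opaque concept score, which is what makes the alignment between learned and intended concepts verifiable.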