CI-CBM: Class-Incremental Concept Bottleneck Model for Interpretable Continual Learning
arXiv cs.LG / April 17, 2026
Key Points
- Catastrophic forgetting is a central problem in continual learning, and it is especially severe in class-incremental learning (CIL), where models must learn new classes without losing knowledge of old ones.
- The paper proposes CI-CBM (Class-Incremental Concept Bottleneck Model), which preserves interpretability while combating forgetting through concept regularization and pseudo-concept generation (see the sketch after this list).
- Across evaluations on seven datasets, CI-CBM matches the performance of black-box models and outperforms prior interpretable CIL methods by an average of 36% in accuracy.
- The method yields both input-level interpretable decisions and global, human-understandable decision rules, and it works in both pretrained and from-scratch training settings.
- The authors release the code publicly on GitHub for replication and further experimentation.
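Since the article only names the two mechanisms, here is a minimal PyTorch sketch of how a concept bottleneck model might combine concept regularization and pseudo-concept replay in a class-incremental setting. The layer sizes, the loss weights `lam`/`gamma`, and the function `incremental_loss` are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConceptBottleneckModel(nn.Module):
    """Backbone -> concept activations -> class logits; the classifier sees only concepts."""

    def __init__(self, in_dim: int, n_concepts: int, n_classes: int):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.concept_head = nn.Linear(128, n_concepts)      # predicts concept activations
        self.classifier = nn.Linear(n_concepts, n_classes)  # decides from concepts alone

    def forward(self, x):
        concepts = torch.sigmoid(self.concept_head(self.backbone(x)))
        return concepts, self.classifier(concepts)


def incremental_loss(model, old_model, x, concept_targets, class_targets,
                     pseudo_concepts=None, pseudo_labels=None, lam=1.0, gamma=1.0):
    """Current-task loss plus two hypothesized anti-forgetting terms:
    (i) concept regularization, distilling concept activations from the frozen
        previous-task model, and
    (ii) replay of stored/generated pseudo-concept vectors for old classes.
    """
    concepts, logits = model(x)
    loss = F.cross_entropy(logits, class_targets)
    loss = loss + F.binary_cross_entropy(concepts, concept_targets)
    if old_model is not None:                       # concept regularization
        with torch.no_grad():
            old_concepts, _ = old_model(x)
        loss = loss + lam * F.mse_loss(concepts, old_concepts)
    if pseudo_concepts is not None:                 # pseudo-concept replay
        replay_logits = model.classifier(pseudo_concepts)
        loss = loss + gamma * F.cross_entropy(replay_logits, pseudo_labels)
    return loss
```

Because the classifier is a single linear layer over concept activations, its weights can be read off as global decision rules, which is consistent with the article's claim that the model is interpretable both per input and at the level of global rules.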
