AI Navigate

From Concepts to Judgments: Interpretable Image Aesthetic Assessment

arXiv cs.CV / 3/20/2026


Key Points

  • The paper addresses the interpretability gap in image aesthetic assessment (IAA) by proposing an interpretable framework based on human-understandable aesthetic concepts.
  • It learns these concepts in a human-accessible way, constructing a subspace that serves as the foundation for the model's explanations of its judgments.
  • It introduces a residual predictor to capture nuanced influences on aesthetics beyond explicit concepts.
  • Experiments on photographic and artistic datasets show the method achieves competitive predictive performance while providing transparent, human-understandable aesthetic judgments.

Abstract

Image aesthetic assessment (IAA) aims to predict the aesthetic quality of images as perceived by humans. While recent IAA models achieve strong predictive performance, they offer little insight into the factors driving their predictions. Yet for users, understanding why an image is considered pleasing or not is as valuable as the score itself, motivating growing interest in interpretability within IAA. When humans evaluate aesthetics, they naturally rely on high-level cues to justify their judgments. Motivated by this observation, we propose an interpretable IAA framework grounded in human-understandable aesthetic concepts. We learn these concepts in an accessible manner, constructing a subspace that forms the foundation of an inherently interpretable model. To capture nuanced influences on aesthetic perception beyond explicit concepts, we introduce a simple yet effective residual predictor. Experiments on photographic and artistic datasets demonstrate that our method achieves competitive predictive performance while offering transparent, human-understandable aesthetic judgments.
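The architecture described above (a score composed of an interpretable concept term plus a residual term) can be sketched minimally as follows. This is an illustrative assumption of how such a model might be wired, not the paper's implementation; all names, dimensions, and the linear residual head are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not taken from the paper)
d_feat, n_concepts = 512, 8

# Learned concept directions spanning an interpretable subspace:
# one unit-norm row per human-understandable aesthetic concept.
C = rng.normal(size=(n_concepts, d_feat))
C /= np.linalg.norm(C, axis=1, keepdims=True)

w_concept = rng.normal(size=n_concepts)   # weights tying concepts to the score
w_residual = rng.normal(size=d_feat) * 0.01  # simple linear residual head (sketch)

def aesthetic_score(feat):
    """Score = transparent concept-based term + residual term.

    The concept activations explain the judgment; the residual term
    stands in for nuanced influences beyond the explicit concepts.
    """
    activations = C @ feat                  # per-concept activations
    concept_term = w_concept @ activations  # interpretable part of the score
    residual_term = w_residual @ feat       # captures what concepts miss
    return concept_term + residual_term, activations

feat = rng.normal(size=d_feat)  # stands in for an image embedding
score, acts = aesthetic_score(feat)
print(float(score), acts.shape)
```

At prediction time, the per-concept activations can be surfaced alongside the score, which is what makes the judgment inspectable rather than a bare number.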