Learning to Look before Learning to Like: Incorporating Human Visual Cognition into Aesthetic Quality Assessment
arXiv cs.CV / 4/20/2026
Key Points
- Automated Aesthetic Quality Assessment (AQA) often models images as static pixel data and relies mainly on semantic perception, which does not fully match how humans form aesthetic judgments through dynamic visual exploration.
- The paper proposes AestheticNet, a cognitive-inspired AQA framework that adds a human-like visual attention pathway to complement a semantic pathway (e.g., CLIP) using cross-attention fusion.
- The attention pathway is implemented as a gaze-aligned visual encoder (GAVE), pre-trained offline on eye-tracking data via resource-efficient contrastive gaze alignment, capturing factors such as foreground/background structure and lighting conditions.
- Experiments, supported by statistical hypothesis testing, show that AestheticNet outperforms semantic-only baselines and that the gaze module acts as a model-agnostic corrector when attached to different AQA backbones.
- The authors provide code at the linked GitHub repository to support reuse and further evaluation of the approach.
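To make the fusion idea above concrete, here is a minimal sketch of cross-attention fusion between a semantic pathway and a gaze pathway, using semantic tokens as queries over gaze-pathway tokens as keys and values. This is an illustrative NumPy toy, not the authors' implementation: the function name `cross_attention_fuse`, the residual combination, and all shapes are assumptions for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fuse(semantic, gaze):
    """Hypothetical fusion step: semantic tokens attend to gaze tokens.

    semantic: (N_s, d) features from the semantic pathway (e.g. CLIP)
    gaze:     (N_g, d) features from the gaze-aligned encoder
    Returns fused features of shape (N_s, d).
    """
    d = semantic.shape[-1]
    scores = semantic @ gaze.T / np.sqrt(d)   # (N_s, N_g) attention logits
    attn = softmax(scores, axis=-1)           # rows sum to 1
    return semantic + attn @ gaze             # residual fusion, keeps semantics

# toy example: 4 semantic tokens, 6 gaze tokens, feature dim 8
rng = np.random.default_rng(0)
sem = rng.normal(size=(4, 8))
gz = rng.normal(size=(6, 8))
fused = cross_attention_fuse(sem, gz)
```

Because the attention weights are a convex combination over gaze tokens and the result is added residually, the fused representation preserves the semantic features while injecting gaze-derived structure, which matches the paper's framing of the gaze pathway as a complement rather than a replacement.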
