Learning to Look before Learning to Like: Incorporating Human Visual Cognition into Aesthetic Quality Assessment

arXiv cs.CV / 4/20/2026


Key Points

  • Automated Aesthetic Quality Assessment (AQA) often models images as static pixel data and relies mainly on semantic perception, which does not fully match how humans form aesthetic judgments through dynamic visual exploration.
  • The paper proposes AestheticNet, a cognitive-inspired AQA framework that adds a human-like visual attention pathway to complement a semantic pathway (e.g., CLIP) using cross-attention fusion.
  • The attention pathway is implemented as a gaze-aligned visual encoder (GAVE) pre-trained offline on eye-tracking data with resource-efficient contrast gaze alignment, capturing factors like foreground/background structure and lighting conditions.
  • Experiments with hypothesis testing show AestheticNet improves performance compared with semantic-only baselines and that the gaze module works as a model-agnostic corrector across different AQA backbones.
  • The authors provide code at the linked GitHub repository to support reuse and further evaluation of the approach.
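The two-pathway design described above can be sketched in a few lines: semantic tokens act as queries and gaze-pathway tokens as keys/values in a cross-attention step, with the result added back to the semantic features. This is a minimal numpy illustration under assumptions of my own (single-head attention, residual fusion, and the function/variable names `cross_attention`, `semantic`, `gaze`); the paper's actual fusion module may differ in detail.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, context, d):
    # queries: tokens from the semantic pathway (e.g. a frozen CLIP encoder)
    # context: tokens from the gaze-aligned visual encoder (GAVE)
    scores = queries @ context.T / np.sqrt(d)   # scaled dot-product scores
    weights = softmax(scores, axis=-1)          # attention over gaze tokens
    return weights @ context                    # gaze-informed summary per query

rng = np.random.default_rng(0)
d = 8
semantic = rng.normal(size=(4, d))  # 4 toy semantic tokens
gaze = rng.normal(size=(6, d))      # 6 toy gaze-pathway tokens
# residual fusion (an assumption; stands in for the paper's fusion block)
fused = semantic + cross_attention(semantic, gaze, d)
print(fused.shape)  # (4, 8)
```

The fused tokens would then feed the aesthetic-score head, letting the attention pathway act as the "corrector" on top of semantic features that the key points describe.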

Abstract

Automated Aesthetic Quality Assessment (AQA) treats images primarily as static pixel vectors, aligning predictions with human-rating scores largely through semantic perception. However, this paradigm diverges from human aesthetic cognition, which arises from dynamic visual exploration shaped by scanning paths, processing fluency, and the interplay between bottom-up salience and top-down intention. We introduce AestheticNet, a novel cognitive-inspired AQA paradigm that integrates human-like visual cognition and semantic perception with a two-pathway architecture. The visual attention pathway, implemented as a gaze-aligned visual encoder (GAVE) pre-trained offline on eye-tracking data using resource-efficient contrast gaze alignment, models attention in the human visual system. This pathway augments the semantic pathway, which uses a fixed semantic encoder such as CLIP, through cross-attention fusion. Visual attention provides a cognitive prior reflecting foreground/background structure, color cascade, brightness, and lighting, all of which are determinants of aesthetic perception beyond semantics. Experiments validated by hypothesis testing show a consistent improvement over the semantic-only baselines, and demonstrate the gaze module as a model-agnostic corrector compatible with diverse AQA backbones, supporting the necessity and modularity of human-like visual cognition for AQA. Our code is available at https://github.com/keepgallop/AestheticNet.
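
The abstract's "resource-efficient contrast gaze alignment" pre-training is not detailed here, but a common way to align two feature spaces contrastively is a symmetric InfoNCE objective over matched pairs, here image-patch embeddings and gaze-conditioned embeddings. The sketch below is an assumption-laden stand-in (the function name `info_nce`, the temperature value, and the symmetric form are mine, not the paper's); it only illustrates the general contrastive-alignment idea.

```python
import numpy as np

def l2norm(x):
    # unit-normalize feature rows so logits are cosine similarities
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def info_nce(img_feats, gaze_feats, temperature=0.07):
    # Symmetric InfoNCE: the i-th image embedding should match the i-th
    # gaze-derived embedding and repel all others in the batch.
    logits = l2norm(img_feats) @ l2norm(gaze_feats).T / temperature

    def ce_diag(lg):
        # cross-entropy with the diagonal (matched pair) as the target
        m = lg.max(axis=1, keepdims=True)
        lse = (m + np.log(np.exp(lg - m).sum(axis=1, keepdims=True))).ravel()
        return np.mean(lse - np.diag(lg))

    return 0.5 * (ce_diag(logits) + ce_diag(logits.T))

rng = np.random.default_rng(1)
img = rng.normal(size=(16, 32))   # toy image-patch embeddings
gz = rng.normal(size=(16, 32))    # toy gaze-conditioned embeddings
loss = info_nce(img, gz)          # high for random pairings
```

Perfectly aligned features (`info_nce(img, img)`) drive this loss toward zero, which is the behavior a gaze-alignment pre-training stage would exploit before the encoder is plugged into the AQA pipeline.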