AesRM: Improving Video Aesthetics with Expert-Level Feedback

arXiv cs.CV / 5/1/2026

📰 News · Models & Research

Key Points

  • The paper introduces AesRM and an evaluation framework that treats video aesthetics as a hierarchical rubric covering three dimensions—Visual Aesthetics (VA), Visual Fidelity (VF), and Visual Plausibility (VP)—with 15 fine-grained criteria such as shot composition.
  • It creates large-scale expert-annotated preference data and a benchmark, AesVideo-Bench, using about 2,500 video pairs labeled by experts across VA, VF, and VP.
  • Two reward-model variants are proposed: AesRM-Base predicts pairwise preferences for efficient reward signals, while AesRM-CoT also produces criterion-aligned chain-of-thought to improve interpretability.
  • Training uses a three-stage progressive scheme (Atomic Aesthetic Capability Learning, Cold-Start, and GRPO), plus self-consistency-based CoT synthesis and CoT-based process rewards to strengthen CoT quality and evaluation accuracy.
  • Experiments report that AesRM outperforms existing baselines in both accuracy and robustness, and that aligning Wan2.2 with AesRM yields larger aesthetic gains than other aesthetic reward models.
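The GRPO stage mentioned above relies on group-relative rewards: each sampled response is scored, and its advantage is computed relative to the other samples in its group. A minimal sketch of that standard GRPO normalization (illustrative only; the paper's exact reward shaping, including its CoT-based process rewards, is not reproduced here):

```python
from statistics import mean, pstdev

def grpo_advantages(rewards, eps=1e-6):
    """Group-relative advantages, GRPO-style: center each sampled
    response's reward on the group mean and scale by the group std."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Example: reward-model scores for 4 sampled evaluations of one video pair
advs = grpo_advantages([0.2, 0.8, 0.5, 0.5])
```

Advantages within a group always sum to zero, so above-average samples are reinforced and below-average ones are penalized without needing a learned value baseline.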

Abstract

Despite rapid advances in photorealistic video generation, real-world applications such as filmmaking require video aesthetics, e.g., harmonious colors and cinematic lighting, beyond visual fidelity. Prior work on visual aesthetics largely focuses on images, often reducing aesthetics to coarse definitions, e.g., visual pleasure, without a rigorous and systematic evaluation. To improve video aesthetics, we propose a hierarchical rubric that decomposes video aesthetics into three core dimensions, Visual Aesthetics (VA), Visual Fidelity (VF), and Visual Plausibility (VP), with 15 fine-grained criteria, e.g., shot composition. This framework enables a large-scale expert-annotated preference dataset and an evaluation benchmark, AesVideo-Bench, containing about 2500 video pairs with expert annotations on VA, VF, and VP. We then build a family of Video Aesthetic Reward Models (AesRM): AesRM-Base, which directly predicts pairwise preferences on these dimensions to provide efficient post-training rewards, and AesRM-CoT, which additionally generates CoT aligned with all 15 criteria to improve assessment interpretability. Specifically, we train AesRM with a three-stage progressive scheme: (1) Atomic Aesthetic Capability Learning, which strengthens AesRM's recognition of fundamental aesthetic concepts, e.g., accurately identifying centered composition; (2) Cold-Start, aligning the model with structured reasoning protocols; and (3) GRPO, further improving evaluation accuracy. To enhance AesRM-CoT, we additionally propose self-consistency-based CoT synthesis to improve CoT quality and design CoT-based process rewards during GRPO. Extensive experiments show AesRM outperforms baselines on multiple aesthetics benchmarks and is more robust, with lower position bias. Finally, we align Wan2.2 with AesRM and observe clear aesthetic gains over existing aesthetic reward models.
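AesRM-Base is described as directly predicting pairwise preferences to serve as a post-training reward. The standard objective for such pairwise reward models is a Bradley-Terry loss over (chosen, rejected) scores; a minimal sketch under that assumption (the paper does not specify its loss, and the function names here are illustrative):

```python
import math

def bt_loss(score_chosen, score_rejected):
    """Bradley-Terry pairwise preference loss:
    -log sigmoid(s_chosen - s_rejected), computed in a
    numerically stable branch for either sign of the margin."""
    margin = score_chosen - score_rejected
    if margin >= 0:
        return math.log1p(math.exp(-margin))
    return -margin + math.log1p(math.exp(margin))

# The wider the margin in favor of the expert-preferred video,
# the smaller the loss.
```

Under this formulation, training on the expert-annotated pairs pushes the model to score the preferred video of each pair higher, and the resulting scalar score can then be reused directly as the reward signal during post-training of the video generator.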