
Enhancing Image Aesthetics with Dual-Conditioned Diffusion Models Guided by Multimodal Perception

arXiv cs.CV / 3/13/2026


Key Points

  • The paper proposes Dual-supervised Image Aesthetic Enhancement (DIAE), a diffusion-based model that uses multimodal aesthetic perception to guide image editing for improved aesthetics.
  • It introduces Multimodal Aesthetic Perception (MAP) to turn ambiguous aesthetic instructions into explicit guidance via detailed aesthetic attributes and multimodal control signals from text–image pairs.
  • To address the lack of perfectly paired data, the authors collect an imperfectly-paired dataset (IIAEData) with identical semantics but varying aesthetics and employ a dual-branch supervision framework for weakly supervised training.
  • Experimental results show that DIAE outperforms baselines in both image aesthetic scores and content consistency, demonstrating the effectiveness of the proposed approach.
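The MAP idea above, turning one vague instruction into explicit, attribute-level guidance, can be illustrated with a toy sketch. The attribute names and prompt template below are illustrative assumptions, not taken from the paper:

```python
# Hypothetical attribute list; the paper's actual aesthetic attributes
# are not specified in this summary.
ATTRIBUTES = ["color", "lighting", "composition", "contrast"]

def expand_instruction(instruction: str = "improve the aesthetics") -> list[str]:
    """Expand an ambiguous aesthetic instruction into one explicit,
    standardized prompt per aesthetic attribute (MAP-style, sketched)."""
    return [f"{instruction}: adjust {attr}" for attr in ATTRIBUTES]
```

Each expanded prompt could then be paired with a reference image for that attribute to form the multimodal control signal the key points describe.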

Abstract

Image aesthetic enhancement aims to perceive aesthetic deficiencies in images and perform corresponding editing operations, which is highly challenging and requires the model to possess creativity and aesthetic perception capabilities. Although recent advancements in image editing models have significantly enhanced their controllability and flexibility, they struggle with enhancing image aesthetics. The primary challenges are twofold: first, following editing instructions with aesthetic perception is difficult, and second, there is a scarcity of "perfectly-paired" images that have consistent content but distinct aesthetic qualities. In this paper, we propose Dual-supervised Image Aesthetic Enhancement (DIAE), a diffusion-based generative model with multimodal aesthetic perception. First, DIAE incorporates Multimodal Aesthetic Perception (MAP) to convert ambiguous aesthetic instructions into explicit guidance by (i) employing detailed, standardized aesthetic instructions across multiple aesthetic attributes, and (ii) utilizing multimodal control signals derived from text-image pairs that maintain consistency within the same aesthetic attribute. Second, to mitigate the lack of "perfectly-paired" images, we collect an "imperfectly-paired" dataset called IIAEData, consisting of images with varying aesthetic qualities while sharing identical semantics. To better leverage the weakly matched nature of IIAEData during training, a dual-branch supervision framework is also introduced for weakly supervised image aesthetic enhancement. Experimental results demonstrate that DIAE outperforms the baselines and obtains superior image aesthetic scores and image content consistency scores.
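The abstract does not spell out how the dual-branch supervision combines the imperfectly-paired signal with content preservation, but a common shape for such weakly supervised objectives is a weighted sum of two terms: one branch pulling the output toward the higher-aesthetics pair image, another keeping it close to the source to preserve semantics. The sketch below assumes exactly this form (weights, loss choices, and branch roles are assumptions, not the paper's method):

```python
def dual_branch_loss(pred, target, source, w_aes=1.0, w_content=0.5):
    """Hypothetical dual-branch weakly supervised objective.

    pred    -- model output pixels (flat list of floats)
    target  -- imperfectly-paired, higher-aesthetics image (weak supervision)
    source  -- original input image (semantic-consistency anchor)
    """
    # mean squared error over flat pixel lists
    mse = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    aesthetic_term = mse(pred, target)   # branch 1: weak aesthetic supervision
    content_term = mse(pred, source)     # branch 2: content-consistency penalty
    return w_aes * aesthetic_term + w_content * content_term
```

Under this reading, `w_content` trades off how far the edit may drift from the source semantics while chasing the weakly paired aesthetic target; the reported gains in both aesthetic and content-consistency scores suggest the paper balances two such signals.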