PortraitCraft: A Benchmark for Portrait Composition Understanding and Generation

arXiv cs.CV / 4/7/2026


Key Points

  • PortraitCraft is introduced as a unified benchmark for structured portrait composition understanding and controllable portrait generation, addressing gaps left by prior datasets that focus on coarse aesthetic scores or unconstrained generation.
  • The benchmark is built on ~50,000 curated real portrait images with multi-level supervision, including global composition scores, annotations for 13 composition attributes, attribute-level explanation texts, visual question answering (VQA) pairs, and composition-oriented descriptions for generation.
  • It defines two linked benchmark task families: composition understanding (score prediction, fine-grained attribute reasoning, and image-grounded VQA) and composition-aware generation from explicit structured composition descriptions.
  • The authors provide standardized evaluation protocols and baseline results using representative multimodal models, targeting more interpretable aesthetic assessment and attribute-level reasoning.
  • By combining understanding and generation under explicit composition constraints, PortraitCraft is positioned to support systematic research into interpretable, composition-controlled portrait synthesis.
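The multi-level supervision described above can be pictured as one structured record per image. The sketch below is a hypothetical layout under assumed field names (`composition_score`, `attributes`, `vqa_pairs`, etc.); the benchmark's actual schema is not published in this summary.

```python
from dataclasses import dataclass

# Hypothetical per-image annotation record for PortraitCraft.
# All field names here are illustrative assumptions, not the real schema.
@dataclass
class PortraitAnnotation:
    image_id: str
    composition_score: float          # global composition score
    attributes: dict[str, str]        # 13 composition attributes -> labels
    explanations: dict[str, str]      # attribute-level explanation texts
    vqa_pairs: list[tuple[str, str]]  # (question, answer) pairs
    generation_caption: str           # composition-oriented description

    def is_complete(self) -> bool:
        """True when all 13 composition attributes carry a label."""
        return len(self.attributes) == 13

record = PortraitAnnotation(
    image_id="portrait_000001",
    composition_score=7.5,
    attributes={f"attr_{i:02d}": "label" for i in range(13)},
    explanations={},
    vqa_pairs=[("Where is the subject placed?", "On the left third line.")],
    generation_caption="A half-body portrait framed on the left third...",
)
print(record.is_complete())  # True
```

Keeping the understanding signals (score, attributes, explanations, VQA) and the generation caption in one record is what lets the two task families share a single data source.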

Abstract

Portrait composition plays a central role in portrait aesthetics and visual communication, yet existing datasets and benchmarks mainly focus on coarse aesthetic scoring, generic image aesthetics, or unconstrained portrait generation. This limits systematic research on structured portrait composition analysis and controllable portrait generation under explicit composition requirements. In this paper, we introduce PortraitCraft, a unified benchmark for portrait composition understanding and generation. PortraitCraft is built on a dataset of approximately 50,000 curated real portrait images with structured multi-level supervision, including global composition scores, annotations over 13 composition attributes, attribute-level explanation texts, visual question answering pairs, and composition-oriented textual descriptions for generation. Based on this dataset, we establish two complementary benchmark tasks for composition understanding and composition-aware generation within a unified framework. The first evaluates portrait composition understanding through score prediction, fine-grained attribute reasoning, and image-grounded visual question answering, while the second evaluates portrait generation from structured composition descriptions under explicit composition constraints. We further define standardized evaluation protocols and provide reference baseline results with representative multimodal models. PortraitCraft provides a comprehensive benchmark for future research on fine-grained portrait understanding, interpretable aesthetic assessment, and controllable portrait generation.