
Correlation-Weighted Multi-Reward Optimization for Compositional Generation

arXiv cs.AI / 3/20/2026

💬 Opinion · Models & Research

Key Points

  • Correlation-Weighted Multi-Reward Optimization introduces a framework that weights concept rewards based on their correlation, addressing interference and balancing competing signals in compositional generation.
  • The method decomposes prompts into concept groups (objects, attributes, relations) and uses dedicated reward models to provide per-concept signals before reweighting them adaptively.
  • It emphasizes hard-to-satisfy or conflicting concepts by increasing their weights, guiding optimization to consistently satisfy all requested attributes across samples.
  • Experiments show improvements on challenging multi-concept benchmarks (ConceptMix, GenEval 2, T2I-CompBench) when applying the approach to diffusion models SD3.5 and FLUX.1-dev.

Abstract

Text-to-image models produce images that align well with natural language prompts, but compositional generation has long been a central challenge. Models often struggle to satisfy multiple concepts within a single prompt, frequently omitting some of them and achieving only partial success. Such failures highlight the difficulty of jointly optimizing multiple concepts during reward optimization, where competing concepts can interfere with one another. To address this limitation, we propose Correlation-Weighted Multi-Reward Optimization, a framework that leverages the correlation structure among concept rewards to adaptively weight each concept during optimization. By accounting for interactions among concepts, our method balances competing reward signals and emphasizes concepts that are partially satisfied yet inconsistently generated across samples, improving compositional generation. Specifically, we decompose multi-concept prompts into pre-defined concept groups (e.g., objects, attributes, and relations) and obtain reward signals from dedicated reward models for each concept. We then adaptively reweight these rewards, assigning higher weights to conflicting or hard-to-satisfy concepts via correlation-based difficulty estimation. By focusing optimization on the most challenging concepts within each group, our method encourages the model to satisfy all requested attributes simultaneously and consistently. We apply our approach to train state-of-the-art diffusion models, SD3.5 and FLUX.1-dev, and demonstrate consistent improvements on challenging multi-concept benchmarks, including ConceptMix, GenEval 2, and T2I-CompBench.
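The correlation-based reweighting described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the exact difficulty estimator and weighting rule are not specified here, so this sketch assumes per-concept rewards in [0, 1] and uses mean off-diagonal correlation (negated, as a conflict signal) plus low mean reward (as a hardness signal) to upweight concepts via a softmax. The function name and the temperature parameter `tau` are hypothetical.

```python
import numpy as np

def correlation_weighted_reward(rewards: np.ndarray, tau: float = 1.0) -> np.ndarray:
    """Combine per-concept rewards into one scalar reward per sample.

    rewards: (n_samples, n_concepts) array of scores from dedicated
    per-concept reward models (assumed to lie in [0, 1]).
    """
    n_samples, n_concepts = rewards.shape

    # Correlation structure among concept rewards across the batch of samples.
    corr = np.corrcoef(rewards, rowvar=False)  # (n_concepts, n_concepts)

    # Conflict proxy: a concept whose reward correlates negatively with the
    # others tends to compete with them, so negate its mean off-diagonal
    # correlation to make conflicting concepts score higher.
    conflict = -(corr.sum(axis=1) - 1.0) / (n_concepts - 1)

    # Hardness proxy: concepts with low average reward are hard to satisfy.
    hardness = 1.0 - rewards.mean(axis=0)

    # Softmax over (conflict + hardness) yields weights that emphasize
    # conflicting or hard-to-satisfy concepts.
    logits = (conflict + hardness) / tau
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()

    # Weighted sum is the scalar reward used for optimization.
    return rewards @ weights
```

In a reward-optimization loop, the returned per-sample scalars would replace a naive unweighted average of the concept rewards, so gradient signal concentrates on the concepts that the model satisfies least consistently.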