coDrawAgents: A Multi-Agent Dialogue Framework for Compositional Image Generation

arXiv cs.CV / 3/16/2026

📰 News · Models & Research

Key Points

  • The paper introduces coDrawAgents, a multi-agent dialogue framework for compositional image generation with four specialized agents: Interpreter, Planner, Checker, and Painter.
  • It supports two modes: a direct text-to-image pathway and a layout-aware mode where the Interpreter parses prompts into attribute-rich object descriptors and groups objects by semantic priority for joint generation.
  • The Planner uses a divide-and-conquer strategy to propose layouts for objects at the same priority level while grounding decisions in the evolving canvas context.
  • The Checker provides explicit error correction by validating spatial consistency and attribute alignment and refining layouts before rendering.
  • Experiments on GenEval and DPG-Bench show substantial improvements in text-image alignment, spatial accuracy, and attribute binding over existing methods.
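The four-agent loop above can be sketched in a few lines of Python. This is a hypothetical toy illustration, not code from the paper: every class and function name is invented, and unit-square bounding boxes stand in for real layout planning and image rendering. The structure it shows is the one the key points describe: the Interpreter groups descriptors by priority, the Planner proposes layouts per group conditioned on the evolving canvas, the Checker repairs layouts before rendering, and the Painter commits each group to the canvas.

```python
from dataclasses import dataclass
from itertools import groupby

@dataclass
class ObjectDescriptor:
    name: str
    attributes: list
    priority: int  # lower value = more salient, planned earlier

@dataclass
class Box:
    x: float
    y: float
    w: float
    h: float

def interpret(objects):
    """Interpreter (toy): sort descriptors and group them by semantic priority."""
    ordered = sorted(objects, key=lambda o: o.priority)
    return [list(g) for _, g in groupby(ordered, key=lambda o: o.priority)]

def plan(group, canvas):
    """Planner (toy): divide and conquer -- lay out one priority group at a time,
    offset past whatever already occupies the canvas."""
    x0 = max((b.x + b.w for _, b in canvas), default=0.0)
    return [(obj, Box(x0 + i * 0.3, 0.1, 0.25, 0.25)) for i, obj in enumerate(group)]

def check(layout):
    """Checker (toy): explicit error correction -- clamp boxes that run off
    the unit canvas before they are rendered."""
    for _, b in layout:
        b.x = min(max(b.x, 0.0), 1.0 - b.w)
        b.y = min(max(b.y, 0.0), 1.0 - b.h)
    return layout

def paint(layout, canvas):
    """Painter (toy): 'render' by committing the checked boxes to the canvas,
    giving later iterations richer context."""
    canvas.extend(layout)
    return canvas

def co_draw(objects):
    canvas = []
    for group in interpret(objects):
        canvas = paint(check(plan(group, canvas)), canvas)
    return canvas
```

For example, `co_draw([ObjectDescriptor("cat", ["black"], 1), ObjectDescriptor("dog", ["brown"], 1), ObjectDescriptor("ball", ["red"], 2)])` would place the two priority-1 objects jointly, then position the ball relative to the occupied canvas.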

Abstract

Text-to-image generation has advanced rapidly, but existing models still struggle with faithfully composing multiple objects and preserving their attributes in complex scenes. We propose coDrawAgents, an interactive multi-agent dialogue framework with four specialized agents (Interpreter, Planner, Checker, and Painter) that collaborate to improve compositional generation. The Interpreter adaptively decides between a direct text-to-image pathway and a layout-aware multi-agent process. In the layout-aware mode, it parses the prompt into attribute-rich object descriptors, ranks them by semantic salience, and groups objects with the same semantic priority level for joint generation. Guided by the Interpreter, the Planner adopts a divide-and-conquer strategy, incrementally proposing layouts for objects at the same semantic priority level while grounding decisions in the evolving visual context of the canvas. The Checker introduces an explicit error-correction mechanism by validating spatial consistency and attribute alignment and refining layouts before they are rendered. Finally, the Painter synthesizes the image step by step, incorporating newly planned objects into the canvas to provide richer context for subsequent iterations. Together, these agents address three key challenges: reducing layout complexity, grounding planning in visual context, and enabling explicit error correction. Extensive experiments on the GenEval and DPG-Bench benchmarks demonstrate that coDrawAgents substantially improves text-image alignment, spatial accuracy, and attribute binding compared to existing methods.