Unify-Agent: A Unified Multimodal Agent for World-Grounded Image Synthesis

arXiv cs.CV / 4/1/2026


Key Points

  • The paper introduces Unify-Agent, a unified multimodal agent that tackles world-grounded image synthesis by reframing generation as an agentic pipeline (prompt understanding, evidence searching, grounded recaptioning, and synthesis).
  • It reports a tailored training approach using a multimodal data pipeline and 143K curated agent trajectories to supervise the full reasoning/search/generation process.
  • The work adds FactIP, a benchmark spanning 12 categories of culturally significant and long-tail factual concepts that explicitly requires external knowledge grounding.
  • Experimental results claim Unify-Agent improves substantially over its base unified multimodal model across multiple benchmarks and real-world generation tasks, while narrowing the gap to closed-source models’ world-knowledge capabilities.

Abstract

Unified multimodal models provide a natural and promising architecture for understanding diverse and complex real-world knowledge while generating high-quality images. However, they still rely primarily on frozen parametric knowledge, which makes them struggle with real-world image generation involving long-tail and knowledge-intensive concepts. Inspired by the broad success of agents on real-world tasks, we explore agentic modeling to address this limitation. Specifically, we present Unify-Agent, a unified multimodal agent for world-grounded image synthesis, which reframes image generation as an agentic pipeline consisting of prompt understanding, multimodal evidence searching, grounded recaptioning, and final synthesis. To train our model, we construct a tailored multimodal data pipeline and curate 143K high-quality agent trajectories for world-grounded image synthesis, enabling effective supervision over the full agentic generation process. We further introduce FactIP, a benchmark covering 12 categories of culturally significant and long-tail factual concepts that explicitly requires external knowledge grounding. Extensive experiments show that our proposed Unify-Agent substantially improves over its base unified model across diverse benchmarks and real-world generation tasks, while approaching the world-knowledge capabilities of the strongest closed-source models. As an early exploration of agent-based modeling for world-grounded image synthesis, our work highlights the value of tightly coupling reasoning, searching, and generation for reliable open-world agentic image synthesis.
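To make the four-stage pipeline concrete, here is a minimal Python sketch of how prompt understanding, evidence searching, grounded recaptioning, and synthesis could be chained. All function names, data structures, and heuristics below are illustrative assumptions, not the paper's actual API; retrieval and image generation are stubbed out.

```python
# Hypothetical sketch of the agentic pipeline described in the abstract.
# Every name here is an assumption for illustration; the paper does not
# publish this interface.

from dataclasses import dataclass


@dataclass
class Evidence:
    """A piece of retrieved multimodal evidence (stubbed)."""
    source: str
    caption: str


def understand_prompt(prompt: str) -> list[str]:
    """Stage 1: extract concepts that may need external grounding.

    Toy heuristic: treat capitalized words as candidate entities.
    """
    return [w.strip(",.") for w in prompt.split() if w[:1].isupper()]


def search_evidence(concepts: list[str]) -> list[Evidence]:
    """Stage 2: retrieve evidence per concept (stubbed lookup)."""
    return [Evidence(source=f"wiki:{c}", caption=f"reference imagery of {c}")
            for c in concepts]


def grounded_recaption(prompt: str, evidence: list[Evidence]) -> str:
    """Stage 3: rewrite the prompt with retrieved factual details attached."""
    if not evidence:
        return prompt
    details = "; ".join(e.caption for e in evidence)
    return f"{prompt} (grounded details: {details})"


def synthesize(grounded_prompt: str) -> dict:
    """Stage 4: hand the grounded prompt to an image generator (stubbed)."""
    return {"prompt": grounded_prompt, "image": "<latent placeholder>"}


prompt = "A plate of Nasi Tumpeng at an Indonesian ceremony"
result = synthesize(
    grounded_recaption(prompt, search_evidence(understand_prompt(prompt)))
)
print(result["prompt"])
```

The point of the structure is that each stage's output is an explicit, inspectable artifact (concept list, evidence set, grounded caption), which is also what makes supervising the full trajectory, as the paper's 143K curated trajectories do, tractable.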