Self-Reasoning Agentic Framework for Narrative Product Grid-Collage Generation

arXiv cs.CV / 4/21/2026


Key Points

  • The paper proposes a self-reasoning, agentic framework to generate narrative-driven product grid collages that keep visual storytelling coherent across panels.
  • Instead of generating each panel independently, the system creates the collage as a single unified image using a shared visual style and constraint-aware prompts.
  • It builds a Product Narrative Framework from a product packshot and product name, explicitly modeling identity, usage context, and environment, then translating that into coordinated grids.
  • The approach uses an evaluation-and-refinement loop with content-validity and photography-quality checks, performing failure attribution and targeted refinement when results fall short.
  • Experiments report consistent improvements over direct prompting baselines in aesthetic quality, narrative richness, and cross-grid visual coherence.
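The first stage described above — turning a packshot and product name into a structured narrative and then into a single joint prompt — can be sketched roughly as follows. This is an illustrative guess at the data flow only: the class name `ProductNarrativeFramework`, its fields, and the prompt template are assumptions, not the paper's actual schema.

```python
from dataclasses import dataclass

@dataclass
class ProductNarrativeFramework:
    # The three narrative axes named in the paper: identity,
    # usage context, and situational environment.
    identity: str        # what the product is
    usage_context: str   # how and when it is used
    environment: str     # where the scene takes place
    style: str = "warm natural light, shallow depth of field"

    def compile_prompt(self, grid: str = "2x2") -> str:
        # Compile one constraint-aware prompt so the collage is
        # synthesized jointly as a single unified image, rather
        # than panel by panel.
        return (
            f"A {grid} grid collage rendered as one unified image, "
            f"shared style: {self.style}. "
            f"Product: {self.identity}. Usage: {self.usage_context}. "
            f"Environment: {self.environment}. "
            f"Keep product identity consistent across all panels."
        )

pnf = ProductNarrativeFramework(
    identity="ceramic pour-over coffee set",
    usage_context="slow weekend brewing ritual",
    environment="sunlit kitchen counter",
)
prompt = pnf.compile_prompt()
```

In this reading, the single compiled prompt is what enforces the shared visual style and cross-panel consistency, since every panel is constrained by the same text.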

Abstract

Narrative-driven product photography has become a prevalent paradigm in modern marketing, as coherent visual storytelling helps convey product value and establishes emotional engagement with consumers. However, existing image generation methods do not support structured narrative planning or cross-panel coordination, often resulting in weak storytelling and visual incoherence. In practice, narrative product photography is commonly presented as multi-grid collages, where multiple views or scenes jointly communicate a product narrative. To ensure visual consistency across grids and aesthetic harmony of the overall composition, we generate the collage as a single unified image rather than composing independently synthesized panels. We propose a self-reasoning agentic framework for narrative product grid collage generation. Given a product packshot and its name, the system first constructs a Product Narrative Framework that explicitly represents the product's identity, usage context, and situational environment, and translates it into complementary grids governed by a shared visual style. Constraint-aware prompts are then compiled and fed to a generation model that synthesizes the collage jointly. The generated output is evaluated on both content validity and photography quality, with explicit gates determining whether to proceed or refine. When evaluation fails, the system performs failure attribution and applies targeted refinement, enabling progressive improvement through iterative self-reflection. Experiments demonstrate that our framework consistently improves aesthetic quality, narrative richness, and visual coherence compared to direct prompting baselines.
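The evaluate-and-refine control flow in the abstract (explicit gates, failure attribution, targeted refinement) can be sketched as a minimal loop. Everything here is a mock under stated assumptions: `generate_collage` stands in for the text-to-image backend, the quality score and thresholds are invented, and the refinement hints are placeholders for the system's actual attribution logic.

```python
def generate_collage(prompt: str) -> dict:
    # Stand-in for the generation model: returns a record for the
    # synthesized collage. The mock score rewards accumulated
    # refinement hints so the loop visibly converges.
    return {"prompt": prompt, "quality": 50 + 20 * prompt.count("refine:")}

def evaluate(image: dict) -> dict:
    # The two explicit gates named in the abstract:
    # content validity and photography quality (mock thresholds).
    return {
        "content_valid": "Product:" in image["prompt"],
        "photo_quality": image["quality"] >= 80,
    }

def generate_with_self_reflection(prompt: str, max_iters: int = 5):
    image = generate_collage(prompt)
    for attempt in range(1, max_iters + 1):
        image = generate_collage(prompt)
        gates = evaluate(image)
        if all(gates.values()):
            return image, attempt  # both gates pass: proceed
        # Failure attribution: map each failed gate to a
        # targeted refinement appended to the prompt.
        if not gates["content_valid"]:
            prompt += " refine: restore product identity in every panel."
        if not gates["photo_quality"]:
            prompt += " refine: improve lighting, composition, and sharpness."
    return image, max_iters
```

With the mock scoring, a content-valid prompt passes both gates on the third attempt, illustrating the "progressive improvement through iterative self-reflection" the abstract describes.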