Refinement via Regeneration: Enlarging Modification Space Boosts Image Refinement in Unified Multimodal Models

arXiv cs.CV / 4/29/2026


Key Points

  • Unified multimodal models can refine text-to-image outputs after initial generation, but existing approaches often use a “refinement via editing” (RvE) strategy that gives only coarse edit instructions and can leave semantic misalignment unresolved.
  • Current RvE methods also enforce pixel-level content preservation, which limits the effective modification space and reduces how well the model can correct errors.
  • The paper proposes Refinement via Regeneration (RvR), reframing refinement as conditional image regeneration guided by the target prompt and semantic tokens from the initial image rather than by explicit editing instructions.
  • Experiments show RvR substantially improves image-refinement quality across multiple benchmarks, raising Geneval from 0.78 to 0.91, DPGBench from 84.02 to 87.21, and UniGenBench++ from 61.53 to 77.41.
  • Overall, expanding the modification space through regeneration appears to push the performance upper bound for refinement in unified multimodal text-to-image systems.

Abstract

Unified multimodal models (UMMs) integrate visual understanding and generation within a single framework. For text-to-image (T2I) tasks, this unified capability allows UMMs to refine outputs after their initial generation, potentially extending the performance upper bound. Current UMM-based refinement methods primarily follow a refinement-via-editing (RvE) paradigm, where UMMs produce editing instructions to modify misaligned regions while preserving aligned content. However, editing instructions often describe prompt-image misalignment only coarsely, leading to incomplete refinement. Moreover, pixel-level preservation, though necessary for editing, unnecessarily restricts the effective modification space for refinement. To address these limitations, we propose Refinement via Regeneration (RvR), a novel framework that reformulates refinement as conditional image regeneration rather than editing. Instead of relying on editing instructions and enforcing strict content preservation, RvR regenerates images conditioned on the target prompt and the semantic tokens of the initial image, enabling more complete semantic alignment with a larger modification space. Extensive experiments demonstrate the effectiveness of RvR, improving Geneval from 0.78 to 0.91, DPGBench from 84.02 to 87.21, and UniGenBench++ from 61.53 to 77.41.
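The contrast between the two paradigms can be sketched in a few lines. The stub below is purely illustrative: `encode_semantic_tokens`, `regenerate`, and the toy "image" representation are hypothetical stand-ins for the paper's UMM components, chosen only to show the structural difference of RvR (condition on prompt + semantic tokens, no pixel-level preservation constraint) rather than the actual model interfaces.

```python
# Illustrative sketch of Refinement via Regeneration (RvR).
# All functions are hypothetical stubs, NOT the paper's implementation:
# the real system uses a unified multimodal model's tokenizer and
# generator, not these toy dict/list operations.

def encode_semantic_tokens(image):
    """Stub: compress an 'image' (here a dict) into coarse semantic
    tokens, discarding pixel-level detail."""
    return sorted(image["objects"])

def regenerate(prompt_objects, semantic_tokens):
    """Stub RvR step: regenerate the whole image from the target prompt,
    with the initial image's semantics used only as soft conditioning.
    No pixel-preservation constraint, so the modification space is the
    full image; the prompt dominates on any conflict."""
    return {"objects": sorted(set(prompt_objects) | set(semantic_tokens))}

def refine_via_regeneration(prompt_objects, initial_image):
    tokens = encode_semantic_tokens(initial_image)
    return regenerate(prompt_objects, tokens)

if __name__ == "__main__":
    # Initial generation missed the "red hat" required by the prompt.
    initial = {"objects": ["dog", "park"]}
    prompt = ["dog", "park", "red hat"]
    refined = refine_via_regeneration(prompt, initial)
    print(refined["objects"])  # every prompt requirement now present
```

Contrast this with an RvE-style refiner, which would instead emit an edit instruction ("add a red hat") and patch the initial image while keeping the rest of its pixels fixed; the sketch above regenerates from scratch and keeps only the initial image's semantics.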