Group Editing: Edit Multiple Images in One Go

arXiv cs.CV / 3/25/2026


Key Points

  • The paper introduces GroupEditing, a framework for making consistent, unified edits across multiple related images even when pose, viewpoint, and layouts differ substantially.
  • It combines explicit geometric correspondences from VGGT with implicit relationships captured by treating the image group as a pseudo-video and using temporal coherence priors from pre-trained video models.
  • A novel fusion mechanism injects VGGT's geometric cues into the video model so that edits are applied accurately to semantically aligned regions.
  • The authors contribute GroupEditData for large-scale training (high-quality masks and detailed captions) and GroupEditBench for evaluating group-level editing quality and consistency.
  • To preserve identity across images, they add an alignment-enhanced RoPE module, and experiments show GroupEditing surpasses prior methods in visual quality, cross-view consistency, and semantic alignment.
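The "pseudo-video" reformulation above is the most concrete idea to picture: a group of N related images is stacked along a new axis and treated as N frames of a video, so a pre-trained video model's temporal-coherence priors can relate them. The paper's actual pipeline is not public; the sketch below is a minimal illustration of only the stacking step, with all names and shapes being assumptions.

```python
import numpy as np

def group_to_pseudo_video(images):
    """Stack a list of (H, W, C) images into a (T, H, W, C) pseudo-video.

    T is the group size; a video backbone would then treat the group
    axis as the temporal axis. Illustrative sketch, not the paper's code.
    """
    if not images:
        raise ValueError("image group is empty")
    shape = images[0].shape
    for img in images:
        if img.shape != shape:
            raise ValueError("all images in a group must share one resolution")
    return np.stack(images, axis=0)

# Hypothetical group of four 64x64 RGB images
group = [np.zeros((64, 64, 3), dtype=np.float32) for _ in range(4)]
video = group_to_pseudo_video(group)
print(video.shape)  # (4, 64, 64, 3)
```

In practice a real group would need resizing or padding to a common resolution before stacking; the check here simply makes that requirement explicit.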

Abstract

In this paper, we tackle the problem of performing consistent and unified modifications across a set of related images. This task is particularly challenging because these images may vary significantly in pose, viewpoint, and spatial layout. Achieving coherent edits requires establishing reliable correspondences across the images, so that modifications can be applied accurately to semantically aligned regions. To address this, we propose GroupEditing, a novel framework that builds both explicit and implicit relationships among images within a group. On the explicit side, we extract geometric correspondences using VGGT, which provides spatial alignment based on visual features. On the implicit side, we reformulate the image group as a pseudo-video and leverage the temporal coherence priors learned by pre-trained video models to capture latent relationships. To effectively fuse these two types of correspondences, we inject the explicit geometric cues from VGGT into the video model through a novel fusion mechanism. To support large-scale training, we construct GroupEditData, a new dataset containing high-quality masks and detailed captions for numerous image groups. Furthermore, to ensure identity preservation during editing, we introduce an alignment-enhanced RoPE module, which improves the model's ability to maintain consistent appearance across multiple images. Finally, we present GroupEditBench, a dedicated benchmark designed to evaluate the effectiveness of group-level image editing. Extensive experiments demonstrate that GroupEditing significantly outperforms existing methods in terms of visual quality, cross-view consistency, and semantic alignment.
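The abstract's "alignment-enhanced RoPE" builds on standard rotary position embeddings. The enhanced variant is specific to the paper and not described here, but the vanilla mechanism it extends can be sketched as follows (a hedged illustration; function names and shapes are assumptions):

```python
import numpy as np

def rope(x, positions, base=10000.0):
    """Standard rotary position embedding (RoPE) on x of shape (T, D), D even.

    Each feature pair (x1_i, x2_i) is rotated by an angle proportional to
    the token's position, encoding position multiplicatively. This is the
    vanilla mechanism only, not the paper's alignment-enhanced module.
    """
    T, D = x.shape
    half = D // 2
    freqs = base ** (-np.arange(half) / half)        # per-pair frequencies
    angles = positions[:, None] * freqs[None, :]     # (T, half) rotation angles
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    # 2-D rotation applied independently to each feature pair
    return np.concatenate([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], axis=-1)

x = np.random.default_rng(0).standard_normal((6, 8))
out = rope(x, positions=np.arange(6, dtype=np.float64))
print(out.shape)  # (6, 8)
```

Because RoPE is a pure rotation, it preserves each token's norm, and position 0 leaves features unchanged; a group-editing model could in principle assign positions per image in the pseudo-video to keep identity features aligned across frames.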