ResetEdit: Precise Text-guided Editing of Generated Image via Resettable Starting Latent

arXiv cs.CV / 4/29/2026

📰 News · Tools & Practical Usage · Models & Research

Key Points

  • The paper proposes ResetEdit, a diffusion-model editing framework for precise text-guided modifications that preserves global image structure while changing local regions.
  • It argues that inversion-based methods (e.g., DDIM inversion) produce poor “starting latents,” hurting edit fidelity and causing structural inconsistencies during editing; the toy sketch after this list illustrates where that drift comes from.
  • ResetEdit addresses this by embedding recoverable latent information into the generation process, injecting the discrepancy between clean and diffused latents and later extracting it during inversion to reconstruct a resettable latent close to the true starting state.
  • To further improve results, it introduces a lightweight latent optimization module that compensates for reconstruction bias arising from VAE asymmetry.
  • Built on Stable Diffusion, ResetEdit reportedly integrates with existing tuning-free editing methods and outperforms state-of-the-art baselines in both controllability and visual fidelity.
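
To make the inversion problem concrete, here is a minimal toy sketch (PyTorch) of why naive DDIM inversion drifts away from the true starting latent: each inversion step has to re-use the noise prediction made at the current, less-noisy latent as a stand-in for the prediction it cannot yet know. The linear noise predictor, the 10-step schedule, and all dimensions below are illustrative assumptions, not details from the paper.

```python
# Toy illustration of DDIM inversion drift (the failure mode ResetEdit targets).
# The linear "noise predictor" and short schedule are assumptions for demonstration only.
import torch

torch.manual_seed(0)

T = 10
alpha_bar = torch.linspace(0.9999, 0.05, T)      # toy noise schedule (alpha_bar_t)

W = 0.05 * torch.randn(16, 16)
def eps_pred(x, t):
    """Stand-in noise predictor: a fixed linear map plus a timestep bias."""
    return x @ W + 0.01 * t

z_T = torch.randn(4, 16)                          # true starting latent

# --- DDIM sampling (eta = 0): z_T -> z_0 -----------------------------------
x = z_T
for i in range(T - 1, 0, -1):
    a_t, a_prev = alpha_bar[i], alpha_bar[i - 1]
    eps = eps_pred(x, i)
    x0_hat = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()
    x = a_prev.sqrt() * x0_hat + (1 - a_prev).sqrt() * eps
z_0 = x

# --- Naive DDIM inversion: z_0 -> estimate of z_T ---------------------------
# The standard trick evaluates eps_pred at the current (less noisy) latent as a
# proxy for the unknown prediction at the next step; that approximation is the
# source of the drift and of the "unsatisfactory starting latents".
x = z_0
for i in range(1, T):
    a_t, a_prev = alpha_bar[i], alpha_bar[i - 1]
    eps = eps_pred(x, i - 1)
    x0_hat = (x - (1 - a_prev).sqrt() * eps) / a_prev.sqrt()
    x = a_t.sqrt() * x0_hat + (1 - a_t).sqrt() * eps

print("inversion error vs. true z_T:", (x - z_T).norm().item())   # nonzero
```

This residual error is the motivation the paper gives for recovering the original generation-time latent instead of re-estimating it by inversion alone.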

Abstract

Recent advances in diffusion models have enabled high-quality image generation, leading to increasing demand for post-generation editing that modifies local regions while preserving global structure. Achieving such flexible and precise editing requires a high-quality starting point, a latent representation that provides both the freedom needed for diverse modifications and the precision required for fine-grained, region-specific control. However, existing inversion-based approaches such as DDIM inversion often yield unsatisfactory starting latents, resulting in degraded edit fidelity and structural inconsistency. Ideally, the most suitable editing anchor should be the original latent used during the generation process, as it inherently captures the scene's structure and semantics. Yet, storing this latent for every generated image is impractical due to massive storage and retrieval costs. To address this challenge, we propose ResetEdit, a proactive diffusion editing framework that embeds recoverable latent information directly into the generation process. By injecting the discrepancy between the clean and diffused latents into the diffusion trajectory and extracting it during inversion, ResetEdit reconstructs a resettable latent that closely approximates the true starting state. Additionally, a lightweight latent optimization module compensates for reconstruction bias caused by VAE asymmetry. Built upon Stable Diffusion, ResetEdit integrates seamlessly with existing tuning-free editing methods and consistently outperforms state-of-the-art baselines in both controllability and visual fidelity.
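
The abstract's last technical point, a lightweight latent optimization that compensates for VAE asymmetry, can be pictured with a toy linear encoder/decoder pair: because the encoder is not an exact inverse of the decoder, re-encoding a decoded image yields a biased latent, and a short gradient-based refinement of that latent removes most of the bias. The linear maps, Adam loop, and dimensions below are illustrative assumptions, not the paper's actual module.

```python
# Toy sketch of latent optimization against encoder/decoder asymmetry.
import torch

torch.manual_seed(0)

latent_dim, pixel_dim = 8, 32
D = torch.randn(pixel_dim, latent_dim)                                 # toy decoder: x = D @ z
E = torch.linalg.pinv(D) + 0.05 * torch.randn(latent_dim, pixel_dim)   # imperfect (asymmetric) encoder

z_true = torch.randn(latent_dim)                  # latent actually used at generation time
x = D @ z_true                                    # "generated image"

z_naive = E @ x                                   # biased latent from simple re-encoding
print("naive re-encoding error:", (z_naive - z_true).norm().item())

# Lightweight refinement: start from the re-encoded latent and nudge it so the
# decoded result matches the image again.
z = z_naive.clone().requires_grad_(True)
opt = torch.optim.Adam([z], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = ((D @ z - x) ** 2).mean()
    loss.backward()
    opt.step()

print("refined latent error:", (z.detach() - z_true).norm().item())   # much smaller
```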