Prompt-Guided Image Editing with Masked Logit Nudging in Visual Autoregressive Models

arXiv cs.CV / 4/17/2026

📰 News · Developer Stack & Infrastructure · Models & Research

Key Points

  • The paper tackles prompt-guided image editing in visual autoregressive models by modifying a source image to match a target text prompt while preserving regions unrelated to the edit.
  • It introduces Masked Logit Nudging, which converts fixed source token encodings into logits and nudges the model’s predictions toward targets along a semantic trajectory derived from source-target prompts.
  • Spatial edits are constrained using masks generated via a dedicated masking scheme based on cross-attention differences between the source and edited prompts.
  • A refinement step is added to reduce quantization errors and improve reconstruction quality.
  • Experiments report state-of-the-art performance on the PIE benchmark at 512px and 1024px, strong reconstruction quality, and faster execution with results comparable to or better than diffusion models; code is released on GitHub.
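The core idea described above can be sketched in a few lines: convert the fixed source token indices into pseudo-logits, then blend the model's predicted logits toward them only outside the edit mask, so unrelated regions are preserved. This is a minimal illustration, not the paper's released implementation; the function name, the scaled one-hot conversion, and the blending weight `alpha` are assumptions for demonstration.

```python
import numpy as np

def masked_logit_nudge(pred_logits, source_tokens, edit_mask, alpha=0.5, scale=10.0):
    """Hypothetical sketch of masked logit nudging.

    pred_logits   : (N, V) logits predicted under the target prompt.
    source_tokens : (N,) codebook indices encoding the source image.
    edit_mask     : (N,) bool, True where the edit is allowed to apply.
    alpha         : strength of the nudge toward the source (assumed form).
    scale         : temperature turning one-hot source tokens into logits.
    """
    n, vocab = pred_logits.shape
    # Convert the fixed source token indices into "logits" (scaled one-hot).
    source_logits = np.full((n, vocab), -scale)
    source_logits[np.arange(n), source_tokens] = scale
    # Outside the edit mask, pull predictions toward the source logits so
    # unrelated regions reconstruct the source; inside, keep the target
    # prediction so the prompt-driven edit can take effect.
    keep = ~edit_mask
    nudged = pred_logits.copy()
    nudged[keep] = (1 - alpha) * pred_logits[keep] + alpha * source_logits[keep]
    return nudged
```

With `alpha = 1.0`, positions outside the mask decode exactly to the source tokens, which is the preservation behavior the paper targets; smaller values trade preservation for flexibility.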

Abstract

We address the problem of prompt-guided image editing in visual autoregressive models. Given a source image and a target text prompt, we aim to modify the source image according to the target prompt, while preserving all regions that are unrelated to the requested edit. To this end, we present Masked Logit Nudging, which uses the source image token maps to introduce a guidance step that aligns the model's predictions under the target prompt with these source token maps. Specifically, we convert the fixed source encodings into logits using the VAR encoding, nudging the model's predicted logits towards the targets along a semantic trajectory defined by the source-target prompts. Edits are applied only within spatial masks obtained through a dedicated masking scheme that leverages cross-attention differences between the source and edited prompts. Then, we introduce a refinement step to correct quantization errors and improve reconstruction quality. Our approach achieves the best image editing performance on the PIE benchmark at 512px and 1024px resolutions. Beyond editing, our method delivers faithful reconstructions and outperforms previous methods on COCO at 512px and OpenImages at 1024px. Overall, our method outperforms VAR-related approaches and achieves comparable or even better performance than diffusion models, while being much faster. Code is available at https://github.com/AmirMaEl/MLN.
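The masking scheme based on cross-attention differences can be illustrated with a minimal sketch: take the attention maps for the prompt tokens that differ between the source and edited prompts, normalize their absolute difference, and threshold it into a binary edit mask. The function name, the normalization, and the threshold value are illustrative assumptions, not details from the paper.

```python
import numpy as np

def attention_diff_mask(attn_src, attn_tgt, threshold=0.5):
    """Hypothetical sketch of a cross-attention-difference edit mask.

    attn_src, attn_tgt : (H, W) cross-attention maps under the source and
                         edited prompts (e.g. averaged over differing tokens).
    threshold          : cutoff on the normalized difference (assumed value).
    """
    # Regions whose attention changes most between the two prompts are the
    # ones the edit should be allowed to touch.
    diff = np.abs(attn_tgt - attn_src)
    # Min-max normalize so the threshold is resolution-independent.
    diff = (diff - diff.min()) / (diff.max() - diff.min() + 1e-8)
    return diff > threshold
```

Such a mask would then gate the logit nudging spatially, restricting the prompt-driven change to the regions the attention difference highlights.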