VeloEdit: Training-Free Consistent and Continuous Instruction-Based Image Editing via Velocity Field Decomposition
arXiv cs.CV / 3/17/2026
📰 News · Models & Research
Key Points
- VeloEdit presents a training-free approach to instruction-based image editing that maintains consistency in non-edited regions by decomposing the velocity field into source-preserving and editing components.
- It automatically identifies editing regions by measuring the discrepancy between the velocity fields responsible for preserving the source content and those driving the desired edits, enabling targeted control over where changes occur.
- The method enforces preservation-region consistency by substituting the editing velocity with the source-restoring velocity, while enabling continuous modulation of edit strength in target regions via velocity interpolation.
- Experiments on Flux.1 Kontext and Qwen-Image-Edit demonstrate improved visual consistency and editing continuity with negligible additional computational cost, and the code is released on GitHub.
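The key points above can be illustrated with a minimal sketch of the velocity blending idea: localize the edit region from the discrepancy between the two velocity fields, keep the source-restoring velocity elsewhere, and interpolate edit strength inside the region. All names, the threshold `tau`, and the thresholding scheme are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def blend_velocities(v_src, v_edit, alpha=1.0, tau=0.5):
    """Blend source-preserving and editing velocity fields at one sampling step.

    v_src, v_edit: (H, W, C) arrays -- source-restoring and editing velocities.
    alpha: continuous edit strength in [0, 1].
    tau: discrepancy threshold used to localize the edit region (assumed).
    """
    # Per-pixel discrepancy between the two velocity fields.
    disc = np.linalg.norm(v_edit - v_src, axis=-1)
    # Binary mask: 1 where an edit is intended, 0 in preservation regions.
    mask = (disc > tau).astype(v_src.dtype)[..., None]
    # Edit regions interpolate between editing and source velocities;
    # preservation regions fall back to the source-restoring velocity.
    v_blend = alpha * v_edit + (1.0 - alpha) * v_src
    return mask * v_blend + (1.0 - mask) * v_src
```

Setting `alpha` between 0 and 1 gives the continuous modulation of edit strength described above, while the mask substitution keeps non-edited regions pinned to the source content.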