Probing Visual Planning in Image Editing Models

arXiv cs.CV / 4/28/2026


Key Points

  • The paper argues that visual planning is often treated as a language-driven problem in ML, and that fully visual methods can be inefficient due to step-by-step “planning-by-generation.”
  • It introduces EAR (editing-as-reasoning), which reformulates visual planning as a single-step image transformation, sidestepping the cost of iterative planning-by-generation.
  • To probe intrinsic reasoning without conflating it with visual recognition, the study uses abstract puzzle tasks and presents the procedurally generated AMAZE dataset, which features Maze and Queen-style problems.
  • AMAZE enables automatic evaluation of both autoregressive and diffusion-based editing models in terms of pixel-level fidelity and logical validity (see the sketch after this list), and the authors test both proprietary and open-source models.
  • Results indicate that models struggle in the zero-shot setting, but fine-tuning on small in-domain scales yields strong generalization to larger in-domain scales and to out-of-domain scales and geometries, while still leaving a gap versus the zero-shot efficiency of human solvers.
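
The dual evaluation described above is mechanical enough to sketch. The snippet below is a minimal illustration, not the paper's implementation: it assumes the edited image can be parsed back into a discrete grid, and every function name and encoding here is hypothetical. Pixel-wise fidelity compares the edited image against a reference rendering; logical validity checks that the drawn solution actually solves the puzzle.

```python
import numpy as np

def pixel_fidelity(pred: np.ndarray, target: np.ndarray) -> float:
    # Fraction of pixels the edited image reproduces exactly, over (H, W, C)
    # arrays; a stand-in for whatever pixel-wise metric the paper uses.
    assert pred.shape == target.shape
    return float((pred == target).all(axis=-1).mean())

def maze_solution_is_valid(walls: np.ndarray, path: list,
                           start: tuple, goal: tuple) -> bool:
    # Logical validity for a Maze instance: the drawn path must run from the
    # entrance to the exit in 4-connected steps without crossing a wall
    # (cells where walls == 1).
    if not path or tuple(path[0]) != start or tuple(path[-1]) != goal:
        return False
    for (r0, c0), (r1, c1) in zip(path, path[1:]):
        if abs(r0 - r1) + abs(c0 - c1) != 1 or walls[r1, c1] == 1:
            return False
    return True

def queens_solution_is_valid(queens: list) -> bool:
    # Logical validity for a Queens-style instance: no two placed queens may
    # share a row, a column, or a diagonal.
    for i, (r0, c0) in enumerate(queens):
        for r1, c1 in queens[i + 1:]:
            if r0 == r1 or c0 == c1 or abs(r0 - r1) == abs(c0 - c1):
                return False
    return True
```

Because both checks are fully deterministic, procedurally generated instances can be scored at scale without human annotation.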

Abstract

Visual planning represents a crucial facet of human intelligence, especially in tasks that require complex spatial reasoning and navigation. Yet, in machine learning, this inherently visual problem is often tackled through a verbal-centric lens. While recent research demonstrates the promise of fully visual approaches, they suffer from significant computational inefficiency due to the step-by-step planning-by-generation paradigm. In this work, we present EAR, an editing-as-reasoning paradigm that reformulates visual planning as a single-step image transformation. To isolate intrinsic reasoning from visual recognition, we employ abstract puzzles as probing tasks and introduce AMAZE, a procedurally generated dataset that features the classical Maze and Queen problems, covering distinct, complementary forms of visual planning. The abstract nature of AMAZE also facilitates automatic evaluation of autoregressive and diffusion-based models in terms of both pixel-wise fidelity and logical validity. We assess leading proprietary and open-source editing models. The results show that they all struggle in the zero-shot setting, whereas fine-tuning on basic scales enables remarkable generalization to larger in-domain scales and to out-of-domain scales and geometries. However, even our best model, running on high-end hardware, fails to match the zero-shot efficiency of human solvers, highlighting a persistent gap in neural visual reasoning.
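
For a concrete sense of what procedural generation at varying scales can look like, the sketch below builds a perfect maze via depth-first backtracking, a standard generator. The paper does not specify AMAZE's exact algorithm, so treat this only as an illustration of how Maze instances might be produced, and later rendered to images, at arbitrary sizes.

```python
import random
import numpy as np

def generate_maze(rows: int, cols: int, seed: int = 0) -> np.ndarray:
    # Iterative depth-first backtracking over a (2*rows+1) x (2*cols+1) grid,
    # where 1 = wall and 0 = open. Cell (r, c) lives at grid[2r+1, 2c+1];
    # the wall between two adjacent cells sits at their midpoint.
    rng = random.Random(seed)
    grid = np.ones((2 * rows + 1, 2 * cols + 1), dtype=np.uint8)
    stack, visited = [(0, 0)], {(0, 0)}
    grid[1, 1] = 0
    while stack:
        r, c = stack[-1]
        options = [(nr, nc)
                   for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                   if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in visited]
        if not options:
            stack.pop()           # dead end: backtrack
            continue
        nr, nc = rng.choice(options)
        grid[r + nr + 1, c + nc + 1] = 0   # knock out the shared wall
        grid[2 * nr + 1, 2 * nc + 1] = 0   # open the neighbor cell
        visited.add((nr, nc))
        stack.append((nr, nc))
    return grid
```

Scaling `rows` and `cols`, or swapping the square lattice for another geometry, is the kind of knob that yields the larger in-domain and out-of-domain variants the generalization experiments rely on.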