AI Navigate

MSRAMIE: Multimodal Structured Reasoning Agent for Multi-instruction Image Editing

arXiv cs.CV / 3/19/2026


Key Points

  • The paper introduces MSRAMIE, a training-free agent framework built on Multimodal Large Language Models to handle multi-instruction image editing tasks.
  • MSRAMIE uses existing editing models as plug-in components and coordinates between an MLLM-based Instructor and an image editing Actor through a novel Tree-of-States and Graph-of-References reasoning topology.
  • During inference, complex instructions are decomposed into multiple editing steps with state transitions, cross-step information aggregation, and recall of the original input to support progressive output refinement.
  • The framework provides a visualizable inference topology that yields interpretable and controllable decision pathways during editing.
  • Experimental results show over 15% improvement in instruction following and more than a 100% increase in the probability of completing all modifications in a single run, while preserving perceptual quality.

Abstract

Existing instruction-based image editing models perform well with simple, single-step instructions but degrade in realistic scenarios that involve multiple, lengthy, and interdependent directives. A main cause is the scarcity of training data with complex multi-instruction annotations, and collecting such data and retraining these models is costly. To address this challenge, we propose MSRAMIE, a training-free agent framework built on Multimodal Large Language Models (MLLMs). MSRAMIE takes existing editing models as plug-in components and handles multi-instruction tasks via structured multimodal reasoning. It orchestrates iterative interactions between an MLLM-based Instructor and an image editing Actor, introducing a novel reasoning topology that comprises the proposed Tree-of-States and Graph-of-References. During inference, complex instructions are decomposed into multiple editing steps that enable state transitions, cross-step information aggregation, and recall of the original input, supporting systematic exploration of the image editing space and flexible, progressive output refinement. The visualizable inference topology further provides interpretable and controllable decision pathways. Experiments show that as instruction complexity increases, MSRAMIE improves instruction following by over 15% and increases the probability of finishing all modifications in a single run by over 100%, while preserving perceptual quality and maintaining visual consistency.
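To make the described loop concrete, here is a minimal, heavily simplified sketch of the Instructor/Actor pattern with a Tree-of-States. The paper does not publish code; every name below (`StateNode`, `decompose`, `actor_edit`, `run_agent`) is hypothetical, strings stand in for images and MLLM calls, and the Graph-of-References is reduced to parent links back to earlier states and the original input.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class StateNode:
    """Hypothetical node in a Tree-of-States: one image state plus metadata."""
    image: str                       # stand-in for actual image data
    instruction: Optional[str]       # the step that produced this state
    parent: Optional["StateNode"] = None
    children: List["StateNode"] = field(default_factory=list)

def decompose(instructions: str) -> List[str]:
    # Stand-in for the MLLM Instructor, which would decompose a complex
    # multi-instruction prompt into ordered editing steps.
    return [s.strip() for s in instructions.split(";") if s.strip()]

def actor_edit(image: str, step: str) -> str:
    # Stand-in for a plug-in editing model (the Actor).
    return f"{image} + [{step}]"

def run_agent(original: str, instructions: str) -> StateNode:
    """Decompose instructions, apply steps one by one, record each state."""
    root = StateNode(image=original, instruction=None)
    current = root
    for step in decompose(instructions):
        # Parent links let later steps reference earlier states, including
        # the original input (a crude analogue of the Graph-of-References).
        edited = actor_edit(current.image, step)
        node = StateNode(image=edited, instruction=step, parent=current)
        current.children.append(node)
        current = node
    return current

final = run_agent("photo.png", "add a hat; change sky to sunset; remove the car")
print(final.image)  # → photo.png + [add a hat] + [change sky to sunset] + [remove the car]
```

Keeping every intermediate state in a tree, rather than overwriting a single image, is what would allow backtracking to an earlier state or recalling the original input when a step goes wrong; the real framework additionally uses the MLLM to verify each state and choose the next decision path.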