ATP-Bench: Towards Agentic Tool Planning for MLLM Interleaved Generation

arXiv cs.AI / 4/1/2026


Key Points

  • The paper argues that interleaved text-and-image generation for multimodal LLMs should advance toward agentic tool planning, where a model autonomously decides when and which tools to call to satisfy visual-critical intents.
  • It introduces ATP-Bench, a new benchmark with 7,702 QA pairs across eight categories and 25 visual-critical intents, including human-verified queries and ground truths.
  • To evaluate tool-planning quality without tying results to full end-to-end execution, it proposes a Multi-Agent MLLM-as-a-Judge (MAM) that scores tool-call precision, missed tool-use opportunities, and response quality without requiring ground-truth references.
  • Experiments across 10 state-of-the-art MLLMs show inconsistent tool-use behavior and difficulties with coherent interleaved planning, indicating significant opportunity for improving agentic multimodal generation.
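The MAM judge's two planning metrics can be illustrated with a minimal rule-based sketch. Note this is only a toy tally to make the metric definitions concrete: in the paper the verdicts come from multi-agent MLLM judges, and the `ToolCall` structure and function names here are assumptions, not the benchmark's actual API.

```python
# Hypothetical sketch of the two planning metrics named above:
# tool-call precision and missed tool-use opportunity rate.
# In ATP-Bench itself, the per-call verdicts are produced by MLLM
# judge agents, not by this rule-based tally.
from dataclasses import dataclass


@dataclass
class ToolCall:
    tool: str             # e.g. "image_generation", "image_retrieval"
    judged_correct: bool  # verdict from a judge agent


def tool_call_precision(calls: list[ToolCall]) -> float:
    """Fraction of issued tool calls that the judge deems appropriate."""
    if not calls:
        return 0.0
    return sum(c.judged_correct for c in calls) / len(calls)


def missed_opportunity_rate(num_expected: int, num_covered: int) -> float:
    """Fraction of visual-critical intents where a needed tool call was skipped."""
    if num_expected == 0:
        return 0.0
    return (num_expected - num_covered) / num_expected


calls = [
    ToolCall("image_generation", judged_correct=True),
    ToolCall("image_retrieval", judged_correct=False),
]
print(tool_call_precision(calls))     # 0.5
print(missed_opportunity_rate(4, 3))  # 0.25
```

A reference-free judge like MAM needs exactly these two complementary views: precision penalizes spurious calls, while the miss rate catches intents the model should have served with a tool but answered in text alone.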

Abstract

Interleaved text-and-image generation represents a significant frontier for Multimodal Large Language Models (MLLMs), offering a more intuitive way to convey complex information. Current paradigms rely on either image generation or retrieval augmentation, yet they typically treat the two as mutually exclusive paths, failing to unify factuality with creativity. We argue that the next milestone in this field is Agentic Tool Planning, where the model serves as a central controller that autonomously determines when, where, and which tools to invoke to produce interleaved responses for visual-critical queries. To systematically evaluate this paradigm, we introduce ATP-Bench, a novel benchmark comprising 7,702 QA pairs (including 1,592 VQA pairs) across eight categories and 25 visual-critical intents, featuring human-verified queries and ground truths. Furthermore, to evaluate agentic planning independent of end-to-end execution and changing tool backends, we propose a Multi-Agent MLLM-as-a-Judge (MAM) system. MAM evaluates tool-call precision, identifies missed opportunities for tool use, and assesses overall response quality without requiring ground-truth references. Our extensive experiments on 10 state-of-the-art MLLMs reveal that models struggle with coherent interleaved planning and exhibit significant variations in tool-use behavior, highlighting substantial room for improvement and providing actionable guidance for advancing interleaved generation. Dataset and code are available at https://github.com/Qwen-Applications/ATP-Bench.