ReactBench: A Benchmark for Topological Reasoning in MLLMs on Chemical Reaction Diagrams

arXiv cs.AI / 4/20/2026


Key Points

  • The paper introduces ReactBench, a new benchmark designed to test structural (topological) reasoning in multimodal LLMs using chemical reaction diagrams rather than only semantic understanding of visuals.
  • It targets model weaknesses on complex graph structures such as branching paths, converging flows, and cyclic dependencies, where models fail even at simple endpoint-counting tasks.
  • The benchmark contains 1,618 expert-annotated QA pairs across four hierarchical task dimensions, enabling evaluation from localized recognition to holistic structural reasoning.
  • Experiments across 17 MLLMs show a performance drop of over 30% from anchor-based tasks to holistic structural reasoning tasks, indicating a bottleneck in reasoning rather than perception.
  • Ablation studies support the conclusion that the limitation is fundamentally about structural understanding, and the results suggest directions for improving visual/topological reasoning.
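As an illustration of the kind of endpoint-counting task the benchmark probes, a minimal sketch (hypothetical code, not from the paper) treats a reaction diagram as a directed graph and counts terminal products, i.e. nodes with no outgoing edges:

```python
# Illustrative only: a reaction diagram modeled as a directed graph, where
# "counting endpoints" means counting terminal products (nodes with outgoing
# degree zero). The graph and species names are hypothetical examples.
from collections import defaultdict

def count_endpoints(edges):
    """Count nodes in the graph that have no outgoing edges."""
    out_degree = defaultdict(int)
    nodes = set()
    for src, dst in edges:
        out_degree[src] += 1
        nodes.update((src, dst))
    return sum(1 for n in nodes if out_degree[n] == 0)

# A branching pathway: A -> B, A -> C, B -> D (endpoints: C and D)
edges = [("A", "B"), ("A", "C"), ("B", "D")]
print(count_endpoints(edges))  # → 2
```

On a linear chain this task is trivial, but the paper's point is that MLLMs must recover this graph structure from pixels before any such reasoning can begin, and branching, converging, or cyclic layouts are where that recovery breaks down.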

Abstract

Multimodal Large Language Models (MLLMs) excel at recognizing individual visual elements and reasoning over simple linear diagrams. However, when faced with complex topological structures involving branching paths, converging flows, and cyclic dependencies, their reasoning capabilities degrade sharply, even on tasks as basic as counting endpoints. Existing benchmarks fail to probe this gap, focusing on semantic comprehension rather than structural reasoning. We introduce ReactBench, a benchmark that reveals fundamental limitations in structural reasoning through chemical reaction diagrams. These real-world scientific diagrams offer an ideal testbed because they naturally span diverse structures from linear chains to cyclic graphs, while requiring both precise local recognition and coherent global reasoning. Our benchmark comprises 1,618 expert-annotated QA pairs across four hierarchical task dimensions. Extensive evaluation across 17 MLLMs reveals a significant performance gap exceeding 30% between anchor-based tasks and holistic structural reasoning tasks. Controlled ablations confirm this bottleneck lies in reasoning, not perception. These findings expose a fundamental deficit in structural understanding and establish directions for advancing visual reasoning.