MolQuest: A Benchmark for Agentic Evaluation of Abductive Reasoning in Chemical Structure Elucidation

arXiv cs.CL / March 27, 2026


Key Points

  • MolQuest introduces an agent-based evaluation framework for molecular structure elucidation that uses authentic chemical experimental data rather than static, single-turn QA benchmarks.
  • The benchmark reframes structure elucidation as a multi-turn interactive task where models must plan experimental steps, combine heterogeneous spectral evidence (e.g., NMR, MS), and iteratively update hypotheses.
  • The paper focuses specifically on measuring abductive reasoning and strategic decision-making under realistic scientific constraints, targeting the gap in current LLM evaluation practices.
  • Experimental results reveal substantial limitations in frontier LLMs on this benchmark: even SOTA models reach only around 50% accuracy, and most other models score below 30%.
  • The authors position MolQuest as reproducible and extensible, aiming to guide future research toward LLMs that can actively participate in the scientific process.

Abstract

Large language models (LLMs) hold considerable potential for advancing scientific discovery, yet systematic assessment of their dynamic reasoning in real-world research remains limited. Current scientific evaluation benchmarks predominantly rely on static, single-turn Question Answering (QA) formats, which are inadequate for measuring model performance in complex scientific tasks that require multi-step iteration and experimental interaction. To address this gap, we introduce MolQuest, a novel agent-based evaluation framework for molecular structure elucidation built upon authentic chemical experimental data. Unlike existing datasets, MolQuest formalizes molecular structure elucidation as a multi-turn interactive task, requiring models to proactively plan experimental steps, integrate heterogeneous spectral sources (e.g., NMR, MS), and iteratively refine structural hypotheses. This framework systematically evaluates LLMs' abductive reasoning and strategic decision-making abilities within a vast and complex chemical space. Empirical results reveal that contemporary frontier models exhibit significant limitations in authentic scientific scenarios: notably, even state-of-the-art (SOTA) models achieve an accuracy of only approximately 50%, while the performance of most other models remains below the 30% threshold. This work provides a reproducible and extensible framework for science-oriented LLM evaluation. Our findings highlight the critical gap in current LLMs' strategic scientific reasoning, setting a clear direction for future research toward AI that can actively participate in the scientific process.
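The multi-turn interaction format the abstract describes — an agent that requests experiments, reads the resulting spectra, and eventually commits to a structure — can be sketched as a toy agent-environment loop. Everything below (class names, the action schema, the scripted agent, the example spectra) is illustrative and not the paper's actual evaluation harness:

```python
# Hypothetical sketch of a multi-turn structure-elucidation episode:
# the agent alternates between requesting spectra and proposing a structure.
from dataclasses import dataclass

@dataclass
class ElucidationEnv:
    """Toy environment: pre-recorded spectra plus the ground-truth SMILES."""
    spectra: dict          # experiment name -> spectral summary (stand-in for real data)
    target_smiles: str     # answer the agent must recover
    max_turns: int = 5     # budget on experimental steps

    def run_experiment(self, name: str) -> str:
        return self.spectra.get(name, "experiment unavailable")

def run_episode(env: ElucidationEnv, agent) -> bool:
    """Drive the loop; `agent` maps the transcript so far to the next action."""
    transcript = []
    for _ in range(env.max_turns):
        action = agent(transcript)        # {"type": "request"|"answer", ...}
        if action["type"] == "request":
            result = env.run_experiment(action["experiment"])
            transcript.append((action["experiment"], result))
        else:                             # final structural hypothesis
            return action["smiles"] == env.target_smiles
    return False                          # turn budget exhausted, no answer given

# Scripted stand-in for an LLM agent: gather two spectra, then answer.
def scripted_agent(transcript):
    plan = ["1H NMR", "MS"]
    if len(transcript) < len(plan):
        return {"type": "request", "experiment": plan[len(transcript)]}
    return {"type": "answer", "smiles": "CCO"}

env = ElucidationEnv(
    spectra={"1H NMR": "triplet/quartet pattern", "MS": "M+ = 46"},
    target_smiles="CCO",  # ethanol
)
print(run_episode(env, scripted_agent))  # True
```

A real harness would additionally need canonical SMILES comparison (string equality fails on equivalent notations) and would charge each requested experiment against the agent's budget when scoring strategic planning.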
