Thinking Like a Botanist: Challenging Multimodal Language Models with Intent-Driven Chain-of-Inquiry

arXiv cs.CL / 4/24/2026

💬 Opinion · Signals & Early Trends · Models & Research

Key Points

  • The paper argues that real-world botanical and plant-pathology image analysis is multi-step and intent-driven, while current vision-language benchmarks typically test only single-turn question answering.
  • It introduces PlantInquiryVQA, a new benchmark built around a Chain of Inquiry framework that models diagnostic reasoning as ordered question-answer sequences conditioned on grounded visual cues and explicit epistemic intent (see the sketch after this list).
  • The authors release a large, expert-curated dataset (24,950 plant images and 138,068 QA pairs) annotated with visual grounding, severity labels, and domain-specific reasoning templates.
  • Experiments with leading multimodal large language models show they can describe visual symptoms but have difficulty with safe clinical reasoning and accurate diagnosis, whereas structured inquiry improves correctness, reduces hallucinations, and boosts reasoning efficiency.
  • The work positions PlantInquiryVQA as a foundational benchmark for training diagnostic agents to perform expert-like, trajectory-based reasoning rather than acting as static classifiers.
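The paper summary above describes what each record contains but not how it is encoded, so the following is a minimal sketch of what a Chain of Inquiry record could look like. Every field name and value here is an assumption for illustration, not the dataset's actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical schema, assumed for illustration only: the summary tells us
# each record pairs an image with an ordered, intent-labeled QA trajectory,
# visual grounding, and a severity label.

@dataclass
class InquiryStep:
    question: str   # targeted probe, e.g. "Are the lesions concentric?"
    answer: str     # grounded answer derived from the image
    intent: str     # explicit epistemic intent, e.g. "narrow_pathogen_class"
    grounding: List[Tuple[float, float, float, float]] = field(default_factory=list)  # assumed xyxy boxes

@dataclass
class ChainOfInquiry:
    image_id: str
    species: str
    severity: str   # e.g. "mild" | "moderate" | "severe"
    steps: List[InquiryStep] = field(default_factory=list)
    diagnosis: str = ""

    def transcript(self) -> str:
        """Render the ordered QA trajectory as a prompt-ready transcript."""
        return "\n".join(
            f"[intent={s.intent}] Q: {s.question}\nA: {s.answer}" for s in self.steps
        )
```

The ordering of `steps` is the point: each probe is conditioned on everything answered before it, which is what distinguishes a diagnostic trajectory from a bag of independent VQA pairs.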

Abstract

Expert visual analysis is typically a multi-step process. In many specialized fields, experts analyze images using structured, evidence-based, adaptive questioning. In plant pathology, botanists inspect leaf images, identify visual cues, infer diagnostic intent, and probe further with targeted questions that adapt to species, symptoms, and severity. This structured probing is crucial for accurate disease diagnosis and treatment formulation. Yet current vision-language models are evaluated on single-turn question answering. To address this gap, we introduce PlantInquiryVQA, a benchmark for studying multi-step, intent-driven visual reasoning in botanical diagnosis. We formalize a Chain of Inquiry framework that models diagnostic trajectories as ordered question-answer sequences conditioned on grounded visual cues and explicit epistemic intent. We release a dataset of 24,950 expert-curated plant images and 138,068 question-answer pairs annotated with visual grounding, severity labels, and domain-specific reasoning templates. Evaluations of top-tier Multimodal Large Language Models reveal that while they describe visual symptoms adequately, they struggle with safe clinical reasoning and accurate diagnosis. Importantly, structured question-guided inquiry significantly improves diagnostic correctness, reduces hallucination, and increases reasoning efficiency. We hope PlantInquiryVQA serves as a foundational benchmark for advancing research on training diagnostic agents that reason like expert botanists rather than act as static classifiers.
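To make the single-turn vs. structured-inquiry contrast concrete, here is a minimal evaluation-loop sketch. `ask_model` is a hypothetical stand-in for any multimodal LLM call, and the probe questions are invented examples; this is not the paper's released evaluation harness.

```python
# A hypothetical stand-in for a multimodal LLM call; wire up a real client here.
def ask_model(image_path: str, prompt: str) -> str:
    raise NotImplementedError("connect your MLLM API client")

def single_turn_diagnosis(image_path: str) -> str:
    # Baseline: one shot, no structured probing.
    return ask_model(image_path, "Diagnose the plant disease shown in this image.")

def chain_of_inquiry_diagnosis(image_path: str, probes: list[str]) -> str:
    # Structured inquiry: each intent-driven probe is answered in order, and the
    # accumulating transcript conditions the final diagnosis on gathered evidence.
    transcript: list[str] = []
    for question in probes:
        context = "\n".join(transcript)
        answer = ask_model(image_path, f"{context}\nQ: {question}\nA:")
        transcript.append(f"Q: {question}\nA: {answer}")
    evidence = "\n".join(transcript)
    return ask_model(image_path, f"{evidence}\nBased on the evidence above, state the diagnosis.")

# Invented example probes, ordered from species to symptoms to severity:
probes = [
    "Which plant species is shown?",
    "Describe the lesion color, shape, and distribution.",
    "What fraction of the leaf surface is affected?",
]
# diagnosis = chain_of_inquiry_diagnosis("leaf_0001.jpg", probes)
```

The design choice the paper's findings support is visible in the second function: the model commits to intermediate, grounded answers before diagnosing, which is what the authors report improves correctness and reduces hallucination relative to the one-shot baseline.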