AgriChat: A Multimodal Large Language Model for Agriculture Image Understanding

arXiv cs.CV / 3/19/2026

Key Points

  • The paper introduces Vision-to-Verified-Knowledge (V2VK), a generative AI–driven annotation pipeline that grounds training data in verified phytopathological literature to reduce hallucinations in agricultural multimodal models.
  • It presents AgriMM, a benchmark with over 3,000 agricultural classes and more than 607k VQAs across tasks such as plant species identification, disease symptom recognition, crop counting, and ripeness assessment.
  • Leveraging this verified data, AgriChat is developed as a specialized multimodal LLM that offers broad agricultural knowledge and detailed, explainable assessments across thousands of classes.
  • The authors evaluate AgriChat across diverse tasks and datasets, demonstrating superior performance over open-source models and underscoring the value of combining rich visuals with web-verified knowledge for trustworthy agricultural AI; the code and dataset are publicly available.

Abstract

The deployment of Multimodal Large Language Models (MLLMs) in agriculture is currently stalled by two critical gaps: the existing literature lacks the large-scale agricultural datasets required for robust model development and evaluation, and current state-of-the-art models lack the verified domain expertise necessary to reason across diverse taxonomies. To address these challenges, we propose the Vision-to-Verified-Knowledge (V2VK) pipeline, a novel generative AI-driven annotation framework that integrates visual captioning with web-augmented scientific retrieval to autonomously generate the AgriMM benchmark, effectively eliminating biological hallucinations by grounding training data in verified phytopathological literature. The AgriMM benchmark contains over 3,000 agricultural classes and more than 607k VQAs spanning multiple tasks, including fine-grained plant species identification, plant disease symptom recognition, crop counting, and ripeness assessment. Leveraging this verified data, we present AgriChat, a specialized MLLM that offers broad knowledge across thousands of agricultural classes and provides detailed agricultural assessments with extensive explanations. Extensive evaluation across diverse tasks, datasets, and evaluation conditions reveals both the capabilities and limitations of current agricultural MLLMs, while demonstrating AgriChat's superior performance over other open-source models on both internal and external benchmarks. The results validate that preserving visual detail combined with web-verified knowledge constitutes a reliable pathway toward robust and trustworthy agricultural AI. The code and dataset are publicly available at https://github.com/boudiafA/AgriChat.
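The abstract does not give implementation details, but the caption-then-retrieve-then-verify loop it describes can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' code: every function name, data field, and the example disease claim below are assumptions; a real V2VK pipeline would plug in a generative captioning model and web-augmented retrieval over phytopathological literature where the stubs sit.

```python
# Illustrative sketch of a Vision-to-Verified-Knowledge style annotation step.
# The captioner, retriever, and all specifics below are hypothetical stand-ins.

from dataclasses import dataclass


@dataclass
class VQAItem:
    question: str
    answer: str
    source: str  # citation for the verified claim the answer rests on


def caption_image(image_id: str) -> str:
    # Stub: a real system would run a vision-language captioner here.
    return "tomato leaf with concentric brown lesions"


def retrieve_evidence(caption: str) -> list[dict]:
    # Stub: a real system would query web/scientific sources for claims
    # that match the caption, returning each with its provenance.
    return [{
        "claim": ("Concentric brown lesions on tomato leaves are a classic "
                  "symptom of early blight (Alternaria solani)."),
        "source": "phytopathology reference (illustrative)",
    }]


def generate_vqa(image_id: str) -> list[VQAItem]:
    """Build VQA pairs only from retrieved, attributable evidence."""
    caption = caption_image(image_id)
    items = []
    for ev in retrieve_evidence(caption):
        # Keeping only evidence-backed pairs is what grounds the training
        # data and suppresses fabricated biological facts.
        items.append(VQAItem(
            question="What disease is indicated by these symptoms?",
            answer=ev["claim"],
            source=ev["source"],
        ))
    return items


if __name__ == "__main__":
    for item in generate_vqa("img_0001"):
        print(item.question, "->", item.answer)
```

The key design point the abstract emphasizes is that no answer is emitted without an attached, verifiable source, which is why the `source` field travels with every generated QA pair.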