NeuroVLM-Bench: Evaluation of Vision-Enabled Large Language Models for Clinical Reasoning in Neurological Disorders

arXiv cs.AI / March 27, 2026

Key Points

  • NeuroVLM-Bench is a comprehensive benchmark for vision-enabled large language models on 2D neuroimaging tasks, using curated MRI/CT datasets across neurological disorder categories and normal controls.
  • The evaluation covers four operational dimensions: classification with abstention, calibration, structured-output validity, and computational efficiency (the first two are sketched in code after this list). A multi-phase framework reduces selection bias and ensures fair model comparison.
  • Results indicate that modality and imaging-plane identification are largely solved, whereas clinical diagnostic reasoning, particularly diagnosis-subtype prediction, remains notably difficult: tumor classification performs best, while multiple sclerosis and rare abnormalities remain challenging.
  • Few-shot prompting improves diagnostic performance for several models, but it increases token usage, latency, and cost, highlighting trade-offs between accuracy and operational efficiency.
  • Gemini-2.5-Pro and GPT-5-Chat lead overall diagnostic performance, Gemini-2.5-Flash offers the best efficiency, and the open-weight MedGemma-1.5-4B shows strong promise, nearly matching the zero-shot performance of several proprietary models under few-shot prompting while maintaining perfect structured outputs.
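
To make the abstention and calibration criteria concrete, here is a minimal sketch of the standard selective-classification and calibration metrics these terms usually denote. The coverage and selective-accuracy definitions and the equal-width-bin Expected Calibration Error below are common defaults, not the benchmark's confirmed scoring rules; all function and variable names are illustrative.

```python
import numpy as np

def selective_metrics(preds: np.ndarray, labels: np.ndarray, abstained: np.ndarray):
    """Coverage and selective accuracy for classification with abstention.

    preds/labels hold class ids; abstained is a boolean mask marking items
    the model declined to answer. These are standard selective-classification
    definitions, assumed here rather than taken from the paper.
    """
    answered = ~abstained
    coverage = answered.mean()                       # fraction of items answered
    if answered.any():
        selective_acc = (preds[answered] == labels[answered]).mean()
    else:
        selective_acc = float("nan")                 # undefined if everything was abstained
    return coverage, selective_acc

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE with equal-width confidence bins (a common default, not
    necessarily the paper's binning scheme)."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            # |mean accuracy - mean confidence| weighted by the bin's mass
            ece += in_bin.mean() * abs(correct[in_bin].mean() - confidences[in_bin].mean())
    return ece

# Example: 3 answered (2 correct), 1 abstention -> coverage 0.75, selective accuracy ~0.667
cov, acc = selective_metrics(np.array([1, 0, 2, 1]), np.array([1, 0, 1, 1]),
                             np.array([False, False, False, True]))
```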

Abstract

Recent advances in multimodal large language models enable new possibilities for image-based decision support. However, their reliability and operational trade-offs in neuroimaging remain insufficiently understood. We present a comprehensive benchmarking study of vision-enabled large language models for 2D neuroimaging using curated MRI and CT datasets covering multiple sclerosis, stroke, brain tumors, other abnormalities, and normal controls. Models are required to generate multiple outputs simultaneously, including diagnosis, diagnosis subtype, imaging modality, specialized sequence, and anatomical plane. Performance is evaluated along four dimensions: discriminative classification with abstention, calibration, structured-output validity, and computational efficiency. A multi-phase framework ensures fair comparison while controlling for selection bias. Across twenty frontier multimodal models, the results show that technical imaging attributes such as modality and plane are nearly solved, whereas diagnostic reasoning, especially subtype prediction, remains challenging. Tumor classification emerges as the most reliable task; stroke is moderately solvable, while multiple sclerosis and rare abnormalities remain difficult. Few-shot prompting improves performance for several models but increases token usage, latency, and cost. Gemini-2.5-Pro and GPT-5-Chat achieve the strongest overall diagnostic performance, while Gemini-2.5-Flash offers the best efficiency-performance trade-off. Among open-weight architectures, MedGemma-1.5-4B demonstrates the most promising results, approaching the zero-shot performance of several proprietary models under few-shot prompting while maintaining perfect structured output. These findings provide practical insights into performance, reliability, and efficiency trade-offs, supporting standardized evaluation of multimodal LLMs in neuroimaging.
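
Since models must emit diagnosis, diagnosis subtype, modality, sequence, and plane simultaneously, "structured-output validity" plausibly reduces to a per-response schema check. The sketch below assumes a flat JSON object with those five fields; the key names and serialization format are assumptions, as the paper's exact schema is not reproduced here.

```python
import json

# Hypothetical schema: the benchmark requires these five outputs per image,
# but the exact key names and the JSON format are assumptions made here.
REQUIRED_FIELDS = {"diagnosis", "diagnosis_subtype", "modality", "sequence", "plane"}

def is_valid_structured_output(raw: str) -> bool:
    """True iff the raw model response parses as a JSON object containing
    exactly the required fields, each with a string value."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError:
        return False
    if not isinstance(obj, dict):
        return False
    return set(obj) == REQUIRED_FIELDS and all(isinstance(v, str) for v in obj.values())
```

Under a check like this, a model's validity score would be the fraction of its responses for which is_valid_structured_output returns True; MedGemma-1.5-4B's "perfect structured output" would correspond to a rate of 1.0.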
