A systematic evaluation of vision-language models for observational astronomical reasoning tasks

arXiv cs.AI / 28 April 2026


Key Points

  • The study introduces AstroVLBench, a benchmark with 4,100+ expert-verified observational astronomy instances across five modalities (optical imaging, radio interferometry, multi-wavelength photometry, time-domain light curves, and optical spectroscopy).
  • Evaluating six state-of-the-art vision-language models shows that performance varies strongly by modality: Gemini 3 Pro is the most consistently capable across tasks, yet all models substantially underperform domain-specialized methods.
  • Results indicate that reliable scientific reasoning requires more than attending to salient visual features; models must ground those features in physical knowledge to avoid biased or physically imprecise explanations.
  • Mechanistic and prompting experiments find that phenomenological prompts (describing what to look for) sharpen model focus, while physical prompts (explaining why those features matter) improve overall accuracy and produce more balanced, less class-biased classifications; a minimal prompt sketch follows this list.
  • Providing the underlying measurements as numerical tables instead of rendered plots improves accuracy by up to 13 percentage points, and reasoning analysis shows that, without explicit physical grounding, models can be correct for the wrong reasons (see the serialization sketch after the abstract).
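
To make the prompting ablation concrete, here is a minimal sketch of the two conditions as they might be issued to a VLM. The prompt wording, the galaxy-morphology example task, and the `build_request` helper are illustrative assumptions, not the paper's released code.

```python
# Minimal sketch of the two prompting conditions; wording and helper names
# are illustrative assumptions, not the authors' released code.

# Phenomenological prompt: describes WHAT visual features to look for.
PHENOMENOLOGICAL = (
    "Classify this galaxy image as 'spiral' or 'elliptical'. "
    "Look for winding arm structures and a central bulge."
)

# Physical prompt: additionally explains WHY those features matter.
PHYSICAL = (
    "Classify this galaxy image as 'spiral' or 'elliptical'. "
    "Blue, knotty arms trace ongoing star formation in a rotating disk "
    "(spiral); a smooth, reddish light profile indicates an older, "
    "pressure-supported stellar system (elliptical)."
)

def build_request(image_path: str, prompt: str) -> dict:
    """Assemble a generic chat-style multimodal request payload."""
    return {
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image", "path": image_path},
                {"type": "text", "text": prompt},
            ],
        }]
    }

if __name__ == "__main__":
    for name, prompt in [("phenomenological", PHENOMENOLOGICAL),
                         ("physical", PHYSICAL)]:
        request = build_request("galaxy_0001.png", prompt)
        print(f"{name}: {request['messages'][0]['content'][1]['text'][:70]}...")
```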

Abstract

Vision-language models (VLMs) are increasingly proposed as general-purpose tools for scientific data interpretation, yet their reliability on real astronomical observations across diverse modalities remains untested. We present AstroVLBench, a comprehensive benchmark comprising over 4,100 expert-verified instances across five tasks spanning optical imaging, radio interferometry, multi-wavelength photometry, time-domain light curves, and optical spectroscopy. Evaluating six frontier models, we find that performance is strongly modality-dependent: while one model (Gemini 3 Pro) emerges as the most consistently capable across tasks, task-specific strengths vary, and all models substantially underperform domain-specialized methods. Mechanistic ablations reveal that performance depends not only on directing attention to salient visual features but also on grounding those features in physical knowledge. Phenomenological prompts describing what to look for improve accuracy by sharpening model focus, but physical prompts explaining why those features matter perform better overall and yield more balanced classifications with reduced class-specific bias. Consistent with this picture, presenting the underlying one-dimensional measurements directly as numerical tables instead of rendered plots yields up to 13 percentage points improvement. Reasoning quality analysis further demonstrates that, without explicit physical grounding, models may reach correct predictions from phenomenologically plausible cues while providing physically imprecise justifications, establishing that accuracy alone is insufficient for trustworthy scientific deployment. These findings provide the first systematic, multi-modal baselines for VLMs in observational astronomy and identify the specific representation, grounding, and reasoning bottlenecks where current models fail.
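
For the table-versus-plot finding, the sketch below shows one plausible way to serialize a light curve's one-dimensional measurements as plain text for the prompt rather than rendering them as an image. The toy data, column names, and formatting are assumptions for illustration, not the benchmark's actual pipeline.

```python
import numpy as np

# Toy light curve: a transient brightening (lower magnitude = brighter);
# purely synthetic data for illustration.
rng = np.random.default_rng(0)
time = np.sort(rng.uniform(0.0, 30.0, 40))                    # days
mag = 18.5 - 2.0 * np.exp(-0.5 * ((time - 12.0) / 4.0) ** 2)  # Gaussian peak
mag += rng.normal(0.0, 0.05, time.size)                       # photometric noise

def light_curve_to_table(t: np.ndarray, m: np.ndarray) -> str:
    """Serialize (time, magnitude) pairs as a plain-text table for the prompt."""
    header = "time_days  magnitude"
    rows = [f"{ti:9.3f}  {mi:9.3f}" for ti, mi in zip(t, m)]
    return "\n".join([header, *rows])

prompt = (
    "The following table lists photometric measurements of an astronomical "
    "transient. Classify the event and explain the physical reasoning.\n\n"
    + light_curve_to_table(time, mag)
)
print(prompt[:250])
```

The design point is that a text table hands the model the exact measurements, whereas a rendered plot forces it to re-extract those values visually, which is a plausible source of the up-to-13-percentage-point gap the paper reports.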