WildFireVQA: A Large-Scale Radiometric Thermal VQA Benchmark for Aerial Wildfire Monitoring

arXiv cs.CV · April 23, 2026


Key Points

  • WildFireVQA is a new large-scale benchmark for aerial wildfire monitoring that evaluates multimodal visual question answering grounded in radiometric thermal measurements, combining RGB and thermal data.
  • The dataset contains 6,097 RGB-thermal samples (each comprising an RGB image, a color-mapped thermal visualization, and a radiometric thermal TIFF), with 34 multiple-choice questions per sample, for a total of 207,298 questions across multiple wildfire-relevant task types.
  • To improve annotation reliability, the project uses multimodal LLM-based answer generation alongside sensor-driven deterministic labeling, manual verification, and consistency checks within and across frames.
  • The paper introduces a comprehensive evaluation protocol for representative MLLMs across RGB, Thermal, and retrieval-augmented settings using radiometric thermal statistics, and finds RGB is currently the strongest modality for most models.
  • Results indicate that adding retrieved thermal context can improve stronger MLLMs, while also exposing limitations of current MLLMs for safety-critical wildfire intelligence, and the dataset and code are open-sourced.
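
The "retrieval-augmented setting using radiometric thermal statistics" amounts to summarizing the per-pixel temperature map into compact statistics that can be injected into an MLLM prompt as extra context. A minimal sketch of that idea, assuming temperatures in °C and an illustrative fire threshold (the field names and threshold are ours, not from the paper):

```python
import numpy as np

def thermal_stats(thermal: np.ndarray) -> dict:
    """Summarize a radiometric thermal frame (values in degrees C) into
    statistics that could serve as retrieved context for an MLLM prompt.
    The 150 C "hot pixel" threshold is a hypothetical choice."""
    hot = thermal > 150.0
    return {
        "min_temp_c": float(thermal.min()),
        "max_temp_c": float(thermal.max()),
        "mean_temp_c": float(thermal.mean()),
        "hot_pixel_fraction": float(hot.mean()),
    }

# Synthetic 4x4 frame standing in for one band of a radiometric TIFF.
frame = np.array([[20.0, 25.0, 300.0, 310.0],
                  [22.0, 24.0, 280.0, 305.0],
                  [21.0, 23.0, 26.0, 27.0],
                  [20.0, 22.0, 25.0, 26.0]])
stats = thermal_stats(frame)
```

In practice the frame would be read from the sample's radiometric thermal TIFF (e.g. with a TIFF reader such as `tifffile`), and the resulting statistics serialized into the question prompt.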

Abstract

Wildfire monitoring requires timely, actionable situational awareness from airborne platforms, yet existing aerial visual question answering (VQA) benchmarks do not evaluate wildfire-specific multimodal reasoning grounded in thermal measurements. We introduce WildFireVQA, a large-scale VQA benchmark for aerial wildfire monitoring that integrates RGB imagery with radiometric thermal data. WildFireVQA contains 6,097 RGB-thermal samples, where each sample includes an RGB image, a color-mapped thermal visualization, and a radiometric thermal TIFF, and is paired with 34 questions, yielding a total of 207,298 multiple-choice questions spanning presence and detection, classification, distribution and segmentation, localization and direction, cross-modal reasoning, and flight planning for operational wildfire intelligence. To improve annotation reliability, we combine multimodal large language model (MLLM)-based answer generation with sensor-driven deterministic labeling, manual verification, and intra-frame and inter-frame consistency checks. We further establish a comprehensive evaluation protocol for representative MLLMs under RGB, Thermal, and retrieval-augmented settings using radiometric thermal statistics. Experiments show that across task categories, RGB remains the strongest modality for current models, while retrieved thermal context yields gains for stronger MLLMs, highlighting both the value of temperature-grounded reasoning and the limitations of existing MLLMs in safety-critical wildfire scenarios. The dataset and benchmark code are open-source at https://github.com/mobiiin/WildFire_VQA.
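
Since every question is multiple-choice and tagged with a task category (presence and detection, classification, and so on), the evaluation protocol reduces to per-category accuracy over predicted options. A minimal sketch of that aggregation, with hypothetical record fields and category names for illustration:

```python
from collections import defaultdict

def score_by_category(records: list[dict]) -> dict[str, float]:
    """Compute multiple-choice accuracy per task category.
    Each record has illustrative fields: category, predicted, answer."""
    correct: dict[str, int] = defaultdict(int)
    total: dict[str, int] = defaultdict(int)
    for r in records:
        total[r["category"]] += 1
        correct[r["category"]] += int(r["predicted"] == r["answer"])
    return {cat: correct[cat] / total[cat] for cat in total}

# Toy predictions for two of the benchmark's task categories.
preds = [
    {"category": "presence", "predicted": "A", "answer": "A"},
    {"category": "presence", "predicted": "B", "answer": "A"},
    {"category": "localization", "predicted": "C", "answer": "C"},
]
acc = score_by_category(preds)  # {"presence": 0.5, "localization": 1.0}
```

Running the same scorer under RGB-only, Thermal-only, and retrieval-augmented prompts would yield the per-modality, per-category comparison the paper reports.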