Not a fragment, but the whole: Map-based evaluation of data-driven Fire Danger Index models

arXiv cs.LG / 3/27/2026


Key Points

  • The paper critiques conventional ML classifier metrics for Fire Danger Index (FDI) forecasting, arguing that they may not reflect operational decision-making needs.
  • It proposes a map-based evaluation approach for daily FDI models that explicitly accounts for false positive rates (false alarms), which are operationally critical.
  • The study systematically evaluates model performance for both accurately predicting fire activity and minimizing false alarms.
  • It reports that an ensemble of machine-learning models both improves fire identification and reduces false positives.
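
To make the ensemble point concrete, here is a minimal, hypothetical sketch (not the paper's actual method): several binary fire/no-fire predictions over grid cells combined by majority vote, a common way an ensemble can suppress the spurious alarms of any single model.

```python
# Hypothetical illustration of combining per-cell fire/no-fire predictions
# from several models by majority vote. The models and data are toy examples.
import numpy as np

def majority_vote(predictions: np.ndarray) -> np.ndarray:
    """predictions: (n_models, n_cells) array of 0/1 labels.
    Returns the per-cell majority label (ties resolved toward 1)."""
    return (predictions.mean(axis=0) >= 0.5).astype(int)

# Three toy models predicting over four grid cells.
preds = np.array([[1, 0, 1, 1],
                  [1, 0, 0, 1],
                  [0, 0, 1, 0]])
vote = majority_vote(preds)  # -> [1, 0, 1, 1]
```

A lone model flagging a cell (e.g. the third model's last column) is outvoted, which is one mechanism by which an ensemble can trade isolated false alarms for consensus detections.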

Abstract

A growing body of literature has focused on predicting wildfire occurrence using machine learning methods, capitalizing on high-resolution data and fire predictors that canonical process-based frameworks largely ignore. Standard evaluation metrics for an ML classifier, while important, provide a potentially limited measure of the model's operational performance for the Fire Danger Index (FDI) forecast. Furthermore, model evaluation is frequently conducted without adequately accounting for false positive rates, despite their critical relevance in operational contexts. In this paper, we revisit the daily FDI model evaluation paradigm and propose a novel method for evaluating a forest fire forecasting model that is aligned with real-world decision-making. We also systematically assess performance both in accurately predicting fire activity and in limiting false positives (false alarms). Finally, we demonstrate that an ensemble of ML models both improves fire identification and reduces false positives.
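
The map-based framing above can be illustrated with a small sketch, under assumptions of my own (this is not the paper's evaluation code): score a binary daily fire-danger map against observed fire activity cell by cell, reporting hits, misses, false alarms, and the derived probability of detection (POD) and false-alarm ratio (FAR) that operational forecasting commonly tracks.

```python
# Illustrative map-based scoring of a binary fire-danger forecast against
# observed fire activity on a spatial grid. All names and data are toy examples.
import numpy as np

def map_scores(forecast: np.ndarray, observed: np.ndarray) -> dict:
    """Per-cell confusion counts plus POD and false-alarm ratio (FAR)."""
    hits = int(np.sum((forecast == 1) & (observed == 1)))
    misses = int(np.sum((forecast == 0) & (observed == 1)))
    false_alarms = int(np.sum((forecast == 1) & (observed == 0)))
    pod = hits / (hits + misses) if hits + misses else 0.0
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else 0.0
    return {"hits": hits, "misses": misses, "false_alarms": false_alarms,
            "pod": pod, "far": far}

# Toy 4x4 maps: 1 = danger flagged (forecast) / fire occurred (observed).
forecast = np.array([[1, 1, 0, 0],
                     [1, 0, 0, 0],
                     [0, 0, 1, 0],
                     [0, 0, 0, 0]])
observed = np.array([[1, 0, 0, 0],
                     [1, 0, 0, 0],
                     [0, 0, 1, 1],
                     [0, 0, 0, 0]])
scores = map_scores(forecast, observed)  # pod = 0.75, far = 0.25
```

Unlike a single aggregate classifier metric, this kind of per-map accounting keeps false alarms visible as an explicit quantity, which is the operational concern the abstract highlights.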