From Prediction to Practice: A Task-Aware Evaluation Framework for Blood Glucose Forecasting

arXiv cs.LG / 5/4/2026


Key Points

  • The paper argues that standard aggregate metrics for clinical time-series forecasting can hide dangerous failures in high-risk regimes, motivating task-aware evaluation for blood glucose forecasting.
  • It introduces two evaluation arms tailored to downstream uses: hypoglycemia early warning measured with event-level recall and patient-day false alarms, and insulin dosing decision support that tests action-dependent effects.
  • Using real data from three clinical cohorts, the study finds models with high overall recall (above 0.9) can still perform poorly in the post-bolus period, where missed warnings have the greatest clinical consequences.
  • For insulin dosing support, the framework uses the FDA-accepted UVA/Padova simulator to run paired factual/counterfactual scenarios, showing that strong real-data forecasters may fail to predict intervention effects and may recommend poor insulin doses under a clinically motivated cost function.
  • The authors release a benchmark, a standardized preprocessing pipeline, and an interventional simulator-based dataset to enable reproducible, task-relevant model evaluation.

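As a rough illustration of the early-warning metrics named above, event-level recall and false alarms per patient-day might be computed along the following lines. This is a sketch, not the paper's implementation; the function name, the horizon/windowing convention, and the labeling scheme are all assumptions:

```python
import numpy as np

def event_level_metrics(y_true, alarms, n_patient_days, horizon=6):
    """Sketch of event-level early-warning metrics (hypothetical).

    y_true: binary array, 1 where glucose is below a hypoglycemia
            threshold (e.g. < 70 mg/dL) at each 5-minute step.
    alarms: binary array, 1 where the model raised a warning.
    horizon: steps before onset in which an alarm counts as a
             detection (6 steps = 30 minutes of lead time).
    """
    y_true = np.asarray(y_true)
    alarms = np.asarray(alarms)

    # Event onsets: 0 -> 1 transitions in the label series.
    onsets = np.where((y_true[1:] == 1) & (y_true[:-1] == 0))[0] + 1
    if y_true.size and y_true[0] == 1:
        onsets = np.insert(onsets, 0, 0)

    detected = 0
    covered = np.zeros_like(alarms, dtype=bool)
    for onset in onsets:
        window = slice(max(0, onset - horizon), onset + 1)
        if alarms[window].any():
            detected += 1          # event caught with lead time
        covered[window] = True     # alarms here are true positives

    # Alarms outside every detection window count as false alarms.
    false_alarms = int((alarms.astype(bool) & ~covered).sum())

    recall = detected / len(onsets) if len(onsets) else float("nan")
    return recall, false_alarms / n_patient_days
```

Slicing the test set (e.g. to the post-bolus period) before calling such a function is what turns an aggregate score into the regime-specific check the paper advocates.
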
Abstract

Clinical time-series forecasting is increasingly studied for decision support, yet standard aggregate metrics can obscure whether a model is actually useful for the task it is meant to serve. In safety-critical settings, low average error can coexist with dangerous failures in exactly the high-risk regimes that matter most. We present a task-aware evaluation framework for blood glucose forecasting built around two downstream uses: hypoglycemia early warning and insulin dosing decision support. For early warning, we evaluate on real data from three clinical cohorts using event-level recall and false alarms per patient-day, metrics that reflect operational alarm burden rather than aggregate accuracy. We show that models appearing acceptable overall, with recall above 0.9 on the full test set, can fail badly in the post-bolus slice, where insulin-on-board is elevated and missed warnings carry the greatest clinical consequences. Standard forecasting evaluation, however, does not test whether a model can reason about the effects of actions, a requirement for supporting insulin dosing decisions. We therefore add a second, interventional arm using the FDA-accepted UVA/Padova simulator, where we evaluate whether forecasters can predict glucose responses to altered insulin plans in paired factual/counterfactual scenarios. We show that models that look strong on real-data forecasting often fail to predict the direction, magnitude, or ranking of intervention effects, and choose poor insulin doses when evaluated under a clinically motivated cost. Taken together, the two arms reveal a consistent gap between forecasting accuracy and task-relevant usefulness. We release the benchmark, the standardized preprocessing pipeline for public cohorts, and the simulator-based interventional dataset as a reproducible toolkit.
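The paired factual/counterfactual check described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions: `simulate` stands in for a glucose simulator such as UVA/Padova, `forecaster` is the model under test, both are assumed to map (state, insulin plan, horizon) to a glucose trajectory, and the function name and signatures are invented for the example:

```python
import numpy as np

def intervention_effect_error(forecaster, simulate, state, plans, horizon=12):
    """Compare model-predicted vs. simulated effects of changing an
    insulin plan from the same initial state (hypothetical sketch)."""
    base_plan, alt_plan = plans

    # Ground-truth effect of the plan switch, from the simulator:
    # difference in end-of-horizon glucose between the two plans.
    true_effect = (simulate(state, alt_plan, horizon)[-1]
                   - simulate(state, base_plan, horizon)[-1])

    # The model's predicted effect of the same switch.
    pred_effect = (forecaster(state, alt_plan, horizon)[-1]
                   - forecaster(state, base_plan, horizon)[-1])

    direction_ok = bool(np.sign(true_effect) == np.sign(pred_effect))
    magnitude_err = abs(true_effect - pred_effect)
    return direction_ok, magnitude_err
```

Repeating this over many scenarios and dose candidates gives the direction, magnitude, and ranking checks the paper describes; a forecaster can score well on passive prediction yet fail all three.
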