Beyond Coefficients: Forecast-Necessity Testing for Interpretable Causal Discovery in Nonlinear Time-Series Models
arXiv cs.AI · 22 Apr 2026
Key Points
- The paper argues that in nonlinear time-series causal discovery, interpreting causal scores from regularized neural autoregressive models as if they were regression coefficients can produce misleading significance claims.
- It proposes evaluating causal relevance via “forecast necessity,” asking whether a hypothesized causal edge is required for accurate prediction rather than focusing on coefficient magnitude.
- The authors introduce a practical, interpretable evaluation procedure based on systematic edge ablation and forecast comparison to test candidate causal relationships (a minimal sketch follows this list).
- Using Neural Additive Vector Autoregression (NAVAR), they apply the method to a multivariate panel time series of democracy indicators across 139 countries, showing that edges with similar causal scores can differ sharply in predictive necessity.
- The work aims to improve the reliability of causal reasoning in applied AI, offering guidance for interpreting nonlinear time-series models in high-stakes settings.
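To make the edge-ablation idea concrete, here is a minimal illustrative sketch, not the paper's exact procedure: it tests whether a candidate edge is "forecast necessary" by ablating the candidate parent series on held-out data and measuring the increase in forecast error. For self-containedness it uses a gradient-boosted regressor rather than NAVAR, permutation ablation as a stand-in for the paper's edge ablation, and synthetic toy data; the names `ablated_mse` and the "necessity gap" label are hypothetical.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Toy 3-variable system: x2[t] depends nonlinearly on x0[t-1]; x1 is a distractor.
T = 2000
X = rng.normal(size=(T, 3))
for t in range(1, T):
    X[t, 2] = np.tanh(2.0 * X[t - 1, 0]) + 0.1 * rng.normal()

lagged, target = X[:-1], X[1:, 2]          # predict x2[t] from all series at t-1
split = int(0.8 * T)
Xtr, Xte = lagged[:split], lagged[split:]
ytr, yte = target[:split], target[split:]

model = GradientBoostingRegressor().fit(Xtr, ytr)
base_mse = np.mean((model.predict(Xte) - yte) ** 2)

def ablated_mse(j, n_repeats=20):
    """Held-out forecast error when candidate parent series j is ablated by permutation."""
    errs = []
    for _ in range(n_repeats):
        Xa = Xte.copy()
        Xa[:, j] = rng.permutation(Xa[:, j])   # break the j -> target edge
        errs.append(np.mean((model.predict(Xa) - yte) ** 2))
    return np.mean(errs)

for j in range(3):
    gap = ablated_mse(j) - base_mse           # "necessity gap": error increase under ablation
    print(f"edge x{j} -> x2: necessity gap = {gap:.4f}")
# Expect a large gap for x0 (truly necessary for forecasting) and a gap near
# zero for x1, even if a fitted model assigned both nonzero causal scores.
```

Permutation is used here instead of zeroing the input because it destroys the temporal association while preserving the marginal distribution, so the error increase is not confounded by out-of-distribution inputs; the paper's own ablation scheme may differ.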