Causal-Audit: A Framework for Risk Assessment of Assumption Violations in Time-Series Causal Discovery
arXiv cs.LG / 4/6/2026
Key Points
- Causal-Audit proposes a framework for time-series causal discovery risk assessment when key assumptions (stationarity, regular sampling, bounded temporal dependence, etc.) may be violated, which can otherwise yield confident but incorrect causal graphs.
- The method computes calibrated effect-size diagnostics for five assumption families (stationarity, irregularity, persistence, nonlinearity, and confounding proxies) and aggregates them into four risk scores with uncertainty intervals.
- It includes an abstention-aware decision policy that recommends specific causal discovery methods only when the evidence supports reliable inference, and abstains otherwise to avoid misleading results.
- Experiments on a synthetic atlas of 500 DGPs show strong calibration (AUROC > 0.95), a 62% reduction in false positives among recommended datasets, and 78% abstention on severe violations.
- The framework's recommend-or-abstain behavior is validated across 21 external evaluations (TimeGraph and CausalTime), and an open-source implementation is released.
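The aggregation-and-abstention logic described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual API: the family names come from the summary, but the mean-based aggregation, the clipping to [0, 1], and the 0.5 threshold are assumptions standing in for the paper's calibrated diagnostics.

```python
import statistics

# The five assumption families named in the paper's diagnostics.
ASSUMPTION_FAMILIES = [
    "stationarity", "irregularity", "persistence",
    "nonlinearity", "confounding_proxies",
]

def aggregate_risk(diagnostics):
    """Collapse each family's effect-size diagnostics into a single risk
    score in [0, 1]. The mean is an illustrative stand-in for the paper's
    calibrated aggregation with uncertainty intervals."""
    return {
        family: min(1.0, max(0.0, statistics.mean(values)))
        for family, values in diagnostics.items()
    }

def decide(risk, abstain_threshold=0.5):
    """Recommend causal discovery only when no risk score exceeds the
    (hypothetical) threshold; otherwise abstain rather than risk a
    confident but incorrect causal graph."""
    if any(score > abstain_threshold for score in risk.values()):
        return "abstain"
    return "recommend"
```

For example, a dataset with mild stationarity diagnostics but strong persistence diagnostics would trigger abstention: `decide(aggregate_risk({"stationarity": [0.1, 0.2], "persistence": [0.8, 0.9]}))` returns `"abstain"`.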