CausalCompass: Evaluating the Robustness of Time-Series Causal Discovery in Misspecified Scenarios
arXiv stat.ML / 5/1/2026
Key Points
- The paper introduces CausalCompass, a benchmark framework to evaluate time-series causal discovery (TSCD) methods when modeling assumptions are violated, addressing the lack of robustness-focused evaluation in existing benchmarks.
- Extensive experiments across eight assumption-violation scenarios show that no single TSCD method performs best in all settings.
- Across the varied scenarios, the strongest overall performers are "almost invariably" deep learning–based approaches, a finding supported by hyperparameter sensitivity analyses and ablation studies.
- An additional finding is that NTS-NOTEARS depends heavily on standardized preprocessing: it performs poorly in the default (vanilla) setting but improves substantially after standardization.
- The authors provide an implementation, documentation, and datasets to help researchers and practitioners systematically test TSCD robustness for broader real-world adoption.
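The NTS-NOTEARS finding above hinges on a simple preprocessing step: z-scoring each variable before running discovery. A minimal sketch of that step, assuming a `(T, d)` array layout; the function name and shapes are illustrative and not the paper's actual pipeline:

```python
import numpy as np

def standardize_series(X: np.ndarray) -> np.ndarray:
    """Z-score each variable of a (T, d) multivariate time series.

    Standardization like this is a common preprocessing step for
    score-based causal discovery methods; this helper is a hypothetical
    illustration, not CausalCompass's or NTS-NOTEARS's API.
    """
    mean = X.mean(axis=0, keepdims=True)
    std = X.std(axis=0, keepdims=True)
    std[std == 0] = 1.0  # guard against constant (zero-variance) series
    return (X - mean) / std

# Toy example: two variables on very different scales, the situation
# where unstandardized score-based methods tend to struggle.
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.normal(0.0, 1.0, size=200),    # unit-scale variable
    rng.normal(50.0, 10.0, size=200),  # large-offset, large-scale variable
])
Z = standardize_series(X)
```

After this transform every column has zero mean and unit variance, so a method's scoring is no longer dominated by whichever variable happens to have the largest raw magnitude.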