Observable Neural ODEs for Identifiable Causal Forecasting in Continuous Time
arXiv cs.LG / April 30, 2026
Key Points
- The paper addresses how to perform causal inference in continuous-time sequential decision problems when hidden confounders affect both interventions and outcomes.
- It shows that identifying dynamic treatment effects requires the latent dynamics to be observable from the observed data, connecting control-theoretic observability to causal identifiability.
- The authors derive a continuous-time adjustment formula that expresses potential outcome distributions under treatment trajectories using the measurement model, latent dynamics, and filtering over latent states.
- They introduce Observable Neural ODEs (ObsNODEs), which learn continuous-time dynamics in an observable normal form so that latent states can be reconstructed from observations for causal forecasting under alternative treatment paths.
- Experiments on synthetic cancer-treatment data, semi-synthetic MIMIC-IV-based data, and real sepsis data indicate that ObsNODEs outperform recent sequence models.
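To make the "observable normal form" idea concrete, here is a minimal sketch (not the paper's implementation) of a latent ODE in chain-of-integrators observer form: each latent coordinate is the time derivative of the one before it, and the observation is the first coordinate, so the full latent state can in principle be reconstructed from the observed trajectory. The drift `g`, the treatment paths, and all constants are hypothetical; a real ObsNODE would learn `g` as a neural network.

```python
import numpy as np

def obsnode_step(z, a, g, dt):
    """One explicit-Euler step of latent dynamics in observable normal form:
    z_i' = z_{i+1} for i < d, and z_d' = g(z, a), where a is the treatment.
    The observation is x = z[0], so z is recoverable from x and its
    first d-1 time derivatives."""
    dz = np.empty_like(z)
    dz[:-1] = z[1:]      # chain-of-integrators structure of the observable form
    dz[-1] = g(z, a)     # drift (learned in the paper; hand-picked here)
    return z + dt * dz

def simulate(z0, a_traj, g, dt):
    """Roll the dynamics forward under a treatment trajectory a_traj and
    return the observed outcome path x(t) = z_1(t)."""
    z = np.array(z0, dtype=float)
    xs = []
    for a in a_traj:
        xs.append(z[0])
        z = obsnode_step(z, a, g, dt)
    return np.array(xs)

# Hypothetical drift: linear damping plus an additive treatment effect.
g = lambda z, a: -0.5 * z[0] - 0.3 * z[1] + a

# Causal forecasting under alternative treatment paths: start from the same
# latent state and compare an untreated and a constantly treated trajectory.
dt, T = 0.01, 500
x_untreated = simulate([1.0, 0.0], np.zeros(T), g, dt)
x_treated = simulate([1.0, 0.0], np.full(T, 1.0), g, dt)
effect = x_treated[-1] - x_untreated[-1]   # effect on the final outcome
```

Because the state is in observer form, `z[1]` is just the time derivative of the observed `x`, which is what makes the latent trajectory identifiable from observations rather than an arbitrary unobservable encoding.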