Learning Dynamic Representations and Policies from Multimodal Clinical Time-Series with Informative Missingness
arXiv cs.LG / 4/24/2026
Key Points
- The paper addresses a key gap in multimodal clinical time-series modeling by explicitly modeling “informative missingness,” where which variables are observed depends on the latent patient condition.
- It proposes a framework that jointly learns multimodal patient representations from structured measurements and clinical notes while modeling the observation patterns, then updates a latent patient state via Bayesian filtering.
- The learned latent state is used for two downstream tasks: offline treatment policy learning and patient outcome prediction.
- Experiments on ICU sepsis cohorts using MIMIC-III, MIMIC-IV, and eICU show improved performance, including higher FQE (0.679 vs 0.528) for treatment policy learning and AUROC of 0.886 for post–72-hour mortality prediction on MIMIC-III.
- The results suggest that incorporating the missing-data-generating process can materially improve both decision-making and prognostic modeling from sparse, multimodal EHR data.
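The summary above does not specify the paper's exact filtering architecture, but the core idea of updating a latent patient state only from the measurements that were actually observed can be illustrated with a minimal linear-Gaussian (Kalman-style) Bayesian filter. Everything below, including the function name `kalman_step` and all matrix values, is an illustrative assumption, not the paper's method:

```python
import numpy as np

def kalman_step(mu, P, z, mask, F, Q, H, R):
    """One Bayesian filtering update for a linear-Gaussian state-space model,
    restricted to the observed measurement dimensions (mask == 1).

    mu, P   : prior latent state mean / covariance
    z, mask : measurement vector and binary observed-indicator
    F, Q    : state-transition matrix / process-noise covariance
    H, R    : observation matrix / measurement-noise covariance
    """
    # Predict: propagate the latent patient state forward one time step.
    mu_pred = F @ mu
    P_pred = F @ P @ F.T + Q

    obs = mask.astype(bool)
    if not obs.any():
        # Nothing was measured this step: keep the predictive distribution.
        return mu_pred, P_pred

    # Update using only the observed rows of the measurement model.
    H_o = H[obs]
    R_o = R[np.ix_(obs, obs)]
    S = H_o @ P_pred @ H_o.T + R_o          # innovation covariance
    K = P_pred @ H_o.T @ np.linalg.inv(S)   # Kalman gain
    innov = z[obs] - H_o @ mu_pred
    mu_new = mu_pred + K @ innov
    P_new = (np.eye(len(mu)) - K @ H_o) @ P_pred
    return mu_new, P_new
```

This sketch only makes the update mask-aware; to treat the missingness as *informative*, as the paper does, one would additionally give the mask itself an observation model (e.g., mask probabilities that depend on the latent state), so the pattern of what was measured updates the posterior even when the values are absent.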