Conditional Imputation for Within-Modality Missingness in Multi-Modal Federated Learning
arXiv cs.LG / 4/28/2026
Key Points
- The paper addresses a common challenge in multimodal federated learning for clinical data: within-modality missingness caused by sensor dropouts or irregular sampling.
- It argues that existing approaches relying on architectural alignment or learned missing-value embeddings may fail to recover the true underlying data distribution, hurting downstream performance.
- The proposed framework, CondI, uses conditional diffusion models in a two-phase training process: first imputing missing temporal components with multimodal context, then training modality-specific extractors and joint embedding spaces.
- Inference runs the imputed raw data through the learned extractors to produce more robust features for downstream tasks, improving resilience to severe incompleteness.
- Experiments on PTB-XL, SLEEP-EDF, and MIMIC-IV show CondI achieves results comparable to state-of-the-art baselines, and the authors provide code on GitHub.
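The two-phase pipeline in the key points can be sketched in a toy form. Everything below is illustrative, not the paper's implementation: the "denoiser" is a stand-in that shrinks missing entries toward a crude conditional mean derived from the observed signal and a second modality, standing in for a trained conditional diffusion model; the feature extractor is likewise a placeholder for the paper's modality-specific extractors.

```python
import numpy as np

def impute_missing(x, mask, context, steps=50, rng=None):
    """Phase 1 sketch: fill masked entries of a 1-D signal by an
    ancestral-sampling-style reverse loop, conditioned on observed
    values plus a second modality. The denoiser here is a stand-in
    (shrinkage toward a conditional mean), not a trained network."""
    rng = rng or np.random.default_rng(0)
    # initialize missing entries with Gaussian noise, keep observed ones fixed
    x_hat = np.where(mask, x, rng.standard_normal(x.shape))
    # crude "multimodal context": blend observed-signal and context statistics
    cond_mean = 0.5 * x[mask].mean() + 0.5 * context.mean()
    for t in range(steps, 0, -1):
        noise_scale = t / steps
        # stand-in denoising step: move missing entries toward the conditional mean
        x_hat = np.where(mask, x, x_hat + 0.2 * (cond_mean - x_hat))
        # re-inject a shrinking amount of noise, as in ancestral sampling
        if t > 1:
            x_hat = np.where(mask, x,
                             x_hat + 0.05 * noise_scale * rng.standard_normal(x.shape))
    return x_hat

def extract_features(x):
    """Phase 2 sketch: a trivial stand-in for a modality-specific extractor."""
    return np.array([x.mean(), x.std(), np.abs(np.diff(x)).mean()])

# Usage: an ECG-like signal with a sensor-dropout window, plus a
# second modality serving as conditioning context.
sig = np.sin(np.linspace(0, 6 * np.pi, 200))
mask = np.ones(200, dtype=bool)
mask[80:120] = False                          # simulated sensor dropout
ctx = np.cos(np.linspace(0, 6 * np.pi, 200))  # stand-in second modality
completed = impute_missing(sig, mask, ctx)
feats = extract_features(completed)           # features for downstream tasks
```

The key property mirrored here is that inference runs imputed raw data through the extractor, so observed samples pass through untouched while missing spans are reconstructed before feature extraction.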