Towards Multi-Source Domain Generalization for Sleep Staging with Noisy Labels
arXiv cs.LG / 4/14/2026
Key Points
- The paper targets automatic sleep staging from multimodal signals (e.g., EEG/EOG) where performance is hindered by both cross-domain shifts and noisy labels.
- It introduces NL-DGSS, the first benchmark for evaluating noisy-label robustness in multi-source domain-generalized sleep staging, and shows that existing noisy-label methods degrade sharply when domain shift and label noise occur together.
- The proposed FF-TRUST framework uses domain-invariant multimodal learning with Joint Time-Frequency Early Learning Regularization (JTF-ELR) and confidence-diversity regularization to improve robustness under noisy supervision.
- Experiments across five public datasets show consistent state-of-the-art results under both symmetric and asymmetric noise scenarios.
- The authors plan to release the benchmark and code publicly via the provided GitHub repository.
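The summary above names the FF-TRUST regularizers but does not specify them. As a rough sketch of the two underlying ideas, the following NumPy code implements (a) an early-learning regularization term in the style of ELR (Liu et al., 2020), which keeps an exponential moving average of per-sample predictions and penalizes drift away from those early-learned targets, and (b) a generic confidence-diversity term that rewards confident per-sample predictions while keeping batch-level predictions spread across classes. All names and hyperparameters (`momentum`, `lam`, `alpha`, `beta`) are illustrative assumptions; the paper's actual JTF-ELR additionally operates jointly over time- and frequency-domain representations, which is not reproduced here.

```python
import numpy as np

def softmax(logits):
    """Numerically stable row-wise softmax."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

class ELRRegularizer:
    """ELR-style early-learning regularizer (sketch, not the paper's JTF-ELR).

    Maintains an EMA of each sample's predicted class distribution and
    penalizes predictions that move away from these early-learned targets,
    which resists memorization of noisy labels.
    """
    def __init__(self, num_samples, num_classes, momentum=0.7, lam=3.0):
        self.targets = np.zeros((num_samples, num_classes))
        self.momentum = momentum  # EMA smoothing factor (assumed value)
        self.lam = lam            # regularization weight (assumed value)

    def __call__(self, idx, logits):
        p = softmax(logits)
        # EMA update of the soft targets (would be stop-gradient in a DL framework)
        self.targets[idx] = (self.momentum * self.targets[idx]
                             + (1.0 - self.momentum) * p)
        inner = (self.targets[idx] * p).sum(axis=1)  # agreement with EMA targets
        # log(1 - <p, t>) grows as predictions drift from the early targets
        return self.lam * np.mean(np.log(1.0 - inner + 1e-8))

def confidence_diversity(logits, alpha=1.0, beta=1.0):
    """Generic confidence-diversity penalty (illustrative, not the paper's exact term).

    Minimizes per-sample prediction entropy (confidence) while maximizing the
    entropy of the batch-mean prediction (diversity across classes).
    """
    p = softmax(logits)
    per_sample_entropy = -np.mean((p * np.log(p + 1e-8)).sum(axis=1))
    mean_p = p.mean(axis=0)
    batch_entropy = -(mean_p * np.log(mean_p + 1e-8)).sum()
    return alpha * per_sample_entropy - beta * batch_entropy
```

In a training loop, both terms would be added to the supervised staging loss; the ELR term needs stable sample indices so each example's EMA target persists across epochs.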