Pseudo-Labeling for Unsupervised Domain Adaptation with Kernel GLMs
arXiv stat.ML / 3/23/2026
Key Points
- Proposes a principled framework for unsupervised domain adaptation under covariate shift in kernel GLMs, covering kernelized linear, logistic, and Poisson regression with ridge regularization.
- Splits labeled source data into two batches: one to train a family of candidate models and one to build an imputation model that generates pseudo-labels for the target data, enabling robust model selection.
- Establishes non-asymptotic excess-risk bounds characterized by an 'effective labeled sample size' that accounts for unknown covariate shift, providing theoretical guarantees.
- Demonstrates empirical gains over source-only baselines on both synthetic and real datasets.
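The sample-splitting procedure in the key points above can be sketched in code. The following is a minimal illustration, not the paper's implementation: it uses plain kernel ridge regression (the kernelized linear case), an RBF kernel, and a ridge-penalty grid as the candidate family; all function names, the choice of kernel, and the imputation model's penalty are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_krr(X, y, lam, gamma=1.0):
    # Kernel ridge regression: solve (K + lam * n * I) alpha = y,
    # then predict with f(Z) = K(Z, X) @ alpha.
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * len(X) * np.eye(len(X)), y)
    return lambda Z: rbf_kernel(Z, X, gamma) @ alpha

rng = np.random.default_rng(0)
f_true = lambda x: np.sin(3 * x).ravel()

# Labeled source data; unlabeled target covariates come from a
# shifted distribution (the covariate-shift setting).
X_s = rng.uniform(-1, 1, (200, 1))
y_s = f_true(X_s) + 0.1 * rng.standard_normal(200)
X_t = rng.uniform(0, 2, (100, 1))  # target support shifted right

# Step 1: split the labeled source data into two batches.
X1, y1 = X_s[:100], y_s[:100]
X2, y2 = X_s[100:], y_s[100:]

# Step 2: batch 1 trains a family of candidate models
# (here, one per ridge penalty on a grid).
lams = [1e-4, 1e-3, 1e-2, 1e-1, 1.0]
candidates = [fit_krr(X1, y1, lam) for lam in lams]

# Step 3: batch 2 trains an imputation model, used only to
# generate pseudo-labels on the unlabeled target covariates.
imputer = fit_krr(X2, y2, 1e-3)
pseudo_y = imputer(X_t)

# Step 4: select the candidate minimizing empirical risk
# against the pseudo-labels (squared loss for the linear case).
risks = [np.mean((g(X_t) - pseudo_y) ** 2) for g in candidates]
best = candidates[int(np.argmin(risks))]
```

Keeping the imputation batch disjoint from the candidate-training batch is what makes the pseudo-label risk a usable selection criterion: the imputer's errors are independent of the candidates' fitting noise.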