Learning Stable Predictors from Weak Supervision under Distribution Shift
arXiv cs.LG / 4/8/2026
Key Points
- The paper studies how weak/proxy supervision can lead to performance failures under distribution shift, formalizing this as “supervision drift” (changes in P(y|x,c) across contexts).
- Using CRISPR-Cas13d experiments, guide efficacy is inferred indirectly from RNA-seq responses across two human cell lines and multiple time points, yielding a controlled non-IID benchmark with explicit domain and temporal shifts.
- Models achieve strong in-domain accuracy but only partial cross-cell-line transfer, while temporal transfer fails for all models, with negative R² and near-zero rank correlation.
- Analyses suggest feature–label relationships stay stable across cell lines but change sharply over time, indicating that the temporal transfer failure is driven by supervision drift rather than inherent model limitations.
- The authors propose a practical diagnostic: checking the stability of feature–label relationships across contexts before deployment, to flag settings where transfer is likely to fail (a minimal sketch follows this list).
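
To make the last two bullets concrete, here is a minimal, self-contained Python sketch, not the authors' code: it simulates two contexts whose feature–label weights differ (supervision drift), measures cross-context transfer with R² and Spearman rank correlation, and flags features whose per-context correlation with the proxy label changes markedly. The data, model choice, and the 0.3 threshold are illustrative assumptions.

```python
# Illustrative sketch of the setup described above: proxy labels y are observed
# in several contexts (e.g. cell line x time point). We (1) check how well a
# model trained in one context transfers to another, and (2) run a simple
# feature-stability diagnostic: compare per-feature Spearman correlations with
# the label across contexts.

import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

def make_context(n, w, noise=0.5):
    """Simulate one context: features X and a proxy label y = X @ w + noise."""
    X = rng.normal(size=(n, len(w)))
    y = X @ w + noise * rng.normal(size=n)
    return X, y

# Feature-label weights change between contexts -> supervision drift.
w_src = np.array([1.0, -0.5, 0.8])
w_tgt = np.array([-1.0, -0.5, 0.1])   # relationship for feature 0 flips

X_src, y_src = make_context(2000, w_src)
X_tgt, y_tgt = make_context(2000, w_tgt)

# (1) Cross-context transfer: fit in the source context, score in the target.
model = Ridge().fit(X_src, y_src)
pred = model.predict(X_tgt)
print("transfer R^2:", r2_score(y_tgt, pred))
print("transfer Spearman:", spearmanr(y_tgt, pred).correlation)

# (2) Feature-stability diagnostic: per-feature correlation with the label in
# each labeled context; large sign/magnitude changes flag likely non-transfer.
def feature_label_corr(X, y):
    return np.array([spearmanr(X[:, j], y).correlation for j in range(X.shape[1])])

corr_src = feature_label_corr(X_src, y_src)
corr_tgt = feature_label_corr(X_tgt, y_tgt)
unstable = np.abs(corr_src - corr_tgt) > 0.3   # illustrative threshold
print("per-feature corr (source):", np.round(corr_src, 2))
print("per-feature corr (target):", np.round(corr_tgt, 2))
print("flagged unstable features:", np.where(unstable)[0])
```

In this toy setup the drifting feature yields low transfer R² and is flagged by the stability check, mirroring the paper's claim that instability in feature–label relationships, rather than model capacity, predicts where transfer will fail.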