Variational Rectification Inference for Learning with Noisy Labels
arXiv cs.LG / 3/19/2026
Key Points
- We propose Variational Rectification Inference (VRI) to adapt loss rectification for learning with noisy labels within a meta-learning framework.
- VRI treats the rectifying vector as a latent variable in a hierarchical Bayesian model; the extra sampling randomness acts as a regularizer, enabling robust loss correction for noisy samples.
- An amortized meta-network approximates the conditional posterior of the rectifying vector, preventing collapse to a Dirac delta and improving generalization.
- The framework combines a smooth prior with bi-level optimization to efficiently meta-learn rectification from a small set of clean meta-data (see the sketch after this list).
- Empirical results show improved robustness to label noise, including open-set noise, validating the effectiveness of VRI.
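Below is a minimal PyTorch sketch of how these pieces could fit together; it is not the paper's implementation. It assumes the rectifying vector v acts additively on the logits, the posterior is a diagonal Gaussian with a standard-normal smooth prior, and the meta-network reads the (detached) logits as its per-sample summary. All names (`AmortizedRectifier`, `bilevel_step`) and hyperparameters are illustrative.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

class AmortizedRectifier(nn.Module):
    """Meta-network approximating the conditional posterior q(v | s) over the
    rectifying vector v, where s is a per-sample summary (here: the logits).
    Architecture and input choice are illustrative assumptions."""
    def __init__(self, num_classes, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(num_classes, hidden), nn.ReLU())
        self.mu_head = nn.Linear(hidden, num_classes)
        self.logvar_head = nn.Linear(hidden, num_classes)

    def forward(self, summary):
        h = self.body(summary)
        mu, logvar = self.mu_head(h), self.logvar_head(h)
        # Reparameterized sample keeps q(v|s) stochastic rather than letting
        # it collapse to a Dirac delta (a point estimate).
        v = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return v, mu, logvar

def kl_to_prior(mu, logvar, prior_std=1.0):
    """KL( N(mu, diag(exp(logvar))) || N(0, prior_std^2 I) ): pulls the
    posterior toward a smooth Gaussian prior (assumed standard normal)."""
    pv = prior_std ** 2
    return 0.5 * ((mu.pow(2) + logvar.exp()) / pv
                  - 1.0 - logvar + math.log(pv)).sum(dim=1).mean()

def bilevel_step(model, rectifier, meta_opt, noisy_batch, meta_batch,
                 inner_lr=0.01, kl_weight=1e-3):
    x, y = noisy_batch    # training batch with possibly noisy labels
    xm, ym = meta_batch   # small trusted (clean) meta-set

    # Inner (virtual) step: rectify the training loss with a sampled v and
    # take one differentiable SGD step on the classifier.
    logits = model(x)
    v, mu, logvar = rectifier(logits.detach())   # detach the summary input
    rect_loss = F.cross_entropy(logits + v, y)   # assumed: additive logit rectification

    params = dict(model.named_parameters())
    grads = torch.autograd.grad(rect_loss, list(params.values()),
                                create_graph=True)  # keep 2nd-order path
    updated = {k: p - inner_lr * g for (k, p), g in zip(params.items(), grads)}

    # Outer step: evaluate the virtually updated classifier on clean data.
    # Attaching the KL here (a design choice in this sketch) regularizes the
    # rectifier's posterior toward the smooth prior.
    # Note: functional_call with a params-only dict assumes a buffer-free model.
    meta_logits = functional_call(model, updated, (xm,))
    meta_loss = F.cross_entropy(meta_logits, ym) + kl_weight * kl_to_prior(mu, logvar)

    meta_opt.zero_grad()
    model.zero_grad()      # clear stray grads from the second-order path
    meta_loss.backward()   # gradients reach the rectifier through `updated`
    meta_opt.step()
    return meta_loss.item()

# Hypothetical usage: a buffer-free classifier and an Adam meta-optimizer
# over the rectifier's parameters only.
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))
rectifier = AmortizedRectifier(num_classes=10)
meta_opt = torch.optim.Adam(rectifier.parameters(), lr=1e-3)
```

In this sketch, `create_graph=True` keeps the inner SGD step differentiable, so `meta_loss.backward()` carries second-order gradients back to the rectifier; after each meta update, one would take an ordinary optimizer step on the classifier using a freshly sampled rectifying vector.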