Variational Rectification Inference for Learning with Noisy Labels
arXiv cs.LG · March 19, 2026
Key Points
- We propose Variational Rectification Inference (VRI) to adapt loss rectification for learning with noisy labels within a meta-learning framework.
- VRI treats the rectifying vector as a latent variable in a hierarchical Bayesian model; the added stochasticity acts as a regularizer that makes loss correction for noisy samples more robust.
- An amortized meta-network approximates the conditional posterior of the rectifying vector, preventing the posterior from collapsing to a Dirac delta and improving generalization.
- The framework combines a smooth prior with bi-level optimization to meta-learn the rectification efficiently from a small set of clean meta-data (see the sketch after this list).
- Empirical results show improved robustness to label noise, including open-set noise, validating the effectiveness of VRI.
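The key points above describe the mechanism only at a high level. Below is a minimal PyTorch sketch of the idea as summarized here: an amortized meta-network outputs a Gaussian posterior over a per-sample rectifying variable, the rectified loss is computed with the reparameterization trick plus a KL term toward a smooth (here standard-normal) prior, and a first-order bi-level step tunes the meta-network against a small clean meta-batch. All names (`MetaRectifier`, `rectified_loss`, `bilevel_step`), the network shape, the KL placement, and the hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MetaRectifier(nn.Module):
    """Amortized meta-network (assumed here to be a small MLP) that maps a
    per-sample statistic -- just the loss value in this sketch -- to the mean
    and log-variance of a Gaussian posterior over a latent rectifying variable,
    rather than emitting a single deterministic (Dirac-delta) weight."""

    def __init__(self, hidden_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, 2)
        )

    def forward(self, per_sample_loss):
        stats = self.net(per_sample_loss.detach().unsqueeze(1))
        mu, log_var = stats.chunk(2, dim=1)
        return mu.squeeze(1), log_var.squeeze(1)


def rectified_loss(logits, noisy_targets, meta_net):
    """Sample a rectifying weight per example via the reparameterization trick
    and return (rectified training loss, KL to a standard-normal prior).
    The KL term is the extra-randomness regularizer that keeps the approximate
    posterior from collapsing."""
    per_sample = F.cross_entropy(logits, noisy_targets, reduction="none")
    mu, log_var = meta_net(per_sample)
    std = torch.exp(0.5 * log_var)
    v = mu + std * torch.randn_like(std)   # latent rectifying variable
    w = torch.sigmoid(v)                   # squash to a (0, 1) sample weight
    kl = 0.5 * (log_var.exp() + mu ** 2 - 1.0 - log_var)
    return (w * per_sample).mean(), kl.mean()


def bilevel_step(w_cls, meta_net, opt_meta, noisy_x, noisy_y,
                 clean_x, clean_y, inner_lr=0.1, beta=1e-3):
    """One bi-level step for a linear classifier whose weights `w_cls` are a
    plain tensor, so the virtual inner update stays differentiable w.r.t. the
    meta-network (a first-order sketch of meta-learning with clean meta-data)."""
    # Inner objective: rectified loss on the noisy-label batch.
    train_loss, kl = rectified_loss(noisy_x @ w_cls, noisy_y, meta_net)
    grad_w = torch.autograd.grad(train_loss, w_cls, create_graph=True)[0]
    w_virtual = w_cls - inner_lr * grad_w  # virtual classifier update
    # Outer objective: the virtually updated classifier should fit the clean
    # meta-batch; adding the KL regularizer here is an assumption of this sketch.
    meta_loss = F.cross_entropy(clean_x @ w_virtual, clean_y) + beta * kl
    opt_meta.zero_grad()
    meta_loss.backward()                   # gradients flow back into meta_net
    opt_meta.step()
    return meta_loss.item()


# Toy usage: 10-dim features, 2 classes, random data for illustration only.
torch.manual_seed(0)
w_cls = torch.zeros(10, 2, requires_grad=True)
meta_net = MetaRectifier()
opt_meta = torch.optim.Adam(meta_net.parameters(), lr=1e-3)
noisy_x, noisy_y = torch.randn(32, 10), torch.randint(0, 2, (32,))
clean_x, clean_y = torch.randn(16, 10), torch.randint(0, 2, (16,))
print(bilevel_step(w_cls, meta_net, opt_meta, noisy_x, noisy_y, clean_x, clean_y))
```

Using a plain tensor for the linear classifier keeps the virtual inner update differentiable without a dedicated meta-learning library; the paper's actual setup (deep classifiers, the exact posterior parameterization, and the choice of smooth prior) may differ.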