AI Navigate

Robust Self-Training with Closed-loop Label Correction for Learning from Noisy Labels

arXiv cs.LG / 3/17/2026


Key Points

  • The paper proposes a self-training label correction framework based on decoupled bilevel optimization, in which a classifier and a neural correction function co-evolve to robustly handle noisy labels.
  • It uses a small clean dataset along with noisy posterior simulation and intermediate features to transfer ground-truth knowledge, forming a closed-loop feedback system that mitigates error amplification.
  • The approach comes with theoretical guarantees on stability of the optimization process.
  • Empirical results on CIFAR and Clothing1M demonstrate state-of-the-art performance with reduced training time, showing practical applicability for learning from noisy labels.

Abstract

Training deep neural networks with noisy labels remains a significant challenge, often leading to degraded performance. Existing methods for handling label noise typically rely on transition matrix estimation, noise detection, or meta-learning techniques, but they often use noisy samples inefficiently and incur high computational costs. In this paper, we propose a self-training label correction framework using decoupled bilevel optimization, where a classifier and a neural correction function co-evolve. Leveraging a small clean dataset, our method employs noisy posterior simulation and intermediate features to transfer ground-truth knowledge, forming a closed-loop feedback system that prevents error amplification. Theoretical guarantees underpin the stability of our approach, and extensive experiments on benchmark datasets such as CIFAR and Clothing1M confirm state-of-the-art performance with reduced training time, highlighting its practical applicability for learning from noisy labels.
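To make the closed-loop idea concrete, here is a minimal toy sketch of the general pattern the abstract describes: a classifier is trained on corrected labels, and a correction function (here just a scalar mixing weight, a stand-in for the paper's neural correction function) is tuned on a small clean subset before feeding corrected labels back to the classifier. Everything below (the logistic-regression classifier, the mixing-weight correction, the data) is an illustrative assumption, not the paper's actual architecture or optimization procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary data: two Gaussian clusters in 2D.
n = 400
X = np.vstack([rng.normal(-1, 1, (n // 2, 2)), rng.normal(1, 1, (n // 2, 2))])
y_true = np.array([0] * (n // 2) + [1] * (n // 2))

# Inject 30% symmetric label noise.
flip = rng.random(n) < 0.3
y_noisy = np.where(flip, 1 - y_true, y_true)

# Small clean subset, playing the role of the paper's clean dataset.
clean_idx = rng.choice(n, 40, replace=False)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_classifier(X, soft_labels, epochs=50, lr=0.5):
    """Fit logistic regression on (possibly soft) corrected labels."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # add bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = sigmoid(Xb @ w)
        w -= lr * Xb.T @ (p - soft_labels) / len(X)  # cross-entropy gradient
    return w

def posterior(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return sigmoid(Xb @ w)

# Closed loop: alternate the classifier update (inner step) with a
# correction update validated on the clean subset (outer step).
alpha = 0.0  # trust placed in the classifier's posterior
soft = y_noisy.astype(float)
for _ in range(5):
    w = train_classifier(X, soft)          # inner: fit on corrected labels
    p = posterior(w, X)
    # Outer: choose the mixing weight that best matches the clean labels.
    best_alpha, best_err = alpha, np.inf
    for a in np.linspace(0, 1, 11):
        corrected = (1 - a) * y_noisy + a * p
        err = np.mean((corrected[clean_idx] - y_true[clean_idx]) ** 2)
        if err < best_err:
            best_alpha, best_err = a, err
    alpha = best_alpha
    soft = (1 - alpha) * y_noisy + alpha * p  # feed corrected labels back

acc = np.mean((posterior(w, X) > 0.5) == y_true)
print(f"alpha={alpha:.1f}  accuracy on true labels={acc:.3f}")
```

Because the correction is only ever evaluated against the clean subset, a classifier that starts to memorize noise is penalized in the outer step, which is the error-amplification safeguard the abstract alludes to; the paper replaces the scalar weight with a learned neural correction function and adds stability guarantees.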