Neyman-Pearson multiclass classification under label noise via empirical likelihood

arXiv stat.ML / 2026/3/24

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper studies Neyman-Pearson multiclass classification (NPMC) when training labels are corrupted and only noisy labels are observed, a scenario largely unexplored in prior NPMC work.
  • It proposes an empirical-likelihood (EL) approach that links the noisy and true label distributions through an exponential tilting density-ratio model, enabling recovery of the clean class proportions and posterior probabilities needed for error control (formalized in the sketch after this list).
  • The authors prove statistical guarantees for the maximum EL estimators, including consistency, asymptotic normality, and optimal convergence rates.
  • They show that, under mild conditions, the resulting classifier achieves asymptotic Neyman-Pearson-style oracle inequalities with respect to the unknown true labels.
  • An EM algorithm is presented for computation, and experiments indicate performance close to an oracle trained on clean labels and notably better than methods that ignore label noise.
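
For readers who want the setup in symbols, here is a minimal sketch of the two ingredients above, written in generic notation rather than the paper's own: the NPMC program (constrained class set \mathcal{A}, weights w_k, error caps \alpha_k) and one common exponential tilting density-ratio parameterization (baseline density f_1, tilt parameters \theta_k, \beta_k, feature map h). These symbol choices are assumptions about the standard formulation, not taken from the paper.

```latex
% NPMC program in generic notation: minimize a weighted misclassification risk
% over the unconstrained classes while capping the error of each constrained class.
\[
  \min_{\phi}\; \sum_{k \notin \mathcal{A}} w_k\, \mathbb{P}\{\phi(X) \neq k \mid Y = k\}
  \quad \text{subject to} \quad
  \mathbb{P}\{\phi(X) \neq k \mid Y = k\} \le \alpha_k, \qquad k \in \mathcal{A}.
\]

% One common exponential tilting density-ratio model: densities f_k are tied to a
% baseline f_1 through a log-linear tilt in a user-chosen feature map h(x).
\[
  \frac{f_k(x)}{f_1(x)} = \exp\bigl\{\theta_k + \beta_k^{\top} h(x)\bigr\}, \qquad k = 2, \dots, K.
\]
```

Per the abstract, a density ratio of this exponential tilting form is what links the noisy-label and true-label distributions, and it is this link that lets the EL machinery recover the clean class proportions and posteriors.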

Abstract

In many classification problems, the costs of misclassifying observations from different classes can be highly unequal. The Neyman-Pearson multiclass classification (NPMC) framework addresses this issue by minimizing a weighted misclassification risk while imposing upper bounds on class-specific error probabilities. Existing NPMC methods typically assume that training labels are correctly observed. In practice, however, labels are often corrupted by measurement or annotation error, and the effect of such label noise on NPMC procedures remains largely unexplored. We study the NPMC problem when only noisy labels are available in the training data. We propose an empirical likelihood (EL)-based method that relates the distributions of noisy and true labels through an exponential tilting density ratio model. The resulting maximum EL estimators recover the class proportions and posterior probabilities of the clean labels required for error control. We establish consistency, asymptotic normality, and optimal convergence rates for these estimators. Under mild conditions, the resulting classifier asymptotically satisfies NP oracle inequalities with respect to the true labels. An expectation-maximization (EM) algorithm is provided to compute the maximum EL estimators. Simulations show that the proposed method performs comparably to an oracle classifier trained on clean labels and substantially improves over procedures that ignore label noise.
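
To make the EM step more concrete, below is a minimal, self-contained sketch, not the paper's EL-based algorithm, of an EM update that recovers clean class proportions from noisy labels under the simplifying assumption that the label-noise transition matrix is known; the function name `em_class_priors` and every variable name are hypothetical.

```python
import numpy as np

def em_class_priors(noisy_labels, noise_matrix, n_iter=200, tol=1e-8):
    """Estimate clean-class proportions pi from noisy labels via EM.

    Simplified illustration only: assumes a known transition matrix
    noise_matrix[k, j] = P(noisy label = j | true label = k), whereas the
    paper estimates the noisy/true relationship via empirical likelihood.
    """
    K = noise_matrix.shape[0]
    pi = np.full(K, 1.0 / K)                      # uniform initial priors
    counts = np.bincount(noisy_labels, minlength=K).astype(float)

    for _ in range(n_iter):
        # E-step: responsibility of true class k for noisy label j,
        # i.e. P(Y = k | noisy Y = j) under the current priors.
        joint = pi[:, None] * noise_matrix        # shape (K, K)
        resp = joint / joint.sum(axis=0, keepdims=True)
        # M-step: re-estimate priors from responsibility-weighted counts.
        new_pi = resp @ counts
        new_pi /= new_pi.sum()
        if np.max(np.abs(new_pi - pi)) < tol:
            return new_pi
        pi = new_pi
    return pi

# Toy check: 3 classes with a 0.6/0.3/0.1 prior and symmetric label flipping.
rng = np.random.default_rng(0)
true_pi = np.array([0.6, 0.3, 0.1])
T = np.full((3, 3), 0.1) + 0.7 * np.eye(3)        # 80% kept, 10% flipped to each other class
y_true = rng.choice(3, size=5000, p=true_pi)
y_noisy = np.array([rng.choice(3, p=T[k]) for k in y_true])
print(em_class_priors(y_noisy, T))                # should be close to true_pi
```

The paper's EL-based EM presumably works with the tilting-model likelihood rather than a known noise matrix, and it also recovers the posterior probabilities needed for NP-type error control; this toy version only illustrates the alternating E-step/M-step pattern.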