Learnability with Partial Labels and Adaptive Nearest Neighbors

arXiv stat.ML / 3/23/2026

Key Points

  • The paper mathematically characterizes when learning with partial labels (PLL) is feasible, clarifying the conditions under which PLL can succeed.
  • It introduces PLA-kNN, an adaptive nearest-neighbors algorithm for PLL that works well across general PLL settings and comes with strong theoretical guarantees.
  • Empirical experiments show PLA-kNN outperforming state-of-the-art PLL methods across broad PLL scenarios.
  • The work combines theoretical analysis with empirical validation, expanding the applicability of PLL beyond niche cases.

Abstract

Prior work on partial-label learning (PLL) has shown that learning is possible even when each instance is associated with a bag of candidate labels, rather than a single accurate but costly label. However, the conditions necessary for learning with partial labels remain unclear, and existing PLL methods are effective only in specific scenarios. In this work, we mathematically characterize the settings in which PLL is feasible. In addition, we present PLA-kNN, an adaptive nearest-neighbors algorithm for PLL that is effective in general scenarios and enjoys strong performance guarantees. Experimental results corroborate that PLA-kNN can outperform state-of-the-art methods in general PLL scenarios.
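To make the PLL setting concrete, here is a minimal sketch of a nearest-neighbors classifier over partially labeled data. Each training instance carries a set of candidate labels, and each of a query's k nearest neighbors votes for every label in its candidate set. This is an illustrative toy, not the paper's PLA-kNN: the function name `pll_knn_predict` and the fixed-k, unweighted voting rule are assumptions, and the adaptive neighbor selection that distinguishes PLA-kNN is not reproduced here.

```python
from collections import Counter
from math import dist

def pll_knn_predict(X_train, candidate_sets, x, k=3):
    """Predict a label for query x from partially labeled data.

    Each training instance i has a *set* of candidate labels,
    candidate_sets[i], one of which is the unknown true label.
    Every one of the k nearest neighbors contributes one vote to
    each label in its candidate set; the most-voted label wins.
    (Toy sketch only -- not the paper's adaptive PLA-kNN rule.)
    """
    # Indices of training points sorted by Euclidean distance to x.
    order = sorted(range(len(X_train)), key=lambda i: dist(X_train[i], x))
    votes = Counter()
    for i in order[:k]:
        for label in candidate_sets[i]:
            votes[label] += 1
    return votes.most_common(1)[0][0]

# Toy data: two clusters, each instance with a bag of labels.
X = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
bags = [{"a", "b"}, {"a"}, {"a", "c"}, {"b", "c"}, {"b"}, {"b", "c"}]

print(pll_knn_predict(X, bags, (0.2, 0.2)))  # → a
print(pll_knn_predict(X, bags, (5.5, 5.5)))  # → b
```

A natural refinement in the adaptive spirit of the paper would be to vary k per query (e.g. based on local density), which is the kind of choice the paper's theoretical analysis is designed to justify.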