Feature-Label Modal Alignment for Robust Partial Multi-Label Learning

arXiv cs.LG · April 13, 2026


Key Points

  • The paper addresses partial multi-label learning (PML) where candidate labels include both true labels and noisy labels that break the feature–label relationship and hurt classification performance.
  • It proposes PML-MA, treating features and labels as two complementary modalities and restoring their consistency via feature–pseudo-label modal alignment.
  • PML-MA uses low-rank orthogonal decomposition to filter noisy candidate labels and generate pseudo-labels that better approximate the true label distribution.
  • It then aligns features and pseudo-labels by projecting them into a shared subspace (global alignment) while preserving local neighborhood structure (local alignment).
  • The method further improves discriminability through multi-peak class prototype learning, which uses pseudo-labels as soft membership weights to reflect each instance's membership in multiple categories; experiments show strong accuracy and noise robustness on real-world and synthetic datasets.
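To make the pseudo-label step concrete, the sketch below uses a plain truncated SVD as a stand-in for the paper's low-rank orthogonal decomposition: the candidate label matrix is approximated by its top singular directions, and the result is read as pseudo-label confidences restricted to the candidate set. All variable names and the rank choice are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy candidate label matrix: n instances x q labels, where the true
# label scores are low-rank and a few noisy labels are flipped on.
n, q, rank = 8, 5, 2
true_scores = rng.random((n, rank)) @ rng.random((rank, q))
Y_true = (true_scores > 0.5).astype(float)
noise = (rng.random((n, q)) < 0.15).astype(float)
Y_cand = np.clip(Y_true + noise, 0, 1)  # candidates = true + noisy labels

# Low-rank approximation via truncated SVD (an orthogonal
# decomposition): keep the top-r singular directions, clip to [0, 1],
# and treat the result as pseudo-label confidences.
U, s, Vt = np.linalg.svd(Y_cand, full_matrices=False)
P = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]
pseudo = np.clip(P, 0.0, 1.0)

# Pseudo-labels only score labels inside the candidate set;
# everything outside it stays zero.
pseudo *= Y_cand
print(pseudo.round(2))
```

Because noisy labels are typically sparse and inconsistent across instances, they contribute little to the leading singular directions, so the truncated reconstruction down-weights them relative to the structured true labels.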

Abstract

In partial multi-label learning (PML), each instance is associated with a set of candidate labels containing both ground-truth and noisy labels. The presence of noisy labels disrupts the correspondence between features and labels, degrading classification performance. To address this challenge, we propose a novel PML method based on feature-label modal alignment (PML-MA), which treats features and labels as two complementary modalities and restores their consistency through systematic alignment. Specifically, PML-MA first employs low-rank orthogonal decomposition to generate pseudo-labels that approximate the true label distribution by filtering noisy labels. It then aligns features and pseudo-labels through both global projection into a common subspace and local preservation of neighborhood structures. Finally, a multi-peak class prototype learning mechanism leverages the multi-label nature where instances simultaneously belong to multiple categories, using pseudo-labels as soft membership weights to enhance discriminability. By integrating modal alignment with prototype-guided refinement, PML-MA ensures pseudo-labels better reflect the true distribution while maintaining robustness against label noise. Extensive experiments on both real-world and synthetic datasets demonstrate that PML-MA significantly outperforms state-of-the-art methods, achieving superior classification accuracy and noise robustness.
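The global alignment and prototype steps can be illustrated with a minimal numpy sketch. It replaces the paper's alternating optimization with a one-shot least-squares fit that maps pseudo-labels into the same subspace as a (randomly initialized) feature projection, then forms one prototype per label as the pseudo-label-weighted mean of aligned features; dimensions, initializations, and variable names are assumptions for illustration only, and the local neighborhood-preserving term is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, q, k = 20, 6, 4, 3  # instances, feature dim, labels, subspace dim

X = rng.standard_normal((n, d))  # feature modality
S = rng.random((n, q))           # pseudo-label confidences in [0, 1]

# Global alignment (sketch): project both modalities into a shared
# k-dimensional subspace. A fixed random feature projection stands in
# for one learned half; the label projection is then fit by least
# squares so that S @ W_s matches X @ W_x.
W_x = rng.standard_normal((d, k))
Zx = X @ W_x
W_s, *_ = np.linalg.lstsq(S, Zx, rcond=None)
Zs = S @ W_s
alignment_gap = np.linalg.norm(Zx - Zs)  # residual modal mismatch

# Multi-peak prototypes: one prototype per label, the pseudo-label-
# weighted mean of aligned features, so an instance belonging to
# several labels contributes to several prototypes.
weights = S / (S.sum(axis=0, keepdims=True) + 1e-12)
prototypes = weights.T @ Zx  # q prototypes in the shared subspace
print(prototypes.shape)      # (4, 3)
```

The soft weighting is the key difference from ordinary single-label prototypes: instead of assigning each instance to one cluster, every instance spreads its contribution across label prototypes in proportion to its pseudo-label confidences.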