List Sample Compression and Uniform Convergence

arXiv stat.ML / 2026/3/24


Key Points

  • The paper studies generalization in list learning, where the learner outputs multiple plausible labels per instance (sketched formally after this list), through the lens of two classical PAC-learning principles: uniform convergence and sample compression.
  • It proves that, in the list PAC learning setting, uniform convergence remains equivalent to learnability, preserving the classical characterization underlying ERM.
  • It finds striking limitations for sample compression: for the 3-label space Y = {0,1,2}, there exist 2-list-learnable concept classes that cannot be compressed, refuting the list version of the sample compression conjecture of Littlestone and Warmuth (1986).
  • The authors strengthen this impossibility result, proving that some 2-list-learnable classes cannot be compressed even when the reconstructed hypothesis is allowed to output lists of arbitrarily large size, and they extend the negative results to (1-list) PAC learnable classes over an unbounded label space.
  • Overall, the work delineates which classical “Occam/ERM”-style principles transfer to list PAC learning and where they fundamentally break down.
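
For orientation, here is a minimal sketch of the k-list setting in standard notation; the symbols (X, Y, \mu, L_D, m(\epsilon,\delta)) are our own shorthand, not taken verbatim from the paper:

```latex
% k-list PAC learning, sketched (notation is ours, not the paper's).
% A k-list hypothesis assigns each instance a list of at most k labels,
% and errs on (x, y) exactly when the true label y is missing from its list:
\[
  \mu : X \to \{\, L \subseteq Y : |L| \le k \,\},
  \qquad
  L_D(\mu) \;=\; \Pr_{(x,y)\sim D}\bigl[\, y \notin \mu(x) \,\bigr].
\]
% Uniform convergence for a class H of k-list hypotheses asks that empirical
% list risk on a sample S of size m concentrate around L_D uniformly over H:
\[
  \sup_{\mu \in H} \bigl|\, L_S(\mu) - L_D(\mu) \,\bigr| \;\le\; \epsilon
  \quad \text{with probability at least } 1 - \delta
  \quad \text{whenever } m \ge m(\epsilon, \delta).
\]
```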

Abstract

List learning is a variant of supervised classification where the learner outputs multiple plausible labels for each instance rather than just one. We investigate classical principles related to generalization within the context of list learning. Our primary goal is to determine whether classical principles in the PAC setting retain their applicability in the domain of list PAC learning. We focus on uniform convergence (which is the basis of Empirical Risk Minimization) and on sample compression (which is a powerful manifestation of Occam's Razor). In classical PAC learning, both uniform convergence and sample compression satisfy a form of "completeness": whenever a class is learnable, it can also be learned by a learning rule that adheres to these principles. We ask whether the same completeness holds true in the list learning setting. We show that uniform convergence remains equivalent to learnability in the list PAC learning setting. In contrast, our findings reveal surprising results regarding sample compression: we prove that when the label space is Y = {0,1,2}, then there are 2-list-learnable classes that cannot be compressed. This refutes the list version of the sample compression conjecture by Littlestone and Warmuth (1986). We prove an even stronger impossibility result, showing that there are 2-list-learnable classes that cannot be compressed even when the reconstructed function can work with lists of arbitrarily large size. We prove a similar result for (1-list) PAC learnable classes when the label space is unbounded. This generalizes a recent result by arXiv:2308.06424.
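
For context, the compression question in the abstract can be phrased via the classical Littlestone–Warmuth notion of a sample compression scheme. The sketch below uses our own notation (\kappa, \rho, size bound d), not the paper's:

```latex
% Sample compression scheme of size d (classical notion; notation ours).
% A compressor keeps at most d labeled examples from the sample S, and a
% reconstructor rebuilds from them a hypothesis consistent with all of S:
\[
  \kappa(S) \subseteq S, \quad |\kappa(S)| \le d,
  \qquad
  h_S = \rho\bigl(\kappa(S)\bigr), \quad
  h_S(x_i) = y_i \;\;\text{for every } (x_i, y_i) \in S.
\]
% The paper's negative results: for Y = {0,1,2} there are 2-list-learnable
% classes admitting no such scheme, even when \rho may output lists of
% arbitrarily large size (consistency then meaning y_i \in h_S(x_i) for all i).
```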