An Optimal Sauer Lemma Over $k$-ary Alphabets

arXiv cs.LG / 4/15/2026


Key Points

  • The paper proves a sharp Sauer-type inequality for multiclass and list prediction over $k$-ary alphabets, addressing known suboptimal bounds when $k>2$.
  • Instead of relying on Natarajan dimension, the result is formulated in terms of the Daniely–Shalev-Shwartz (DS) dimension and its list-DS extension, which better characterize multiclass and list PAC learnability.
  • The new bound is tight for all alphabet sizes $k$, list sizes $\ell$, and dimension values, replacing the exponential dependence on $\ell$ from Natarajan-based bounds with an optimal polynomial dependence and improving dependence on $k$.
  • The proof uses the polynomial method and notes that, unlike the classical VC case, no purely combinatorial proof is currently known in the DS setting.
  • As applications, the work derives improved sample-complexity and uniform-convergence bounds for list PAC learning, sharpening recent results from STOC 2023, COLT 2024, and NeurIPS 2024.

Abstract

The Sauer-Shelah-Perles Lemma is a cornerstone of combinatorics and learning theory, bounding the size of a binary hypothesis class in terms of its Vapnik-Chervonenkis (VC) dimension. For classes of functions over a $k$-ary alphabet, namely the multiclass setting, the Natarajan dimension has long served as an analogue of the VC dimension, yet the corresponding Sauer-type bounds are suboptimal for alphabet sizes $k > 2$. In this work, we establish a sharp Sauer inequality for multiclass and list prediction. Our bound is expressed in terms of the Daniely-Shalev-Shwartz (DS) dimension and, more generally, its extension, the list-DS dimension: the combinatorial parameters that characterize multiclass and list PAC learnability, respectively. Our bound is tight for every alphabet size $k$, list size $\ell$, and dimension value, replacing the exponential dependence on $\ell$ in the Natarajan-based bound with the optimal polynomial dependence, and improving the dependence on $k$ as well. Our proof uses the polynomial method. In contrast to the classical VC case, where several direct combinatorial proofs are known, we are not aware of any purely combinatorial proof in the DS setting; this motivates several directions for future research, which are discussed in the paper. As consequences, we obtain improved sample-complexity upper bounds for list PAC learning and for uniform convergence of list predictors, sharpening the recent results of Charikar et al. (STOC 2023), Hanneke et al. (COLT 2024), and Brukhim et al. (NeurIPS 2024).
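For intuition about the binary case the paper generalizes: the classical Sauer-Shelah-Perles lemma states that a hypothesis class over $n$ points with VC dimension $d$ has size at most $\sum_{i=0}^{d} \binom{n}{i}$. The sketch below checks this directly on a small example (the threshold class is an illustrative choice of ours, not taken from the paper); it also shows the bound can be attained with equality, matching the paper's emphasis on tightness.

```python
from itertools import combinations
from math import comb

def shatters(H, S):
    """H is a set of hypotheses (tuples of 0/1 labels over domain indices);
    S is a tuple of indices. H shatters S if every 0/1 pattern on S appears."""
    patterns = {tuple(h[i] for i in S) for h in H}
    return len(patterns) == 2 ** len(S)

def vc_dimension(H, n):
    """Largest size of a subset of {0,...,n-1} shattered by H (0 if none)."""
    d = 0
    for k in range(1, n + 1):
        if any(shatters(H, S) for S in combinations(range(n), k)):
            d = k
    return d

# Illustrative class: thresholds on 5 points, h_t(i) = 1 iff i >= t.
n = 5
H = {tuple(int(i >= t) for i in range(n)) for t in range(n + 1)}

d = vc_dimension(H, n)                       # thresholds have VC dimension 1
bound = sum(comb(n, i) for i in range(d + 1))  # Sauer bound: sum_{i<=d} C(n,i)
assert len(H) <= bound  # here |H| = 6 and the bound is C(5,0)+C(5,1) = 6
```

The threshold class meets the bound exactly ($|H| = 6 = \binom{5}{0} + \binom{5}{1}$), illustrating what "tight for every dimension value" means in the binary setting; the paper's contribution is the analogous sharp statement over $k$-ary alphabets via the DS and list-DS dimensions.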