A One-Inclusion Graph Approach to Multi-Group Learning

arXiv cs.LG · March 25, 2026


Key Points

  • The paper proves the tightest-known upper bounds on the sample complexity of multi-group learning, characterizing how much data is needed for reliable performance across groups.
  • It introduces an algorithm that extends the one-inclusion graph prediction strategy by using a generalized form of bipartite b-matching.
  • In the group-realizable setting, the authors provide a matching lower bound that verifies the proposed method’s \(\log n / n\) convergence rate is optimal in general.
  • Under a relaxed evaluation setting where the target group is selected obliviously of the sample, the algorithm is shown to achieve an improved optimal \(1/n\) convergence rate.
  • The work is positioned as a theoretical analysis that refines known learning-rate guarantees for multi-group settings with different evaluation assumptions.

Abstract

We prove the tightest-known upper bounds on the sample complexity of multi-group learning. Our algorithm extends the one-inclusion graph prediction strategy using a generalization of bipartite b-matching. In the group-realizable setting, we provide a lower bound confirming that our algorithm's \(\log n / n\) convergence rate is optimal in general. If one relaxes the learning objective such that the group on which we are evaluated is chosen obliviously of the sample, then our algorithm achieves the optimal \(1/n\) convergence rate under group-realizability.
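To make the underlying strategy concrete, here is a minimal sketch of classical one-inclusion graph prediction for a finite set of candidate labelings. This is not the paper's multi-group algorithm: it uses a simple greedy edge orientation, whereas the paper's method obtains its guarantees via a generalized bipartite b-matching. The hypothesis class (`hypotheses`) and the threshold-class example below are illustrative assumptions.

```python
def one_inclusion_predict(hypotheses, sample, test_index):
    """Predict the label at test_index via one-inclusion graph prediction.

    hypotheses: list of 0/1 tuples, each a labeling of all n+1 points.
    sample:     dict {point_index: observed_label} for the n labeled points.
    """
    vertices = set(hypotheses)
    consistent = {h for h in vertices
                  if all(h[i] == y for i, y in sample.items())}
    labels = {h[test_index] for h in consistent}
    if len(labels) == 1:          # version space agrees on the test point
        return labels.pop()

    # Otherwise the two consistent labelings differ only at test_index,
    # forming an edge of the one-inclusion graph. Orient every edge,
    # keeping out-degrees balanced (greedy heuristic here; the paper's
    # algorithm instead orients via a generalized bipartite b-matching).
    outdeg = {v: 0 for v in vertices}
    orientation = {}
    for u in sorted(vertices):    # sorted => deterministic orientation
        for i in range(len(u)):
            v = u[:i] + (1 - u[i],) + u[i + 1:]
            if v in vertices and u < v:
                tail, head = (u, v) if outdeg[u] <= outdeg[v] else (v, u)
                orientation[frozenset((u, v))] = head
                outdeg[tail] += 1

    # Predict the head's label: the predictor errs only when the true
    # labeling is the tail, so expected error <= max out-degree / (n+1).
    head = orientation[frozenset(consistent)]
    return head[test_index]


# Toy class of thresholds on 3 points (an assumed example).
thresholds = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]
print(one_inclusion_predict(thresholds, {0: 1, 1: 0}, 2))  # unique label
print(one_inclusion_predict(thresholds, {0: 1, 1: 1}, 2))  # edge case
```

In this toy class every orientation has maximum out-degree 1, matching the classical expected-error bound of (max out-degree)/(n+1) = 1/3 per prediction; the paper's contribution is extending this machinery so the guarantee holds per group.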