On the Learning Curves of Revenue Maximization

arXiv cs.LG · April 30, 2026

Key Points

  • The paper studies learning curves for revenue maximization in the supervised setting of a single item and a single buyer, focusing on how an algorithm's error decays as the number of training samples grows.
  • It shows that, without any assumptions on the valuation distribution, a Bayes-consistent revenue-maximizing algorithm exists, but its convergence to zero error can be arbitrarily slow, even when the optimal revenue is finite.
  • It also identifies a faster benchmark rate: when the optimal revenue is attained by a finite price, the learning curve's error decays at roughly 1/√n (the simulation sketch after this list illustrates this regime).
  • For valuation distributions supported on discrete sets of values, the paper establishes that learning curves can decay almost exponentially fast, a rate unattainable under the distribution-free (PAC-style) framework.
  • Overall, the work replaces the prior distribution-free treatment with a distribution-specific characterization, aiming to capture the actual shape of the learning curves rather than their worst-case upper envelope.
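
To make the learning-curve notion concrete, here is a minimal simulation sketch. It is not code from the paper: the Exponential(1) valuation distribution (whose optimal revenue is attained at the finite price p* = 1, i.e., the regime of the 1/√n result) and the empirical-revenue-maximization learner are illustrative assumptions, and the names expected_revenue, erm_price, and learning_curve are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (an assumption for this sketch, not from the paper):
# buyer valuations are Exponential(1), so posting a price p earns expected
# revenue Rev(p) = p * P(v >= p) = p * exp(-p), maximized at p* = 1.
def expected_revenue(p):
    return p * np.exp(-p)

OPT = expected_revenue(1.0)  # = 1/e

def erm_price(samples):
    """Empirical revenue maximization: post the sample value maximizing
    empirical revenue p * #{v_i >= p} / n; an empirically optimal price
    always lies at one of the observed valuations."""
    s = np.sort(samples)
    n = len(s)
    emp_rev = s * (np.arange(n, 0, -1) / n)  # price s[i] sells a (n - i)/n fraction
    return s[np.argmax(emp_rev)]

def learning_curve(ns, trials=500):
    """Estimate the mean revenue loss OPT - Rev(p_hat_n) at each sample size n."""
    curve = []
    for n in ns:
        losses = [OPT - expected_revenue(erm_price(rng.exponential(1.0, size=n)))
                  for _ in range(trials)]
        curve.append(float(np.mean(losses)))
    return curve

if __name__ == "__main__":
    ns = [10, 30, 100, 300, 1000, 3000]
    for n, err in zip(ns, learning_curve(ns)):
        # If a 1/sqrt(n) rate holds here, the last column stays roughly flat.
        print(f"n={n:5d}  mean loss={err:.4f}  sqrt(n)*loss={err * np.sqrt(n):.3f}")
```

On this distribution the √n-scaled loss in the last column should hover near a constant, consistent with a roughly 1/√n learning curve; nothing in the sketch is specific to the paper's algorithm.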

Abstract

Learning curves are a fundamental primitive in supervised learning, describing how an algorithm's performance improves with more data and providing a quantitative measure of its generalization ability. Formally, a learning curve plots the decay of an algorithm's error for a fixed underlying distribution as a function of the number of training samples. Prior work on revenue-maximizing learning algorithms, starting with the seminal work of Cole and Roughgarden [STOC, 2014], adopts a distribution-free perspective, which parallels the PAC learning framework in learning theory. This approach evaluates performance against the hardest possible sequence of valuation distributions, one for each sample size, effectively defining the upper envelope of learning curves over all possible distributions, thus leading to error bounds that do not capture the shape of the learning curves. In this work we initiate the study of learning curves for revenue maximization and provide a near-complete characterization of their rate of decay in the basic setting of a single item and a single buyer. In the absence of any restriction on the valuation distribution, we show that there exists a Bayes-consistent algorithm, meaning that its learning curve converges to zero for any arbitrary valuation distribution as the number of samples n → ∞. However, this convergence must be arbitrarily slow, even if the optimal revenue is finite. In contrast, if the optimal revenue is achieved by a finite price, then the optimal rate of decay is roughly 1/√n. Finally, for distributions supported on discrete sets of values, we show that learning curves decay almost exponentially fast, a rate unattainable under the PAC framework.
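
For readers who want the abstract's definitions pinned down, the following is one plausible formalization; the notation Rev_D and ε_A is assumed here, not taken from the paper.

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
% One possible formalization of the abstract's definitions; the symbols
% $\mathrm{Rev}_D$ and $\varepsilon_A$ are our notation, not the paper's.
% Posting price $p$ against valuation distribution $D$ earns expected revenue
% $\mathrm{Rev}_D(p)$, and the error of algorithm $A$ after $n$ samples is
% its expected revenue loss against the best fixed price:
\[
  \mathrm{Rev}_D(p) = p \cdot \Pr_{v \sim D}[v \ge p],
  \qquad
  \varepsilon_A(n; D) = \sup_{p \ge 0} \mathrm{Rev}_D(p)
    - \mathbb{E}_{S \sim D^n}\!\left[\mathrm{Rev}_D\bigl(A(S)\bigr)\right].
\]
% The learning curve of $A$ at $D$ is the map $n \mapsto \varepsilon_A(n; D)$;
% Bayes consistency asks that it tend to $0$ as $n \to \infty$ for every $D$.
\end{document}
```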
