Distribution-dependent Generalization Bounds for Tuning Linear Regression Across Tasks

arXiv stat.ML / 4/8/2026

Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper derives distribution-dependent generalization error bounds for tuning the L1/L2 regularization hyperparameters of linear regression (covering ridge, lasso, and the elastic net) across related tasks (a minimal sketch follows this list).
  • It argues that prior “uniform over all distributions” bounds degrade with feature dimension d, whereas the proposed bounds improve when the data distribution is “nice.”
  • Under assumptions that task instances are i.i.d. draws from well-studied distribution classes (e.g., sub-Gaussians), the bounds avoid worsening with increasing d and can be much tighter for very large feature spaces.
  • The results further extend to a generalized ridge setting, achieving tighter bounds by incorporating an estimate of the ground-truth distribution mean.
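
To make the setup concrete, here is a minimal, hypothetical sketch (not taken from the paper) of the tuning problem described above: a single regularization coefficient is chosen to minimize the average validation loss over a collection of related regression tasks. For simplicity it tunes only the L2 (ridge) coefficient, which has a closed-form solution; the paper's setting also covers the L1 and elastic-net cases. The function names and synthetic data are illustrative assumptions.

```python
# Hypothetical sketch: pick a shared L2 coefficient by minimizing the average
# validation loss across related tasks. Not the paper's algorithm or code.
import numpy as np

rng = np.random.default_rng(0)

def make_task(n=50, d=100, noise=0.1):
    """One synthetic task: returns (X_train, y_train, X_val, y_val)."""
    w_star = rng.normal(size=d) / np.sqrt(d)        # task-specific ground-truth regressor
    X = rng.normal(size=(2 * n, d))                 # i.i.d. (here Gaussian, hence sub-Gaussian) instances
    y = X @ w_star + noise * rng.normal(size=2 * n)
    return X[:n], y[:n], X[n:], y[n:]

def ridge_fit(X, y, lam):
    """Closed-form ridge solution: (X^T X + lam * I)^{-1} X^T y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def avg_validation_loss(tasks, lam):
    """Average squared validation loss of the per-task ridge fits."""
    losses = [np.mean((X_va @ ridge_fit(X_tr, y_tr, lam) - y_va) ** 2)
              for X_tr, y_tr, X_va, y_va in tasks]
    return float(np.mean(losses))

tasks = [make_task() for _ in range(20)]            # sample of related tasks
grid = np.logspace(-3, 2, 30)                       # candidate L2 coefficients
best_lam = min(grid, key=lambda lam: avg_validation_loss(tasks, lam))
print(f"selected L2 coefficient: {best_lam:.4f}")
```

Roughly, the generalization bounds in question control how far such an empirical average validation loss can deviate from its expectation over the underlying task distribution.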

Abstract

Modern regression problems often involve high-dimensional data, and careful tuning of the regularization hyperparameters is crucial to avoid overly complex models that may overfit the training data, while guaranteeing desirable properties like effective variable selection. We study the recently introduced direction of tuning regularization hyperparameters in linear regression across multiple related tasks. We obtain distribution-dependent bounds on the generalization error for the validation loss when tuning the L1 and L2 coefficients, including ridge, lasso, and the elastic net. In contrast, prior work develops bounds that apply uniformly to all distributions, but such bounds necessarily degrade with the feature dimension d. While those bounds are shown to be tight for worst-case distributions, ours improve with the "niceness" of the data distribution. Concretely, we show that under the additional assumption that instances within each task are i.i.d. draws from broad, well-studied classes of distributions, including sub-Gaussians, our generalization bounds do not get worse with increasing d and are much sharper than prior work for very large d. We also extend our results to a generalization of ridge regression, where we achieve tighter bounds that take into account an estimate of the mean of the ground-truth distribution.
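
As a complement, below is a hedged sketch of one natural reading of the generalized ridge extension mentioned in the abstract: instead of shrinking the regressor toward zero, shrink it toward an estimate w0 of the mean of the ground-truth regressor distribution. The specific objective and the way w0 is obtained here are assumptions for illustration, not the paper's exact construction.

```python
# Hypothetical generalized-ridge sketch: regularize toward an estimate w0 of the
# mean of the ground-truth regressor distribution, rather than toward zero.
import numpy as np

def generalized_ridge_fit(X, y, lam, w0):
    """Minimize ||X w - y||^2 + lam * ||w - w0||^2.

    Setting the gradient to zero gives the closed form
    (X^T X + lam * I) w = X^T y + lam * w0.
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y + lam * w0)

# Toy usage: w0 is a (hypothetical) estimate of the mean regressor, e.g. an
# average of regressors fitted on previously seen tasks.
rng = np.random.default_rng(1)
d, n = 100, 50
w_mean = rng.normal(size=d) / np.sqrt(d)         # mean of the regressor distribution
w_star = w_mean + 0.05 * rng.normal(size=d)      # this task's regressor, close to the mean
X = rng.normal(size=(n, d))
y = X @ w_star + 0.1 * rng.normal(size=n)
w0 = w_mean + 0.02 * rng.normal(size=d)          # noisy estimate of the mean (assumed given)
w_hat = generalized_ridge_fit(X, y, lam=1.0, w0=w0)
print("parameter estimation error:", float(np.linalg.norm(w_hat - w_star)))
```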