Choosing the Right Regularizer for Applied ML: Simulation Benchmarks of Popular Scikit-learn Regularization Frameworks

arXiv cs.LG / 4/7/2026

💬 Opinion · Ideas & Deep Analysis · Tools & Practical Usage · Models & Research

Key Points

  • The paper surveys the evolution of regularization methods (from early stepwise regression to modern error-control, structured penalties, Bayesian approaches, and l0-based techniques).
  • It benchmarks four scikit-learn-relevant regularization frameworks—Ridge, Lasso, ElasticNet, and Post-Lasso OLS—over 134,400 simulations using a production-model-derived 7D data manifold.
  • When the sample-to-feature ratio is high (n/p >= 78), Ridge, Lasso, and ElasticNet show nearly interchangeable prediction accuracy.
  • However, Lasso’s variable-selection recall is highly fragile under multicollinearity: at high condition numbers and low SNR, Lasso recall drops to 0.18 while ElasticNet remains around 0.93.
  • The authors provide an objective, feature-attribute-driven decision guide advising against using Lasso or Post-Lasso OLS in high-kappa, small-sample regimes.
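The four frameworks above map onto scikit-learn directly, except Post-Lasso OLS, which has no built-in estimator. A minimal sketch of setting all four up is shown below; the synthetic data and the cross-validation settings are illustrative assumptions, not the paper's actual simulation design.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import RidgeCV, LassoCV, ElasticNetCV, LinearRegression

# Synthetic stand-in for the paper's simulation manifold (illustrative only).
X, y = make_regression(n_samples=780, n_features=10, n_informative=4,
                       noise=10.0, random_state=0)

# Three penalized frameworks available directly in scikit-learn.
ridge = RidgeCV().fit(X, y)
lasso = LassoCV(cv=5, random_state=0).fit(X, y)
enet = ElasticNetCV(cv=5, l1_ratio=0.5, random_state=0).fit(X, y)

# Post-Lasso OLS is not a built-in estimator: refit an unpenalized OLS
# on the support the Lasso selected.
support = lasso.coef_ != 0
post_lasso = LinearRegression().fit(X[:, support], y)

print(f"Lasso selected {support.sum()} of {X.shape[1]} features")
```

The two-stage refit is the usual Post-Lasso recipe (selection by Lasso, estimation by OLS); it inherits the Lasso's selection fragility, which is why the paper's advice against Lasso at high kappa extends to Post-Lasso OLS as well.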

Abstract

This study surveys the historical development of regularization, tracing its evolution from stepwise regression in the 1960s to recent advancements in formal error control, structured penalties for non-independent features, Bayesian methods, and l0-based regularization (among other techniques). We empirically evaluate the performance of four canonical frameworks -- Ridge, Lasso, ElasticNet, and Post-Lasso OLS -- across 134,400 simulations spanning a 7-dimensional manifold grounded in eight production-grade machine learning models. Our findings demonstrate that, in terms of prediction accuracy, Ridge, Lasso, and ElasticNet are nearly interchangeable when the sample-to-feature ratio is sufficient (n/p >= 78). However, we find that Lasso recall is highly fragile under multicollinearity; at high condition numbers (kappa) and low SNR, Lasso recall collapses to 0.18 while ElasticNet maintains 0.93. Consequently, we advise practitioners against using Lasso or Post-Lasso OLS at high kappa with small sample sizes. The analysis concludes with an objective-driven decision guide to assist machine learning engineers in selecting the optimal scikit-learn-supported framework based on observable feature space attributes.
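The recall gap under multicollinearity can be reproduced qualitatively in a few lines. The toy design below (a single shared latent factor driving a high condition number, plus large noise for low SNR) is a hypothetical setup of my own, not the paper's 7-dimensional manifold, so the recall values it prints will not match the 0.18 vs. 0.93 figures reported above.

```python
import numpy as np
from sklearn.linear_model import LassoCV, ElasticNetCV

rng = np.random.default_rng(0)
n, p, k = 200, 20, 5  # illustrative sizes, not the paper's design

# Strongly correlated columns: a shared latent factor inflates the
# condition number (kappa) of X^T X.
latent = rng.normal(size=(n, 1))
X = 0.95 * latent + 0.05 * rng.normal(size=(n, p))

beta = np.zeros(p)
beta[:k] = 1.0                            # true support: first k features
y = X @ beta + 5.0 * rng.normal(size=n)   # large noise => low SNR

def recall(coef, true_support):
    # Fraction of truly active features the estimator recovered.
    return (coef != 0)[true_support].mean()

lasso = LassoCV(cv=5).fit(X, y)
enet = ElasticNetCV(cv=5, l1_ratio=0.5).fit(X, y)

true_support = beta != 0
print("Lasso recall:     ", recall(lasso.coef_, true_support))
print("ElasticNet recall:", recall(enet.coef_, true_support))
```

The mechanism is the standard one: among near-duplicate predictors the l1 penalty tends to keep a single representative and zero out the rest, while ElasticNet's l2 component spreads weight across the correlated group, so more of the true support survives selection.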