Perturbing the Derivative: Doubly Wild Refitting for Model-Free Evaluation of Opaque Machine Learning Predictors
arXiv stat.ML / 3/26/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper addresses excess risk evaluation for empirical risk minimization (ERM) under convex losses, proposing a model-free approach that avoids using the global structure of the hypothesis class.
- It leverages “wild refitting” to produce “wild optimism” bounds by constructing two pseudo-outcome datasets via stochastic derivative perturbations with tuned scaling.
- Using only black-box access to the training algorithm and a single dataset (under a fixed design setting), the method refits the black box twice to obtain two wild predictors.
- The resulting framework yields an efficient upper bound on excess risk without requiring prior knowledge of the function class complexity, aiming to better support evaluation of opaque deep neural networks and generative models.
- The work is positioned as a promising route to principled evaluation in settings where traditional learning-theory analyses are infeasible for extremely complex modern models.
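The refitting procedure described above can be sketched in code. This is a minimal illustration, not the paper's exact algorithm: it assumes squared loss (so the loss derivative at the fitted values is simply the residual), uses Rademacher signs for the stochastic perturbation, and treats `fit`, `rho`, and the final optimism estimate as hypothetical placeholders for the paper's tuned quantities.

```python
import numpy as np

def wild_refit_optimism(fit, X, y, rho, rng):
    """Sketch of doubly wild refitting under squared loss (illustrative only).

    fit : black-box training routine, fit(X, y) -> predictor callable.
    rho : perturbation scale (assumed already tuned, per the paper).
    """
    f_hat = fit(X, y)                  # original fit on the single dataset
    preds = f_hat(X)
    grad = preds - y                   # squared-loss derivative at the fit
    eps = rng.choice([-1.0, 1.0], size=len(y))  # Rademacher signs

    # Two pseudo-outcome datasets via symmetric derivative perturbations
    y_plus = preds + rho * eps * grad
    y_minus = preds - rho * eps * grad

    f_plus = fit(X, y_plus)            # first wild refit of the black box
    f_minus = fit(X, y_minus)          # second wild refit

    # A data-driven "wild optimism" surrogate: discrepancy between the two
    # wild predictors, aligned with the perturbation direction.
    optimism = np.mean((f_plus(X) - f_minus(X)) * eps * grad) / (2.0 * rho)
    return optimism
```

Note that only black-box calls to `fit` are made, matching the fixed-design, model-free setting the paper targets: no property of the hypothesis class is inspected, and the bound is assembled entirely from refits on perturbed pseudo-outcomes.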