Beyond Consistency: Inference for the Relative Risk Functional in Deep Nonparametric Cox Models
arXiv stat.ML / 3/26/2026
Key Points
- The paper addresses theoretical gaps in deep neural network–based estimators for nonparametric Cox proportional hazards models, focusing on how optimization error affects population risk under partial likelihood.
- It proves nonasymptotic oracle inequalities that relate in-sample gradient-based training error to population risk for general trained networks without needing the exact empirical risk minimizer.
- The authors design a structured neural parameterization to achieve infinity-norm approximation rates and thereby control pointwise bias needed for valid statistical inference.
- Using a Hájek–Hoeffding projection and an infinitesimal jackknife representation, the work establishes pointwise and multivariate asymptotic normality for subsampled ensemble estimators, enabling Wald-type inference for relative risk contrasts such as log-hazard ratios.
- The paper derives allowable subsample-size ranges that balance bias correction against domination of the Hájek–Hoeffding term, requiring weaker covariance-decay assumptions than prior subsampling results, and validates the theory via simulations and a real-data application.
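The ingredients above can be illustrated with a toy sketch: a Cox negative log partial likelihood (the training loss the paper analyzes), a subsampled ensemble of fitted estimators, and a Wald-type interval for the relative risk parameter. Everything here is a simplification under assumed data-generating choices: the risk function is linear rather than a deep network, the fit is a grid search, and the standard error is a naive spread-of-fits proxy rather than the paper's infinitesimal-jackknife estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

def neg_log_partial_likelihood(risk, time, event):
    """Cox negative log partial likelihood (Breslow form, assuming no ties).
    risk[i] = g(X_i) is the log relative risk; (time, event) are right-censored."""
    order = np.argsort(-time)                  # decreasing follow-up time
    r, d = risk[order], event[order]
    log_risk_set = np.logaddexp.accumulate(r)  # log sum_{j: T_j >= T_i} exp(r_j)
    return -np.sum((r - log_risk_set)[d == 1])

# Simulate from a linear Cox model: hazard(t | x) = exp(beta_true * x).
n, beta_true = 400, 1.0
x = rng.normal(size=n)
t_event = rng.exponential(scale=np.exp(-beta_true * x))  # scale = 1 / rate
t_cens = rng.exponential(scale=2.0, size=n)
time = np.minimum(t_event, t_cens)
event = (t_event <= t_cens).astype(int)

def fit_beta(idx, grid=np.linspace(-3.0, 3.0, 241)):
    """Grid-search minimizer of the partial likelihood on subsample idx
    (a stand-in for gradient-based training of a network)."""
    losses = [neg_log_partial_likelihood(b * x[idx], time[idx], event[idx])
              for b in grid]
    return grid[int(np.argmin(losses))]

# Subsampled ensemble: average fits over B subsamples of size s drawn
# without replacement, mirroring the estimator the paper studies.
B, s = 50, 200
fits = np.array([fit_beta(rng.choice(n, size=s, replace=False))
                 for _ in range(B)])
beta_hat = fits.mean()

# Naive Wald-type interval from the spread of subsample fits. The paper
# instead derives a valid variance via the infinitesimal jackknife; this
# proxy ignores correlation between overlapping subsamples.
se_proxy = fits.std(ddof=1) / np.sqrt(B)
ci = (beta_hat - 1.96 * se_proxy, beta_hat + 1.96 * se_proxy)
```

With the fixed seed the ensemble estimate lands near the true log-hazard-ratio coefficient, and the interval brackets it; the point of the sketch is the pipeline shape (loss, subsample fits, ensemble average, Wald interval), not the width of this particular interval.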