Understanding Overparametrization in Survival Models through Interpolation
arXiv stat.ML / 4/23/2026
Key Points
- Classical learning theory expects a U-shaped test-loss curve versus model capacity, but modern ML often shows a “double-descent” pattern where loss decreases again after an interpolation threshold.
- This paper studies whether double-descent and overparametrization effects arise in survival analysis, which has been less explored than regression/classification.
- The authors analyze four survival models (DeepSurv, PC-Hazard, Nnet-Survival, N-MTLR) by rigorously defining interpolation and finite-norm interpolation for loss-based training.
- They show that interpolators (and finite-norm interpolators) may or may not exist, depending on the likelihood-based loss and on practical implementation choices, implying that overparametrization is not automatically benign for survival models.
- Numerical experiments back the theory by demonstrating distinct generalization behaviors across the studied survival models.
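To make the interpolation notion concrete, here is a toy sketch (not taken from the paper) using the Cox negative log partial likelihood that underlies DeepSurv-style training. On a small dataset with no censoring, the training loss can be driven toward zero only by letting the risk scores diverge, so exact interpolation is not attained at any finite parameter norm; the function name and data are illustrative assumptions.

```python
import numpy as np

def neg_log_partial_likelihood(scores, times, events):
    """Cox negative log partial likelihood (no tied event times),
    averaged over subjects. Zero loss would mean exact interpolation."""
    order = np.argsort(times)
    scores, events = scores[order], events[order]
    total = 0.0
    for i in range(len(scores)):
        if events[i]:
            # Risk set: subjects whose time is >= t_i (indices i..n-1).
            risk = scores[i:]
            m = risk.max()  # log-sum-exp shift for numerical stability
            total += np.log(np.exp(risk - m).sum()) + m - scores[i]
    return total / len(scores)

# Toy data: 4 subjects, all events observed (no censoring).
times = np.array([1.0, 2.0, 3.0, 4.0])
events = np.array([1, 1, 1, 1])

# Scale risk scores so earlier events get ever-larger scores: the loss
# decreases toward 0 as c grows but stays strictly positive for every
# finite c, i.e. finite-norm interpolation fails for this loss.
for c in [1.0, 5.0, 20.0]:
    scores = c * np.array([3.0, 2.0, 1.0, 0.0])
    print(f"c = {c:5.1f}  loss = {neg_log_partial_likelihood(scores, times, events):.3e}")
```

The same experiment with a discrete-time loss such as the one used by Nnet-Survival behaves differently, since its per-sample cross-entropy can reach zero at finite predictions, which is one way the paper's case-by-case analysis across the four models can be read.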