Meta Additive Model: Interpretable Sparse Learning With Auto Weighting
arXiv cs.LG / 4/23/2026
News · Ideas & Deep Analysis · Models & Research
Key Points
- The paper introduces the Meta Additive Model (MAM), which improves sparse additive modeling by learning from data how to reweight individual loss terms instead of relying on fixed, hand-chosen weighting schemes.
- Existing sparse additive models typically optimize a mean-squared-error objective and can degrade under complex, non-Gaussian conditions such as outliers, noisy labels, and class imbalance; MAM targets these failure modes.
- MAM uses a bilevel optimization setup in which an MLP, trained on a small held-out meta set, parameterizes the loss-weighting function, enabling robust learning across multiple task types (see the sketch after this list).
- The authors provide theoretical guarantees covering convergence, algorithmic generalization, and consistency of variable selection under mild assumptions.
- Experiments show MAM outperforms several state-of-the-art additive models on both synthetic and real datasets across different data corruption scenarios.
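To make the bilevel setup concrete, below is a minimal PyTorch sketch of this kind of loss-reweighting loop. It is not the authors' implementation: the data, the `weight_net` architecture, the hyperparameters, and the use of a plain linear model with an l1 penalty as a stand-in for the paper's sparse additive components are all illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy regression data (all sizes and values here are illustrative).
n, d = 256, 10
X = torch.randn(n, d)
beta_true = torch.zeros(d)
beta_true[:3] = torch.tensor([2.0, -1.5, 1.0])    # sparse ground truth
y = X @ beta_true + 0.1 * torch.randn(n)
y[:20] += 5.0                                     # inject outliers

# Small clean "meta" set, as the bilevel setup assumes.
X_meta = torch.randn(32, d)
y_meta = X_meta @ beta_true + 0.1 * torch.randn(32)

# Weighting function: an MLP mapping a per-sample loss to a weight in (0, 1).
weight_net = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid()
)
meta_opt = torch.optim.Adam(weight_net.parameters(), lr=1e-3)

beta = torch.zeros(d, requires_grad=True)         # linear stand-in model
lr, lam = 0.05, 1e-2                              # inner step size, l1 strength

for step in range(300):
    # Outer step: take one differentiable inner update, then adjust the
    # weighting MLP so the updated model does well on the clean meta set.
    losses = (X @ beta - y) ** 2
    w = weight_net(losses.detach().unsqueeze(1)).squeeze(1)
    inner = (w * losses).mean() + lam * beta.abs().sum()
    (grad_beta,) = torch.autograd.grad(inner, beta, create_graph=True)
    beta_virtual = beta - lr * grad_beta
    meta_loss = ((X_meta @ beta_virtual - y_meta) ** 2).mean()
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()

    # Inner step: update the model for real, using the refreshed weights.
    with torch.no_grad():
        w = weight_net(((X @ beta - y) ** 2).unsqueeze(1)).squeeze(1)
    weighted = (w * (X @ beta - y) ** 2).mean() + lam * beta.abs().sum()
    (g,) = torch.autograd.grad(weighted, beta)
    with torch.no_grad():
        beta -= lr * g

print("estimated coefficients:", beta.detach())
```

With the outliers injected above, the MLP learns to down-weight high-loss samples while the l1 term keeps the recovered coefficients sparse; the paper applies the same bilevel idea to additive components rather than a raw linear model, but the weighting mechanism follows this pattern.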