Why Machine Learning Models Systematically Underestimate Extreme Values II: How to Fix It with LatentNN

arXiv stat.ML / 3/26/2026


Key Points

  • The paper argues that attenuation bias—caused by measurement errors in input variables—also affects neural networks, leading them to systematically underestimate extreme values in astronomical regression tasks.
  • It generalizes a latent-variable approach previously used for linear regression by introducing LatentNN, which jointly learns network parameters and latent (error-free) input values via maximum joint likelihood.
  • LatentNN is validated on synthetic 1D and multivariate correlated-feature setups as well as a stellar spectroscopy application, showing reduced bias relative to standard neural networks, especially in low signal-to-noise regimes.
  • The method is reported to work best when measurement error is less than about half the intrinsic data range, with reduced effectiveness when signal-to-noise is extremely low and few features are informative.
  • The authors provide an open-source implementation of LatentNN to support improved inference for astronomical data where measurement noise is substantial.

Abstract

Attenuation bias -- the systematic underestimation of regression coefficients due to measurement errors in input variables -- affects astronomical data-driven models. For linear regression, this problem was solved by treating the true input values as latent variables to be estimated alongside model parameters. In this paper, we show that neural networks suffer from the same attenuation bias and that the latent variable solution generalizes directly to neural networks. We introduce LatentNN, a method that jointly optimizes network parameters and latent input values by maximizing the joint likelihood of observing both inputs and outputs. We demonstrate the correction on one-dimensional regression, multivariate inputs with correlated features, and stellar spectroscopy applications. LatentNN reduces attenuation bias across a range of signal-to-noise ratios where standard neural networks show large bias. This provides a framework for improved neural network inference in the low signal-to-noise regime characteristic of astronomical data. The bias correction is most effective when measurement errors are less than roughly half the intrinsic data range; its effectiveness diminishes in the regime of very low signal-to-noise and few informative features. Code is available at https://github.com/tingyuansen/LatentNN.
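The core idea -- optimizing network weights and latent (error-free) inputs together under a joint Gaussian likelihood -- can be sketched in a few lines. The following is a minimal PyTorch illustration on a synthetic 1D task, not the authors' implementation: the toy network, noise levels, and variable names (`z` for the latent inputs) are all assumptions made for the example.

```python
import torch

torch.manual_seed(0)

# Synthetic 1D data: true relation y = sin(2x), with Gaussian noise
# on BOTH the inputs (sigma_x) and the outputs (sigma_y).
n = 200
sigma_x, sigma_y = 0.3, 0.05
x_true = torch.linspace(-2.0, 2.0, n).unsqueeze(1)
y_obs = torch.sin(2 * x_true) + sigma_y * torch.randn(n, 1)
x_obs = x_true + sigma_x * torch.randn(n, 1)

# A small regression network f_theta.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)

# Latent inputs z, initialised at the observed (noisy) values and
# optimized jointly with the network parameters.
z = x_obs.clone().requires_grad_(True)

def neg_joint_log_likelihood():
    # Per-point negative log-likelihood (constants dropped):
    # Gaussian term for y given f(z), plus Gaussian term for x_obs given z.
    term_y = ((y_obs - net(z)) ** 2 / (2 * sigma_y**2)).mean()
    term_x = ((x_obs - z) ** 2 / (2 * sigma_x**2)).mean()
    return term_y + term_x

nll_initial = neg_joint_log_likelihood().item()

opt = torch.optim.Adam(list(net.parameters()) + [z], lr=1e-2)
for step in range(2000):
    opt.zero_grad()
    nll = neg_joint_log_likelihood()
    nll.backward()
    opt.step()

nll_final = nll.item()
```

The input-likelihood term anchors each latent `z_i` to its observation `x_obs_i`, so the latent values cannot drift arbitrarily; the output term pulls them toward inputs consistent with the observed `y`. A standard network corresponds to freezing `z = x_obs`, which is what produces the attenuation bias the paper describes.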