Beyond the Mean: Distribution-Aware Loss Functions for Bimodal Regression

arXiv cs.AI / 3/25/2026


Key Points

  • The paper addresses a key uncertainty-estimation problem in bimodal regression, where errors derived from learned representations follow a bimodal distribution due to both confident and ambiguous predictions.
  • It argues that standard regression losses implicitly assume unimodal Gaussian noise, which can cause “mean-collapse” and poor representation of predictive uncertainty in bimodal settings.
  • The authors propose distribution-aware loss functions that combine normalized RMSE with Wasserstein and Cramér distances to better model bimodal predictive distributions.
  • Experiments across four evaluation stages show the method can recover bimodal distributions using standard deep regression architectures without the optimization instability typical of Mixture Density Networks (MDNs).
  • Results indicate the Wasserstein-based loss achieves a Pareto efficiency benefit—maintaining MSE-like stability on unimodal tasks while reducing Jensen-Shannon Divergence by 45% on complex bimodal datasets, and outperforming MDNs in fidelity and robustness for aleatoric uncertainty estimation.
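The loss described above pairs a pointwise error term with a distributional alignment term. The paper does not give an implementation here, but a minimal sketch of the idea is shown below, assuming equal-sized prediction and target samples, range-normalized RMSE, and a weighting parameter `lam` (all our choices, not the authors'). For 1-D empirical samples, the Wasserstein-1 distance conveniently reduces to the mean absolute difference of the sorted samples:

```python
import numpy as np

def nrmse(y_pred, y_true):
    # RMSE normalized by the target range (one common normalization choice)
    rmse = np.sqrt(np.mean((y_pred - y_true) ** 2))
    return rmse / (y_true.max() - y_true.min())

def wasserstein_1d(a, b):
    # For equal-sized 1-D empirical samples, W1 equals the mean
    # absolute difference between the sorted samples.
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

def distribution_aware_loss(y_pred, y_true, lam=1.0):
    # Pointwise accuracy term + distributional alignment term.
    # lam trades off the two; its value is a hypothetical default.
    return nrmse(y_pred, y_true) + lam * wasserstein_1d(y_pred, y_true)
```

On a bimodal target set, a mean-collapsed prediction (all outputs near the global mean) incurs a large Wasserstein term even when its squared error looks tolerable, which is the mechanism the paper exploits to discourage mean-collapse.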

Abstract

Despite the strong predictive performance achieved by machine learning models across many application domains, assessing their trustworthiness through reliable estimates of predictive confidence remains a critical challenge. This issue arises in scenarios where the likelihood of error inferred from learned representations follows a bimodal distribution, resulting from the coexistence of confident and ambiguous predictions. Standard regression approaches often struggle to adequately express this predictive uncertainty, as they implicitly assume unimodal Gaussian noise, leading to mean-collapse behavior in such settings. Although Mixture Density Networks (MDNs) can represent different distributions, they suffer from severe optimization instability. We propose a family of distribution-aware loss functions integrating normalized RMSE with Wasserstein and Cramér distances. When applied to standard deep regression models, our approach recovers bimodal distributions without the volatility of mixture models. Validated across four experimental stages, our results show that the proposed Wasserstein loss establishes a new Pareto efficiency frontier: matching the stability of standard regression losses like MSE in unimodal tasks while reducing Jensen-Shannon Divergence by 45% on complex bimodal datasets. Our framework strictly dominates MDNs in both fidelity and robustness, offering a reliable tool for aleatoric uncertainty estimation in trustworthy AI systems.
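The abstract reports distributional fidelity via Jensen-Shannon Divergence between predicted and true output distributions. The paper does not specify how the JSD is estimated; a common plug-in estimator, sketched below under our own assumptions (shared-support histograms, base-2 logarithm so the value lies in [0, 1], and a bin count of 50), is:

```python
import numpy as np

def js_divergence(samples_p, samples_q, bins=50):
    # Histogram both sample sets on a shared support, then compute the
    # Jensen-Shannon Divergence between the normalized histograms.
    lo = min(samples_p.min(), samples_q.min())
    hi = max(samples_p.max(), samples_q.max())
    p, _ = np.histogram(samples_p, bins=bins, range=(lo, hi))
    q, _ = np.histogram(samples_q, bins=bins, range=(lo, hi))
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)

    def kl(a, b):
        # KL divergence in bits; zero-probability bins contribute nothing.
        mask = a > 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

With base-2 logarithms the JSD is bounded by 1, so "reducing JSD by 45%" is a relative improvement on a bounded scale: identical empirical distributions score near 0, disjoint ones near 1.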