On the Interplay of Priors and Overparametrization in Bayesian Neural Network Posteriors

arXiv stat.ML · March 24, 2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper examines why Bayesian neural network (BNN) posteriors are traditionally viewed as impractical for inference, focusing on how weight-space symmetries and non-identifiabilities complicate posterior geometry.
  • It argues that overparametrization and the choice of weight-space priors jointly reshape BNN posterior geometry, producing three core phenomena: balancedness, weight reallocation on equal-probability manifolds, and prior conformity (illustrated in the sketch after this list).
  • The authors test these theoretical claims experimentally, using posterior sampling budgets far larger than those of prior studies.
  • Results indicate that increased model redundancy yields structured posterior weight distributions that align more closely with the imposed priors, making the structure of BNN posteriors easier to interpret.
  • Overall, the work deepens our understanding of how priors and overparametrized architectures interact in Bayesian inference.
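For intuition on these phenomena, consider the well-known rescaling symmetry of ReLU networks: scaling a hidden unit's incoming weights by α > 0 and its outgoing weights by 1/α leaves the function, and hence the likelihood, unchanged, tracing out a manifold of function-equivalent weights. An isotropic Gaussian prior is not constant along this orbit; its density peaks exactly where the rescaled incoming and outgoing norms match, which is the balancedness idea. The NumPy sketch below checks both facts numerically; the layer sizes, the unit index `j`, and the unit-variance prior are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny ReLU network f(x) = W2 @ relu(W1 @ x); sizes are hypothetical.
d_in, d_hid, d_out = 3, 8, 2
W1 = rng.normal(size=(d_hid, d_in))
W2 = rng.normal(size=(d_out, d_hid))
x = rng.normal(size=(d_in,))

def f(W1, W2, x):
    return W2 @ np.maximum(W1 @ x, 0.0)

# (a) Rescaling symmetry: scale unit j's incoming row by alpha and its
# outgoing column by 1/alpha. Because ReLU is positively homogeneous,
# the output is unchanged, so all alpha > 0 lie on one likelihood-equal orbit.
j, alpha = 0, 3.0
W1s, W2s = W1.copy(), W2.copy()
W1s[j, :] *= alpha
W2s[:, j] /= alpha
assert np.allclose(f(W1, W2, x), f(W1s, W2s, x))

# (b) With an isotropic Gaussian prior (sigma = 1 here), the log-density
# along the orbit is -(alpha^2 ||w_in||^2 + ||w_out||^2 / alpha^2) / 2 + const,
# maximized where the rescaled norms are equal ("balancedness"):
# alpha* = sqrt(||w_out|| / ||w_in||).
w_in, w_out = W1[j, :], W2[:, j]
alphas = np.linspace(0.2, 5.0, 2001)
log_prior = -(alphas**2 * (w_in @ w_in) + (w_out @ w_out) / alphas**2) / 2.0
alpha_star = np.sqrt(np.linalg.norm(w_out) / np.linalg.norm(w_in))
print(alphas[np.argmax(log_prior)], alpha_star)  # both ~ alpha*
```

The orbit itself is invisible to the data, so which point on it the posterior concentrates around is decided entirely by the prior; that is one concrete way a weight-space prior reshapes posterior geometry.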

Abstract

Bayesian neural network (BNN) posteriors are often considered impractical for inference, as symmetries fragment them, non-identifiabilities inflate dimensionality, and weight-space priors are seen as meaningless. In this work, we study how overparametrization and priors together reshape BNN posteriors and derive implications allowing us to better understand their interplay. We show that redundancy introduces three key phenomena that fundamentally reshape the posterior geometry: balancedness, weight reallocation on equal-probability manifolds, and prior conformity. We validate our findings through extensive experiments with posterior sampling budgets that far exceed those of earlier works, and demonstrate how overparametrization induces structured, prior-aligned weight posterior distributions.
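To make "posterior sampling budget" concrete, here is a minimal random-walk Metropolis sampler over the weights of a toy one-hidden-layer regressor with a standard Gaussian prior; the budget is simply the number of MCMC steps. The model, data, step size, and step count are hypothetical placeholders, and a study at the scale the abstract describes would use a far more capable sampler (e.g., HMC) with much larger budgets.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data; all sizes and constants here are illustrative.
X = rng.normal(size=(20, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=20)
d = 4 * 1 + 4  # W1 (4x1) flattened, then w2 (4,)

def log_post(w, sigma_prior=1.0, sigma_noise=0.1):
    """Unnormalized log-posterior of a one-hidden-layer ReLU regressor."""
    W1, w2 = w[:4].reshape(4, 1), w[4:]
    pred = np.maximum(X @ W1.T, 0.0) @ w2
    log_lik = -0.5 * np.sum((y - pred) ** 2) / sigma_noise**2
    log_prior = -0.5 * np.sum(w**2) / sigma_prior**2
    return log_lik + log_prior

# Random-walk Metropolis: the sampling budget is n_steps.
w = rng.normal(size=d)
lp = log_post(w)
samples = []
for _ in range(50_000):
    prop = w + 0.05 * rng.normal(size=d)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject step
        w, lp = prop, lp_prop
    samples.append(w.copy())
samples = np.asarray(samples)  # (n_steps, d) draws from the weight posterior
```

Histogramming these draws per weight (or per symmetry orbit) is the kind of diagnostic through which effects such as balancedness and prior conformity would show up in the sampled posterior.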