On the Interplay of Priors and Overparametrization in Bayesian Neural Network Posteriors
arXiv stat.ML / 2026/3/24
Key Points
- The paper examines why Bayesian neural network (BNN) posteriors are traditionally viewed as hard to use in practice, focusing on how symmetries and non-identifiabilities complicate posterior geometry.
- It argues that overparametrization and the choice of weight-space priors jointly reshape BNN posterior geometry, producing three core phenomena: balancedness, weight reallocation on equal-probability manifolds, and prior conformity.
- The authors provide experimental validation using much larger posterior sampling budgets than prior studies to test these theoretical claims.
- Results indicate that increased model redundancy leads to structured posterior weight distributions that align more closely with the imposed priors, improving interpretability of BNN posterior structure.
- Overall, the work deepens our understanding of how priors and overparametrized architectures interact in Bayesian inference settings.
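To make the "symmetries and non-identifiabilities" concrete, a minimal NumPy sketch below demonstrates one well-known source of equal-probability manifolds in ReLU networks (this example is illustrative and not taken from the paper): scaling a hidden unit's incoming weights by any alpha > 0 and its outgoing weights by 1/alpha leaves the network function, and hence the likelihood, unchanged.

```python
import numpy as np

# Illustrative sketch (not from the paper): ReLU rescaling symmetry,
# one classic source of non-identifiability in weight space. Because
# ReLU is positively homogeneous, rescaling incoming weights of a
# hidden unit by alpha and its outgoing weights by 1/alpha leaves the
# network function unchanged, so the likelihood is constant along
# these manifolds.

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 3))   # input -> hidden weights
W2 = rng.normal(size=(1, 8))   # hidden -> output weights

def f(x, W1, W2):
    """Two-layer ReLU network without biases."""
    return W2 @ np.maximum(W1 @ x, 0.0)

x = rng.normal(size=(3,))
alpha = 2.5

# Rescale hidden unit 0: incoming row * alpha, outgoing column / alpha.
W1s, W2s = W1.copy(), W2.copy()
W1s[0, :] *= alpha
W2s[:, 0] /= alpha

print(np.allclose(f(x, W1, W2), f(x, W1s, W2s)))  # True
```

Since the likelihood cannot distinguish points on such manifolds, only the prior determines where posterior mass concentrates along them, which is one way the prior can reshape posterior geometry under overparametrization.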

