The Bernstein-von Mises theorem for Bayesian one-pass online learning
arXiv stat.ML / 5/1/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper studies Bayesian online learning in the challenging one-pass setting, where standard theory breaks down because it usually requires mini-batch sizes to grow without bound.
- It introduces a new one-pass Bayesian online learning algorithm that uses a warm-start phase to stabilize the sequential posterior updates (see the sketch after this list).
- The authors prove that the sequentially updated posterior achieves the optimal convergence rate under their method.
- They further derive an online version of the Bernstein–von Mises theorem (the classical statement is recalled below), providing reliable uncertainty quantification without requiring mini-batch sample sizes to diverge.
- Experiments on generalized linear models indicate the method’s performance is comparable to batch estimation and better than existing online approaches.
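For context, the classical Bernstein–von Mises theorem says that, under regularity conditions, the posterior becomes asymptotically Gaussian, which is what licenses using posterior credible sets as frequentist confidence sets. The paper's online version presumably establishes an analogue for the sequentially updated posterior; the display below is the standard textbook form, not a statement quoted from the paper.

```latex
% Classical Bernstein--von Mises (standard form, not the paper's statement):
% with X_1, ..., X_n i.i.d. from P_{\theta_0} under regularity conditions,
\[
  \bigl\| \Pi(\cdot \mid X_{1:n})
        - \mathcal{N}\!\bigl(\hat{\theta}_n,\; n^{-1} I(\theta_0)^{-1}\bigr)
  \bigr\|_{\mathrm{TV}} \longrightarrow 0
  \quad \text{in } P_{\theta_0}\text{-probability},
\]
% where \hat{\theta}_n is an efficient estimator (e.g. the MLE) and
% I(\theta_0) is the Fisher information. Posterior credible sets then
% asymptotically match frequentist confidence sets.
```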
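To make the mechanics concrete, here is a minimal sketch of one-pass Bayesian online learning for logistic regression (a GLM, matching the paper's experimental setting), using a Gaussian assumed-density (Laplace-style) posterior approximation with a warm-start phase. Everything here — the warm-start rule, the per-observation Newton-style update, and all function names — is an illustrative assumption, not the paper's actual algorithm.

```python
# Sketch of warm-started, one-pass Gaussian posterior updating for
# logistic regression. All design choices below are assumptions made
# for illustration; the paper's algorithm may differ.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def warm_start(X0, y0, prior_prec=1.0, iters=200, lr=0.5):
    """Crude batch MAP fit plus Laplace approximation on an initial
    chunk, to stabilize the subsequent one-pass updates (the assumed
    role of the paper's warm-start phase)."""
    d = X0.shape[1]
    theta = np.zeros(d)
    for _ in range(iters):  # gradient ascent on the log posterior
        p = sigmoid(X0 @ theta)
        grad = X0.T @ (y0 - p) - prior_prec * theta
        theta += lr * grad / len(y0)
    # Precision = negative Hessian of the log posterior near the mode.
    W = p * (1.0 - p)
    H = X0.T @ (X0 * W[:, None]) + prior_prec * np.eye(d)
    return theta, H

def one_pass_update(theta, H, x, y):
    """One Newton-style assumed-density update per observation:
    fold the new likelihood term into the Gaussian approximation."""
    p = sigmoid(x @ theta)
    w = p * (1.0 - p)
    H_new = H + w * np.outer(x, x)   # precision accumulates information
    grad = (y - p) * x               # score of the new observation
    theta_new = theta + np.linalg.solve(H_new, grad)
    return theta_new, H_new
```

A usage example on simulated data, with Wald-type 95% credible intervals read off the final Gaussian approximation — the kind of uncertainty quantification an online Bernstein–von Mises result would justify:

```python
rng = np.random.default_rng(0)
n, d = 5000, 3
theta_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(n, d))
y = rng.binomial(1, sigmoid(X @ theta_true))

theta, H = warm_start(X[:200], y[:200])   # warm-start chunk
for i in range(200, n):                   # single pass over the stream
    theta, H = one_pass_update(theta, H, X[i], y[i])

# Posterior approx. N(theta, H^{-1}); 95% credible intervals per coordinate.
se = np.sqrt(np.diag(np.linalg.inv(H)))
print(np.column_stack([theta - 1.96 * se, theta + 1.96 * se]))
```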