Retraining as Approximate Bayesian Inference
arXiv cs.AI, March 27, 2026
Key Points
- The paper argues that model retraining should be viewed as approximate Bayesian inference rather than routine maintenance: beliefs about the data are updated continuously, while the deployed model remains "frozen" between retrains.
- It introduces "learning debt" as the gap between the current belief state and the deployed model, framing retraining as minimizing decision costs under computational constraints.
- Katz proposes a decision-theoretic framework that derives evidence-based triggers from the model’s loss function, replacing calendar-based retraining schedules.
- The approach aims to make retraining governance more auditable by using explicit triggers tied to evidence and cost minimization.
- The article includes a glossary to support readers who may not be familiar with Bayesian and decision-theoretic terminology.
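The trigger idea in the points above can be sketched in code. The following is a minimal illustration, not the paper's actual method: it assumes a Bernoulli prediction task, treats a running mean over labels as a stand-in for the "belief state," estimates "learning debt" as the frozen model's excess log loss over that belief on a recent window, and retrains when the per-decision debt exceeds an assumed retraining cost. All names and parameters here (`deployed_p`, `retrain_cost`, the window size) are hypothetical.

```python
import math
import random

random.seed(0)

def nll(p, y):
    """Negative log-likelihood of a Bernoulli prediction p for outcome y."""
    p = min(max(p, 1e-9), 1.0 - 1e-9)
    return -math.log(p if y == 1 else 1.0 - p)

# Hypothetical setup: the true event rate drifts upward after deployment,
# the deployed model is frozen at the rate seen at deployment time, and the
# "belief state" is a cheap running estimate updated on every new label.
deployed_p = 0.30      # frozen prediction shipped at deployment
belief_p = 0.30        # continuously updated belief (running mean)
n_seen = 1             # pseudo-count for the running mean
retrain_cost = 0.05    # assumed per-decision cost threshold for retraining
window = []            # recent per-decision debt samples

retrain_step = None
for t in range(1, 501):
    true_p = 0.30 + 0.001 * t              # slow drift in the data
    y = 1 if random.random() < true_p else 0

    # Update the belief state (a crude approximate posterior mean).
    n_seen += 1
    belief_p += (y - belief_p) / n_seen

    # "Learning debt": excess loss of the frozen model over the belief,
    # estimated on the recent window of observed outcomes.
    window.append(nll(deployed_p, y) - nll(belief_p, y))
    window = window[-100:]
    debt = sum(window) / len(window)

    # Evidence-based trigger: retrain when accumulated per-decision debt
    # outweighs the assumed cost of retraining -- no calendar involved.
    if len(window) == 100 and debt > retrain_cost:
        retrain_step = t
        break

print("retrain triggered at step:", retrain_step)
```

Because the trigger fires from measured excess loss rather than a fixed schedule, it retrains later under slow drift and sooner under fast drift; the same comparison of evidence against cost is what makes the decision auditable.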