Retraining as Approximate Bayesian Inference

arXiv cs.AI · March 27, 2026


Key Points

  • The paper argues that model retraining should be viewed as approximate Bayesian inference rather than routine maintenance, contrasting a continuously updated belief state with the frozen model that is actually deployed.
  • It introduces "learning debt" as the gap between the current belief state and the deployed model, framing the retraining decision as cost minimization under computational constraints (formalized in the sketch after this list).
  • Katz proposes a decision-theoretic framework that derives evidence-based triggers from the model’s loss function, replacing calendar-based retraining schedules.
  • The approach aims to make retraining governance more auditable by using explicit triggers tied to evidence and cost minimization.
  • The article includes a glossary to support readers who may not be familiar with Bayesian and decision-theoretic terminology.
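
For concreteness, here is one way the trigger could be formalized; the notation below is an assumption of this summary, not necessarily the paper's own. Write the learning debt at time $t$ as the expected-loss gap between the frozen deployed model and a model reflecting the current belief state,

$$D_t = \mathbb{E}\big[\ell(y, \hat{y}_{\text{frozen}})\big] - \mathbb{E}\big[\ell(y, \hat{y}_{\text{belief}})\big],$$

and retrain as soon as the decision cost of carrying that debt over the remaining horizon exceeds the one-off cost of retraining:

$$\text{retrain} \iff D_t \cdot H \cdot c > C_{\text{retrain}},$$

where $H$ is the number of decisions left in the horizon, $c$ is the cost per unit of loss, and $C_{\text{retrain}}$ is the retraining cost. The implied threshold $C_{\text{retrain}} / (H c)$ is derived from the loss and cost model rather than a calendar, which is what makes the trigger auditable.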

Abstract

Model retraining is usually treated as routine maintenance. Harrison Katz argues instead that it is better understood as approximate Bayesian inference under computational constraints: the gap between a continuously updated belief state and the frozen deployed model is "learning debt," and the retraining decision is a cost-minimization problem whose threshold falls out of the loss function. Katz develops a decision-theoretic framework for retraining policies, yielding evidence-based triggers that replace calendar schedules and make governance auditable. For readers less familiar with Bayesian and decision-theoretic language, key terms are defined in a glossary at the end of the article.
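
This cost-minimization framing translates directly into a small decision rule. The sketch below is a minimal illustration under the same assumed notation as above, not the paper's implementation: `learning_debt`, `should_retrain`, and all parameter names and values are hypothetical, and squared error stands in for whatever loss the deployed model actually uses.

```python
import numpy as np


def learning_debt(y_true: np.ndarray,
                  frozen_preds: np.ndarray,
                  belief_preds: np.ndarray) -> float:
    """Estimate 'learning debt' as the gap in mean squared error between
    the frozen deployed model and a model tracking the current belief
    state, evaluated on recent labeled data (an assumed instantiation)."""
    frozen_loss = float(np.mean((y_true - frozen_preds) ** 2))
    belief_loss = float(np.mean((y_true - belief_preds) ** 2))
    return max(0.0, frozen_loss - belief_loss)


def should_retrain(debt: float,
                   horizon_predictions: int,
                   cost_per_unit_loss: float,
                   retraining_cost: float) -> bool:
    """Evidence-based trigger: retrain when the expected decision cost of
    carrying the debt over the remaining horizon exceeds the one-off cost
    of retraining. The threshold comes from the loss and cost model,
    not from a calendar."""
    expected_debt_cost = debt * horizon_predictions * cost_per_unit_loss
    return expected_debt_cost > retraining_cost


# Example: recent labels with predictions from the frozen model and from a
# continuously updated reference model (all values illustrative).
rng = np.random.default_rng(0)
y = rng.normal(size=500)
frozen = y + rng.normal(scale=0.8, size=500)   # stale model, larger error
belief = y + rng.normal(scale=0.5, size=500)   # updated model, smaller error

debt = learning_debt(y, frozen, belief)
if should_retrain(debt, horizon_predictions=100_000,
                  cost_per_unit_loss=0.01, retraining_cost=250.0):
    print(f"Retrain: learning debt {debt:.3f} exceeds threshold")
else:
    print(f"Hold: learning debt {debt:.3f} below threshold")
```

Swapping in the production loss function changes the threshold automatically, which is the sense in which the trigger "falls out of" the loss rather than being set by a schedule.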