Why MLOps Retraining Schedules Fail — Models Don’t Forget, They Get Shocked

Towards Data Science / 4/11/2026

💬 Opinion · Developer Stack & Infrastructure · Ideas & Deep Analysis · Tools & Practical Usage

Key Points

  • The article argues that calendar-based MLOps retraining schedules often fail because the “problem” is not forgetting but models being unexpectedly shocked by distribution or behavior changes.
  • By fitting an Ebbinghaus forgetting curve to 555,000 real fraud transactions and obtaining an R² of −0.31 (a negative R² means the fit is worse than simply predicting the mean), it shows that traditional forgetting assumptions do not match observed production data.
  • It proposes a practical shock-detection approach intended to trigger retraining only when meaningful shifts occur, aiming to improve reliability in real fraud systems.
  • The findings emphasize validating retraining triggers with real-world metrics rather than relying on fixed time intervals.
  • Overall, the piece frames retraining strategy as a signal-detection problem tied to production dynamics, not a schedule-only operational practice.
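
The article's actual shock-detection method is not spelled out in this excerpt, but the idea of triggering retraining on sharp metric shifts rather than on a calendar can be sketched with a simple rolling z-score detector (the function name, window, and threshold here are illustrative assumptions, not the post's implementation):

```python
import numpy as np

def shock_triggers(metric, window=5, z_thresh=3.0):
    """Flag indices where the metric deviates sharply from its recent history.

    Retraining fires on a detected shock, not on a fixed schedule.
    (Hypothetical sketch; window and threshold are illustrative.)
    """
    triggers = []
    for i in range(window, len(metric)):
        hist = metric[i - window:i]
        mu, sigma = hist.mean(), hist.std()
        if sigma == 0:
            continue  # no variation in the window; skip to avoid div-by-zero
        if abs(metric[i] - mu) / sigma > z_thresh:
            triggers.append(i)
    return triggers

# Hypothetical daily precision series: stable, then an abrupt drop on day 7.
daily_precision = np.array([0.91, 0.92, 0.90, 0.91, 0.92,
                            0.91, 0.90, 0.72, 0.70, 0.71])
print(shock_triggers(daily_precision))  # → [7]
```

Note that only the first day of the drop fires the trigger; once the shock enters the rolling window, the inflated variance absorbs the new regime, which is the desired behavior if a retrain was already launched at the first alarm.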

We fitted the Ebbinghaus forgetting curve to 555,000 real fraud transactions and got R² = −0.31 — worse than a flat line. This result explains why calendar-based retraining fails in production and introduces a practical shock-detection approach that works in real systems.
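
The 555,000-transaction dataset is not reproduced here, but the failure mode is easy to illustrate with synthetic data. A minimal sketch, assuming the standard Ebbinghaus form R(t) = exp(−t/S) fitted by grid search over the stability parameter S: when performance holds steady but suffers abrupt shocks, the best decay curve fits worse than a constant mean, so R² goes negative.

```python
import numpy as np

def fit_forgetting_curve(t, r):
    """Fit R(t) = exp(-t/S) by grid search over S; return (S, R^2).

    R^2 is computed on the original scale against the mean baseline,
    so it can be negative when the decay curve underperforms the mean.
    """
    s_grid = np.linspace(0.1, 500.0, 5000)
    # Predictions for every candidate S at once: shape (len(s_grid), len(t)).
    preds = np.exp(-t[None, :] / s_grid[:, None])
    sse = ((r[None, :] - preds) ** 2).sum(axis=1)
    best = sse.argmin()
    ss_tot = ((r - r.mean()) ** 2).sum()
    return s_grid[best], 1.0 - sse[best] / ss_tot

# Hypothetical monthly performance series: no steady decay, but two abrupt
# shocks (e.g. a fraud-pattern shift) followed by recovery.
t = np.arange(12, dtype=float)
perf = np.array([0.92, 0.91, 0.93, 0.70, 0.90, 0.92,
                 0.91, 0.68, 0.93, 0.92, 0.90, 0.91])

s_hat, r2 = fit_forgetting_curve(t, perf)
print(f"S = {s_hat:.1f}, R^2 = {r2:.2f}")  # R^2 is negative here
```

The monotone decay family is forced through R(0) = 1 and can only go down, so it has no way to track drop-and-recover dynamics; that structural mismatch, not noise, is what drives R² below zero.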
