Mistake gating leads to energy- and memory-efficient continual learning

arXiv cs.AI / 4/17/2026


Key Points

  • The paper proposes “memorized mistake-gated learning,” a biologically plausible synaptic update rule that allows weight updates only when classification errors occur.
  • By gating updates based on current and past errors, the method reduces the number of parameter updates by about 50% to 80%.
  • The approach is especially effective for continual/incremental learning, where new knowledge is learned alongside previously acquired knowledge.
  • It is also well suited to online learning and replay settings because fewer updates can reduce the required size of data storage buffers.
  • The authors report the rule is simple to implement (few lines of code), introduces no new hyperparameters, and incurs negligible computational overhead.
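The gating idea in the points above can be sketched in a few lines. The snippet below is an illustrative toy (a perceptron-style loop), not the authors' code; the names `mistake_memory` and `train_mistake_gated` are assumptions. Updates fire only when the current prediction is wrong or the sample has been misclassified before, so correctly handled "easy" samples cost no plasticity.

```python
def train_mistake_gated(samples, labels, weights, lr=0.1, epochs=5):
    """Perceptron-style training where a weight update happens only if the
    current prediction is wrong OR the sample was misclassified in the past
    (a toy stand-in for memorized mistake-gated learning)."""
    mistake_memory = set()  # indices of samples ever misclassified
    n_updates = 0
    for _ in range(epochs):
        for i, (x, y) in enumerate(zip(samples, labels)):
            score = sum(w * xi for w, xi in zip(weights, x))
            pred = 1 if score > 0 else -1
            wrong_now = (pred != y)
            if wrong_now:
                mistake_memory.add(i)
            # Gate: skip the update entirely unless this sample is, or has
            # ever been, a mistake.
            if wrong_now or i in mistake_memory:
                weights = [w + lr * y * xi for w, xi in zip(weights, x)]
                n_updates += 1
    return weights, n_updates

# Toy usage: two linearly separable classes.
samples = [(1, 1), (2, 1), (-1, -1), (-2, -1)]
labels = [1, 1, -1, -1]
w, n_updates = train_mistake_gated(samples, labels, weights=[0.0, 0.0])
# n_updates stays well below epochs * len(samples) (the "update on every
# sample" baseline), which is the claimed source of the energy savings.
```

In the paper's setting the same gate sits in front of a gradient step rather than a perceptron update, and the "memory" also determines which samples need to be kept in a replay buffer.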

Abstract

Synaptic plasticity is metabolically expensive, yet animals continuously update their internal models without exhausting energy reserves. However, when artificial neural networks are trained, the network parameters are typically updated on every sample that is presented, even if the sample was classified correctly. Inspired by the human negativity bias and error-related negativity, we propose 'memorized mistake-gated learning' -- a biologically plausible plasticity rule where synaptic updates are strictly gated by current and past classification errors. This reduces the number of updates the network needs to make by 50%–80%. Mistake gating is particularly well suited in two cases: 1) For incremental learning where new knowledge is acquired on a background of pre-existing knowledge, 2) For online learning scenarios when data needs to be stored for later replay, as mistake-gating reduces storage buffer requirements. The algorithm can be implemented in a few lines of code, adds no hyper-parameters, and comes at negligible computational overhead. Learning on mistakes is an energy efficient and biologically relevant modification to commonly used learning rules that is well suited for continual learning.