Learning-Inference Concurrency in DynamicGate MLP: Structural and Mathematical Justification

arXiv cs.LG / 4/16/2026


Key Points

  • The paper argues that standard neural networks cannot safely update parameters during inference, because doing so makes the inference function ill-defined and the outputs unstable.
  • It proposes DynamicGate MLP as a structural workaround by separating routing (gating) parameters from representation (prediction) parameters, enabling online adaptation without destabilizing inference.
  • The authors provide mathematical sufficient conditions under which learning-inference concurrency is well-defined, including scenarios with asynchronous or partial updates.
  • They show that, at each time step, the output can be interpreted as the forward pass of a valid model “snapshot,” even when updates occur.
  • The work positions DynamicGate MLP as a practical foundation for online adaptive and on-device learning systems.
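To make the separation concrete, here is a minimal, hypothetical sketch (not the paper's actual architecture) of the core idea: routing (gating) parameters select an active expert path, and concurrent updates are confined to the inactive subspace, so the forward pass of a fixed input is unaffected mid-update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical DynamicGate-style toy model: two expert weight matrices
# (representation parameters) plus a separate gate vector (routing parameters).
d_in, d_out = 4, 3
experts = [rng.standard_normal((d_in, d_out)) for _ in range(2)]
gate_w = rng.standard_normal(d_in)  # routing (gating) parameters

def forward(x, experts, gate_w):
    # Hard routing: the gate picks exactly one expert, so only that
    # expert's weights influence the output.
    k = int(gate_w @ x > 0)
    return x @ experts[k], k

x = rng.standard_normal(d_in)
y0, active = forward(x, experts, gate_w)

# Concurrency-safe update: modify only the *inactive* expert's weights
# while inference on x is in flight.
inactive = 1 - active
experts[inactive] += 0.01 * rng.standard_normal((d_in, d_out))

# The output for the same input is unchanged: the forward pass still
# corresponds to a valid model snapshot, because the active path and the
# gate were untouched.
y1, _ = forward(x, experts, gate_w)
assert np.allclose(y0, y1)
```

This is only an illustration of the structural claim; the paper's sufficient conditions also cover asynchronous and partial updates, which this toy example does not model.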

Abstract

Conventional neural networks strictly separate learning from inference, because if parameters are updated during inference, outputs become unstable and the inference function itself is no longer well defined [1, 2, 3]. This paper shows that DynamicGate MLP structurally permits learning-inference concurrency [4, 5]. The key idea is to separate routing (gating) parameters from representation (prediction) parameters, so that the gate can be adapted online while inference stability is preserved, or weights can be selectively updated only within the inactive subspace [4, 5, 6, 7]. We mathematically formalize sufficient conditions for concurrency and show that, even under asynchronous or partial updates, the inference output at each time step can always be interpreted as a forward computation of a valid model snapshot [8, 9, 10]. This suggests that DynamicGate MLP can serve as a practical foundation for online adaptive and on-device learning systems [11, 12].