From Gradients to Riccati Geometry: Kalman World Models for Single-Pass Learning

arXiv cs.LG / March 17, 2026

Key Points

  • The paper introduces Kalman World Models (KWM), a gradient-free framework for training state-space models via recursive Bayesian filtering instead of backpropagation.
  • It replaces parameter learning with Kalman-style gain adaptation, turning training into online filtering and making error signals act as innovations.
  • The approach extends to transformer-based large language models (LLMs): internal activations are treated as latent dynamical states and corrected via innovation terms, enabling gradient-free training and adaptation.
  • The authors derive stability conditions, analyze computational complexity, and report empirical results on sequence modeling that show competitive performance with improved robustness and continual adaptation.
  • This work presents a control-theory grounded alternative to traditional gradient-based learning for sequential models, with potential implications for online learning and model robustness.
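To make the core idea concrete, here is a minimal sketch of learning parameters by Kalman filtering rather than gradient descent: the weights of a linear model are treated as a slowly drifting latent state, each training pair is an observation, and the prediction error plays the role of the innovation. All names, noise settings, and the linear model itself are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
w_true = rng.normal(size=d)      # ground-truth weights to be recovered

w = np.zeros(d)                  # state estimate: the "parameters"
P = np.eye(d)                    # parameter covariance
Q = 1e-5 * np.eye(d)             # process noise: lets the filter keep adapting
R = 0.01                         # observation noise variance

for _ in range(500):
    x = rng.normal(size=d)
    y = w_true @ x + rng.normal(scale=np.sqrt(R))

    P = P + Q                    # predict: parameters drift as a random walk
    e = y - w @ x                # innovation: the error signal
    S = x @ P @ x + R            # innovation variance
    K = P @ x / S                # Kalman gain
    w = w + K * e                # gain-weighted correction, no gradients
    P = P - np.outer(K, x @ P)   # covariance update, (I - K x^T) P

print(np.linalg.norm(w - w_true))
```

For a linear-Gaussian model this recursion is equivalent to recursive least squares; the paper's contribution, per the abstract, is extending the filtering view to nonlinear sequence models and deriving its stability conditions.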

Abstract

Backpropagation dominates modern machine learning, yet it is not the only principled method for optimizing dynamical systems. We propose Kalman World Models (KWM), a class of learned state-space models trained via recursive Bayesian filtering rather than reverse-mode automatic differentiation. Instead of gradient descent updates, we replace parameter learning with Kalman-style gain adaptation. Training becomes online filtering; error signals become innovations. We further extend this framework to transformer-based large language models (LLMs), where internal activations are treated as latent dynamical states corrected via innovation terms. This yields a gradient-free training and adaptation paradigm grounded in control theory. We derive stability conditions, analyze computational complexity, and provide empirical results on sequence modeling tasks demonstrating competitive performance with improved robustness and continual adaptation properties.
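The LLM extension described above can be caricatured in a few lines: hidden activations are the latent state of a state-space model, a readout produces observations, and a Kalman gain corrects the activations from the innovation with no backward pass. The transition matrix A and readout C below are stand-ins for a (linearized) transformer layer and output head; every matrix, dimension, and noise level here is an assumed placeholder, not the authors' construction.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 4                                  # activation dim, observation dim
A = 0.9 * np.eye(n)                          # stand-in for a layer's dynamics
C = rng.normal(size=(m, n)) / np.sqrt(n)     # stand-in readout head
Q = 1e-3 * np.eye(n)                         # process noise on activations
R = 1e-2 * np.eye(m)                         # observation noise

def filter_step(h, P, y):
    """One predict/correct step on the activations."""
    h_pred = A @ h                           # predict through the layer
    P_pred = A @ P @ A.T + Q
    e = y - C @ h_pred                       # innovation at the output
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
    h_new = h_pred + K @ e                   # correct activations, no gradients
    P_new = (np.eye(n) - K @ C) @ P_pred
    return h_new, P_new

h, P = np.zeros(n), np.eye(n)
for _ in range(50):
    y = rng.normal(size=m)                   # placeholder observation stream
    h, P = filter_step(h, P, y)
```

The covariance recursion here is a discrete Riccati iteration, which is presumably what the title's "Riccati geometry" refers to: stability of training reduces to convergence of this recursion rather than to properties of a loss landscape.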