From Gradients to Riccati Geometry: Kalman World Models for Single-Pass Learning
arXiv cs.LG / 3/17/2026
Key Points
- The paper introduces Kalman World Models (KWM), a gradient-free framework for training state-space models via recursive Bayesian filtering instead of backpropagation.
- It replaces parameter learning with Kalman-style gain adaptation, turning training into online filtering and making error signals act as innovations.
- The approach is extended to transformer-based large language models (LLMs) by treating internal activations as latent dynamical states that are corrected via innovation terms, enabling gradient-free training and adaptation.
- The authors derive stability conditions, analyze computational complexity, and report empirical results on sequence modeling that show competitive performance with improved robustness and continual adaptation.
- The work presents a control-theoretic alternative to gradient-based learning for sequential models, with potential implications for online learning and model robustness.
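The core idea in the points above can be sketched with a textbook Kalman-style update: treat the model parameters themselves as the latent state of a filter, so each training example becomes an observation and the prediction error (the innovation) drives a gain-weighted correction instead of a gradient step. This is a minimal illustrative sketch of that recursive-filtering view on a linear model, not the paper's actual algorithm; all variable names and noise settings here are assumptions.

```python
import numpy as np

# Illustrative sketch (not the paper's method): learn the weights of a
# linear model y = x @ w + noise in a single online pass, with no gradients.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])   # ground-truth weights to recover

w = np.zeros(2)                  # parameter estimate = filter state
P = np.eye(2) * 10.0             # parameter covariance (initial uncertainty)
Q = np.eye(2) * 1e-5             # process noise: lets parameters drift,
                                 # which is what enables continual adaptation
R = 0.01                         # observation noise variance (assumed)

for _ in range(500):
    x = rng.normal(size=2)                    # input plays the role of H_t
    y = x @ true_w + rng.normal(scale=0.1)    # noisy scalar observation

    # Predict step: random-walk model for the parameters
    P = P + Q

    # Innovation: the prediction error is the learning signal
    innov = y - x @ w
    S = x @ P @ x + R                         # innovation variance
    K = P @ x / S                             # Kalman gain

    # Correct step: single-pass, gradient-free parameter update
    w = w + K * innov
    P = P - np.outer(K, x) @ P

print(np.round(w, 2))
```

After a few hundred observations the estimate `w` converges to the true weights, and the covariance `P` shrinks accordingly. The nonzero process noise `Q` keeps the gain from collapsing to zero, which is the filtering analogue of the continual adaptation the paper reports.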