Lyapunov-Certified Direct Switching Theory for Q-Learning

arXiv cs.LG, April 22, 2026


Key Points

  • The paper provides a new analysis of constant-stepsize Q-learning by rewriting it as a direct stochastic switching system.
  • It shows that the Bellman maximization error can be represented exactly using a stochastic policy, yielding a switched linear conditional-mean recursion with martingale-difference noise.
  • The algorithm’s intrinsic drift rate is characterized by the joint spectral radius (JSR) of the switching family, which can be strictly smaller than the standard row-sum-based rate.
  • The authors derive finite-time bounds for the final iterate using a JSR-induced Lyapunov function, and further present a computable quadratic-certificate form for practical verification.

Abstract

Q-learning is one of the most fundamental algorithms in reinforcement learning. We analyze constant-stepsize Q-learning through a direct stochastic switching system representation. The key observation is that the Bellman maximization error can be represented exactly by a stochastic policy. Therefore, the Q-learning error admits a switched linear conditional-mean recursion with martingale-difference noise. The intrinsic drift rate is the joint spectral radius (JSR) of the direct switching family, which can be strictly smaller than the standard row-sum rate. Using this representation, we derive a finite-time final-iterate bound via a JSR-induced Lyapunov function and then give a computable quadratic-certificate version.
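The computable quadratic-certificate idea can be sketched as follows: a symmetric positive-definite P certifies a decay rate rho for a switching family {A_i} if A_i^T P A_i <= rho^2 P in the semidefinite order for every i, which implies JSR <= rho. The snippet below checks this condition for a hypothetical 2x2 family and finds a certificate by a crude diagonal grid search; a real implementation would obtain P from an SDP solver, and the matrices here are illustrative, not the paper's.

```python
import numpy as np

def certifies(P, mats, rho):
    """Return True if A^T P A <= rho^2 P (PSD order) for every A in mats,
    i.e. V(e) = e^T P e is a common quadratic Lyapunov certificate."""
    for A in mats:
        gap = rho**2 * P - A.T @ P @ A  # must be positive semidefinite
        if np.min(np.linalg.eigvalsh(gap)) < -1e-10:
            return False
    return True

# Hypothetical Schur-stable switching family (stand-in for the direct family).
A1 = np.array([[0.9, 0.05], [0.0, 0.8]])
A2 = np.array([[0.85, 0.0], [0.1, 0.9]])

# Crude search over diagonal P = diag(1, d) and candidate rates rho.
best = None
for d in np.linspace(0.2, 5.0, 49):
    P = np.diag([1.0, d])
    for rho in np.linspace(0.85, 0.99, 29):
        if certifies(P, [A1, A2], rho):
            if best is None or rho < best[0]:
                best = (rho, d)
            break  # smallest certified rho for this P
print("best certified rho:", None if best is None else round(best[0], 3))
```

Any certified rho < 1 immediately yields a geometric final-iterate bound of the Lyapunov type discussed above, with a constant governed by the condition number of P.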