AI Navigate

Operator-Theoretic Foundations and Policy Gradient Methods for General MDPs with Unbounded Costs

arXiv cs.LG / 3/19/2026

Key Points

  • The paper proposes an operator-theoretic view of MDPs that optimizes over linear operators on general function spaces and uses perturbation theory to express derivatives of the objective as functions of these operators (see the sketch after this list).
  • It extends reinforcement learning theory from finite-state finite-action MDPs to general state and action spaces, including settings with unbounded costs.
  • The framework yields low-complexity PPO-type reinforcement learning algorithms applicable to general state and action spaces.
  • By unifying existing RL results under an operator-theoretic perspective, the work highlights new theoretical and practical directions for general MDPs.
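
A minimal sketch of this operator view, assuming a discounted-cost setting for concreteness (the notation here is illustrative, not taken from the paper): for a policy $\pi$ with transition operator $P_\pi$ acting on a function space and stage cost $c_\pi$, the Bellman equation $V_\pi = c_\pi + \gamma P_\pi V_\pi$ yields

$$V_\pi = (I - \gamma P_\pi)^{-1} c_\pi,$$

and for a parameterized policy $\pi_\theta$, standard resolvent perturbation expresses the derivative of the value purely through derivatives of the operators:

$$\frac{\partial V_{\pi_\theta}}{\partial \theta} = (I - \gamma P_{\pi_\theta})^{-1}\left(\frac{\partial c_{\pi_\theta}}{\partial \theta} + \gamma\,\frac{\partial P_{\pi_\theta}}{\partial \theta}\,V_{\pi_\theta}\right),$$

which reduces to the classical policy gradient theorem when states and actions are finite.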

Abstract

Markov decision processes (MDPs) are viewed as the optimization of an objective function over certain linear operators on general function spaces. Using the well-established perturbation theory of linear operators, this viewpoint allows one to identify derivatives of the objective function as functions of these linear operators. This leads to generalizations of many well-known results in reinforcement learning to cases with general state and action spaces. Prior results of this type were established only in finite-state, finite-action MDP settings and in settings with certain linear function approximations. The framework also leads to new low-complexity PPO-type reinforcement learning algorithms for MDPs with general state and action spaces.
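
For reference, below is a minimal sketch of the clipped surrogate objective used by standard PPO (Schulman et al., 2017), written in PyTorch with a Gaussian policy so the same code applies to continuous (non-finite) action spaces. All names here (`ppo_clip_loss`, the toy batch) are illustrative assumptions; this is not the paper's low-complexity algorithm.

```python
# Minimal sketch of a PPO-type update: the standard clipped surrogate
# objective (Schulman et al., 2017). Names are illustrative, not the paper's.
import torch

def ppo_clip_loss(log_probs_new, log_probs_old, advantages, clip_eps=0.2):
    """Clipped surrogate loss; returns a scalar to minimize."""
    ratio = torch.exp(log_probs_new - log_probs_old)  # importance weights
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # PPO maximizes the elementwise minimum of the two surrogates.
    return -torch.min(unclipped, clipped).mean()

# Toy usage: a Gaussian policy over a continuous action space.
torch.manual_seed(0)
mean = torch.zeros(64, requires_grad=True)            # policy parameter
actions = torch.randn(64)                             # sampled actions
log_probs_new = torch.distributions.Normal(mean, 1.0).log_prob(actions)
log_probs_old = log_probs_new.detach() + 0.05 * torch.randn(64)  # stale behavior-policy log-probs
advantages = torch.randn(64)                          # placeholder advantage estimates

loss = ppo_clip_loss(log_probs_new, log_probs_old, advantages)
loss.backward()                                       # gradients flow to `mean`
print(f"loss={loss.item():.4f}, grad norm={mean.grad.norm().item():.4f}")
```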