Global Optimality for Constrained Exploration via Penalty Regularization

arXiv cs.LG / 5/1/2026

Key Points

  • The paper studies efficient exploration in reinforcement learning under real-world constraints (safety, resource limits, and imitation requirements), where the standard objective of maximizing the entropy of the state-action occupancy measure becomes difficult to optimize.
  • It notes that existing approaches often fail because entropy maximization lacks additive structure, breaking Bellman-equation-based methods, and because policy parameterization introduces non-convexity in both objectives and constraints.
  • The authors propose the Policy Gradient Penalty (PGP) method, a single-loop policy-space algorithm that enforces general convex occupancy-measure constraints via quadratic-penalty regularization, constructing pseudo-rewards whose gradients, obtained through the classical Policy Gradient Theorem, match those of the penalized objective (see the sketch after this list).
  • They provide theoretical results, including regularity and smoothness of the penalized objective and global last-iterate convergence, achieving an ε-optimal constrained entropy value with ε-bounded constraint violation.
  • Empirically, PGP is validated on a grid-world benchmark via ablations and shown to scale on two challenging continuous-control tasks.
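
In symbols (notation ours, not the paper's), the quadratic-penalty construction in the third bullet amounts to maximizing a penalized entropy objective of roughly the following form, where d_{π_θ} denotes the state-action occupancy measure of policy π_θ, the g_i are the convex constraint functions (feasible when g_i ≤ 0), and β > 0 is the penalty weight:

```latex
\max_{\theta}\; F_{\beta}(\theta)
  \;=\; H\!\left(d_{\pi_\theta}\right)
  \;-\; \frac{\beta}{2}\sum_{i}\left[\, g_i\!\left(d_{\pi_\theta}\right)\right]_{+}^{2},
\qquad
H(d) \;=\; -\sum_{s,a} d(s,a)\,\log d(s,a),
\qquad
[x]_{+} = \max(x, 0).
```

The pseudo-reward is then, plausibly, the gradient of F_β with respect to d evaluated at the current occupancy; by the chain rule, a Policy Gradient Theorem step on that frozen reward follows the gradient of the penalized objective itself.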

Abstract

Efficient exploration is a central problem in reinforcement learning and is often formalized as maximizing the entropy of the state-action occupancy measure. While unconstrained maximum-entropy exploration is relatively well understood, real-world exploration is often constrained by safety, resource, or imitation requirements. This constrained setting is particularly challenging because entropy maximization lacks additive structure, rendering Bellman-equation-based methods inapplicable. Moreover, scalable approaches require policy parameterization, which induces non-convexity in both the objective and the constraints. To our knowledge, the only prior model-free policy-gradient approach for this setting under general policy parameterization is due to Ying et al. (2025). Unfortunately, their guarantees are limited to weak regret and ergodic averages, which do not imply that the final output is a single deployable policy that is near-optimal and nearly feasible. In this work we take a different approach and propose the Policy Gradient Penalty (PGP) method, a single-loop policy-space method that enforces general convex occupancy-measure constraints via quadratic-penalty regularization. PGP constructs pseudo-rewards that yield gradient estimates of the penalized objective, which it then optimizes by exploiting the classical Policy Gradient Theorem. We further establish the regularity of the penalized objective, providing the smoothness properties needed to justify the convergence of PGP. Leveraging hidden convexity and strong duality, we then establish global last-iterate convergence guarantees, attaining an ε-optimal constrained entropy value with ε-bounded constraint violation despite policy-induced non-convexity. We validate PGP through ablations on a grid-world benchmark and further demonstrate scalability on two challenging continuous-control tasks.
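
To make the single-loop structure concrete, here is a minimal tabular sketch in NumPy. It is our own reconstruction from the abstract, not the authors' implementation: the random MDP, the single safety-style constraint (capping the occupancy of one state), the penalty weight, and the step size are all invented for illustration, and the paper's exact pseudo-reward construction may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny random tabular MDP (all quantities here are illustrative).
S, A, gamma = 5, 3, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))   # transition kernel P[s, a, s']
mu = np.full(S, 1.0 / S)                     # initial state distribution

def policy(theta):
    """Softmax policy pi(a|s) from logits theta[s, a]."""
    z = np.exp(theta - theta.max(axis=1, keepdims=True))
    return z / z.sum(axis=1, keepdims=True)

def occupancy(pi):
    """Normalized discounted state-action occupancy measure d(s, a)."""
    P_pi = np.einsum('sa,sat->st', pi, P)    # state transitions under pi
    d_s = np.linalg.solve(np.eye(S) - gamma * P_pi.T, (1 - gamma) * mu)
    return d_s[:, None] * pi

# One hypothetical convex constraint: cap the occupancy of state 0,
# g(d) = sum_a d(0, a) - cap <= 0 (a stand-in for a safety requirement).
cap = 0.15
g = lambda d: d[0].sum() - cap
grad_g = np.zeros((S, A)); grad_g[0] = 1.0   # dg/dd is constant here

def pseudo_reward(d, beta):
    """Gradient of the quadratic-penalized entropy objective w.r.t. d:
    F(d) = H(d) - (beta/2) * max(0, g(d))^2, so
    r~(s, a) = -log d(s, a) - 1 - beta * max(0, g(d)) * dg/dd(s, a)."""
    return -np.log(d + 1e-12) - 1.0 - beta * max(0.0, g(d)) * grad_g

def q_values(pi, r):
    """Q^pi for reward r, from the Bellman equation."""
    P_pi = np.einsum('sa,sat->st', pi, P)
    v = np.linalg.solve(np.eye(S) - gamma * P_pi, (pi * r).sum(axis=1))
    return r + gamma * np.einsum('sat,t->sa', P, v)

# Single loop: freeze the pseudo-reward at the current occupancy, then take
# one Policy Gradient Theorem step; by the chain rule this step follows the
# gradient of the penalized objective itself.
theta, beta, lr = np.zeros((S, A)), 50.0, 0.1
for _ in range(2000):
    pi = policy(theta)
    d = occupancy(pi)
    q = q_values(pi, pseudo_reward(d, beta))
    adv = q - (pi * q).sum(axis=1, keepdims=True)
    # Softmax policy gradient: dJ/dtheta[s,a] = d_s(s) pi(a|s) A(s,a) / (1-gamma)
    theta += lr * d.sum(axis=1, keepdims=True) * pi * adv / (1 - gamma)

d = occupancy(policy(theta))
print("entropy:", -(d * np.log(d + 1e-12)).sum(), "violation:", max(0.0, g(d)))
```

A model-free instantiation would replace the exact occupancy and Q-value computations with sample-based estimates; the linear solves above exploit the tabular setting purely for brevity.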