Maximum-Entropy Exploration with Future State-Action Visitation Measures

arXiv cs.LG / 3/20/2026

Key Points

  • The authors introduce intrinsic rewards proportional to the entropy of the discounted distribution of future state-action features to guide exploration in reinforcement learning.
  • They prove that the expected sum of these intrinsic rewards lower-bounds the entropy of the discounted feature distribution over trajectories from the initial states, relating to a maximum entropy objective.
  • They show that the underlying feature visitation distribution is a fixed point of a contraction operator, enabling off-policy estimation of the objective.
  • Empirical results indicate faster convergence for exploration-only agents and improved feature visitation within individual trajectories, at the cost of slightly reduced visitation in expectation across trajectories; control performance remains comparable to baselines on the evaluated benchmarks.
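The two structural results above can be illustrated in a small tabular setting. The sketch below is entirely our own illustration, not the paper's implementation: the toy MDP, the one-hot feature map, and all variable names are assumptions. It iterates the contraction operator to estimate the discounted distribution of future state-action features from each state-action pair, then uses that distribution's entropy as the intrinsic reward.

```python
import numpy as np

# Toy setup (all hypothetical): 3 states, 2 actions, one-hot feature per (s, a) pair.
nS, nA, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(nS), size=(nS, nA))  # P[s, a] = next-state distribution
pi = np.full((nS, nA), 1.0 / nA)               # uniform evaluation policy
nF = nS * nA                                   # number of feature bins

# Fixed-point iteration of the (assumed) contraction operator:
#   d(. | s, a) = (1 - gamma) * onehot(phi(s, a)) + gamma * E_{s'~P, a'~pi}[d(. | s', a')]
# The operator contracts with modulus gamma, so repeated application converges.
onehot = np.eye(nF).reshape(nS, nA, nF)
d = np.zeros((nS, nA, nF))
for _ in range(500):
    exp_next = np.einsum("sap,pb,pbf->saf", P, pi, d)  # E over next (s', a')
    d = (1 - gamma) * onehot + gamma * exp_next

# Intrinsic reward for each (s, a): entropy of its future feature distribution.
def entropy(p, eps=1e-12):
    return -np.sum(p * np.log(p + eps), axis=-1)

r_int = entropy(d)
print(r_int.shape)  # (3, 2)
```

Because the operator is a contraction, the same update can be driven by transitions sampled from any behavior policy, which is what makes the objective estimable off-policy; in practice the paper's setting would replace this exact tabular sweep with a learned, sample-based estimator.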

Abstract

Maximum entropy reinforcement learning motivates agents to explore states and actions to maximize the entropy of some distribution, typically by providing additional intrinsic rewards proportional to that entropy function. In this paper, we study intrinsic rewards proportional to the entropy of the discounted distribution of state-action features visited during future time steps. This approach is motivated by two results. First, we show that the expected sum of these intrinsic rewards is a lower bound on the entropy of the discounted distribution of state-action features visited in trajectories starting from the initial states, which we relate to an alternative maximum entropy objective. Second, we show that the distribution used in the intrinsic reward definition is the fixed point of a contraction operator and can therefore be estimated off-policy. Experiments highlight that the new objective leads to improved visitation of features within individual trajectories, in exchange for slightly reduced visitation of features in expectation over different trajectories, as suggested by the lower bound. It also leads to improved convergence speed for learning exploration-only agents. Control performance remains similar across most methods on the considered benchmarks.
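In symbols (our own notation, inferred from the abstract; the feature map $\phi$, the distribution symbols, and the normalization constants are assumptions, not the paper's), the two results can be summarized as:

```latex
% Result 2: the future feature distribution d^\pi is the fixed point of a
% \gamma-contraction, enabling off-policy estimation,
(\mathcal{T}^\pi d)(\cdot \mid s, a)
  = (1-\gamma)\,\delta_{\phi(s,a)}(\cdot)
  + \gamma\, \mathbb{E}_{s' \sim P(\cdot \mid s,a),\; a' \sim \pi}\!\big[ d(\cdot \mid s', a') \big].

% Result 1: the expected discounted sum of the entropy-based intrinsic rewards
% lower-bounds the entropy of d^\pi_0, the discounted feature distribution over
% trajectories from the initial states (up to normalization),
(1-\gamma)\, \mathbb{E}_\pi\!\Big[ \textstyle\sum_{t \ge 0} \gamma^t\,
  \mathcal{H}\big( d^\pi(\cdot \mid s_t, a_t) \big) \Big]
  \;\le\; \mathcal{H}\big( d^\pi_0 \big).
```

The direction of the inequality explains the empirical trade-off the abstract reports: maximizing the per-step entropies pushes up a lower bound on the trajectory-level entropy, so within-trajectory visitation improves while visitation in expectation over trajectories can be slightly reduced.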