Multi-objective Reinforcement Learning With Augmented States Requires Rewards After Deployment

arXiv cs.LG / 4/20/2026


Key Points

  • The paper highlights an overlooked distinction in multi-objective reinforcement learning (MORL) versus more conventional single-objective RL, emphasizing how optimal MORL policies depend on past reward information when using non-linear utility functions.
  • It explains that the common “augmented state” approach—concatenating the current environment state with a discounted sum of previous rewards—implies the agent must still access the reward signal after deployment.
  • The note clarifies the underlying reason augmented-state policies need post-deployment reward (or an equivalent proxy), even when further training or learning is not performed.
  • It discusses practical repercussions for deploying MORL systems, since continued reward availability becomes an operational requirement rather than only a training-time detail.

Abstract

This research note identifies a previously overlooked distinction between multi-objective reinforcement learning (MORL) and more conventional single-objective reinforcement learning (RL). It has previously been noted that the optimal policy for an MORL agent with a non-linear utility function must be conditioned on both the current environmental state and on some measure of the previously accrued reward. This is generally implemented by concatenating the observed state of the environment with the discounted sum of previous rewards to create an augmented state. While augmented states have been widely used in the MORL literature, one implication of their use has not previously been reported -- namely that they require the agent to have continued access to the reward signal (or a proxy thereof) after deployment, even if no further learning is required. This note explains why this is the case, and considers the practical repercussions of this requirement.
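The mechanism the abstract describes can be made concrete with a minimal sketch. The following is an illustrative toy, not code from the paper: `ToyEnv`, its interface, and the trivial `policy` are all invented for demonstration. The point it shows is that even a frozen (non-learning) augmented-state policy must keep observing the reward vector at every deployed step, because its input is the environment state concatenated with the discounted sum of rewards accrued so far.

```python
import numpy as np

class ToyEnv:
    """Hypothetical 2-objective environment, fixed reward each step."""
    num_objectives = 2

    def reset(self):
        self.t = 0
        return np.array([0.0])

    def step(self, action):
        self.t += 1
        reward = np.array([1.0, 0.5])          # two-objective reward vector
        done = self.t >= 3
        return np.array([float(self.t)]), reward, done

def deploy(env, policy, gamma=0.9, max_steps=100):
    """Run a frozen augmented-state policy; no learning happens here."""
    state = env.reset()
    accrued = np.zeros(env.num_objectives)     # discounted sum of past rewards
    discount = 1.0
    for _ in range(max_steps):
        # The policy is conditioned on the augmented state: the current
        # environment state concatenated with the accrued reward. Keeping
        # `accrued` up to date requires observing the reward signal (or a
        # proxy) at every step after deployment.
        augmented = np.concatenate([state, accrued])
        action = policy(augmented)
        state, reward, done = env.step(action)
        accrued = accrued + discount * reward  # needs post-deployment reward
        discount *= gamma
        if done:
            break
    return accrued

final = deploy(ToyEnv(), policy=lambda aug: 0)
```

If the reward signal were unavailable after deployment, `accrued` could not be updated and the policy's input would be ill-defined, which is exactly the operational requirement the note highlights.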