PA2D-MORL: Pareto Ascent Directional Decomposition based Multi-Objective Reinforcement Learning

arXiv cs.AI / 3/23/2026

📰 News · Models & Research

Key Points

  • The PA2D-MORL method introduces Pareto ascent directional decomposition to select scalarization weights and guide the multi-objective policy gradient for joint improvements across objectives.
  • It employs an evolutionary framework to optimize multiple policies in parallel, enabling exploration of Pareto frontier directions and diverse solutions.
  • A Pareto adaptive fine-tuning step is proposed to enhance the density and spread of the Pareto frontier approximation.
  • Experimental results on multi-objective robot control tasks show the method outperforms state-of-the-art algorithms in both quality and stability of the outcomes.
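In multi-objective gradient methods, a "Pareto ascent direction" of the kind named in the first key point is commonly obtained as the minimum-norm element of the convex hull of the per-objective gradients; the convex weights then double as scalarization weights. The paper's exact construction may differ, so the sketch below is only an illustration for two objectives using the standard MGDA-style closed form (the function name and interface are our own):

```python
import numpy as np

def pareto_ascent_direction(g1, g2):
    """Min-norm convex combination of two objective gradients
    (MGDA-style closed form, shown here as an illustration).
    When nonzero, the result has a nonnegative inner product with
    both gradients, so a small step along it improves both
    objectives jointly rather than trading one off for the other."""
    diff = g1 - g2
    denom = float(diff @ diff)
    if denom == 0.0:
        alpha = 0.5  # gradients coincide; any convex weight works
    else:
        # Minimize ||alpha*g1 + (1-alpha)*g2||^2 over alpha in [0, 1]
        alpha = float(np.clip((g2 @ (g2 - g1)) / denom, 0.0, 1.0))
    weights = np.array([alpha, 1.0 - alpha])  # usable as scalarization weights
    direction = alpha * g1 + (1.0 - alpha) * g2
    return weights, direction

# Conflicting gradients: a fixed-weight scalarization could favor one
# objective; the min-norm direction improves both at once.
g1 = np.array([1.0, 0.0])
g2 = np.array([0.0, 1.0])
w, d = pareto_ascent_direction(g1, g2)
print(w, d)            # -> [0.5 0.5] [0.5 0.5]
print(d @ g1, d @ g2)  # both nonnegative: a joint ascent direction
```

This is the single-policy building block; the method additionally maintains multiple such policies in an evolutionary loop and fine-tunes them to densify the frontier approximation.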

Abstract

Multi-objective reinforcement learning (MORL) provides an effective solution for decision-making problems involving conflicting objectives. However, achieving high-quality approximations to the Pareto policy set remains challenging, especially in complex tasks with continuous or high-dimensional state-action spaces. In this paper, we propose the Pareto Ascent Directional Decomposition based Multi-Objective Reinforcement Learning (PA2D-MORL) method, which constructs an efficient scheme for multi-objective problem decomposition and policy improvement, leading to a superior approximation of the Pareto policy set. The proposed method leverages the Pareto ascent direction to select the scalarization weights and compute the multi-objective policy gradient, which determines the policy optimization direction and ensures joint improvement on all objectives. Meanwhile, multiple policies are selectively optimized under an evolutionary framework to approximate the Pareto frontier from different directions. Additionally, a Pareto adaptive fine-tuning approach is applied to enhance the density and spread of the Pareto frontier approximation. Experiments on various multi-objective robot control tasks show that the proposed method clearly outperforms the current state-of-the-art algorithm in terms of both quality and stability of the outcomes.