Taming the Adversary: Stable Minimax Deep Deterministic Policy Gradient via Fractional Objectives

arXiv cs.LG / 3/13/2026

Key Points

  • MMDDPG (Minimax Deep Deterministic Policy Gradient with fractional objectives) is proposed to learn disturbance-resilient policies for continuous control tasks.
  • The training is formulated as a minimax game between a user policy and an adversarial disturbance policy, where the user minimizes the objective and the adversary maximizes it.
  • A fractional objective is introduced to balance task performance and disturbance magnitude, preventing overly aggressive disturbances and stabilizing learning.
  • Experimental results in MuJoCo demonstrate significantly improved robustness against external force perturbations and model parameter variations.
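The summary does not give the exact form of the fractional objective, but the idea of "balancing task performance and disturbance magnitude" can be sketched as a ratio: the adversary's gain from raising the task cost is discounted by the size of the disturbance it applies, so unbounded disturbances stop paying off. The function below is a hypothetical illustration of that idea, not the paper's actual formula; `eps` is an assumed regularizer.

```python
def fractional_objective(task_cost, disturbance, eps=1.0):
    """Hypothetical ratio-style objective: task cost divided by
    disturbance magnitude (plus a small constant eps).

    The user policy minimizes this value; the adversary maximizes it.
    Because the denominator grows with the disturbance norm, a larger
    disturbance must produce a proportionally larger task cost to be
    worthwhile, which discourages overly aggressive disturbances.
    """
    magnitude = sum(d * d for d in disturbance) ** 0.5
    return task_cost / (eps + magnitude)

# A mild disturbance with decent cost scores higher for the adversary
# than a huge disturbance with only slightly higher cost:
small = fractional_objective(task_cost=10.0, disturbance=[0.1, 0.0])
large = fractional_objective(task_cost=12.0, disturbance=[5.0, 0.0])
```

Here `small` exceeds `large` even though the second disturbance inflicts more raw cost, which is the stabilizing effect the key points describe.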

Abstract

Reinforcement learning (RL) has achieved remarkable success in a wide range of control and decision-making tasks. However, RL agents often exhibit unstable or degraded performance when deployed in environments subject to unexpected external disturbances and model uncertainties. Consequently, ensuring reliable performance under such conditions remains a critical challenge. In this paper, we propose minimax deep deterministic policy gradient (MMDDPG), a framework for learning disturbance-resilient policies in continuous control tasks. The training process is formulated as a minimax optimization problem between a user policy and an adversarial disturbance policy. In this problem, the user learns a robust policy that minimizes the objective function, while the adversary generates disturbances that maximize it. To stabilize this interaction, we introduce a fractional objective that balances task performance and disturbance magnitude. This objective prevents excessively aggressive disturbances and promotes robust learning. Experimental evaluations in MuJoCo environments demonstrate that the proposed MMDDPG achieves significantly improved robustness against both external force perturbations and model parameter variations.
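The minimax training loop described in the abstract, in which the user descends and the adversary ascends the same objective, can be illustrated on a toy convex-concave saddle function. This is not the paper's algorithm (which uses deterministic policy gradients over neural policies); it only shows the alternating descent/ascent structure on the assumed scalar function f(u, a) = (u - 1)^2 - 0.5 (a - u)^2.

```python
def train(steps=2000, lr=0.05):
    """Gradient descent-ascent on f(u, a) = (u - 1)**2 - 0.5 * (a - u)**2.

    u plays the role of the user's parameter (minimizer) and a the
    adversary's parameter (maximizer). The saddle point is (1, 1).
    """
    u, a = 0.0, 0.0
    for _ in range(steps):
        grad_u = 2 * (u - 1) + (a - u)  # df/du
        grad_a = -(a - u)               # df/da
        u -= lr * grad_u                # user minimizes f
        a += lr * grad_a                # adversary maximizes f
    return u, a

u_final, a_final = train()  # both converge near the saddle point (1, 1)
```

With a convex-concave objective like this one, the alternating updates spiral into the saddle point; the fractional objective above plays an analogous role in keeping the adversary's side of the game well-behaved.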