Distributional Value Estimation Without Target Networks for Robust Quality-Diversity

arXiv cs.LG / 4/23/2026

📰 News · Models & Research

Key Points

  • The paper presents QDHUAC, a target-free, distributional reinforcement learning algorithm designed to improve Quality-Diversity (QD) search for complex locomotion tasks.
  • Standard high Update-to-Data (UTD) ratio methods often rely on target networks for training stability, but the authors argue that maintaining these networks adds a major computational bottleneck, limiting practical use in resource-heavy QD settings.
  • QDHUAC aims to provide dense, low-variance gradient signals to enable stable training at high UTD ratios and to run Dominated Novelty Search more sample-efficiently.
  • Experiments on high-dimensional Brax environments show stable high-UTD training with competitive coverage and fitness, using an order of magnitude fewer environment steps than baseline approaches.
  • The authors conclude that pairing target-free distributional critics with dominance-based selection can be a key ingredient for the next generation of sample-efficient evolutionary reinforcement learning algorithms.
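As background for the terminology in the points above: a UTD ratio of k means k gradient updates per collected environment transition, and "target-free" means the bootstrap target is computed from the online critic rather than a frozen copy. The tabular toy below makes both ideas concrete; it is purely illustrative (the function name `high_utd_td_learning` is ours, and the paper's QDHUAC uses distributional critics on continuous control, not tabular TD).

```python
import numpy as np

def high_utd_td_learning(transitions, n_states, utd_ratio=8, gamma=0.99, lr=0.05):
    """Tabular TD(0) with a high update-to-data (UTD) ratio and no target
    network: bootstrap targets come from the current (online) value table V,
    not a frozen copy of it. Illustrative sketch only."""
    V = np.zeros(n_states)
    buffer = []
    for (s, r, s_next, done) in transitions:   # one "environment step" each
        buffer.append((s, r, s_next, done))
        # UTD ratio > 1: several value updates per collected transition.
        for _ in range(utd_ratio):
            i = np.random.randint(len(buffer))
            bs, br, bs_next, bdone = buffer[i]
            # Target-free bootstrap: uses the live V being trained.
            target = br + (0.0 if bdone else gamma * V[bs_next])
            V[bs] += lr * (target - V[bs])
    return V
```

With a target network one would bootstrap from a periodically synced copy of V instead; the paper's claim is that a distributional critic can supply gradients stable enough to skip that copy even at high UTD ratios.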

Abstract

Quality-Diversity (QD) algorithms excel at discovering diverse repertoires of skills, but they suffer from poor sample efficiency and often require tens of millions of environment steps to solve complex locomotion tasks. Recent advances in Reinforcement Learning (RL) have shown that high Update-to-Data (UTD) ratios accelerate Actor-Critic learning. While effective, standard high-UTD algorithms typically rely on target networks to stabilise training. This requirement introduces a significant computational bottleneck, rendering them impractical for resource-intensive QD tasks where sample efficiency and rapid population adaptation are critical. In this paper, we introduce QDHUAC, a sample-efficient, target-free, distributional QD-RL algorithm that provides dense, low-variance gradient signals, enabling high-UTD training for Dominated Novelty Search. We demonstrate that our method trains stably at high UTD ratios, achieving competitive coverage and fitness on high-dimensional Brax environments with an order of magnitude fewer environment steps than baselines. Our results suggest that combining target-free distributional critics with dominance-based selection is a key enabler for the next generation of sample-efficient evolutionary RL algorithms.
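The abstract credits the distributional critic with dense, low-variance gradient signals. A standard way to build such a critic is the categorical (C51-style) representation, where the critic outputs a probability distribution over a fixed grid of return values. The sketch below shows the textbook projection step of that approach, as background only; the paper's actual critic may be parameterised differently.

```python
import numpy as np

def project_categorical(probs, reward, gamma, atoms):
    """C51-style categorical projection: shift the return support by the
    Bellman update (r + gamma * z), then redistribute the probability mass
    back onto the fixed grid `atoms`. Generic sketch, not QDHUAC's exact
    critic."""
    v_min, v_max = atoms[0], atoms[-1]
    dz = atoms[1] - atoms[0]
    tz = np.clip(reward + gamma * atoms, v_min, v_max)  # shifted, clipped atoms
    b = (tz - v_min) / dz                               # fractional grid position
    lo = np.floor(b).astype(int)
    hi = np.ceil(b).astype(int)
    out = np.zeros_like(probs)
    # Split each atom's mass between its two neighbouring grid points;
    # the (lo == hi) term keeps mass that lands exactly on an atom.
    np.add.at(out, lo, probs * (hi - b + (lo == hi)))
    np.add.at(out, hi, probs * (b - lo))
    return out
```

The critic's scalar value estimate is then the expectation `probs @ atoms`, and the projected distribution serves as the cross-entropy target for the next update, giving a gradient on every atom rather than a single scalar TD error.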