PACE: Parameter Change for Unsupervised Environment Design

arXiv cs.LG / 5/5/2026

Key Points

  • Unsupervised Environment Design (UED) can improve reinforcement learning generalization, but it depends on reliable evaluation signals that current proxy-based methods struggle to provide.
  • The proposed Parameter Change Environment Design (PACE) evaluates environments by measuring the policy parameter change they induce during training, aligning the evaluation with realized learning progress.
  • PACE uses a first-order approximation of the policy optimization objective, turning environment value into a quantity proportional to the squared L2 norm of the induced parameter update, which reduces variance and avoids extra rollouts (see the derivation sketch after this list).
  • Experiments on MiniGrid and Craftax show PACE improves over existing UED baselines, yielding a higher interquartile mean (IQM) and a smaller Optimality Gap in out-of-distribution evaluations (e.g., an IQM of 96.4% and an Optimality Gap of 17.2% on MiniGrid).
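
As a sketch of the first-order argument behind the previous bullet (writing $J$ for the policy objective, $\theta$ for the policy parameters, and $\alpha$ for the learning rate; this notation is ours, not quoted from the paper): one gradient step on an environment gives

$$
J(\theta') \approx J(\theta) + \nabla_\theta J(\theta)^\top (\theta' - \theta), \qquad \theta' = \theta + \alpha \nabla_\theta J(\theta),
$$

so the realized improvement is, to first order,

$$
J(\theta') - J(\theta) \approx \alpha \left\lVert \nabla_\theta J(\theta) \right\rVert_2^2 = \frac{1}{\alpha} \left\lVert \theta' - \theta \right\rVert_2^2,
$$

i.e., proportional to the squared L2 norm of the induced parameter update.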

Abstract

Unsupervised Environment Design (UED) offers a promising paradigm for improving reinforcement learning generalization by adaptively shaping training environments, but it requires reliable environment evaluation to remain effective. However, existing UED methods evaluate environments using indirect proxy signals such as regret, value-based errors, or Monte Carlo estimates, which suffer from bias, high variance, or substantial computational overhead and fail to reflect the agent's realized learning progress. To address these limitations, we propose Parameter Change Environment Design (PACE), which evaluates an environment through the policy parameter change induced by training on that environment, directly grounding environment selection in realized learning progress. Specifically, PACE assigns environment value using a first-order approximation of the policy optimization objective, where the improvement induced by an environment is proportional to the squared L2 norm of the corresponding parameter update, enabling low-variance and computationally efficient evaluation without additional rollouts. Experiments on MiniGrid and Craftax show that PACE consistently outperforms established UED baselines, achieving a higher IQM and a smaller Optimality Gap in OOD evaluations, including an IQM of 96.4% and an Optimality Gap of 17.2% on MiniGrid.
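
To make the mechanism concrete, the following is a minimal Python sketch of how PACE-style scoring and curation could fit together. The toy gradient step, the softmax sampler, and all names here (`pace_score`, `sample_env`, the environment labels) are illustrative assumptions rather than the paper's implementation; only the scoring rule itself, the squared L2 norm of the induced parameter change, follows the description above.

```python
import numpy as np

def pace_score(theta_before: np.ndarray, theta_after: np.ndarray) -> float:
    """PACE's evaluation signal: the squared L2 norm of the parameter
    change an environment induces during training, a first-order proxy
    for the realized improvement in the policy objective."""
    delta = theta_after - theta_before
    return float(delta @ delta)

def sample_env(scores, rng, temperature=1.0):
    """Pick a training environment with probability increasing in its
    PACE score (softmax is one plausible curation rule; the paper's
    exact sampler is not specified in this summary)."""
    z = np.asarray(scores, dtype=float) / temperature
    p = np.exp(z - z.max())  # subtract the max for numerical stability
    p /= p.sum()
    return int(rng.choice(len(scores), p=p))

# Toy demonstration: a single gradient-ascent step per environment stands
# in for a real policy update (e.g., one PPO epoch on that environment).
rng = np.random.default_rng(0)
theta = rng.normal(size=16)  # flattened policy parameters
lr = 0.1

env_grads = {  # stand-in per-environment gradients of the objective
    "hard_but_learnable": rng.normal(size=16) * 2.0,
    "already_solved": rng.normal(size=16) * 0.05,
    "moderate": rng.normal(size=16) * 0.5,
}

scores = []
for name, grad in env_grads.items():
    theta_after = theta + lr * grad  # the update this environment induces
    scores.append(pace_score(theta, theta_after))

names = list(env_grads)
picks = [names[sample_env(scores, rng)] for _ in range(1000)]
print(max(set(picks), key=picks.count))  # "hard_but_learnable" wins most often
```

In a real UED loop, `theta_after` would come from the actual policy update performed on the candidate environment, so the score reuses computation the trainer already does; that is the sense in which evaluation requires no additional rollouts.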