Does "Do Differentiable Simulators Give Better Policy Gradients?" Give Better Policy Gradients?

arXiv cs.RO / 4/21/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper investigates policy gradient reinforcement learning settings where differentiable simulators can enable fast 1st-order gradient estimates, but discontinuous dynamics introduce bias that hurts performance.
  • It finds that prior fixes based on confidence intervals around the noisy, derivative-free REINFORCE estimator often require task-specific hyperparameter tuning and suffer from poor sample efficiency.
  • The authors propose DDCG, a lightweight estimator-switching test that detects nonsmooth regions and switches between estimators, achieving robust results with only one hyperparameter and good behavior in small-sample regimes.
  • They also introduce IVW-H for differentiable robotics control tasks, using per-step inverse-variance weighting to stabilize variance without explicit discontinuity detection, leading to strong empirical performance.
  • Overall, the results suggest that while switching estimators can improve robustness in controlled experiments, in real deployments variance control may be the dominant factor for effectiveness.
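The bias the paper targets can be seen in a toy setting. The sketch below (my own illustration, not code from the paper) uses a step "reward" perturbed by Gaussian noise: the 0th-order, REINFORCE-style score-function estimator recovers the true gradient of the smoothed objective, while the 1st-order pathwise derivative of the step function is zero almost everywhere and is therefore biased.

```python
import numpy as np

# Illustrative toy problem (not from the paper): a discontinuous reward
# r = 1[theta + w > 0] with Gaussian perturbation w ~ N(0, sigma^2).
rng = np.random.default_rng(0)
sigma, theta, n = 0.5, 0.3, 200_000

w = rng.normal(0.0, sigma, size=n)
r = (theta + w > 0).astype(float)  # discontinuous in theta

# 0th-order (score-function / REINFORCE-style) estimate of d/dtheta E[r]:
# for x ~ N(theta, sigma^2), the gradient is E[r(x) * (x - theta) / sigma^2].
grad_zeroth = np.mean(r * w / sigma**2)

# 1st-order (pathwise) estimate: dr/dtheta = 0 almost everywhere, so
# differentiating through the step function is biased toward 0.
grad_first = 0.0

# Analytic ground truth: E[r] = Phi(theta / sigma), so the true gradient
# is the N(0, sigma^2) density evaluated at -theta.
true_grad = np.exp(-theta**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

print(grad_zeroth, grad_first, true_grad)
```

The zeroth-order estimate lands near the analytic gradient (about 0.67 here) despite its noise, while the first-order estimate is exactly zero; this is the bias/variance trade-off the paper's estimator-switching test is designed to navigate.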

Abstract

In policy gradient reinforcement learning, access to a differentiable model enables 1st-order gradient estimation that accelerates learning compared to relying solely on derivative-free 0th-order estimators. However, discontinuous dynamics cause bias and undermine the effectiveness of 1st-order estimators. Prior work addressed this bias by constructing a confidence interval around the REINFORCE 0th-order gradient estimator and using these bounds to detect discontinuities. However, the REINFORCE estimator is notoriously noisy, and we find that this method requires task-specific hyperparameter tuning and has low sample efficiency. This paper asks whether such bias is the primary obstacle and what minimal fixes suffice. First, we re-examine standard discontinuous settings from prior work and introduce DDCG, a lightweight test that switches estimators in nonsmooth regions; with a single hyperparameter, DDCG achieves robust performance and remains reliable with small samples. Second, on differentiable robotics control tasks, we present IVW-H, a per-step inverse-variance implementation that stabilizes variance without explicit discontinuity detection and yields strong results. Together, these findings indicate that while estimator switching improves robustness in controlled studies, careful variance control often dominates in practical deployments.
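The abstract does not spell out the IVW-H computation, but generic inverse-variance weighting is standard: combine two unbiased estimates of the same quantity with weights proportional to the inverse of each estimate's variance. The sketch below is a minimal, hypothetical version of that idea for combining per-step 1st- and 0th-order gradient estimates; the function name and details are my assumptions, not the paper's implementation.

```python
import numpy as np

def inverse_variance_combine(g_first, g_zeroth, eps=1e-8):
    """Combine per-sample 1st- and 0th-order gradient estimates,
    weighting each estimator by the inverse of its empirical variance.
    Generic sketch of inverse-variance weighting; NOT the paper's IVW-H."""
    g_first = np.asarray(g_first)
    g_zeroth = np.asarray(g_zeroth)
    v1 = g_first.var(axis=0) + eps   # empirical variance across samples
    v0 = g_zeroth.var(axis=0) + eps
    w1 = (1.0 / v1) / (1.0 / v1 + 1.0 / v0)  # weight on the 1st-order mean
    return w1 * g_first.mean(axis=0) + (1.0 - w1) * g_zeroth.mean(axis=0)

# Demo: two noisy sample sets estimating the same true gradient (1.0).
rng = np.random.default_rng(0)
g1 = 1.0 + 0.01 * rng.normal(size=5_000)  # low-variance 1st-order samples
g0 = 1.0 + 1.00 * rng.normal(size=5_000)  # high-variance 0th-order samples
combined = inverse_variance_combine(g1, g0)
print(combined)
```

Because the weights adapt to the observed variance, the low-variance estimator dominates automatically, which matches the abstract's point that variance can be stabilized without explicitly detecting discontinuities.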