Tune to Learn: How Controller Gains Shape Robot Policy Learning

arXiv cs.RO / 4/6/2026


Key Points

  • The paper argues that when state-conditioned robot policies are executed through position controllers, controller gains should be chosen based on how learnable the resulting closed-loop system is, rather than only on the desired task compliance or stiffness.
  • It systematically studies how position controller gains affect behavior cloning, reinforcement learning from scratch, and sim-to-real transfer across multiple tasks and robot embodiments.
  • The results show behavior cloning performs best under compliant and overdamped gain regimes (see the sketch after this list), while reinforcement learning can succeed across gain regimes if hyperparameters are tuned appropriately.
  • For sim-to-real transfer, both stiff and overdamped gain regimes can reduce transfer performance, indicating a tradeoff between learnability and real-world robustness.
  • Overall, the optimal gain-setting strategy depends on the learning paradigm used, not solely on the desired low-level control characteristics.
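
The gain regimes named in these points have a standard second-order-system reading for a PD position controller. The following is a minimal sketch, assuming a single joint modeled as a unit inertia; the `gain_regime` helper and the kp threshold separating "compliant" from "stiff" are illustrative choices, not values from the paper.

```python
import numpy as np

def pd_torque(q, qd, q_des, kp, kd, qd_des=0.0):
    """Joint-space PD position control: tau = kp*(q_des - q) + kd*(qd_des - qd)."""
    return kp * (q_des - q) + kd * (qd_des - qd)

def gain_regime(kp, kd, inertia=1.0, kp_stiff_threshold=100.0):
    """Classify one joint's gains via its closed-loop second-order dynamics.

    For m*qdd = kp*(q_des - q) - kd*qd, the natural frequency is
    wn = sqrt(kp/m) and the damping ratio is zeta = kd / (2*sqrt(kp*m)).
    The compliant/stiff threshold on kp is illustrative, not from the paper.
    """
    zeta = kd / (2.0 * np.sqrt(kp * inertia))
    stiffness = "stiff" if kp >= kp_stiff_threshold else "compliant"
    if np.isclose(zeta, 1.0):
        damping = "critically damped"
    elif zeta > 1.0:
        damping = "overdamped"
    else:
        damping = "underdamped"
    return stiffness, damping

# kp=25, kd=15 on a unit inertia gives zeta = 15 / (2*sqrt(25)) = 1.5:
# the compliant/overdamped regime the paper reports as best for behavior cloning.
print(gain_regime(kp=25.0, kd=15.0))  # -> ('compliant', 'overdamped')
```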

Abstract

Position controllers have become the dominant interface for executing learned manipulation policies. Yet a critical design decision remains understudied: how should we choose controller gains for policy learning? The conventional wisdom is to select gains based on desired task compliance or stiffness. However, this logic breaks down when controllers are paired with state-conditioned policies: effective stiffness emerges from the interplay between learned reactions and control dynamics, not from gains alone. We argue that gain selection should instead be guided by learnability: how amenable different gain settings are to the learning algorithm in use. In this work, we systematically investigate how position controller gains affect three core components of modern robot learning pipelines: behavior cloning, reinforcement learning from scratch, and sim-to-real transfer. Through extensive experiments across multiple tasks and robot embodiments, we find that: (1) behavior cloning benefits from compliant and overdamped gain regimes, (2) reinforcement learning can succeed across all gain regimes given compatible hyperparameter tuning, and (3) sim-to-real transfer is harmed by stiff and overdamped gain regimes. These findings reveal that optimal gain selection depends not on the desired task behavior, but on the learning paradigm employed. Project website: https://younghyopark.me/tune-to-learn
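
To make the abstract's central argument concrete, that effective stiffness emerges from the interplay between the learned policy and the controller rather than from the gains alone, here is a hedged, self-contained sketch. The unit-inertia plant, explicit-Euler integration, and the `soft_policy` example are assumptions for illustration, not the paper's experimental setup.

```python
import numpy as np

def rollout_closed_loop(policy, q0, kp, kd, dt=0.002, decimation=10, steps=1000):
    """Roll out a unit-inertia joint driven by a position-target policy via a PD loop.

    The policy emits a position target every `decimation` control steps; the PD
    controller tracks that target in between. Illustrative setup only.
    """
    q, qd = q0, 0.0
    q_des = policy(q, qd)
    traj = []
    for t in range(steps):
        if t % decimation == 0:
            q_des = policy(q, qd)                 # policy reacts to the current state
        tau = kp * (q_des - q) + kd * (0.0 - qd)  # PD torque toward the moving target
        qd += tau * dt                            # unit inertia: qdd = tau
        q += qd * dt                              # explicit Euler integration
        traj.append(q)
    return np.asarray(traj)

# A policy that places its target only halfway to the goal at 1.0 halves the
# effective stiffness: tau = kp*0.5*(1 - q) - kd*qd, so the closed loop behaves
# as if kp were 50 even though the controller gain is 100.
soft_policy = lambda q, qd: q + 0.5 * (1.0 - q)
traj = rollout_closed_loop(soft_policy, q0=0.0, kp=100.0, kd=20.0)
print(round(traj[-1], 3))  # approaches 1.0 with the dynamics of a softer controller
```

Because the state-conditioned policy can shift its target in response to the state, as above, the stiffness the task actually experiences is a property of the whole closed loop, which is why the paper evaluates gains by learnability rather than by nominal compliance.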