Deep Reinforcement Learning for Robotic Manipulation under Distribution Shift with Bounded Extremum Seeking

arXiv cs.RO / 4/2/2026


Key Points

  • The paper addresses a common limitation of reinforcement learning in robotics: performance drops when deployment conditions differ from the training distribution, especially in contact-rich manipulation tasks like pushing and pick-and-place.
  • It proposes a hybrid control approach that combines a deep deterministic policy gradient (DDPG) learned policy with a bounded extremum seeking (ES) component during deployment.
  • The RL policy generates fast manipulation behavior, while the bounded ES component maintains robustness to time variations and other shifts when the system moves out of distribution.
  • Experiments evaluate the controller under multiple out-of-distribution scenarios, including time-varying goals and spatially varying friction patches.
  • The overall contribution is a method to improve robustness of learned robotic manipulation policies under distribution shift without retraining for each deployment variation.
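
The bounded ES component described above can be illustrated with a minimal sketch. The snippet below implements a Scheinker-style bounded extremum-seeking update on a toy scalar objective; it is not the paper's implementation, and the gains `alpha`, `omega`, `k`, the cost `J`, and the scalar setting are all assumptions chosen for illustration.

```python
import math

def bounded_es_step(u, t, J, dt, alpha=0.5, omega=50.0, k=2.0):
    """One bounded extremum-seeking step (Scheinker-style form, assumed):
        du/dt = sqrt(alpha * omega) * cos(omega * t + k * J(u))
    The cost J enters only through the phase of the dither, so the
    update rate stays bounded by sqrt(alpha * omega) no matter how
    large the measured cost becomes."""
    return u + dt * math.sqrt(alpha * omega) * math.cos(omega * t + k * J(u))

# Toy cost: squared distance to an unknown optimum u* = 1.5, standing in
# for a task cost measured online at deployment (goal-tracking error, etc.).
J = lambda u: (u - 1.5) ** 2

u, dt = 0.0, 1e-3
for i in range(50000):          # ~50 s of simulated time
    u = bounded_es_step(u, i * dt, J, dt)
# On average the update descends the gradient of J, so u settles into a
# small dither neighborhood of the optimum without ever knowing J's form.
```

In the hybrid controller, a correction of this kind would be applied on top of the DDPG action rather than alone, so the learned policy supplies fast nominal behavior and the bounded dither adapts it when the measured cost drifts out of distribution.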

Abstract

Reinforcement learning has shown strong performance in robotic manipulation, but learned policies often degrade when test conditions differ from the training distribution. This limitation is especially important in contact-rich tasks such as pushing and pick-and-place, where changes in goals, contact conditions, or robot dynamics can drive the system out of distribution at inference time. In this paper, we investigate a hybrid controller that combines reinforcement learning with bounded extremum seeking (ES) to improve robustness under such conditions. In the proposed approach, deep deterministic policy gradient (DDPG) policies are trained under standard conditions on robotic pushing and pick-and-place tasks, and are then combined with bounded ES during deployment. The RL policy provides fast manipulation behavior, while bounded ES ensures robustness of the overall controller to time variations when operating conditions depart from those seen during training. The resulting controller is evaluated under several out-of-distribution settings, including time-varying goals and spatially varying friction patches.