When Policies Cannot Be Retrained: A Unified Closed-Form View of Post-Training Steering in Offline Reinforcement Learning

arXiv cs.LG / 4/28/2026


Key Points

  • The paper studies how to steer a frozen offline-RL actor toward new deployment-time objectives when retraining is not possible, using Product-of-Experts (PoE) composition with a goal-conditioned prior.
  • The authors find that PoE-style steering shows graceful degradation under degraded or random priors, whereas additive or prior-only adaptation can collapse in performance.
  • They derive a closed-form equivalence: for diagonal-Gaussian actors and priors, PoE with coefficient α yields the same deterministic policy as KL-regularized adaptation with β = α/(1-α), with the posterior covariances differing only by a global scalar factor (a short derivation is sketched after this list).
  • Empirically, across multiple D4RL and AntMaze settings, the results point to an “actor-competence ceiling”: medium-expert frozen actors remain in the HURT category at every tested α, and behavior-cloned frozen actors on AntMaze yield zero success under every composition rule.
  • The work frames PoE and KL-regularized adaptation as essentially the same actor-anchored safety mechanism for deployment-time steering rather than a universal performance booster.
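To make the stated identity concrete, here is a minimal worked derivation. It assumes that PoE composition means the tempered product q ∝ π^(1-α) p^α and that KL-regularized adaptation maximizes the expected log-prior with a KL penalty toward the frozen actor; the paper's exact parameterization may differ, so treat this as a reconstruction consistent with the stated result rather than the authors' derivation.

```latex
% Assumed forms (illustrative, not taken verbatim from the paper):
% frozen actor \pi(a) = N(\mu_a, \Lambda_a^{-1}) and goal-conditioned prior
% p(a) = N(\mu_p, \Lambda_p^{-1}), both with diagonal precisions.
\[
  q_{\mathrm{PoE}}(a) \propto \pi(a)^{1-\alpha}\, p(a)^{\alpha},
  \qquad
  q_{\mathrm{KL}} = \arg\max_{q}\; \mathbb{E}_{q}[\log p(a)] - \tfrac{1}{\beta}\,\mathrm{KL}\!\left(q \,\middle\|\, \pi\right)
  \;\Longrightarrow\; q_{\mathrm{KL}}(a) \propto \pi(a)\, p(a)^{\beta}.
\]
\[
  \Lambda_{\mathrm{PoE}} = (1-\alpha)\Lambda_a + \alpha \Lambda_p,
  \qquad
  \mu_{\mathrm{PoE}} = \Lambda_{\mathrm{PoE}}^{-1}\bigl[(1-\alpha)\Lambda_a \mu_a + \alpha \Lambda_p \mu_p\bigr],
\]
\[
  \Lambda_{\mathrm{KL}} = \Lambda_a + \beta \Lambda_p,
  \qquad
  \mu_{\mathrm{KL}} = \Lambda_{\mathrm{KL}}^{-1}\bigl[\Lambda_a \mu_a + \beta \Lambda_p \mu_p\bigr].
\]
% Dividing the PoE precision and mean numerator by (1-\alpha) and setting
% \beta = \alpha/(1-\alpha) gives \mu_{\mathrm{PoE}} = \mu_{\mathrm{KL}}
% (the same deterministic policy), while
% \Sigma_{\mathrm{PoE}} = \tfrac{1}{1-\alpha}\,\Sigma_{\mathrm{KL}},
% i.e. the covariances differ only by a global scalar factor.
```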

Abstract

Offline reinforcement learning (RL) can learn effective policies from fixed datasets, but deployment objectives may change after training, and in many applications the trained actor cannot be retrained because of data, cost, or governance constraints. We study deployment-time adaptation for frozen offline actors using Product-of-Experts (PoE) composition with a goal-conditioned prior. Our main practical finding is graceful degradation rather than universal performance gain: under degraded or random priors, precision-weighted composition remains anchored to the frozen actor, while additive and prior-only adaptation collapse, and a KL-budget selector often recovers a near-oracle operating point. We also make explicit a closed-form identity in the frozen-actor setting: for diagonal-Gaussian actors and priors, PoE with coefficient α yields the same deterministic policy as KL-regularized adaptation with β = α/(1 - α), with posterior covariances differing only by a global scalar factor. Empirically, across four D4RL environments (3,900 MuJoCo episodes), we observe a 4/5/3 HELP/FROZEN/HURT split. Extending the analysis to six harder cells and two AntMaze diagnostics reveals an actor-competence ceiling: medium-expert remains HURT in all 9 cells at every tested α, while AntMaze with a behavior-cloned frozen actor yields zero success for all composition rules. Overall, PoE and KL-regularized adaptation are best viewed as a single actor-anchored safety mechanism for deployment-time steering.
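As a numerical sanity check of the same identity, the sketch below composes a diagonal-Gaussian frozen actor with a goal-conditioned prior under both rules and confirms that the means coincide while the covariances differ only by the scalar 1/(1-α). The function names, the tempered-PoE form, and the specific numbers are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (assumed forms, not the paper's implementation):
# precision-weighted PoE vs. KL-regularized composition of diagonal Gaussians.
import numpy as np

def poe_compose(mu_a, var_a, mu_p, var_p, alpha):
    """Tempered product q ∝ pi^(1-alpha) * prior^alpha for diagonal Gaussians."""
    prec = (1 - alpha) / var_a + alpha / var_p                        # composed precision
    mu = ((1 - alpha) * mu_a / var_a + alpha * mu_p / var_p) / prec
    return mu, 1.0 / prec

def kl_regularized(mu_a, var_a, mu_p, var_p, beta):
    """Closed form of max_q E_q[log prior] - (1/beta) KL(q || pi), i.e. q ∝ pi * prior^beta."""
    prec = 1.0 / var_a + beta / var_p
    mu = (mu_a / var_a + beta * mu_p / var_p) / prec
    return mu, 1.0 / prec

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim, alpha = 6, 0.3                                               # illustrative action dim and coefficient
    mu_a, var_a = rng.normal(size=dim), rng.uniform(0.1, 1.0, dim)    # frozen actor mean/variance
    mu_p, var_p = rng.normal(size=dim), rng.uniform(0.1, 1.0, dim)    # goal-conditioned prior mean/variance

    beta = alpha / (1 - alpha)                                        # the stated mapping
    mu_poe, var_poe = poe_compose(mu_a, var_a, mu_p, var_p, alpha)
    mu_kl, var_kl = kl_regularized(mu_a, var_a, mu_p, var_p, beta)

    assert np.allclose(mu_poe, mu_kl)                                 # identical deterministic policy (means match)
    assert np.allclose(var_poe, var_kl / (1 - alpha))                 # covariances differ by a global scalar
    print("means match; covariance ratio =", 1 / (1 - alpha))
```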