Preserve Support, Not Correspondence: Dynamic Routing for Offline Reinforcement Learning

arXiv cs.AI / 4/27/2026

💬 Opinion · Models & Research

Key Points

  • The paper addresses a core limitation of one-step offline reinforcement learning: keeping policy improvement aligned with actions supported by the dataset while still optimizing toward higher critic (Q) values.
  • It argues that existing one-step extraction methods can over-constrain learning by asking a single student output to satisfy both the “higher Q” and “stay near the paired endpoint” objectives; when these disagree, the loss settles on a compromise even though a better, locally supported action may exist nearby.
  • The proposed method, DROL, is a latent-conditioned one-step actor that samples K candidate actions and performs top-1 dynamic routing by assigning each dataset action to its nearest candidate.
  • Training updates only the routed “winner” with a combination of Behavior Cloning and critic guidance, and because routing is recomputed as the candidate geometry changes, ownership of supported regions can shift during learning (see the sketch after this list).
  • Experiments on OGBench and D4RL show DROL is competitive with a one-step FQL baseline, improving many OGBench task groups while maintaining strong performance on AntMaze and Adroit.
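To make the routing mechanics concrete, here is a minimal PyTorch sketch of one training update, under stated assumptions: a deterministic latent-conditioned actor, a frozen pretrained critic, squared-error distances for routing, and illustrative names (`OneStepActor`, `Critic`, `routed_actor_loss`, `K`, `lam_q`) that are not drawn from the paper's code.

```python
# Minimal sketch of DROL-style top-1 dynamic routing, assuming a deterministic
# latent-conditioned actor and a frozen critic. Module names, K, and lam_q are
# illustrative stand-ins, not the paper's released implementation.
import torch
import torch.nn as nn

class OneStepActor(nn.Module):
    """Maps (state, latent) to an action in a single forward pass."""
    def __init__(self, state_dim, act_dim, latent_dim=4, hidden=256):
        super().__init__()
        self.latent_dim = latent_dim
        self.net = nn.Sequential(
            nn.Linear(state_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),  # bounded actions
        )

    def forward(self, s, z):
        return self.net(torch.cat([s, z], dim=-1))

class Critic(nn.Module):
    """Q(s, a) head; assumed pretrained and held fixed for this sketch."""
    def __init__(self, state_dim, act_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1)).squeeze(-1)

def routed_actor_loss(actor, critic, states, dataset_actions, K=8, lam_q=1.0):
    """One routed update: each dataset action trains only its nearest candidate."""
    B = states.shape[0]
    # Sample K latents per state from a bounded prior (uniform on [-1, 1] here).
    z = torch.rand(B, K, actor.latent_dim, device=states.device) * 2 - 1
    s = states.unsqueeze(1).expand(-1, K, -1)
    candidates = actor(s, z)                                    # (B, K, act_dim)

    # Top-1 routing: assign each dataset action to its nearest candidate.
    # Routing is recomputed every step, so ownership of a supported region
    # can shift across candidates as their geometry changes.
    dists = (candidates - dataset_actions.unsqueeze(1)).pow(2).sum(-1)  # (B, K)
    winner = dists.argmin(dim=1)                                # (B,)
    routed = candidates[torch.arange(B), winner]                # (B, act_dim)

    # Update only the winner: behavior cloning toward the dataset action plus
    # critic guidance toward higher Q. The other K - 1 candidates receive no
    # gradient from this sample.
    bc_loss = (routed - dataset_actions).pow(2).sum(-1).mean()
    q_loss = -critic(states, routed).mean()
    return bc_loss + lam_q * q_loss
```

Because only the nearest candidate is pulled toward each dataset action, the remaining candidates are free to cover other supported modes rather than averaging across them, which is the behavior the key points attribute to dynamic routing.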

Abstract

One-step offline RL actors are attractive because they avoid backpropagating through long iterative samplers and keep inference cheap, but they still have to improve under a critic without drifting away from actions that the dataset can support. In recent one-step extraction pipelines, a strong iterative teacher provides one target action for each latent draw, and the same student output is asked to do both jobs: move toward higher Q and stay near that paired endpoint. If those two directions disagree, the loss resolves them as a compromise on that same sample, even when a nearby better action remains locally supported by the data. We propose DROL, a latent-conditioned one-step actor trained with top-1 dynamic routing. For each state, the actor samples K candidate actions from a bounded latent prior, assigns each dataset action to its nearest candidate, and updates only that winner with Behavior Cloning and critic guidance. Because the routing is recomputed from the current candidate geometry, ownership of a supported region can shift across candidates over the course of learning. This gives a one-step actor room to make local improvements that pointwise extraction struggles to capture, while retaining single-pass inference at test time. On OGBench and D4RL, DROL is competitive with the one-step FQL baseline, improving many OGBench task groups while remaining strong on both AntMaze and Adroit. Project page: https://muzhancun.github.io/preprints/DROL.
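As the abstract notes, inference remains single-pass. Below is a hypothetical usage of the sketch above; how DROL chooses among the K candidates at deployment is not specified in this summary, so the example simply draws one latent.

```python
# Hypothetical test-time usage of the OneStepActor sketched above: one bounded
# latent draw and one forward pass, with no iterative sampler to unroll or
# backpropagate through.
actor = OneStepActor(state_dim=17, act_dim=6)  # placeholder dimensions
state = torch.randn(1, 17)                     # placeholder observation
z = torch.rand(1, actor.latent_dim) * 2 - 1    # bounded latent prior
action = actor(state, z)                       # single-pass inference
```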