Towards Efficient and Expressive Offline RL via Flow-Anchored Noise-conditioned Q-Learning

arXiv cs.LG / 5/5/2026

Key Points

  • The paper introduces Flow-Anchored Noise-conditioned Q-Learning (FAN), an offline reinforcement learning (RL) algorithm designed to be both efficient and high-performing.
  • FAN cuts the computational cost of flow policies and distributional critics by using a single flow-policy iteration and a single Gaussian noise sample in place of many iterative sampling steps and quantile evaluations (sketched in code after this list and after the abstract).
  • The authors provide theoretical convergence and performance bounds, arguing that these efficiency-oriented simplifications also improve task performance.
  • Experiments on robotic manipulation and locomotion show FAN achieves state-of-the-art results while substantially lowering both training and inference runtimes.
  • The authors release an implementation on GitHub, enabling others to reproduce and build upon the method.
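
The efficiency trade on the policy side is easiest to see in code. Below is a minimal PyTorch sketch contrasting conventional multi-step flow sampling with a single-step variant; `VelocityField`, `sample_action_multistep`, and `sample_action_onestep` are illustrative names of ours, not the authors' implementation (see their repository for the real one).

```python
import torch
import torch.nn as nn

class VelocityField(nn.Module):
    """Hypothetical velocity network v(a_t, t | s) for a flow policy."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs, a_t, t):
        return self.net(torch.cat([obs, a_t, t], dim=-1))

def sample_action_multistep(v_net, obs, act_dim, steps=10):
    """Conventional flow policy: integrate da/dt = v(a_t, t | s)
    with Euler steps, i.e. `steps` network calls per action."""
    a = torch.randn(obs.shape[0], act_dim)  # a_0 ~ N(0, I)
    dt = 1.0 / steps
    for k in range(steps):
        t = torch.full((obs.shape[0], 1), k * dt)
        a = a + dt * v_net(obs, a, t)
    return a

def sample_action_onestep(v_net, obs, act_dim):
    """Single-iteration shortcut in the spirit of FAN's key point:
    one Euler step across the whole interval, so one network call."""
    a0 = torch.randn(obs.shape[0], act_dim)
    t0 = torch.zeros(obs.shape[0], 1)
    return a0 + v_net(obs, a0, t0)

# Usage: one forward pass per sampled action.
v_net = VelocityField(obs_dim=17, act_dim=6)
obs = torch.randn(4, 17)
actions = sample_action_onestep(v_net, obs, act_dim=6)
```

The runtime saving is mechanical: the multi-step sampler costs `steps` network evaluations per action, while the one-step sampler costs exactly one.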

Abstract

We propose Flow-Anchored Noise-conditioned Q-Learning (FAN), a highly efficient and high-performing offline reinforcement learning (RL) algorithm. Recent work has shown that expressive flow policies and distributional critics improve offline RL performance, but at a high computational cost. Specifically, flow policies require iterative sampling to produce a single action, and distributional critics require computation over multiple samples (e.g., quantiles) to estimate value. To address these inefficiencies while maintaining high performance, we introduce FAN. Our method employs a behavior regularization technique that uses only a single flow-policy iteration and only a single Gaussian noise sample for the distributional critic. Our theoretical analysis of convergence and performance bounds demonstrates that these simplifications not only improve efficiency but also lead to superior task performance. Experiments on robotic manipulation and locomotion tasks demonstrate that FAN achieves state-of-the-art performance while significantly reducing both training and inference runtimes. We release our code at https://github.com/brianlsy98/FAN.
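
On the critic side, the abstract's "single Gaussian noise sample" phrasing suggests a noise-conditioned value network: instead of maintaining many quantile heads, the critic takes a noise draw z as an extra input, so each forward pass produces one sample from an implicit return distribution. Here is a minimal sketch under that assumption; `NoiseCondQ` and `noise_dim` are our hypothetical names, not the paper's API.

```python
import torch
import torch.nn as nn

class NoiseCondQ(nn.Module):
    """Hypothetical noise-conditioned critic Q(s, a, z) with z ~ N(0, I)."""
    def __init__(self, obs_dim: int, act_dim: int, noise_dim: int = 8, hidden: int = 256):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim + noise_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act, z=None):
        if z is None:
            # One Gaussian draw per call: a single sample from the
            # critic's implicit return distribution.
            z = torch.randn(obs.shape[0], self.noise_dim)
        return self.net(torch.cat([obs, act, z], dim=-1))

# Usage: one noise sample, one scalar value estimate per (s, a) pair.
critic = NoiseCondQ(obs_dim=17, act_dim=6)
obs, act = torch.randn(4, 17), torch.randn(4, 6)
q_sample = critic(obs, act)  # shape (4, 1)
```

Where a quantile critic evaluates dozens of quantile outputs per update, this design pays for a single head at a single noise draw, which is where the claimed critic-side speedup would come from.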