Low-Rank Adaptation for Critic Learning in Off-Policy Reinforcement Learning

arXiv cs.LG · April 22, 2026


Key Points

  • The paper addresses overfitting and instability in off-policy reinforcement learning caused by scaling critic networks, especially when using replay-buffer-based bootstrap training.
  • It proposes using Low-Rank Adaptation (LoRA) for off-policy critics by freezing randomly initialized base weights and training only low-rank adapters, effectively restricting updates to a low-dimensional subspace.
  • The method builds on SimbaV2, introducing a LoRA formulation that preserves SimbaV2's hyperspherical normalization geometry during frozen-backbone training.
  • Experiments on DeepMind Control and IsaacLab robotics benchmarks using SAC and FastTD3 show that LoRA yields lower critic loss and better policy performance than alternatives.
  • Overall, the authors argue that adaptive low-rank updates provide a simple, scalable structural regularization technique for critic learning in off-policy RL.
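The freeze-and-adapt idea in the second bullet can be sketched in a few lines: the dense base weight is fixed at random initialization, and only two small low-rank factors are treated as trainable, so every weight update lies in a rank-r subspace. This is a minimal NumPy illustration under assumed conventions (the class name `LoRALinear`, zero-initializing `B`, and the `alpha / rank` scaling are borrowed from common LoRA practice, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

class LoRALinear:
    """Linear layer with a frozen, randomly initialized base weight W.
    Only the low-rank factors A and B are meant to receive gradient
    updates, so the effective weight W + scale * A @ B can only move
    within a rank-`rank` subspace."""

    def __init__(self, d_in, d_out, rank=4, alpha=8.0):
        self.W = rng.standard_normal((d_in, d_out)) / np.sqrt(d_in)  # frozen
        self.A = rng.standard_normal((d_in, rank)) / np.sqrt(d_in)   # trainable
        self.B = np.zeros((rank, d_out))                             # trainable, zero-init
        self.scale = alpha / rank

    def __call__(self, x):
        # Base path plus low-rank adapter path; with B zero-initialized,
        # the layer starts out identical to the frozen random base.
        return x @ self.W + self.scale * (x @ self.A) @ self.B

layer = LoRALinear(d_in=16, d_out=32, rank=4)
x = rng.standard_normal((8, 16))
y = layer(x)  # at init this equals x @ layer.W exactly
```

Zero-initializing `B` is the standard LoRA choice: it makes the adapter a no-op at the start of training, so learning begins from the frozen random backbone rather than from a perturbed one.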

Abstract

Scaling critic capacity is a promising direction for enhancing off-policy reinforcement learning (RL). However, larger critics are prone to overfitting and instability in replay-buffer-based bootstrap training. This paper leverages Low-Rank Adaptation (LoRA) as a structural-sparsity regularizer for off-policy critics. Our approach freezes randomly initialized base matrices and optimizes only low-rank adapters, thereby constraining critic updates to a low-dimensional subspace. Building on SimbaV2, we further develop a LoRA formulation that preserves its hyperspherical normalization geometry under frozen-backbone training. We evaluate our method with SAC and FastTD3 on DeepMind Control locomotion and IsaacLab robotics benchmarks. LoRA consistently achieves lower critic loss during training and stronger policy performance. Extensive experiments demonstrate that adaptive low-rank updates provide a simple, scalable, and effective structural regularization for critic learning in off-policy RL.
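To make the "preserves hyperspherical normalization geometry" claim concrete: SimbaV2 keeps weight vectors normalized to the unit hypersphere, so a compatible LoRA formulation must re-project the combined weight (frozen base plus low-rank update) back onto that sphere rather than letting the adapter drift off it. The sketch below illustrates one plausible way to do this with per-column normalization; it is an assumption-laden illustration of the geometric idea, not the paper's actual formulation (the function name and the column-wise normalization convention are ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def hyperspherical_lora_forward(x, W, A, B, scale):
    """Apply a LoRA-adapted linear map whose effective weight columns
    are re-projected onto the unit hypersphere, so the frozen base plus
    the low-rank update stays on the normalized weight geometry.
    (Illustrative only; SimbaV2's actual normalization differs in detail.)"""
    W_eff = W + scale * A @ B                       # combined weight
    norms = np.linalg.norm(W_eff, axis=0, keepdims=True)
    W_eff = W_eff / np.maximum(norms, 1e-8)         # unit-norm columns
    return x @ W_eff

d_in, d_out, r = 16, 32, 4
W = rng.standard_normal((d_in, d_out)) / np.sqrt(d_in)   # frozen base
A = rng.standard_normal((d_in, r)) / np.sqrt(d_in)       # trainable factor
B = rng.standard_normal((r, d_out)) * 0.01               # trainable factor
x = rng.standard_normal((8, d_in))
y = hyperspherical_lora_forward(x, W, A, B, scale=2.0)
```

The point of the re-projection is that gradients through the adapter can only rotate weight columns on the sphere, never rescale them, which matches the constraint the frozen SimbaV2 backbone was trained to respect.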