AI Navigate

Efficient Soft Actor-Critic with LLM-Based Action-Level Guidance for Continuous Control

arXiv cs.LG / 3/19/2026


Key Points

  • GuidedSAC introduces an LLM-based supervisor that provides action-level guidance to the Soft Actor-Critic algorithm, enabling targeted exploration in large state-action spaces.
  • The LLM-based supervisor analyzes the most recent trajectory using current state information and visual replays to provide action-level interventions that guide exploration.
  • Theoretical analysis shows GuidedSAC preserves SAC's convergence guarantees while converging faster.
  • Empirical results on discrete and continuous tasks, including MuJoCo benchmarks, show GuidedSAC outperforms standard SAC and exploration-enhanced methods (RND, ICM, E3B) in sample efficiency and final performance.
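The intervention loop described above can be sketched as follows. This is a minimal illustration based only on this summary, not the paper's implementation: the supervisor (here a rule-based stub standing in for the LLM), the `intervene` rule, and the toy environment are all hypothetical.

```python
import random

class StubSupervisor:
    """Stand-in for the LLM-based supervisor; inspects the recent
    trajectory and may override the policy's proposed action."""
    def intervene(self, trajectory, proposed_action):
        # Hypothetical rule: if recent rewards stagnate, nudge exploration.
        recent_rewards = [r for (_, _, r) in trajectory[-5:]]
        if recent_rewards and max(recent_rewards) <= 0.0:
            return proposed_action + random.uniform(-0.5, 0.5)  # guided action
        return proposed_action  # accept the policy's own action

def guided_rollout(policy, env_step, supervisor, state, horizon=10):
    """Collect one trajectory, letting the supervisor adjust each action
    before it is executed (action-level guidance)."""
    trajectory = []
    for _ in range(horizon):
        action = supervisor.intervene(trajectory, policy(state))
        next_state, reward = env_step(state, action)
        trajectory.append((state, action, reward))
        state = next_state
    return trajectory

# Toy usage: a 1-D environment that rewards actions near 1.0, with a
# deliberately uninformative policy so the supervisor must intervene.
random.seed(0)
policy = lambda s: 0.0
env_step = lambda s, a: (s + a, -abs(a - 1.0))
traj = guided_rollout(policy, env_step, StubSupervisor(), state=0.0)
print(len(traj))
```

In the actual algorithm the supervisor would be an LLM conditioned on state information and visual replays, and the guided transitions would feed SAC's replay buffer rather than a plain list.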

Abstract

We present GuidedSAC, a novel reinforcement learning (RL) algorithm that facilitates efficient exploration in vast state-action spaces. GuidedSAC leverages large language models (LLMs) as intelligent supervisors that provide action-level guidance for the Soft Actor-Critic (SAC) algorithm. The LLM-based supervisor analyzes the most recent trajectory using state information and visual replays, offering action-level interventions that enable targeted exploration. Furthermore, we provide a theoretical analysis of GuidedSAC, proving that it preserves the convergence guarantees of SAC while improving convergence speed. Through experiments in both discrete and continuous control environments, including toy text tasks and complex MuJoCo benchmarks, we demonstrate that GuidedSAC consistently outperforms standard SAC and state-of-the-art exploration-enhanced variants (e.g., RND, ICM, and E3B) in terms of sample efficiency and final performance.