AI Navigate

SPAARS: Safer RL Policy Alignment through Abstract Exploration and Refined Exploitation of Action Space

arXiv cs.LG / 3/11/2026


Key Points

  • SPAARS is a curriculum learning framework for offline-to-online reinforcement learning (RL) that enables safer and more efficient policy fine-tuning by initially constraining exploration to a low-dimensional latent space and then transitioning to the raw action space.
  • The method addresses the exploitation gap inherent in CVAE-based exploration: by eventually bypassing the decoder bottleneck, the policy can improve beyond the performance ceiling that the decoder's reconstruction loss imposes.
  • SPAARS has two variants: a CVAE-based version that works with unordered state-action pairs without requiring trajectory segmentation, and SPAARS-SUPE, which incorporates temporal skill pretraining for enhanced exploration at the cost of needing trajectory chunks.
  • The framework demonstrates significant empirical improvements in sample efficiency and normalized returns on benchmark RL tasks like kitchen-mixed-v0, hopper-medium-v2, and walker2d-medium-v2 compared to baseline methods such as SUPE and IQL.
  • Theoretical contributions include proving an upper bound on the exploitation gap, variance reduction guarantees for latent-space policy gradients, and stability control via concurrent behavioral cloning during curriculum transitions.
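The two-phase curriculum described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the "decoder" is a stand-in linear map playing the role of a pretrained CVAE decoder, and the names (`decode`, `latent_policy`, `raw_policy`, `transition_step`) are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, LATENT_DIM, ACTION_DIM = 4, 2, 6  # latent space is lower-dimensional

# Stand-in "decoder": a fixed linear map from (state, latent) to raw actions,
# playing the role of a pretrained CVAE decoder.
W_dec = rng.normal(size=(STATE_DIM + LATENT_DIM, ACTION_DIM))

def decode(state, z):
    """Map a low-dimensional latent action z to a raw action."""
    return np.tanh(np.concatenate([state, z]) @ W_dec)

def latent_policy(state):
    """Phase 1: explore in the low-dimensional latent space."""
    return rng.normal(scale=0.1, size=LATENT_DIM)

def raw_policy(state):
    """Phase 2: act directly in the raw action space, bypassing the decoder."""
    return np.tanh(state @ rng.normal(size=(STATE_DIM, ACTION_DIM)))

def act(state, step, transition_step=10_000):
    # Curriculum: latent-space exploration first, raw-space exploitation later.
    if step < transition_step:
        return decode(state, latent_policy(state))
    return raw_policy(state)

state = rng.normal(size=STATE_DIM)
a_early = act(state, step=0)      # routed through the decoder (safe, bounded)
a_late = act(state, step=20_000)  # raw-space action (no decoder ceiling)
```

In the paper's framing, the decoder confines early exploration to the behavioral support of the offline data, while the later switch removes the reconstruction-loss ceiling; the concurrent behavioral cloning that stabilizes the hand-off is omitted here for brevity.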

Computer Science > Machine Learning

arXiv:2603.09378 (cs)
[Submitted on 10 Mar 2026]

Title:SPAARS: Safer RL Policy Alignment through Abstract Exploration and Refined Exploitation of Action Space

Authors: Swaminathan S K and Aritra Hazra
Abstract:Offline-to-online reinforcement learning (RL) offers a promising paradigm for robotics by pre-training policies on safe, offline demonstrations and fine-tuning them via online interaction. However, a fundamental challenge remains: how to safely explore online without deviating from the behavioral support of the offline data? While recent methods leverage conditional variational autoencoders (CVAEs) to bound exploration within a latent space, they inherently suffer from an exploitation gap -- a performance ceiling imposed by the decoder's reconstruction loss. We introduce SPAARS, a curriculum learning framework that initially constrains exploration to the low-dimensional latent manifold for sample-efficient, safe behavioral improvement, then seamlessly transfers control to the raw action space, bypassing the decoder bottleneck. SPAARS has two instantiations: the CVAE-based variant requires only unordered (s,a) pairs and no trajectory segmentation; SPAARS-SUPE pairs SPAARS with OPAL temporal skill pretraining for stronger exploration structure at the cost of requiring trajectory chunks. We prove an upper bound on the exploitation gap using the Performance Difference Lemma, establish that latent-space policy gradients achieve provable variance reduction over raw-space exploration, and show that concurrent behavioral cloning during the latent phase directly controls curriculum transition stability. Empirically, SPAARS-SUPE achieves 0.825 normalized return on kitchen-mixed-v0 versus 0.75 for SUPE, with 5x better sample efficiency; standalone SPAARS achieves 92.7 and 102.9 normalized return on hopper-medium-v2 and walker2d-medium-v2 respectively, surpassing IQL baselines of 66.3 and 78.3 respectively, confirming the utility of the unordered-pair CVAE instantiation.
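The exploitation-gap bound mentioned in the abstract builds on the Performance Difference Lemma. In standard notation (not necessarily the paper's symbols), the lemma reads:

```latex
J(\pi') - J(\pi) \;=\; \frac{1}{1-\gamma}\,
  \mathbb{E}_{s \sim d^{\pi'}}\,
  \mathbb{E}_{a \sim \pi'(\cdot \mid s)}
  \bigl[\, A^{\pi}(s, a) \,\bigr]
```

where \(d^{\pi'}\) is the discounted state-visitation distribution of \(\pi'\) and \(A^{\pi}\) is the advantage function of \(\pi\). Intuitively, a policy confined to actions the CVAE decoder can reconstruct can only realize advantage terms over decodable actions, which is one plausible route to the performance ceiling the authors bound.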
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Robotics (cs.RO)
Cite as: arXiv:2603.09378 [cs.LG]
  (or arXiv:2603.09378v1 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2603.09378

Submission history

From: Aritra Hazra
[v1] Tue, 10 Mar 2026 08:52:15 UTC (800 KB)