SPAARS: Safer RL Policy Alignment through Abstract Exploration and Refined Exploitation of Action Space

arXiv cs.LG / March 11, 2026


Key Points

  • SPAARS is an offline-to-online reinforcement learning (RL) curriculum learning framework that begins by constraining exploration to a low-dimensional latent space and later transfers control to the raw action space, enabling safer and more sample-efficient policy fine-tuning.
  • The method addresses the exploitation gap inherent in CVAE-based exploration by bypassing the decoder bottleneck, improving policy performance beyond the ceiling imposed by prior constrained approaches.
  • SPAARS has two variants: a CVAE-based version that operates on unordered state-action pairs and requires no trajectory segmentation, and SPAARS-SUPE, which requires trajectory chunks but incorporates temporal skill pretraining for stronger exploration.
  • On benchmark RL tasks such as kitchen-mixed-v0, hopper-medium-v2, and walker2d-medium-v2, the framework shows substantial empirical improvements in sample efficiency and normalized return over baselines such as SUPE and IQL.
  • Theoretical contributions include a proven upper bound on the exploitation gap, variance-reduction guarantees for latent-space policy gradients, and stability control of the curriculum transition via concurrent behavioral cloning.
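The two-phase curriculum above can be illustrated with a minimal sketch. Everything here is a stand-in: the linear `decode` function plays the role of a pretrained CVAE decoder, and `switch_step` marks the curriculum hand-off; the paper's actual architectures and transition rule are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, ACTION_DIM, LATENT_DIM = 4, 6, 2  # latent space is lower-dimensional

# Stand-in for a pretrained CVAE decoder: a fixed linear map from (state, z) to action.
W_s = rng.normal(size=(ACTION_DIM, STATE_DIM)) * 0.1
W_z = rng.normal(size=(ACTION_DIM, LATENT_DIM))

def decode(state, z):
    """Map a low-dimensional latent z to a raw action, conditioned on state."""
    return W_s @ state + W_z @ z

def latent_policy(state):
    """Phase 1: explore only in the bounded 2-D latent space (hence 'safer')."""
    z = np.clip(rng.normal(size=LATENT_DIM), -1.0, 1.0)
    return decode(state, z)

def raw_policy(state, bc_reference):
    """Phase 2: act directly in the 6-D raw action space, perturbing a
    behavior-cloned reference so the curriculum hand-off stays stable."""
    return bc_reference(state) + 0.05 * rng.normal(size=ACTION_DIM)

def act(state, step, switch_step=1000):
    # Curriculum: latent exploration first, then transfer control to raw actions,
    # bypassing the decoder's reconstruction bottleneck.
    if step < switch_step:
        return latent_policy(state)
    return raw_policy(state, bc_reference=lambda s: decode(s, np.zeros(LATENT_DIM)))

state = rng.normal(size=STATE_DIM)
a_early = act(state, step=0)     # constrained to the decoder's image
a_late = act(state, step=2000)   # free raw-space action
print(a_early.shape, a_late.shape)
```

The key structural point is that phase-1 actions are always in the image of the decoder (the "behavioral support"), while phase-2 actions are not, which is what allows performance past the decoder's reconstruction ceiling.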


arXiv:2603.09378 (cs)
[Submitted on 10 Mar 2026]

Title:SPAARS: Safer RL Policy Alignment through Abstract Exploration and Refined Exploitation of Action Space

Authors: Swaminathan S K and Aritra Hazra
Abstract:Offline-to-online reinforcement learning (RL) offers a promising paradigm for robotics by pre-training policies on safe, offline demonstrations and fine-tuning them via online interaction. However, a fundamental challenge remains: how to safely explore online without deviating from the behavioral support of the offline data? While recent methods leverage conditional variational autoencoders (CVAEs) to bound exploration within a latent space, they inherently suffer from an exploitation gap -- a performance ceiling imposed by the decoder's reconstruction loss. We introduce SPAARS, a curriculum learning framework that initially constrains exploration to the low-dimensional latent manifold for sample-efficient, safe behavioral improvement, then seamlessly transfers control to the raw action space, bypassing the decoder bottleneck. SPAARS has two instantiations: the CVAE-based variant requires only unordered (s,a) pairs and no trajectory segmentation; SPAARS-SUPE pairs SPAARS with OPAL temporal skill pretraining for stronger exploration structure at the cost of requiring trajectory chunks. We prove an upper bound on the exploitation gap using the Performance Difference Lemma, establish that latent-space policy gradients achieve provable variance reduction over raw-space exploration, and show that concurrent behavioral cloning during the latent phase directly controls curriculum transition stability. Empirically, SPAARS-SUPE achieves 0.825 normalized return on kitchen-mixed-v0 versus 0.75 for SUPE, with 5x better sample efficiency; standalone SPAARS achieves 92.7 and 102.9 normalized return on hopper-medium-v2 and walker2d-medium-v2 respectively, surpassing IQL baselines of 66.3 and 78.3 respectively, confirming the utility of the unordered-pair CVAE instantiation.
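The abstract invokes the Performance Difference Lemma to bound the exploitation gap. The paper's specific bound is not reproduced here, but in its standard form (Kakade and Langford, 2002) the lemma relates the return gap between two policies to the advantage of one under the state distribution of the other:

```latex
J(\pi') - J(\pi) \;=\; \frac{1}{1-\gamma}\,
  \mathbb{E}_{s \sim d^{\pi'}}\,
  \mathbb{E}_{a \sim \pi'(\cdot \mid s)}
  \bigl[ A^{\pi}(s, a) \bigr]
```

Intuitively, bounding how far decoded latent-space actions can deviate from optimal raw-space actions bounds the advantage term, and hence the performance ceiling the decoder imposes.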
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Robotics (cs.RO)
Cite as: arXiv:2603.09378 [cs.LG]
  (or arXiv:2603.09378v1 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2603.09378

Submission history

From: Aritra Hazra [view email]
[v1] Tue, 10 Mar 2026 08:52:15 UTC (800 KB)