AI Navigate

Multi-level meta-reinforcement learning with skill-based curriculum

arXiv cs.AI / 2026/3/11

Ideas & Deep Analysis / Models & Research

Key Points

  • The paper introduces a multi-level meta-reinforcement learning framework for sequential decision-making problems with hierarchical structure, in which Markov decision processes (MDPs) are repeatedly compressed across hierarchy levels.
  • Policies at one level are treated as single actions at higher levels, preserving semantic meaning while reducing complexity and enabling efficient long-horizon policy optimization with fewer iterations (a minimal sketch of this idea follows this list).
  • The framework integrates curriculum learning: a teacher gradually increases task difficulty, promoting the transfer of skills across different problems and levels, both within and across curricula.
  • The multi-level approach allows spatial and temporal scales to be coarsened, decouples sub-tasks, reduces stochasticity and the policy search space during policy optimization, and comes with theoretical guarantees of consistency and benefit.
  • The authors demonstrate the method's abstraction, transferability, and curriculum learning on examples including MazeBase+, a more complex variant of MazeBase.
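To make the compression idea concrete, here is a minimal sketch of how a family of trained lower-level policies (skills) can be wrapped so that each one becomes a single action of a coarser, higher-level MDP. This is only an illustrative reading of the abstract, not the authors' code: the class names, the gym-style `reset()`/`step()` interface, and the per-skill termination tests are all assumptions.

```python
from dataclasses import dataclass
from typing import Callable, List

State = object   # abstract state/observation type of the base MDP
Action = int     # primitive action of the base MDP


@dataclass
class LowLevelPolicy:
    """A trained sub-task policy plus a termination test (a reusable skill)."""
    act: Callable[[State], Action]    # maps the current state to a primitive action
    done: Callable[[State], bool]     # signals that the sub-task is finished


class CompressedMDP:
    """Higher-level MDP whose action set is a list of lower-level policies.

    Choosing action i rolls out the i-th skill in the base environment until
    its termination test fires (or the episode ends) and returns the reward
    accumulated along the way, so the higher-level agent only reasons at
    sub-task boundaries: the time scale is coarsened and the search space shrinks.
    """

    def __init__(self, base_env, skills: List[LowLevelPolicy], max_skill_steps: int = 200):
        self.env = base_env              # assumed gym-style: reset() / step(action)
        self.skills = skills
        self.max_skill_steps = max_skill_steps
        self._state = None

    def reset(self) -> State:
        self._state = self.env.reset()
        return self._state

    def step(self, skill_index: int):
        skill = self.skills[skill_index]
        total_reward, terminal = 0.0, False
        for _ in range(self.max_skill_steps):
            self._state, reward, terminal, _ = self.env.step(skill.act(self._state))
            total_reward += reward
            if terminal or skill.done(self._state):
                break
        # One higher-level transition = one full sub-task execution.
        return self._state, total_reward, terminal
```

A higher-level agent can then treat `CompressedMDP` as an ordinary environment and optimize over skill indices with any existing RL algorithm, which is the sense in which higher-level MDPs remain independent MDPs.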


arXiv:2603.08773 (cs)
[Submitted on 9 Mar 2026]

Title: Multi-level meta-reinforcement learning with skill-based curriculum

Authors: Sichen Yang (Johns Hopkins University), Mauro Maggioni (Johns Hopkins University)
Abstract: We consider problems in sequential decision making with natural multi-level structure, where sub-tasks are assembled together to accomplish complex goals. Systematically inferring and leveraging hierarchical structure has remained a longstanding challenge; we describe an efficient multi-level procedure for repeatedly compressing Markov decision processes (MDPs), wherein a parametric family of policies at one level is treated as single actions in the compressed MDPs at higher levels, while preserving the semantic meanings and structure of the original MDP, and mimicking the natural logic to address a complex MDP. Higher-level MDPs are themselves independent MDPs with less stochasticity, and may be solved using existing algorithms. As a byproduct, spatial or temporal scales may be coarsened at higher levels, making it more efficient to find long-term optimal policies. The multi-level representation delivered by this procedure decouples sub-tasks from each other and usually greatly reduces unnecessary stochasticity and the policy search space, leading to fewer iterations and computations when solving the MDPs. A second fundamental aspect of this work is that these multi-level decompositions plus the factorization of policies into embeddings (problem-specific) and skills (including higher-order functions) yield new transfer opportunities of skills across different problems and different levels. This whole process is framed within curriculum learning, wherein a teacher organizes the student agent's learning process in a way that gradually increases the difficulty of tasks and promotes transfer across MDPs and levels within and across curricula. The consistency of this framework and its benefits can be guaranteed under mild assumptions. We demonstrate abstraction, transferability, and curriculum learning in examples, including MazeBase+, a more complex variant of the MazeBase example.
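To illustrate the curriculum framing described above, the sketch below shows one plausible teacher/student loop: tasks are ordered by difficulty, and skills learned on earlier tasks are handed to later ones as reusable higher-level actions. It is a hypothetical illustration under assumed interfaces, not the paper's algorithm; the `Task` fields (`name`, `difficulty`, `required_skills`) and the `train_policy` callable are placeholders.

```python
from typing import Callable, Dict, List


def run_curriculum(tasks: List,                    # teacher-provided tasks, each assumed to
                                                   # carry .name, .difficulty, .required_skills
                   train_policy: Callable,         # (task, reusable_skills) -> newly learned skill
                   skill_library: Dict[str, object] = None) -> Dict[str, object]:
    """Train on tasks in order of increasing difficulty, reusing earlier skills."""
    skill_library = dict(skill_library or {})
    for task in sorted(tasks, key=lambda t: t.difficulty):      # teacher's ordering
        # Hand the student only the skills relevant to this task, so they can
        # serve as higher-level actions (see the CompressedMDP sketch above).
        reusable = {name: skill for name, skill in skill_library.items()
                    if name in task.required_skills}
        # The student learns only what is missing for this task, then the new
        # skill joins the library and becomes transferable to later tasks.
        skill_library[task.name] = train_policy(task, reusable)
    return skill_library
```

The design point this is meant to convey is simply that the curriculum and the multi-level compression reinforce each other: each solved task enlarges the skill library, which in turn shrinks the effective search space of the next, harder task.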
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Machine Learning (stat.ML)
MSC classes: 90C40 (Primary), 68T05 (Secondary), 90C39 (Secondary)
ACM classes: I.2.6; I.2.8; F.2.2
Cite as: arXiv:2603.08773 [cs.LG]
  (or arXiv:2603.08773v1 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2603.08773

Submission history

From: Sichen Yang
[v1] Mon, 9 Mar 2026 17:59:39 UTC (35,807 KB)