AI Navigate

Multi-level meta-reinforcement learning with skill-based curriculum

arXiv cs.AI / 3/11/2026

Ideas & Deep Analysis · Models & Research

Key Points

  • The paper addresses sequential decision-making problems with inherent multi-level structures by introducing a multi-level meta-reinforcement learning framework that compresses Markov decision processes (MDPs) at different hierarchical levels.
  • Policies at one level are treated as single actions at higher levels, preserving semantic meaning and reducing complexity, which enables more efficient long-term policy optimization with fewer iterations.
  • The framework integrates curriculum learning where a teacher incrementally increases task difficulty, facilitating skill transfer across different problems and levels within and across curricula.
  • The multi-level approach allows spatial and temporal scale coarsening, decouples sub-tasks, reduces stochasticity and search space in policy optimization, and provides theoretical guarantees of consistency and benefits.
  • The authors demonstrate their approach on examples like MazeBase+, showcasing abstraction, transferability, and curriculum learning effectiveness.
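The core compression idea above — a whole lower-level policy exposed to the next level as a single "macro-action" — can be sketched in a few lines. This is an illustrative toy only; the names (`SubPolicy`, `run_macro_action`, the 1-D chain environment) are hypothetical and are not taken from the paper.

```python
# Hypothetical sketch: a level-k policy becomes one macro-action of the
# compressed level-(k+1) MDP. The higher level never sees the inner steps,
# only (start state, macro-action, end state, accumulated reward).
from dataclasses import dataclass
from typing import Callable

State = int
Action = int

@dataclass
class SubPolicy:
    """A lower-level policy, exposed upward as a single action."""
    name: str
    act: Callable[[State], Action]    # low-level action selection
    done: Callable[[State], bool]     # termination predicate for the sub-task

def run_macro_action(env_step, state: State, option: SubPolicy,
                     max_steps: int = 100):
    """Execute the sub-policy to completion and report only the summary."""
    total_reward = 0.0
    for _ in range(max_steps):
        if option.done(state):
            break
        state, r = env_step(state, option.act(state))
        total_reward += r
    return state, total_reward

# Toy 1-D chain environment: each step costs -1.
def env_step(s: State, a: Action):
    return s + a, -1.0

# Sub-task "walk right until position 5", seen above as one action.
go_right = SubPolicy("go_right", act=lambda s: 1, done=lambda s: s >= 5)
end_state, reward = run_macro_action(env_step, 0, go_right)
print(end_state, reward)  # → 5 -5.0
```

Because the higher level only observes the summarized transition, its MDP has fewer effective actions and less stochasticity — the reduction the key points describe.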

Computer Science > Machine Learning

arXiv:2603.08773 (cs)
[Submitted on 9 Mar 2026]

Title: Multi-level meta-reinforcement learning with skill-based curriculum

Authors: Sichen Yang (Johns Hopkins University), Mauro Maggioni (Johns Hopkins University)
Abstract: We consider problems in sequential decision making with natural multi-level structure, where sub-tasks are assembled together to accomplish complex goals. Systematically inferring and leveraging hierarchical structure has remained a longstanding challenge; we describe an efficient multi-level procedure for repeatedly compressing Markov decision processes (MDPs), wherein a parametric family of policies at one level is treated as single actions in the compressed MDPs at higher levels, while preserving the semantic meanings and structure of the original MDP, and mimicking the natural logic to address a complex MDP. Higher-level MDPs are themselves independent MDPs with less stochasticity, and may be solved using existing algorithms. As a byproduct, spatial or temporal scales may be coarsened at higher levels, making it more efficient to find long-term optimal policies. The multi-level representation delivered by this procedure decouples sub-tasks from each other and usually greatly reduces unnecessary stochasticity and the policy search space, leading to fewer iterations and computations when solving the MDPs. A second fundamental aspect of this work is that these multi-level decompositions plus the factorization of policies into embeddings (problem-specific) and skills (including higher-order functions) yield new transfer opportunities of skills across different problems and different levels. This whole process is framed within curriculum learning, wherein a teacher organizes the student agent's learning process in a way that gradually increases the difficulty of tasks and promotes transfer across MDPs and levels within and across curricula. The consistency of this framework and its benefits can be guaranteed under mild assumptions. We demonstrate abstraction, transferability, and curriculum learning in examples, including MazeBase+, a more complex variant of the MazeBase example.
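The teacher-driven curriculum the abstract describes — difficulty raised only once the current tasks are mastered — can be sketched as a simple loop. This is a minimal illustration, not the paper's actual algorithm; `curriculum`, `ToyStudent`, and the mastery rule are all assumptions for the example.

```python
# Illustrative curriculum loop: a teacher advances the student to the next
# difficulty level once the recent success rate clears a threshold.
def curriculum(train_one_episode, levels, threshold=0.8, window=20,
               max_episodes=10_000):
    """train_one_episode(level) -> bool (success).
    Returns the number of episodes spent at each difficulty level."""
    episodes_used = {}
    for level in levels:
        recent, n = [], 0
        while n < max_episodes:
            recent.append(train_one_episode(level))
            recent = recent[-window:]        # sliding success window
            n += 1
            if len(recent) == window and sum(recent) / window >= threshold:
                break                        # mastered: raise difficulty
        episodes_used[level] = n
    return episodes_used

class ToyStudent:
    """Deterministic stand-in: succeeds after 5 * level practice attempts."""
    def __init__(self):
        self.practice = {}
    def attempt(self, level):
        self.practice[level] = self.practice.get(level, 0) + 1
        return self.practice[level] > 5 * level

student = ToyStudent()
print(curriculum(student.attempt, [1, 2, 3], threshold=1.0, window=10))
# → {1: 15, 2: 20, 3: 25}
```

Harder levels require more practice before the success window fills, so episode counts grow with difficulty; skill transfer across levels (the paper's second ingredient) would shrink those counts further.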
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Machine Learning (stat.ML)
MSC classes: 90C40 (Primary), 68T05 (Secondary), 90C39 (Secondary)
ACM classes: I.2.6; I.2.8; F.2.2
Cite as: arXiv:2603.08773 [cs.LG]
  (or arXiv:2603.08773v1 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2603.08773

Submission history

From: Sichen Yang
[v1] Mon, 9 Mar 2026 17:59:39 UTC (35,807 KB)