Maximum Entropy Semi-Supervised Inverse Reinforcement Learning

arXiv cs.LG / 4/23/2026


Key Points

  • The paper addresses apprenticeship learning by formulating it as an inverse reinforcement learning (IRL) problem using the maximum entropy principle.
  • It focuses on a semi-supervised setting where, besides expert trajectories, the learner also has access to additional unsupervised trajectories.
  • The authors propose MESSI, an algorithm that combines MaxEnt-IRL with semi-supervised learning by incorporating unsupervised data via a pairwise penalty on trajectories.
  • Experiments on highway driving and grid-world benchmarks show that MESSI can leverage unsupervised trajectories to outperform standard MaxEnt-IRL.

Abstract

A popular approach to apprenticeship learning (AL) is to formulate it as an inverse reinforcement learning (IRL) problem. The MaxEnt-IRL algorithm successfully integrates the maximum entropy principle into IRL and, unlike its predecessors, resolves the ambiguity arising from the fact that a possibly large number of policies could match the expert's behavior. In this paper, we study an AL setting in which, in addition to the expert's trajectories, a number of unsupervised trajectories are available. We introduce MESSI, a novel algorithm that combines MaxEnt-IRL with principles from semi-supervised learning. In particular, MESSI integrates the unsupervised data into the MaxEnt-IRL framework using a pairwise penalty on trajectories. Empirical results on highway driving and grid-world problems indicate that MESSI is able to take advantage of the unsupervised trajectories and improve the performance of MaxEnt-IRL.
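To make the idea concrete, here is a minimal sketch of how a pairwise penalty on trajectories could be folded into a MaxEnt-IRL gradient update. This is an illustrative assumption, not the paper's exact MESSI objective: it uses a linear reward `r(τ) = θ·f(τ)`, a softmax trajectory distribution over a fixed candidate set, and a graph-style penalty `λ Σ w_ij (r_i − r_j)²` that pulls the rewards of similar (possibly unsupervised) trajectories together. All function and variable names are hypothetical.

```python
import numpy as np

def maxent_irl_ssl(feat_expert, feat_all, sim, lam=0.1, lr=0.05, iters=200):
    """Hypothetical sketch: MaxEnt-IRL with a pairwise trajectory penalty.

    feat_expert: (n_e, d) feature vectors of expert trajectories
    feat_all:    (n, d)  feature vectors of all candidate trajectories
                 (expert + unsupervised)
    sim:         (n, n)  symmetric similarity weights between trajectories
    """
    d = feat_all.shape[1]
    theta = np.zeros(d)
    # empirical expert feature expectation (the quantity MaxEnt-IRL matches)
    f_bar = feat_expert.mean(axis=0)
    for _ in range(iters):
        scores = feat_all @ theta
        p = np.exp(scores - scores.max())
        p /= p.sum()                     # MaxEnt distribution over candidates
        # likelihood gradient: expert features minus model expectation
        grad = f_bar - p @ feat_all
        # pairwise penalty gradient: for each similar pair (i, j),
        # push theta so that r_i and r_j move closer together
        diff_r = scores[:, None] - scores[None, :]            # r_i - r_j
        diff_f = feat_all[:, None, :] - feat_all[None, :, :]  # f_i - f_j
        grad -= lam * np.einsum('ij,ij,ijk->k', sim, diff_r, diff_f)
        theta += lr * grad
    return theta
```

With a toy candidate set where expert trajectories load on one feature, the learned `theta` weights that feature up, while the similarity term smooths rewards across the linked unsupervised trajectories.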