Adversarial Imitation Learning with General Function Approximation: Theoretical Analysis and Practical Algorithms

arXiv cs.LG / 5/5/2026

📰 News · Models & Research

Key Points

  • The paper addresses a gap in adversarial imitation learning (AIL) theory by analyzing online AIL under general (neural-network-like) function approximation rather than simplified tabular/linear settings.
  • It introduces a new framework, optimization-based AIL (OPT-AIL), which couples online reward-learning optimization with optimism-regularized optimization for policy learning.
  • The authors develop two variants—model-free OPT-AIL and model-based OPT-AIL—and prove polynomial expert sample and interaction complexity for learning near-expert policies.
  • The work claims to be the first provably efficient AIL approach under general function approximation, with practical algorithms that require only the approximate optimization of two objectives.
  • Experiments show OPT-AIL outperforms prior state-of-the-art deep AIL methods on multiple difficult tasks.

Abstract

Adversarial imitation learning (AIL), a prominent approach in imitation learning, has achieved significant practical success powered by neural network approximation. However, existing theoretical analyses of AIL are primarily confined to simplified settings, such as tabular and linear function approximation, and involve complex algorithmic designs that impede practical implementation. This creates a substantial gap between theory and practice. This paper bridges this gap by exploring the theoretical underpinnings of online AIL with general function approximation. We introduce a novel framework called optimization-based AIL (OPT-AIL), which performs online optimization for reward learning coupled with optimism-regularized optimization for policy learning. Within this framework, we develop two concrete methods: model-free OPT-AIL and model-based OPT-AIL. Our theoretical analysis demonstrates that both variants achieve polynomial expert sample complexity and interaction complexity for learning near-expert policies. To the best of our knowledge, they represent the first provably efficient AIL methods under general function approximation. From a practical standpoint, OPT-AIL requires only the approximate optimization of two objectives, thereby facilitating practical implementation. Empirical studies demonstrate that OPT-AIL outperforms previous state-of-the-art deep AIL methods across several challenging tasks.
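The alternating structure described in the abstract — an online reward-learning step coupled with an optimism-regularized policy step — can be illustrated with a deliberately tiny sketch. Everything below (the bandit setting, linear reward, softmax policy, count-based bonus, and all variable names) is a simplifying assumption for exposition, not the paper's construction, which handles general function approximation in full MDPs:

```python
import numpy as np

# Toy, illustrative sketch of an OPT-AIL-style alternating loop.
# The bandit setup, linear reward, and count-based optimism bonus are
# assumptions made here for clarity -- NOT the paper's actual algorithm.

n_actions = 4
expert = np.array([0.7, 0.1, 0.1, 0.1])   # assumed expert action distribution

theta = np.zeros(n_actions)               # reward parameters (one per action)
logits = np.zeros(n_actions)              # policy parameters
counts = np.full(n_actions, 1e-2)         # pseudo-counts for the optimism bonus
avg_pi = np.zeros(n_actions)              # time-averaged policy

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

T = 5000
for t in range(T):
    pi = softmax(logits)
    counts += pi  # expected visitation (population-level stand-in for samples)

    # (1) Online reward learning: ascend E_expert[r] - E_pi[r];
    #     for a linear reward, the gradient is simply (expert - pi).
    theta += 0.05 * (expert - pi)

    # (2) Optimism-regularized policy learning: ascend the learned reward
    #     plus a decaying count-based exploration bonus.
    r = theta + 1.0 / np.sqrt(counts)
    logits += 0.1 * pi * (r - pi @ r)     # policy-gradient ascent step

    avg_pi += pi / T

print(np.round(avg_pi, 2))  # the time-averaged policy approaches the expert's
```

Because the reward and policy updates form a zero-sum game, the last iterate oscillates while the time-averaged policy `avg_pi` concentrates near the expert distribution — a common pattern in adversarial imitation learning, and the reason averaged (rather than final) iterates appear in many AIL analyses.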