A Systematic Review and Taxonomy of Reinforcement Learning-Model Predictive Control Integration for Linear Systems

arXiv cs.RO / 4/24/2026


Key Points

  • The paper provides a systematic literature review of how Reinforcement Learning (RL) is integrated with Model Predictive Control (MPC) specifically for linear and linearized systems, covering studies published up to 2025.
  • It organizes the existing work using a multi-dimensional taxonomy that captures RL functional roles, RL algorithm classes, MPC formulations, cost-function structures, and application domains.
  • The authors perform a cross-dimensional synthesis to uncover recurring design patterns and common relationships across these dimensions in the reviewed literature.
  • The review identifies key methodological trends and persistent practical challenges, including computational burden, sample efficiency, robustness, and the need for closed-loop guarantees.
  • The resulting structured reference is intended to help researchers and practitioners design and analyze RL–MPC architectures grounded in linear or linearized predictive control approaches.

Abstract

The integration of Model Predictive Control (MPC) and Reinforcement Learning (RL) has emerged as a promising paradigm for constrained decision-making and adaptive control. MPC offers structured optimization, explicit constraint handling, and established stability tools, whereas RL provides data-driven adaptation and performance improvement in the presence of uncertainty and model mismatch. Despite the rapid growth of research on RL–MPC integration, the literature remains fragmented, particularly for control architectures built on linear or linearized predictive models. This paper presents a comprehensive Systematic Literature Review (SLR) of RL–MPC integrations for linear and linearized systems, covering peer-reviewed and formally indexed studies published up to 2025. The reviewed studies are organized through a multi-dimensional taxonomy covering RL functional roles, RL algorithm classes, MPC formulations, cost-function structures, and application domains. In addition, a cross-dimensional synthesis is conducted to identify recurring design patterns and reported associations among these dimensions within the reviewed corpus. The review highlights methodological trends, commonly adopted integration strategies, and recurring practical challenges, including computational burden, sample efficiency, robustness, and closed-loop guarantees. The resulting synthesis provides a structured reference for researchers and practitioners seeking to design or analyze RL–MPC architectures based on linear or linearized predictive control formulations.
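To make the abstract's theme concrete, here is a minimal, purely illustrative sketch (not taken from the paper) of one RL functional role the taxonomy covers: RL acting as a tuner of the MPC cost weights to compensate for model mismatch in a linear system. All system matrices, weights, and function names below are hypothetical, and the "RL" component is reduced to a simple random search standing in for a policy-gradient update.

```python
import numpy as np

# Hypothetical "true" scalar plant: x_{k+1} = a_true*x_k + b_true*u_k
a_true, b_true = 1.1, 0.5
# Mismatched linear model used inside the controller
a_mod, b_mod = 1.0, 0.5

def mpc_gain(q, r, horizon=15):
    """Feedback gain of an unconstrained finite-horizon MPC, obtained by a
    Riccati recursion on the internal (mismatched) model; u = -k*x
    reproduces the first move of the horizon-length optimization."""
    p = q  # terminal weight
    for _ in range(horizon):
        k = (b_mod * p * a_mod) / (r + b_mod * p * b_mod)
        p = q + a_mod * p * a_mod - a_mod * p * b_mod * k
    return k

def closed_loop_cost(q, r=0.1, x0=1.0, steps=40):
    """Roll out the MPC law on the *true* plant and score it with the
    true performance objective sum(x^2 + 0.1*u^2)."""
    k, x, cost = mpc_gain(q, r), x0, 0.0
    for _ in range(steps):
        u = -k * x
        cost += x**2 + 0.1 * u**2
        x = a_true * x + b_true * u
    return cost

# RL in the "parameter tuner" role, sketched as random search over the
# stage weight q; the retained best weight only improves closed-loop cost.
rng = np.random.default_rng(0)
best_q = 1.0
best_cost = closed_loop_cost(best_q)
for _ in range(100):
    cand = best_q * np.exp(0.3 * rng.standard_normal())
    c = closed_loop_cost(cand)
    if c < best_cost:
        best_q, best_cost = cand, c
```

The controller's internal model underestimates the plant's instability (`a_mod < a_true`), so the nominal weight `q = 1.0` is not optimal for the true closed loop; searching over `q` from closed-loop rollouts recovers performance without fixing the model, which is the basic mechanism behind many RL–MPC tuning schemes surveyed here.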