Application of Deep Reinforcement Learning to Event-Triggered Control for Networked Artificial Pancreas Systems

arXiv stat.ML / 4/30/2026


Key Points

  • The paper presents a deep reinforcement learning (DRL) approach to design an event-triggered controller for networked artificial pancreas systems.
  • It targets the need in networked control systems to reduce communication frequency for energy efficiency, rather than assuming periodic control updates as many prior DRL-based AP controllers do.
  • Instead of jointly learning both insulin dosing and the timing of communication updates (which would greatly increase learning complexity), the method uses a blood-glucose-change-driven, rule-based criterion to trigger decisions at irregular intervals.
  • By treating the resulting irregular decision times via a semi-Markov decision process (SMDP) formulation, the authors extend a standard DRL algorithm to match the problem structure.
  • Numerical experiments indicate the proposed controller improves communication efficiency while keeping glucose control performance comparable to baseline approaches.
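The glucose-change-driven trigger described above can be sketched as follows. This is a minimal illustration, not the paper's exact rule: the threshold `delta` (in mg/dL) and the comparison against the last transmitted measurement are assumptions for the sake of the example.

```python
def should_trigger(current_glucose: float, last_sent_glucose: float,
                   delta: float = 10.0) -> bool:
    """Trigger a control/communication update only when blood glucose
    has changed by at least `delta` since the last transmitted sample."""
    return abs(current_glucose - last_sent_glucose) >= delta


def simulate(trace, delta=10.0):
    """Return the subset of a glucose trace that would be transmitted
    under the event-triggered rule (vs. sending every periodic sample)."""
    sent = [trace[0]]          # the first measurement is always sent
    last = trace[0]
    for g in trace[1:]:
        if should_trigger(g, last, delta):
            sent.append(g)
            last = g
    return sent


# A slowly drifting trace produces far fewer transmissions than
# periodic updates would:
trace = [120, 122, 125, 131, 133, 140, 141, 152]
print(simulate(trace))  # → [120, 131, 141, 152]
```

Because updates fire only on significant glucose change, quiet periods generate no network traffic, which is the communication saving the paper targets.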

Abstract

This paper proposes a deep reinforcement learning (DRL)-based event-triggered controller design for networked artificial pancreas (AP) systems. Although existing DRL-based AP controllers typically assume periodic control updates, networked control systems (NCSs) require a reduction in communication frequency to achieve energy-efficient operation, which is directly tied to control updates. However, jointly learning both insulin dosing and update timing significantly increases the complexity of the learning problem. To alleviate this complexity, we develop a practical DRL-based controller design that avoids explicitly learning update timing by introducing a rule-based criterion defined by changes in blood glucose. As a result, decision-making occurs at irregular intervals, and the problem is naturally formulated as a semi-Markov decision process (SMDP), for which we extend a standard DRL algorithm. Numerical experiments demonstrate that the proposed method improves communication efficiency while maintaining control performance.
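The SMDP extension mentioned in the abstract amounts to making the discount factor depend on the (now variable) time between decisions. The sketch below shows the idea for a temporal-difference target; the symbols (interval reward `r`, elapsed time `tau`, bootstrap value `v_next`) are generic and not taken from the paper, which extends a full DRL algorithm rather than tabular TD learning.

```python
def smdp_td_target(r: float, tau: float, v_next: float,
                   gamma: float = 0.99) -> float:
    """TD target with duration-aware discounting, r + gamma**tau * V(s').

    In a standard MDP the discount gamma is applied once per fixed step;
    in an SMDP, decisions occur at irregular intervals of length tau,
    so the bootstrap term is discounted by gamma**tau instead.
    """
    return r + (gamma ** tau) * v_next


# Longer gaps between decisions discount the future more heavily:
short = smdp_td_target(r=1.0, tau=1.0, v_next=10.0)   # 1 + 0.99**1 * 10
long_ = smdp_td_target(r=1.0, tau=5.0, v_next=10.0)   # 1 + 0.99**5 * 10
print(short, long_)
```

Plugging `gamma**tau` in place of a fixed `gamma` is the standard way an event-triggered (irregular-interval) control problem is reconciled with discounted RL updates.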