
Distilling Deep Reinforcement Learning into Interpretable Fuzzy Rules: An Explainable AI Framework

arXiv cs.AI / 3/17/2026


Key Points

  • It proposes a Hierarchical Takagi-Sugeno-Kang (TSK) Fuzzy Classifier System to distill deep reinforcement learning policies into human-readable IF-THEN rules, addressing opacity in continuous control tasks.
  • The framework uses K-Means clustering for state partitioning and Ridge Regression for local action inference, and introduces metrics (Fuzzy Rule Activation Density, Fuzzy Set Coverage, Action Space Granularity) to quantify explanation quality and control diversity.
  • Empirical evaluation on Lunar Lander Continuous shows a triangular membership variant achieving 81.48% ± 0.43% policy fidelity, outperforming Decision Trees by 21 percentage points, with statistically superior interpretability (FRAD 0.814 vs. 0.723, p < 0.001) and low MSE/DTW distances (0.0053 and 1.05).
  • Extracted rules such as 'IF lander drifting left at high altitude THEN apply upward thrust with rightward correction' demonstrate practical verifiability and a pathway toward trustworthy autonomous systems.
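The distillation pipeline described above (K-Means state partitioning plus per-rule Ridge Regression consequents, blended by fuzzy memberships) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the random data stands in for logged (state, action) pairs from a trained DRL agent, Gaussian memberships are used for brevity (the paper's best variant is triangular), and `sigma`, `n_rules`, and `alpha` are hypothetical hyperparameters.

```python
# Sketch of a TSK-style fuzzy surrogate distilled from a teacher policy.
# Assumption: K-Means centers define rule antecedents; Ridge models give
# local linear actions; the true paper details may differ.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
states = rng.normal(size=(500, 4))          # stand-in for logged agent states
actions = states @ rng.normal(size=(4, 2))  # stand-in for teacher policy actions

n_rules = 5
km = KMeans(n_clusters=n_rules, n_init=10, random_state=0).fit(states)
sigma = 1.0  # membership width (illustrative hyperparameter)

# One Ridge consequent per rule, fit on the states assigned to that cluster
consequents = [
    Ridge(alpha=1.0).fit(states[km.labels_ == k], actions[km.labels_ == k])
    for k in range(n_rules)
]

def tsk_predict(x):
    """Weighted average of local linear actions; weights = rule memberships."""
    d2 = ((km.cluster_centers_ - x) ** 2).sum(axis=1)
    mu = np.exp(-d2 / (2 * sigma ** 2))   # Gaussian membership per rule
    mu /= mu.sum()                         # normalized firing strengths
    local = np.stack([c.predict(x[None])[0] for c in consequents])
    return mu @ local                      # TSK defuzzification

print(tsk_predict(states[0]).shape)  # (2,)
```

Each cluster center then corresponds to one human-readable IF-THEN rule: the antecedent is the fuzzy region around the center, and the consequent is the local linear action model.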

Abstract

Deep Reinforcement Learning (DRL) agents achieve remarkable performance in continuous control but remain opaque, hindering deployment in safety-critical domains. Existing explainability methods either provide only local insights (SHAP, LIME) or employ over-simplified surrogates failing to capture continuous dynamics (decision trees). This work proposes a Hierarchical Takagi-Sugeno-Kang (TSK) Fuzzy Classifier System (FCS) distilling neural policies into human-readable IF-THEN rules through K-Means clustering for state partitioning and Ridge Regression for local action inference. Three quantifiable metrics are introduced: Fuzzy Rule Activation Density (FRAD) measuring explanation focus, Fuzzy Set Coverage (FSC) validating vocabulary completeness, and Action Space Granularity (ASG) assessing control mode diversity. Dynamic Time Warping (DTW) validates temporal behavioral fidelity. Empirical evaluation on *Lunar Lander (Continuous)* shows the Triangular membership function variant achieves 81.48% ± 0.43% fidelity, outperforming Decision Trees by 21 percentage points. The framework exhibits statistically superior interpretability (FRAD = 0.814 vs. 0.723 for Gaussian, p < 0.001) with low MSE (0.0053) and DTW distance (1.05). Extracted rules such as "IF lander drifting left at high altitude THEN apply upward thrust with rightward correction" enable human verification, establishing a pathway toward trustworthy autonomous systems.
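The abstract reports a DTW distance of 1.05 between the teacher's and surrogate's action trajectories. The classic DTW recurrence behind that kind of temporal-fidelity check can be sketched as below; this is the textbook O(n·m) dynamic program with a Euclidean local cost, and the sinusoidal trajectories are synthetic stand-ins, not data from the paper.

```python
# Sketch: Dynamic Time Warping distance between two action trajectories,
# as a temporal behavioral-fidelity measure (exact paper setup assumed).
import numpy as np

def dtw_distance(a, b):
    """Classic O(n*m) DTW with Euclidean local cost between step vectors."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Synthetic teacher vs. slightly phase-shifted surrogate trajectory
t = np.linspace(0, 6, 50)
teacher = np.sin(t)[:, None]
student = np.sin(t + 0.1)[:, None]
print(dtw_distance(teacher, student))
```

A small DTW distance indicates the surrogate reproduces the teacher's action sequence up to minor temporal misalignment, which pointwise MSE alone would over-penalize.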