Explainable Representation of Finite-Memory Policies for POMDPs using Decision Trees

arXiv cs.RO / 4/30/2026


Key Points

  • The paper addresses the challenge that optimal POMDP policies may require infinite memory, which makes them hard to implement and renders many associated decision problems undecidable, motivating the use of finite-memory policies.
  • It proposes an explainable representation of finite-memory policies by combining Mealy machine models (to switch among components) with decision trees (to capture interpretable stationary behavior).
  • The authors develop a translation from the standard finite-state-controller (FSC) policy form to their decision-tree-based representation and show the approach generalizes to other finite-memory policy variants.
  • They exploit structural properties of recently used “attractor-based” policies to further simplify and reduce the size of the resulting representations.
  • The method’s increased explainability is demonstrated through case studies, indicating practical benefits for understanding and analyzing finite-memory POMDP behavior.
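The core representational idea in the second key point — a Mealy machine that switches among decision trees, where each tree captures a simple stationary part of the policy — can be sketched in a few lines. The following is a hypothetical illustration, not the paper's construction: the `Node`, `TreeMealyPolicy` classes and the toy "explore"/"retreat" policy are invented for this example.

```python
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

Observation = Dict[str, float]  # feature-valued observations (illustrative)

@dataclass
class Node:
    """A decision-tree node: either a leaf action or a threshold split."""
    action: Optional[str] = None
    feature: Optional[str] = None
    threshold: Optional[float] = None
    low: Optional["Node"] = None
    high: Optional["Node"] = None

    def decide(self, obs: Observation) -> str:
        if self.action is not None:
            return self.action  # leaf: stationary decision
        branch = self.low if obs[self.feature] <= self.threshold else self.high
        return branch.decide(obs)

class TreeMealyPolicy:
    """A finite-memory policy: a Mealy machine over memory nodes, where each
    node holds a decision tree (interpretable stationary behavior) and the
    transition map says when to switch to another tree."""
    def __init__(self,
                 trees: Dict[str, Node],
                 transitions: Dict[Tuple[str, str], str],
                 start: str) -> None:
        self.trees = trees              # memory node -> decision tree
        self.transitions = transitions  # (memory node, action) -> next node
        self.state = start

    def act(self, obs: Observation) -> str:
        action = self.trees[self.state].decide(obs)
        # Stay in the current memory node unless a transition is defined.
        self.state = self.transitions.get((self.state, action), self.state)
        return action

# Toy policy with two memory modes, "explore" and "retreat" (made up here):
explore_tree = Node(feature="battery", threshold=0.2,
                    low=Node(action="recharge"), high=Node(action="forward"))
retreat_tree = Node(action="back")
policy = TreeMealyPolicy(
    trees={"explore": explore_tree, "retreat": retreat_tree},
    transitions={("explore", "recharge"): "retreat"},
    start="explore",
)
```

Each memory node's behavior is readable off its small tree, while the Mealy transitions make the memory switching explicit — which is the explainability gain the paper claims over opaque finite-state controllers.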

Abstract

Partially Observable Markov Decision Processes (POMDPs) are a fundamental framework for decision-making under uncertainty and partial observability. Since, in general, optimal policies may require infinite memory, they are hard to implement, and many related decision problems are undecidable. Consequently, finite-memory policies are mostly considered instead. However, the algorithms for computing them are typically very complex, and so are the resulting policies. To address the need for their explainability, we provide a representation of such policies, both (i) in an interpretable formalism and (ii) typically of smaller size, together yielding higher explainability. To that end, we combine models of Mealy machines and decision trees; the latter describe simple, stationary parts of the policies, and the former describes how to switch among them. We design a translation for policies of the finite-state-controller (FSC) form from standard literature and show how our method smoothly generalizes to other variants of finite-memory policies. Further, we identify specific properties of recently used "attractor-based" policies, which allow us to construct yet simpler and smaller representations. Finally, we illustrate the higher explainability in a few case studies.