Interpretable experiential learning based on state history and global feedback

arXiv cs.LG / 5/5/2026


Key Points

  • The paper introduces an interpretable experiential learning model that learns behavioral dynamics as a transition graph over state sets.
  • Each transition in the graph is annotated with utility and evidence counts, aiming to improve interpretability compared with opaque function approximators.
  • The approach is designed for reinforcement learning in resource-constrained environments, where model efficiency is critical.
  • Experiments on the OpenAI Gym Atari Breakout benchmark show performance comparable to some existing neural network-based solutions.
  • The work is presented as a new arXiv submission, providing an early research contribution that others can build on and compare against.

Abstract

A new interpretable experiential learning model based on state history and global feedback is presented. It learns a behavioral model represented as a transition graph between sets of states, with each transition annotated with a utility and an evidence count. This model is expected to be suitable for solving reinforcement learning problems in resource-constrained environments. The model was evaluated on the OpenAI Gym Atari Breakout benchmark, demonstrating performance comparable to some known neural network-based solutions.
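The abstract describes the learned model as a transition graph whose edges carry a utility and an evidence count. The paper's actual construction is not detailed here, so the following is only an illustrative sketch of that data structure: it uses an incremental-mean update for utility (an assumption, not the authors' algorithm), and the class and method names (`TransitionGraph`, `observe`, `best_action`, `explain`) are hypothetical.

```python
from collections import defaultdict


class TransitionGraph:
    """Illustrative sketch of a transition graph over state sets.

    Each edge (state, action) -> next_state carries a running utility
    estimate and an evidence count. The incremental-mean update of
    utility from global feedback is an assumption for this sketch.
    """

    def __init__(self):
        # (state, action) -> {next_state: (utility, evidence_count)}
        self.edges = defaultdict(dict)

    def observe(self, state, action, next_state, feedback):
        """Record one experienced transition and its feedback signal."""
        utility, count = self.edges[(state, action)].get(next_state, (0.0, 0))
        count += 1
        utility += (feedback - utility) / count  # running mean of feedback
        self.edges[(state, action)][next_state] = (utility, count)

    def best_action(self, state, actions):
        """Pick the action with the highest evidence-weighted utility."""
        def score(action):
            transitions = self.edges.get((state, action), {})
            total = sum(c for _, c in transitions.values())
            if total == 0:
                return 0.0
            return sum(u * c for u, c in transitions.values()) / total
        return max(actions, key=score)

    def explain(self, state, action):
        """Interpretability hook: list outcomes with utility and evidence."""
        return [(nxt, u, c)
                for nxt, (u, c) in self.edges[(state, action)].items()]
```

Because every edge stores explicit evidence counts, a decision can be traced back to the concrete transitions that support it via `explain`, which is the kind of transparency the paper contrasts with opaque function approximators.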