Hierarchical Active Inference using Successor Representations

arXiv cs.AI / April 20, 2026


Key Points

  • The paper proposes a scalable form of Active Inference for large real-world problems by introducing hierarchical planning inspired by multi-scale representations in the brain.
  • It combines hierarchical environment models with successor representations to make action planning more computationally efficient.
  • The authors show that lower-level successor representations can be used to learn higher-level abstract states.
  • They further demonstrate that performing lower-level Active Inference planning can bootstrap higher-level abstract actions and states, improving planning efficiency.
  • Experiments on several planning and reinforcement learning tasks (a four-rooms variant, key-based navigation, a partially observable planning problem, Mountain Car, and PointMaze) support the approach.
  • The authors claim this is the first application of learned hierarchical state and action abstractions to active inference within FEP-based theories of brain function.
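For readers unfamiliar with successor representations, the standard definition (which the paper builds on) has a closed form for a fixed policy: M = Σ_t γ^t P^t = (I − γP)⁻¹, where P is the policy-induced transition matrix. The sketch below computes this on a toy 4-state ring; it is illustrative background, not the authors' implementation.

```python
import numpy as np

def successor_representation(P, gamma=0.95):
    """Closed-form SR for a fixed policy: M = (I - gamma * P)^{-1}.

    Row M[s] is the expected discounted future occupancy of every
    state when starting from s.
    """
    n = P.shape[0]
    return np.linalg.inv(np.eye(n) - gamma * P)

# Toy environment: a 4-state ring where each state deterministically
# transitions to the next one.
P = np.roll(np.eye(4), 1, axis=1)
M = successor_representation(P, gamma=0.9)

# States with similar SR rows have similar futures; that similarity
# is what higher-level abstract states can be built on.
print(np.round(M[0], 3))
```

Each row sums to 1/(1 − γ) (here 10), since the occupancies of all states over an infinite discounted horizon must account for every time step.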

Abstract

Active inference, a neurally-inspired model for inferring actions based on the free energy principle (FEP), has been proposed as a unifying framework for understanding perception, action, and learning in the brain. Active inference has previously been used to model ecologically important tasks such as navigation and planning, but scaling it to solve complex large-scale problems in real-world environments has remained a challenge. Inspired by the existence of multi-scale hierarchical representations in the brain, we propose a model for planning of actions based on hierarchical active inference. Our approach combines a hierarchical model of the environment with successor representations for efficient planning. We present results demonstrating (1) how lower-level successor representations can be used to learn higher-level abstract states, (2) how planning based on active inference at the lower-level can be used to bootstrap and learn higher-level abstract actions, and (3) how these learned higher-level abstract states and actions can facilitate efficient planning. We illustrate the performance of the approach on several planning and reinforcement learning (RL) problems including a variant of the well-known four rooms task, a key-based navigation task, a partially observable planning problem, the Mountain Car problem, and PointMaze, a family of navigation tasks with continuous state and action spaces. Our results represent, to our knowledge, the first application of learned hierarchical state and action abstractions to active inference in FEP-based theories of brain function.
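To make claim (1) of the abstract concrete, one way SR rows can yield abstract states is by clustering: in a two-room world, states in the same room have similar discounted future occupancies, so clustering SR rows recovers the rooms. The toy topology and the tiny 2-means loop below are our own illustrative assumptions, not the paper's method or code.

```python
import numpy as np

# Two 2x2 rooms (states 0-3 and 5-8) joined by a door state (4).
edges = [(0, 1), (0, 2), (1, 3), (2, 3),   # room A
         (3, 4), (4, 5),                   # door corridor
         (5, 6), (5, 7), (6, 8), (7, 8)]   # room B
n = 9
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
P = A / A.sum(axis=1, keepdims=True)       # random-walk policy

gamma = 0.9
M = np.linalg.inv(np.eye(n) - gamma * P)   # successor representation

# Minimal 2-means on SR rows, seeded with one state deep in each room.
centers = M[[0, 8]].copy()
for _ in range(10):
    d = np.linalg.norm(M[:, None, :] - centers[None], axis=2)
    labels = d.argmin(axis=1)
    centers = np.stack([M[labels == k].mean(axis=0) for k in (0, 1)])

print(labels)  # states within a room share a cluster label
```

The recovered clusters correspond to the rooms (the door state can fall on either side), giving a candidate set of higher-level abstract states over which coarser planning can operate.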