The Global Neural World Model: Spatially Grounded Discrete Topologies for Action-Conditioned Planning

arXiv cs.LG / April 21, 2026


Key Points

  • The paper introduces the Global Neural World Model (GNWM), a self-stabilizing framework that performs topological quantization by applying balanced continuous entropy constraints.
  • GNWM uses a continuous, action-conditioned Joint-Embedding Predictive Architecture (JEPA) to map environments onto a discrete 2D grid while enforcing translational equivariance, without relying on pixel-level reconstruction.
  • The authors report that “grid snapping” functions as a native error-correction mechanism, helping prevent manifold drift during autoregressive rollouts.
  • Training with maximum-entropy exploration via random walks is claimed to learn generalized transition dynamics rather than memorizing specific expert trajectories.
  • Experiments across passive observation, active control, and abstract sequence settings suggest GNWM can serve as a causal discovery model that organizes continuous, predictable concepts into structured topological maps.
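The "grid snapping" idea in the key points can be illustrated with a minimal sketch: after each autoregressive prediction, the continuous latent is projected onto the nearest cell of a discrete 2D grid, so small per-step errors cannot accumulate into manifold drift. The names (`GRID_SIZE`, `predict_next`) and the rounding scheme below are illustrative assumptions, not details from the paper.

```python
import numpy as np

GRID_SIZE = 16  # assumed resolution of the discrete 2D grid

def snap_to_grid(z):
    """Round a continuous 2D latent in [0, 1)^2 to the nearest grid cell center."""
    cell = np.floor(np.clip(z, 0.0, 1.0 - 1e-9) * GRID_SIZE)
    return (cell + 0.5) / GRID_SIZE

def rollout(z0, predict_next, actions):
    """Autoregressive rollout with snapping after every step as error correction."""
    z = snap_to_grid(z0)
    traj = [z]
    for a in actions:
        # Each prediction is immediately re-quantized, so drift is bounded
        # by half a grid cell rather than compounding across steps.
        z = snap_to_grid(predict_next(z, a))
        traj.append(z)
    return traj
```

The design point is that quantization acts as a projection back onto the learned manifold: any predictor error smaller than half a cell is erased at every step.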

Abstract

We present the Global Neural World Model (GNWM), a self-stabilizing framework that achieves topological quantization through balanced continuous entropy constraints. Operating as a continuous, action-conditioned Joint-Embedding Predictive Architecture (JEPA), the GNWM maps environments onto a discrete 2D grid, enforcing translational equivariance without pixel-level reconstruction. Our results show this architecture prevents manifold drift during autoregressive rollouts by using grid "snapping" as a native error-correction mechanism. Furthermore, by training via maximum entropy exploration (random walks), the model learns generalized transition dynamics rather than memorizing specific expert trajectories. We validate the GNWM across passive observation, active agent control, and abstract sequence regimes, demonstrating its capacity to act not just as a spatial physics simulator, but as a causal discovery model capable of organizing continuous, predictable concepts into structured topological maps.
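To make the action-conditioned JEPA objective concrete, here is a minimal sketch of a latent-space prediction loss: a context encoder and an action-conditioned predictor are trained so the predicted latent matches a target encoder's latent of the next observation, with no pixel-level reconstruction. The dimensions, linear maps, and names (`W_enc`, `W_tgt`, `W_pred`) are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

D_OBS, D_LAT, D_ACT = 8, 4, 2                    # assumed toy dimensions
W_enc = rng.normal(size=(D_LAT, D_OBS))          # online (context) encoder
W_tgt = W_enc.copy()                             # target encoder (e.g. an EMA copy)
W_pred = rng.normal(size=(D_LAT, D_LAT + D_ACT)) # action-conditioned predictor

def jepa_loss(obs_t, action, obs_next):
    """Latent-space prediction error: predictor output vs. target latent.

    The loss lives entirely in embedding space; observations are never
    reconstructed at the pixel level.
    """
    z_t = W_enc @ obs_t
    z_pred = W_pred @ np.concatenate([z_t, action])
    z_next = W_tgt @ obs_next  # target branch; no gradient would flow here
    return float(np.mean((z_pred - z_next) ** 2))
```

Training on random-walk (maximum-entropy) trajectories, as the abstract describes, would feed this loss transitions sampled uniformly over actions rather than expert rollouts, so the predictor must model the full transition dynamics instead of memorizing a few paths.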