Agentic World Modeling: Foundations, Capabilities, Laws, and Beyond

arXiv cs.AI / 4/27/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper argues that as AI agents shift from text generation to sustained goal achievement, predictive environment “world models” become a core bottleneck for tasks like acting in the real world, navigating software, coordinating with others, and running experiments.
  • It proposes a “levels × laws” taxonomy that classifies world models by capability level (L1 Predictor, L2 Simulator, L3 Evolver) and by governing-law regime (physical, digital, social, scientific), explaining how these dimensions shape constraints and failure points.
  • The authors synthesize 400+ related works and analyze 100+ representative systems across domains such as model-based reinforcement learning, video generation, web/GUI agents, multi-agent social simulation, and AI-driven scientific discovery.
  • They compare methods, failure modes, and evaluation practices across level–regime combinations, and recommend decision-centric evaluation principles plus a minimal reproducible evaluation package.
  • The work provides architectural guidance and identifies open problems and governance challenges, aiming to connect previously separate research communities and move toward world models that can simulate and eventually reshape agent environments.
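The two-axis taxonomy above can be pictured as a 3 × 4 grid in which each surveyed system occupies one cell. A minimal sketch, assuming nothing beyond the level and regime names given in the paper (the enum layout and the example placement are illustrative, not from the paper):

```python
from enum import Enum

class Level(Enum):
    """Capability axis: what the world model can do."""
    L1_PREDICTOR = 1   # learns one-step local transition operators
    L2_SIMULATOR = 2   # composes them into multi-step, action-conditioned rollouts
    L3_EVOLVER = 3     # autonomously revises its own model when predictions fail

class Regime(Enum):
    """Governing-law axis: which constraints the model must satisfy."""
    PHYSICAL = "physical"
    DIGITAL = "digital"
    SOCIAL = "social"
    SCIENTIFIC = "scientific"

# The full grid has len(Level) * len(Regime) = 12 level-regime cells.
# A hypothetical placement: a video-prediction model used for robot planning
# would sit at the (L2, physical) cell.
example_cell = (Level.L2_SIMULATOR, Regime.PHYSICAL)
```

Framing systems as cells in this grid is what lets the paper compare failure modes and evaluation practices across otherwise disconnected communities.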

Abstract

As AI systems move from generating text to accomplishing goals through sustained interaction, the ability to model environment dynamics becomes a central bottleneck. Agents that manipulate objects, navigate software, coordinate with others, or design experiments require predictive environment models, yet the term "world model" carries different meanings across research communities. We introduce a "levels × laws" taxonomy organized along two axes. The first defines three capability levels: L1 Predictor, which learns one-step local transition operators; L2 Simulator, which composes them into multi-step, action-conditioned rollouts that respect domain laws; and L3 Evolver, which autonomously revises its own model when predictions fail against new evidence. The second identifies four governing-law regimes: physical, digital, social, and scientific. These regimes determine what constraints a world model must satisfy and where it is most likely to fail. Using this framework, we synthesize over 400 works and summarize more than 100 representative systems spanning model-based reinforcement learning, video generation, web and GUI agents, multi-agent social simulation, and AI-driven scientific discovery. We analyze methods, failure modes, and evaluation practices across level–regime pairs, propose decision-centric evaluation principles and a minimal reproducible evaluation package, and outline architectural guidance, open problems, and governance challenges. The resulting roadmap connects previously isolated communities and charts a path from passive next-step prediction toward world models that can simulate, and ultimately reshape, the environments in which agents operate.
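The L1 → L2 → L3 progression in the abstract can be made concrete on a toy 1-D point mass: an L1 predictor is a one-step operator, an L2 simulator iterates it into an action-conditioned rollout, and an L3 evolver adjusts the model's parameters when its predictions disagree with observed transitions. This is a hypothetical sketch; the class names, the drag-coefficient dynamics, and the update rule are assumptions for illustration, not the paper's method.

```python
class L1Predictor:
    """L1: a one-step local transition operator s' = f(s, a).
    State is (position, velocity); action is a one-tick acceleration."""
    def __init__(self, drag=0.0):
        self.drag = drag  # assumed dynamics parameter the model must get right

    def step(self, state, action):
        pos, vel = state
        vel = (1.0 - self.drag) * vel + action
        return (pos + vel, vel)

def rollout(predictor, state, actions):
    """L2: compose the one-step operator into a multi-step,
    action-conditioned rollout (trajectory of len(actions) + 1 states)."""
    trajectory = [state]
    for a in actions:
        state = predictor.step(state, a)
        trajectory.append(state)
    return trajectory

class L3Evolver:
    """L3: revise the model when its predictions fail against new evidence."""
    def __init__(self, predictor, lr=0.5, tol=1e-3):
        self.predictor, self.lr, self.tol = predictor, lr, tol

    def observe(self, state, action, observed_next):
        predicted = self.predictor.step(state, action)
        error = predicted[1] - observed_next[1]  # velocity prediction error
        if abs(error) > self.tol and state[1] != 0:
            # crude proportional correction of the drag coefficient:
            # error = (true_drag - model_drag) * vel, so error / vel points
            # the parameter toward the true value
            self.predictor.drag += self.lr * error / state[1]
        return abs(error)
```

A model initialized with the wrong drag coefficient and fed transitions from a "true" environment with drag 0.1 converges toward that value after a handful of `observe` calls, which is the L3 loop in miniature: predict, compare against evidence, revise.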