Zero-shot World Models Are Developmentally Efficient Learners

arXiv cs.AI / April 14, 2026


Key Points

  • The paper proposes a computational hypothesis called the Zero-shot Visual World Model (ZWM) to explain how young children achieve flexible physical understanding with very limited training data.
  • ZWM is built on three principles: a sparse temporally factored predictor that separates appearance from dynamics (see the sketch after this list), zero-shot estimation via approximate causal inference, and compositional inference to scale toward more complex abilities.
  • The authors report that ZWM can be learned from a single child’s first-person experience and then rapidly achieves competence across multiple physical understanding benchmarks.
  • Results are claimed to both match behavioral signatures of child development and produce brain-like internal representations, positioning the approach as a blueprint for data-efficient AI learning from human-scale data.

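The first principle, a predictor that decouples appearance from dynamics, can be made concrete with a minimal sketch. Everything below is an assumption for illustration: the class name FactoredPredictor, the linear encoder and decoder, and the latent sizes are placeholders, not the authors' actual architecture.

```python
# Hypothetical sketch of a temporally factored predictor; names and
# architecture choices are illustrative, not the paper's model.
import torch
import torch.nn as nn

class FactoredPredictor(nn.Module):
    """Splits each frame's latent into a static appearance code and a
    dynamics code; only the dynamics code is rolled forward in time."""

    def __init__(self, obs_dim=64, app_dim=16, dyn_dim=8):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, app_dim + dyn_dim)
        # Sparse transition: a small MLP acting on the dynamics code only.
        self.transition = nn.Sequential(
            nn.Linear(dyn_dim, 32), nn.ReLU(), nn.Linear(32, dyn_dim)
        )
        self.decoder = nn.Linear(app_dim + dyn_dim, obs_dim)
        self.app_dim = app_dim

    def forward(self, obs_t):
        z = self.encoder(obs_t)
        z_app, z_dyn = z[..., : self.app_dim], z[..., self.app_dim :]
        z_dyn_next = self.transition(z_dyn)  # dynamics evolve over time
        # Appearance is carried over unchanged: the "decoupling".
        return self.decoder(torch.cat([z_app, z_dyn_next], dim=-1))

# Training signal: next-frame prediction from first-person video frames.
model = FactoredPredictor()
obs_t, obs_t1 = torch.randn(4, 64), torch.randn(4, 64)
loss = nn.functional.mse_loss(model(obs_t), obs_t1)
loss.backward()
```

The design point the sketch makes explicit: only the small dynamics code is rolled forward, so the transition model stays sparse and cannot entangle what an object looks like with how it moves.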
Abstract

Young children demonstrate early abilities to understand their physical world, estimating depth, motion, object coherence, interactions, and many other aspects of physical scene understanding. Children are both data-efficient and flexible cognitive systems, creating competence despite extremely limited training data, while generalizing to myriad untrained tasks -- a major challenge even for today's best AI systems. Here we introduce a novel computational hypothesis for these abilities, the Zero-shot Visual World Model (ZWM). ZWM is based on three principles: a sparse temporally-factored predictor that decouples appearance from dynamics; zero-shot estimation through approximate causal inference; and composition of inferences to build more complex abilities. We show that ZWM can be learned from the first-person experience of a single child, rapidly generating competence across multiple physical understanding benchmarks. It also broadly recapitulates behavioral signatures of child development and builds brain-like internal representations. Our work presents a blueprint for efficient and flexible learning from human-scale data, advancing both a computational account for children's early physical understanding and a path toward data-efficient AI systems.
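The abstract's second and third principles, zero-shot estimation and composition of inferences, imply that task answers are read out from the trained predictor rather than learned per task. Below is a hedged sketch of one plausible readout in the violation-of-expectation style common in developmental benchmarks; the function names and the surprise-based decision rule are assumptions, not the paper's actual inference procedure.

```python
# Reuses the hypothetical FactoredPredictor from the sketch above.
import torch

@torch.no_grad()
def surprise_score(model, frames):
    """Mean next-frame prediction error over a clip of shape (T, obs_dim);
    higher means the model finds the clip more surprising."""
    preds = model(frames[:-1])  # predict frame t+1 from frame t
    return torch.nn.functional.mse_loss(preds, frames[1:]).item()

@torch.no_grad()
def pick_implausible(model, clip_a, clip_b):
    """Zero-shot binary judgment: flag whichever clip violates the
    model's expectations more, with no task-specific training."""
    a, b = surprise_score(model, clip_a), surprise_score(model, clip_b)
    return "a" if a > b else "b"
```

Composition, the third principle, would then amount to chaining such readouts, for example feeding a motion estimate derived from the dynamics code into a downstream contact judgment, rather than training a new model for each task.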