Safety, Security, and Cognitive Risks in World Models

arXiv cs.LG / 4/3/2026


Key Points

  • World models are increasingly used as learned simulators for autonomous robotics, vehicles, and agentic AI, but they create specific safety, security, and cognitive risks beyond standard ML failure modes.
  • The paper explains how adversaries can corrupt training data, poison latent representations, and leverage compounding rollout errors to trigger catastrophic failures in safety-critical deployments.
  • It highlights governance-relevant issues such as goal misgeneralisation, deceptive alignment, reward hacking, automation bias, and miscalibrated human trust when operators cannot effectively audit world-model predictions.
  • The authors propose a formal threat framing (including trajectory persistence and representational risk), define a five-profile attacker taxonomy, and extend existing frameworks (MITRE ATLAS and OWASP LLM Top 10) to cover the world-model stack.
  • Empirically, they demonstrate trajectory-persistent adversarial attacks, reporting A_1 = 2.26x amplification for a GRU-RSSM variant (reduced by 59.5% under adversarial fine-tuning), A_1 = 0.65x for a stochastic RSSM proxy, and confirmed non-zero action drift in a DreamerV3 checkpoint. They also outline mitigation directions spanning adversarial hardening, alignment engineering, NIST AI RMF/EU AI Act governance, and human-factors design.
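The amplification idea behind these attacks can be illustrated with a toy experiment. The sketch below is not the paper's method: it uses a hypothetical one-layer tanh recurrence in place of a GRU-RSSM, and assumes an A_1-style metric defined as final-state divergence between clean and perturbed rollouts, normalised by the initial perturbation size. It only shows how compounding rollout errors are measured, not the reported numbers.

```python
import numpy as np

def rollout(W, x0, steps):
    # Roll a toy nonlinear latent-dynamics model forward from x0.
    xs = [x0]
    for _ in range(steps):
        xs.append(np.tanh(W @ xs[-1]))
    return np.array(xs)

def amplification(W, x0, delta, steps):
    # Assumed A_1-style metric: divergence of the final latent state
    # between clean and perturbed rollouts, divided by the size of
    # the initial perturbation delta.
    clean = rollout(W, x0, steps)
    pert = rollout(W, x0 + delta, steps)
    return np.linalg.norm(pert[-1] - clean[-1]) / np.linalg.norm(delta)

rng = np.random.default_rng(0)
d = 16
# Entry scale chosen so the linearised dynamics can expand small errors.
W = rng.normal(scale=1.5 / np.sqrt(d), size=(d, d))
x0 = rng.normal(size=d)
delta = 1e-3 * rng.normal(size=d)  # small adversarial-style perturbation

A1 = amplification(W, x0, delta, steps=20)
print(f"A_1-style amplification over 20 steps: {A1:.2f}")
```

Values above 1.0 mean the rollout magnifies the injected perturbation rather than damping it, which is the failure mode the paper's trajectory-persistence framing targets.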

Abstract

World models -- learned internal simulators of environment dynamics -- are rapidly becoming foundational to autonomous decision-making in robotics, autonomous vehicles, and agentic AI. Yet this predictive power introduces a distinctive set of safety, security, and cognitive risks. Adversaries can corrupt training data, poison latent representations, and exploit compounding rollout errors to cause catastrophic failures in safety-critical deployments. World model-equipped agents are more capable of goal misgeneralisation, deceptive alignment, and reward hacking precisely because they can simulate the consequences of their own actions. Authoritative world model predictions further foster automation bias and miscalibrated human trust that operators lack the tools to audit. This paper surveys the world model landscape; introduces formal definitions of trajectory persistence and representational risk; presents a five-profile attacker capability taxonomy; and develops a unified threat model extending MITRE ATLAS and the OWASP LLM Top 10 to the world model stack. We provide an empirical proof-of-concept on trajectory-persistent adversarial attacks (GRU-RSSM: A_1 = 2.26x amplification, -59.5% reduction under adversarial fine-tuning; stochastic RSSM proxy: A_1 = 0.65x; DreamerV3 checkpoint: non-zero action drift confirmed). We illustrate risks through four deployment scenarios and propose interdisciplinary mitigations spanning adversarial hardening, alignment engineering, NIST AI RMF and EU AI Act governance, and human-factors design. We argue that world models must be treated as safety-critical infrastructure requiring the same rigour as flight-control software or medical devices.