AI Navigate

A Survey of Reasoning in Autonomous Driving Systems: Open Challenges and Emerging Paradigms

arXiv cs.AI / 3/13/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The article argues that robust reasoning, not perception alone, is the primary bottleneck for high-level autonomous driving; current systems handle structured environments but struggle in long-tail scenarios and complex social interactions that require human-like judgment.
  • It proposes a Cognitive Hierarchy to decompose driving tasks by cognitive and interactive complexity and derives seven core reasoning challenges, including the responsiveness-reasoning trade-off and social-game reasoning.
  • It reviews both system-centric agent architectures and evaluation practices, highlighting a trend toward holistic, interpretable glass-box agents and improved validation methods.
  • It highlights a fundamental tension between the high-latency, deliberative reasoning of LLMs and the millisecond-scale safety requirements of vehicle control, calling for verifiable neuro-symbolic architectures and robust reasoning under uncertainty.

Abstract

The development of high-level autonomous driving (AD) is shifting from perception-centric limitations to a more fundamental bottleneck, namely, a deficit in robust and generalizable reasoning. Although current AD systems manage structured environments, they consistently falter in long-tail scenarios and complex social interactions that require human-like judgment. Meanwhile, the advent of large language and multimodal models (LLMs and MLLMs) presents a transformative opportunity to integrate a powerful cognitive engine into AD systems, moving beyond pattern matching toward genuine comprehension. However, a systematic framework to guide this integration is critically lacking. To bridge this gap, we provide a comprehensive review of this emerging field and argue that reasoning should be elevated from a modular component to the system's cognitive core. Specifically, we first propose a novel Cognitive Hierarchy to decompose the monolithic driving task according to its cognitive and interactive complexity. Building on this, we further derive and systematize seven core reasoning challenges, such as the responsiveness-reasoning trade-off and social-game reasoning. Furthermore, we conduct a dual-perspective review of the state-of-the-art, analyzing both system-centric approaches to architecting intelligent agents and evaluation-centric practices for their validation. Our analysis reveals a clear trend toward holistic and interpretable "glass-box" agents. In conclusion, we identify a fundamental and unresolved tension between the high-latency, deliberative nature of LLM-based reasoning and the millisecond-scale, safety-critical demands of vehicle control. For future work, a primary objective is to bridge the symbolic-to-physical gap by developing verifiable neuro-symbolic architectures, robust reasoning under uncertainty, and scalable models for implicit social negotiation.
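The tension the abstract identifies between slow, deliberative LLM reasoning and millisecond-scale vehicle control is often addressed with a two-tier design: a fast reactive layer runs every control tick, while the deliberative planner delivers plans asynchronously and with latency. The sketch below is a minimal, illustrative simulation of that idea; all names, latency figures, and actions are hypothetical and not taken from the survey.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative two-tier control sketch (hypothetical numbers):
# a deliberative planner (stand-in for an LLM) finishes only after many
# control ticks, while a reactive safety layer answers on every tick.

PLANNER_LATENCY_TICKS = 50   # deliberative reasoning ~ hundreds of ms
CONTROL_PERIOD_MS = 10       # vehicle control loop ~ milliseconds

@dataclass
class Plan:
    issued_at: int           # tick at which the planner finished
    action: str              # high-level manoeuvre, e.g. "yield_to_pedestrian"

def reactive_layer(obstacle_ahead: bool) -> str:
    """Millisecond-scale fallback: guarantees a safe action, no deliberation."""
    return "brake" if obstacle_ahead else "keep_lane"

def control_step(tick: int, latest_plan: Optional[Plan], obstacle_ahead: bool) -> str:
    """Each tick: prefer a sufficiently fresh deliberative plan, else fall back."""
    if latest_plan is not None and tick - latest_plan.issued_at < 2 * PLANNER_LATENCY_TICKS:
        # The plan is recent enough to trust, but the reactive layer
        # still overrides it whenever immediate safety demands it.
        if obstacle_ahead:
            return "brake"
        return latest_plan.action
    return reactive_layer(obstacle_ahead)

# Simulate 120 control ticks; the planner's first plan arrives only at tick 50,
# and an obstacle appears at tick 70.
actions = []
plan = None
for tick in range(120):
    if tick == PLANNER_LATENCY_TICKS:
        plan = Plan(issued_at=tick, action="yield_to_pedestrian")
    actions.append(control_step(tick, plan, obstacle_ahead=(tick == 70)))

print(actions[0], actions[60], actions[70])
# → keep_lane yield_to_pedestrian brake
```

Before the first plan arrives, every tick falls through to the reactive layer; once a fresh plan exists it is followed, except where the safety layer overrides it. This is only one way to frame the trade-off; the survey's call for verifiable neuro-symbolic architectures points at more principled versions of this arbitration.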