General-purpose LLMs as Models of Human Driver Behavior: The Case of Simplified Merging

arXiv cs.AI / 4/14/2026


Key Points

  • The paper evaluates whether general-purpose LLMs can act as standalone models of human driver behavior in a simplified 1D merging scenario used for virtual AV safety assessment.
  • It embeds two closed-loop LLM driver agents—OpenAI o3 and Google Gemini 2.5 Pro—and compares their quantitative/qualitative behavior against human driving data.
  • The LLMs reproduce some human-like traits, including intermittent control and tactical dependencies on spatial cues.
  • However, both models fail to consistently capture human responses to dynamic velocity cues, and safety performance diverges sharply between the two models.
  • A prompt ablation study shows that prompt components act as model-specific inductive biases that do not transfer across LLMs, implying that carefully tuned prompts port poorly between models and raising validity and failure-mode concerns.

Abstract

Human behavior models are essential as behavior references and for simulating human agents in virtual safety assessment of automated vehicles (AVs), yet current models face a trade-off between interpretability and flexibility. General-purpose large language models (LLMs) offer a promising alternative: a single model potentially deployable without parameter fitting across diverse scenarios. However, what LLMs can and cannot capture about human driving behavior remains poorly understood. We address this gap by embedding two general-purpose LLMs (OpenAI o3 and Google Gemini 2.5 Pro) as standalone, closed-loop driver agents in a simplified one-dimensional merging scenario and comparing their behavior against human data using quantitative and qualitative analyses. Both models reproduce human-like intermittent operational control and tactical dependencies on spatial cues. However, neither consistently captures the human response to dynamic velocity cues, and safety performance diverges sharply between models. A systematic prompt ablation study reveals that prompt components act as model-specific inductive biases that do not transfer across LLMs. These findings suggest that general-purpose LLMs could potentially serve as standalone, ready-to-use human behavior models in AV evaluation pipelines, but future research is needed to better understand their failure modes and ensure their validity as models of human driving behavior.
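To make the setup concrete, the closed loop described in the abstract (simulator state, text prompt, LLM action, next state) can be sketched as below. This is an illustrative reconstruction, not the paper's code: the state encoding, prompt wording, query rate, and the heuristic standing in for the o3/Gemini 2.5 Pro call are all assumptions.

```python
# Minimal sketch of a closed-loop "LLM as driver agent" in a simplified 1D
# merging scenario. All names, prompts, and numbers are illustrative
# assumptions; the paper's actual prompts and simulator are not reproduced.
from dataclasses import dataclass


@dataclass
class State:
    ego_pos: float    # ego vehicle position along the merge lane (m)
    ego_vel: float    # ego velocity (m/s)
    other_pos: float  # merging vehicle position (m)
    other_vel: float  # merging vehicle velocity (m/s)


def render_prompt(state: State) -> str:
    """Encode the simulator state as text, as one might for an LLM driver."""
    gap = state.other_pos - state.ego_pos
    return (f"You are driving at {state.ego_vel:.1f} m/s, {gap:.1f} m "
            f"behind a merging car going {state.other_vel:.1f} m/s. "
            "Reply with an acceleration in m/s^2.")


def llm_policy(prompt: str) -> float:
    """Stand-in for an o3 or Gemini 2.5 Pro API call (hypothetical
    heuristic): brake when the gap stated in the prompt is small."""
    gap = float(prompt.split("m/s, ")[1].split(" m ")[0])
    return -2.0 if gap < 10.0 else 0.0


def step(state: State, accel: float, dt: float = 0.5) -> State:
    """Advance the 1D kinematics. Intermittent control arises naturally
    because the agent is only re-queried every dt seconds."""
    return State(
        ego_pos=state.ego_pos + state.ego_vel * dt,
        ego_vel=max(0.0, state.ego_vel + accel * dt),
        other_pos=state.other_pos + state.other_vel * dt,
        other_vel=state.other_vel,
    )


def rollout(state: State, n_steps: int = 20) -> list[State]:
    """Closed loop: state -> prompt -> (stubbed) LLM action -> simulator."""
    trajectory = [state]
    for _ in range(n_steps):
        state = step(state, llm_policy(render_prompt(state)))
        trajectory.append(state)
    return trajectory
```

Swapping `llm_policy` for a real model call turns this into the paper's style of evaluation: the resulting trajectories can then be compared against human driving data on the same scenario.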