Frozen LLMs as Map-Aware Spatio-Temporal Reasoners for Vehicle Trajectory Prediction

arXiv cs.CV / 4/24/2026


Key Points

  • The paper proposes a framework to evaluate how well frozen large language models (LLMs) can reason about both dynamic traffic behaviors and static road-network topology for vehicle trajectory prediction.
  • It uses a traffic encoder to extract spatial scene features from observed agent trajectories and a lightweight CNN to encode local high-definition (HD) map information.
  • Scene features are converted into LLM-compatible tokens via a “reprogramming adapter,” while the LLM performs most of the prediction reasoning and a simple linear decoder outputs future trajectories.
  • The framework supports quantitative study of multimodal inputs—especially how map semantics affect prediction accuracy—and demonstrates broad generalizability across different LLM architectures with minimal adaptation.
  • Overall, it aims to provide a unified evaluation platform for understanding intrinsic LLM reasoning capability in autonomous-driving perception-and-prediction settings.
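The pipeline described above can be sketched roughly as follows. This is a minimal illustration, not the authors' code: the module sizes, the GRU-based traffic encoder, the raster map input, and the small transformer standing in for a frozen LLM are all assumptions chosen to keep the example self-contained.

```python
# Hypothetical sketch of the described pipeline; all names and sizes are
# illustrative assumptions, and a tiny frozen transformer stands in for the LLM.
import torch
import torch.nn as nn

class TrajectoryPredictor(nn.Module):
    def __init__(self, d_llm=256, horizon=30):
        super().__init__()
        # Traffic encoder: embeds observed (x, y) trajectory points.
        self.traffic_encoder = nn.GRU(input_size=2, hidden_size=64, batch_first=True)
        # Lightweight CNN for local HD-map rasters (1 channel, 64x64 here).
        self.map_cnn = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Reprogramming adapter: projects scene features into the LLM token space.
        self.adapter = nn.Linear(64 + 16, d_llm)
        # Stand-in for a frozen LLM: a transformer with gradients disabled.
        layer = nn.TransformerEncoderLayer(d_model=d_llm, nhead=4, batch_first=True)
        self.llm = nn.TransformerEncoder(layer, num_layers=2)
        for p in self.llm.parameters():
            p.requires_grad_(False)
        # Simple linear decoder producing the future (x, y) trajectory.
        self.horizon = horizon
        self.decoder = nn.Linear(d_llm, horizon * 2)

    def forward(self, traj, map_raster):
        _, h = self.traffic_encoder(traj)           # h: (1, B, 64)
        map_feat = self.map_cnn(map_raster)         # (B, 16)
        scene = torch.cat([h[-1], map_feat], dim=-1)
        tokens = self.adapter(scene).unsqueeze(1)   # (B, 1, d_llm)
        out = self.llm(tokens)[:, 0]                # frozen "reasoning" pass
        return self.decoder(out).view(-1, self.horizon, 2)

model = TrajectoryPredictor()
# 4 agents, 20 observed timesteps, a 64x64 single-channel map raster each.
pred = model(torch.randn(4, 20, 2), torch.randn(4, 1, 64, 64))
print(pred.shape)  # torch.Size([4, 30, 2])
```

Because only the adapter, encoders, and linear decoder are trainable, swapping in a different frozen backbone requires no change beyond the adapter's output dimension, which is what enables the cross-architecture comparison the paper describes.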

Abstract

Large language models (LLMs) have recently demonstrated strong reasoning capabilities and attracted increasing research attention in the field of autonomous driving (AD). However, safe application of LLMs to AD perception and prediction still requires a thorough understanding of both the dynamic traffic agents and the static road infrastructure. To this end, this study introduces a framework to evaluate the capability of LLMs in understanding the behaviors of dynamic traffic agents and the topology of road networks. The framework leverages frozen LLMs as the reasoning engine, employing a traffic encoder to extract spatial-level scene features from observed trajectories of agents, while a lightweight Convolutional Neural Network (CNN) encodes the local high-definition (HD) maps. To assess the intrinsic reasoning ability of LLMs, the extracted scene features are then transformed into LLM-compatible tokens via a reprogramming adapter. Since the prediction burden rests with the LLMs, a simple linear decoder suffices to output future trajectories. The framework enables a quantitative analysis of the influence of multi-modal information, especially the impact of map semantics on trajectory prediction accuracy, and allows seamless integration of frozen LLMs with minimal adaptation, thereby demonstrating strong generalizability across diverse LLM architectures and providing a unified platform for model evaluation.