AI Navigate

ExpressMind: A Multimodal Pretrained Large Language Model for Expressway Operation

arXiv cs.AI / 3/18/2026

📰 News · Developer Stack & Infrastructure · Models & Research

Key Points

  • ExpressMind is introduced as a multimodal pretrained LLM tailored for expressway operation, addressing the limitations of general LLMs in regulatory and causal reasoning for unconventional expressway scenarios.
  • The paper proposes a dual-layer pre-training paradigm based on self-supervised training and unsupervised learning, plus a Graph-Augmented RAG framework to dynamically index expressway knowledge.
  • It constructs the industry's first full-stack expressway dataset, including traffic knowledge texts, emergency reasoning chains, and annotated video events to tackle data scarcity.
  • A cross-modal encoder aligns dynamic feature sequences across video and text, and an RL-aligned Chain-of-Thought mechanism enforces consistency between model reasoning and expert problem-solving heuristics for incident handling.
  • Experiments on the new multimodal expressway benchmark show ExpressMind outperforms baselines in event detection, safety response generation, and complex traffic analysis, with code and data released at the provided URL.
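The Graph-Augmented RAG idea in the second bullet — dynamically indexing expressway knowledge so the model can retrieve relevant regulations and causal links — can be sketched as graph-based retrieval over knowledge triples. The toy graph, node names, and hop-limited BFS below are illustrative assumptions, not the paper's actual implementation:

```python
from collections import deque

# Toy expressway knowledge graph (illustrative triples, not from the paper):
# each edge is (subject, relation, object).
TRIPLES = [
    ("lane_closure", "caused_by", "vehicle_breakdown"),
    ("vehicle_breakdown", "requires", "tow_dispatch"),
    ("lane_closure", "triggers", "speed_limit_reduction"),
    ("speed_limit_reduction", "governed_by", "traffic_regulation_42"),
]

def build_graph(triples):
    """Adjacency list: node -> list of (relation, neighbor)."""
    graph = {}
    for s, r, o in triples:
        graph.setdefault(s, []).append((r, o))
        graph.setdefault(o, []).append((f"inv_{r}", s))  # inverse edge for traversal
    return graph

def retrieve_subgraph(graph, seed, hops=2):
    """BFS out to `hops` hops from the seed entity and return the touched
    triples, which would be injected into the LLM prompt as context."""
    seen, context = {seed}, []
    frontier = deque([(seed, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:
            continue
        for rel, nbr in graph.get(node, []):
            context.append((node, rel, nbr))
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, depth + 1))
    return context

graph = build_graph(TRIPLES)
facts = retrieve_subgraph(graph, "lane_closure", hops=2)
prompt_context = "\n".join(f"{s} --{r}--> {o}" for s, r, o in facts)
```

A query about a lane closure thus pulls in both its cause and the governing regulation two hops away, which is the kind of regulatory/causal context the summary says general LLMs miss.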

Abstract

Current expressway operation relies on rule-based, isolated models, which limits the ability to jointly analyze knowledge across different systems. Meanwhile, Large Language Models (LLMs) are increasingly applied in intelligent transportation, advancing traffic models from algorithmic to cognitive intelligence. However, general LLMs are unable to effectively understand the regulations and causal relationships of events in unconventional scenarios in the expressway field. Therefore, this paper constructs a pre-trained multimodal large language model (MLLM) for expressways, ExpressMind, which serves as the cognitive core for intelligent expressway operations. This paper constructs the industry's first full-stack expressway dataset, encompassing traffic knowledge texts, emergency reasoning chains, and annotated video events to overcome data scarcity. This paper proposes a dual-layer LLM pre-training paradigm based on self-supervised training and unsupervised learning. Additionally, this study introduces a Graph-Augmented RAG framework to dynamically index the expressway knowledge base. To enhance reasoning for expressway incident response strategies, we develop an RL-aligned Chain-of-Thought (RL-CoT) mechanism that enforces consistency between model reasoning and expert problem-solving heuristics for incident handling. Finally, ExpressMind integrates a cross-modal encoder to align dynamic feature sequences across the visual and textual channels, enabling it to understand traffic scenes in both video and image modalities. Extensive experiments on our newly released multimodal expressway benchmark demonstrate that ExpressMind comprehensively outperforms existing baselines in event detection, safety response generation, and complex traffic analysis. The code and data are available at: https://wanderhee.github.io/ExpressMind/.
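The RL-CoT mechanism described in the abstract rewards reasoning chains that match expert incident-handling heuristics. One common way to build such a reward for RL fine-tuning is to score a sampled chain-of-thought for coverage and ordering of expert steps. Everything below — the checklist, the keyword matching, the bonus weights — is a hypothetical sketch, not the paper's actual reward design:

```python
# Hypothetical expert heuristic for incident handling, distilled into an
# ordered checklist; the step names are illustrative, not from the paper.
EXPERT_STEPS = ["confirm_incident", "secure_scene", "notify_dispatch", "clear_lane"]

def extract_steps(chain_of_thought):
    """Map free-text reasoning onto checklist steps by keyword match
    (a cheap stand-in for a learned step classifier)."""
    keywords = {
        "confirm_incident": "confirm",
        "secure_scene": "cone",
        "notify_dispatch": "dispatch",
        "clear_lane": "tow",
    }
    text = chain_of_thought.lower()
    found = [(text.index(kw), step) for step, kw in keywords.items() if kw in text]
    return [step for _, step in sorted(found)]  # steps in order of appearance

def consistency_reward(chain_of_thought):
    """Fraction of expert steps covered, plus a bonus when they appear in the
    expert's order -- a shaped scalar reward usable with policy-gradient RL."""
    steps = extract_steps(chain_of_thought)
    coverage = len(set(steps) & set(EXPERT_STEPS)) / len(EXPERT_STEPS)
    in_order = [s for s in steps if s in EXPERT_STEPS]
    order_bonus = 0.5 if in_order == sorted(in_order, key=EXPERT_STEPS.index) else 0.0
    return coverage + order_bonus
```

A chain that confirms the incident, secures the scene, notifies dispatch, and then clears the lane scores strictly higher than one that jumps straight to towing, which is the consistency pressure the abstract attributes to RL-CoT.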
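The cross-modal encoder in the abstract aligns video and text feature sequences. A standard objective for this kind of alignment is a symmetric InfoNCE contrastive loss (CLIP-style), where the i-th video clip should score highest against its paired description in both directions. The sketch below is a minimal pure-Python illustration under that assumption; the paper's actual objective and temperature are not specified here:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce_loss(video_embs, text_embs, temperature=0.07):
    """Symmetric contrastive loss over a batch of paired video/text
    embeddings: matched pairs sit on the diagonal of the similarity matrix."""
    n = len(video_embs)
    sims = [[cosine(v, t) / temperature for t in text_embs] for v in video_embs]
    loss = 0.0
    for i in range(n):
        # video -> text direction: cross-entropy against the matched column
        row = sims[i]
        loss += -row[i] + math.log(sum(math.exp(s) for s in row))
        # text -> video direction: cross-entropy against the matched row
        col = [sims[j][i] for j in range(n)]
        loss += -col[i] + math.log(sum(math.exp(s) for s in col))
    return loss / (2 * n)
```

Correctly paired batches yield a lower loss than shuffled ones, so minimizing it pulls each clip's features toward its own description and away from the others — the alignment behavior the abstract credits to the cross-modal encoder.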