VehicleMemBench: An Executable Benchmark for Multi-User Long-Term Memory in In-Vehicle Agents

arXiv cs.CL · March 26, 2026


Key Points

  • VehicleMemBench is introduced as an executable, multi-user long-context memory benchmark for in-vehicle agents, addressing limitations of prior single-user, static QA benchmarks.
  • The benchmark uses an in-vehicle simulation where evaluations are objective and reproducible by comparing post-action environment states to predefined target states without relying on LLM-judged or human scoring.
  • Each sample contains over 80 historical memory events, and the benchmark provides 23 tool modules, explicitly testing temporal preference evolution, inter-user conflict handling, and tool-interactive decision making.
  • Experiments indicate strong models can handle direct instructions but degrade on scenarios requiring memory evolution, especially when user preferences change dynamically.
  • The study finds even advanced memory systems struggle with domain-specific memory needs in this setting, motivating more robust, specialized memory management for long-term adaptive driving companion agents.
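The state-comparison evaluation described above can be sketched in a few lines: an episode passes only if the simulated vehicle's post-action state matches every field of the predefined target state. This is an illustrative sketch, not code from the VehicleMemBench release; all field names (`hvac.driver_temp`, etc.) are hypothetical.

```python
def evaluate_episode(post_state: dict, target_state: dict) -> bool:
    """Pass iff every field the target specifies matches the post-action state.

    Fields the target does not mention (e.g. media volume) are ignored, so the
    check is objective and reproducible without an LLM judge or human scorer.
    """
    return all(post_state.get(key) == value for key, value in target_state.items())

# Hypothetical episode: the agent must apply the driver's *current* preferred
# temperature (22) after it evolved from an earlier value of 24.
target = {"hvac.driver_temp": 22, "seat.heater": "off"}
after_action = {"hvac.driver_temp": 22, "seat.heater": "off", "media.volume": 8}

print(evaluate_episode(after_action, target))   # matches the target -> True
print(evaluate_episode({"hvac.driver_temp": 24}, target))  # stale preference -> False
```

Because the check only compares environment states, a run either satisfies the target or it does not, which is what makes the benchmark's scoring deterministic.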

Abstract

With the growing demand for intelligent in-vehicle experiences, vehicle-based agents are evolving from simple assistants to long-term companions. This evolution requires agents to continuously model multi-user preferences and make reliable decisions in the face of inter-user preference conflicts and changing habits over time. However, existing benchmarks are largely limited to single-user, static question-answer settings, failing to capture the temporal evolution of preferences and the multi-user, tool-interactive nature of real vehicle environments. To address this gap, we introduce VehicleMemBench, a multi-user long-context memory benchmark built on an executable in-vehicle simulation environment. The benchmark evaluates tool use and memory by comparing the post-action environment state with a predefined target state, enabling objective and reproducible evaluation without LLM-based or human scoring. VehicleMemBench includes 23 tool modules, and each sample contains over 80 historical memory events. Experiments show that powerful models perform well on direct instruction tasks but struggle in scenarios involving memory evolution, particularly when user preferences change dynamically. Even advanced memory systems struggle to handle domain-specific memory requirements in this environment. These findings highlight the need for more robust and specialized memory management mechanisms to support long-term adaptive decision-making in real-world in-vehicle systems. To facilitate future research, we release the data and code.