EgoDyn-Bench: Evaluating Ego-Motion Understanding in Vision-Centric Foundation Models for Autonomous Driving

arXiv cs.CV / April 28, 2026


Key Points

  • The paper introduces EgoDyn-Bench, a benchmark designed to test whether vision-centric foundation models can semantically understand ego-motion physics in autonomous-driving settings.
  • Using a deterministic oracle to map continuous vehicle kinematics to discrete motion concepts, the authors separate a model's "physical logic" from its visual perception, making it possible to diagnose where failures occur (see the sketch after this list).
  • A large audit across 20+ models—including closed-source MLLMs, open-source VLMs at multiple scales, and specialized VLAs—finds a consistent “Perception Bottleneck,” where models’ physical concepts do not accurately align with visual observations and often underperform geometric non-learned baselines.
  • The failure is structural, persisting across model scales and domain-specific training; adding explicit trajectory encodings significantly improves physical consistency, suggesting that current systems derive ego-motion logic mainly from the language modality while visual inputs add little signal.
  • The authors propose EgoDyn-Bench as a standardized diagnostic tool and outline a practical path toward physically aligned embodied AI by explicitly integrating trajectory/kinematic information.
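
For intuition, here is a minimal sketch of what such a kinematics-to-concept oracle could look like. The thresholds, label vocabulary, and function name are illustrative assumptions; the paper's actual oracle rules are not reproduced in this summary.

```python
# Hypothetical oracle: thresholds and labels are assumptions for illustration,
# not the rules used by EgoDyn-Bench.
SPEED_EPS = 0.5      # m/s: below this, treat the vehicle as stopped
ACCEL_EPS = 0.3      # m/s^2: |accel| above this counts as accel/decel
YAW_RATE_EPS = 0.05  # rad/s: |yaw rate| above this counts as a turn

def motion_oracle(speed: float, accel: float, yaw_rate: float) -> tuple[str, str]:
    """Deterministically map continuous kinematics to discrete motion concepts."""
    longitudinal = (
        "stopped" if speed < SPEED_EPS
        else "accelerating" if accel > ACCEL_EPS
        else "decelerating" if accel < -ACCEL_EPS
        else "constant speed"
    )
    lateral = (
        "turning left" if yaw_rate > YAW_RATE_EPS
        else "turning right" if yaw_rate < -YAW_RATE_EPS
        else "going straight"
    )
    return longitudinal, lateral

print(motion_oracle(speed=8.2, accel=1.1, yaw_rate=-0.12))
# -> ('accelerating', 'turning right')
```

Because the mapping is deterministic, ground-truth concept labels come for free from logged kinematics, which is what lets the benchmark separate a model's physical logic from its visual perception.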

Abstract

While Vision-Language Models (VLMs) have advanced high-level reasoning in autonomous driving, their ability to ground this reasoning in the underlying physics of ego-motion remains poorly understood. We introduce EgoDyn-Bench, a diagnostic benchmark for evaluating the semantic ego-motion understanding of vision-centric foundation models. By mapping continuous vehicle kinematics to discrete motion concepts via a deterministic oracle, we decouple a model's internal physical logic from its visual perception. Our large-scale empirical audit spanning 20+ models, including closed-source MLLMs, open-source VLMs across multiple scales, and specialized VLAs, identifies a significant Perception Bottleneck: while models exhibit logical physical concepts, they consistently fail to accurately align them with visual observations, frequently underperforming classical non-learned geometric baselines. This failure persists across model scales and domain-specific training, indicating a structural deficit in how current architectures couple visual perception with physical reasoning. We demonstrate that providing explicit trajectory encodings substantially restores physical consistency across all evaluated models, revealing a functional disentanglement between vision and language: ego-motion logic is derived almost exclusively from the language modality, while visual observations contribute negligible additional signal. This structural finding provides a standardized diagnostic framework and a practical pathway toward physically aligned embodied AI.

Keywords: Ego-motion · Physical Reasoning · Foundation Models
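
The abstract does not specify the format of the "explicit trajectory encodings"; one plausible minimal version, shown below purely as an assumption, serializes past ego waypoints into text that is appended to the model's prompt alongside the camera frames.

```python
def encode_trajectory(waypoints: list[tuple[float, float]], dt: float = 0.5) -> str:
    """Serialize past ego waypoints (x, y in meters, ego frame) into prompt text.

    Hypothetical format; the paper's actual encoding may differ.
    """
    rows = [
        f"t={(i - len(waypoints) + 1) * dt:+.1f}s: x={x:+.1f} m, y={y:+.1f} m"
        for i, (x, y) in enumerate(waypoints)
    ]
    return "Ego trajectory (ego frame, oldest first):\n" + "\n".join(rows)

# Example: four waypoints sampled at 0.5 s intervals, ending at the current frame.
print(encode_trajectory([(-11.1, -1.2), (-7.6, -0.5), (-3.9, -0.1), (0.0, 0.0)]))
```

Stating the trajectory in language is the intervention the authors report as restoring physical consistency across all evaluated models, which is the evidence behind their claim that ego-motion logic lives almost entirely in the language modality.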