AI Navigate

Zero-Shot Cross-City Generalization in End-to-End Autonomous Driving: Self-Supervised versus Supervised Representations

arXiv cs.CV / March 13, 2026


Key Points

  • The paper investigates zero-shot cross-city generalization in end-to-end trajectory planning and compares self-supervised versus supervised visual representations.
  • It integrates self-supervised backbones (I-JEPA, DINOv2, and MAE) into planning frameworks and evaluates on nuScenes and NAVSIM with strict geographic splits.
  • Open-loop results show large generalization gaps for supervised backbones when transferring from Boston to Singapore (L2 displacement ratio 9.77x, collision ratio 19.43x), which self-supervised pretraining reduces to 1.20x and 0.75x respectively.
  • Closed-loop results indicate that self-supervised pretraining improves PDMS by up to 4 percent for every single-city training setting, indicating better cross-city robustness.
  • The authors conclude that representation learning strongly influences cross-city planning robustness and that zero-shot geographic transfer should be a standard evaluation test for end-to-end autonomous driving systems.
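The generalization gaps above are expressed as ratios of a metric evaluated on an unseen city to the same metric on the training city. A minimal sketch of that computation is below; the function name and the per-city metric values are illustrative (not from the paper), chosen only so the ratio matches the quoted 9.77x figure:

```python
# Hypothetical sketch of the cross-city generalization ratios quoted above.
# Names and the individual metric values are illustrative, not from the paper.

def generalization_ratio(cross_city_metric: float, in_city_metric: float) -> float:
    """Ratio of a planning error metric on an unseen city to the same
    metric on the training city; 1.0 means no generalization gap."""
    return cross_city_metric / in_city_metric

# Illustrative L2 displacement errors (meters) for a Boston-trained planner:
supervised_ratio = generalization_ratio(
    cross_city_metric=2.345,  # evaluated on Singapore (unseen, left-side driving)
    in_city_metric=0.240,     # evaluated on held-out Boston data
)
print(round(supervised_ratio, 2))  # ≈ 9.77, the inflation reported for the supervised backbone
```

A ratio near 1.0 (e.g. the 1.20x reported for the self-supervised backbone) means the planner's error barely grows on the unseen city, which is why the paper uses these ratios as its headline measure of cross-city robustness.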

Abstract

End-to-end autonomous driving models are typically trained on multi-city datasets using supervised ImageNet-pretrained backbones, yet their ability to generalize to unseen cities remains largely unexamined. When training and evaluation data are geographically mixed, models may implicitly rely on city-specific cues, masking failure modes that would occur under real domain shifts when generalizing to new locations. In this work we investigate zero-shot cross-city generalization in end-to-end trajectory planning and ask whether self-supervised visual representations improve transfer across cities. We conduct a comprehensive study by integrating self-supervised backbones (I-JEPA, DINOv2, and MAE) into planning frameworks. We evaluate performance under strict geographic splits on nuScenes in the open-loop setting and on NAVSIM in the closed-loop evaluation protocol. Our experiments reveal a substantial generalization gap when transferring models relying on traditional supervised backbones across cities with different road topologies and driving conventions, particularly when transferring from right-side to left-side driving environments. Self-supervised representation learning reduces this gap. In open-loop evaluation, a supervised backbone exhibits severe inflation when transferring from Boston to Singapore (L2 displacement ratio 9.77x, collision ratio 19.43x), whereas domain-specific self-supervised pretraining reduces this to 1.20x and 0.75x respectively. In closed-loop evaluation, self-supervised pretraining improves PDMS by up to 4 percent for all single-city training cities. These results show that representation learning strongly influences the robustness of cross-city planning and establish zero-shot geographic transfer as a necessary test for evaluating end-to-end autonomous driving systems.