Drive My Way: Preference Alignment of Vision-Language-Action Model for Personalized Driving
arXiv cs.RO · March 27, 2026
Key Points
- The paper proposes Drive My Way (DMW), a personalized Vision-Language-Action (VLA) framework for autonomous driving that adapts to individual long-term driving habits rather than using generic objectives or fixed driving modes.
- DMW learns a user embedding from a multi-driver personalized dataset and conditions its planning policy on this embedding to represent each driver’s style across varied scenarios.
- It combines user embeddings with natural-language instructions to incorporate both long-term preference alignment and real-time intent from the driver.
- Closed-loop experiments on the Bench2Drive benchmark show improved adaptation to style-related instructions, and user studies indicate the resulting behaviors are recognizable as matching each driver’s own style.
- The authors release the data and code to support reproducibility and further research (dmw-cvpr.github.io).
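The conditioning pattern the key points describe, combining a long-term user embedding with a real-time instruction embedding, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: DMW conditions a full VLA planning policy, whereas here a single linear layer with hypothetical names (`plan`, `scene_feat`, `user_emb`, `instr_emb`) stands in for the planning head.

```python
def plan(scene_feat, user_emb, instr_emb, weights):
    """Toy planning head conditioned on driver identity and instruction.

    scene_feat: features of the current driving scene
    user_emb:   long-term per-driver style embedding (learned offline)
    instr_emb:  embedding of the driver's real-time language instruction
    weights:    rows of the linear planning head (one row per output dim)
    """
    # Concatenate scene features with both conditioning signals into one
    # input vector, so the same policy produces driver-specific plans.
    x = scene_feat + user_emb + instr_emb  # list concatenation
    # A single linear map standing in for the VLA planning policy.
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]
```

Swapping the `user_emb` passed to the same policy changes the output plan, which is the mechanism that lets one model represent many drivers' styles.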