AI Navigate

What Makes VLMs Robust? Towards Reconciling Robustness and Accuracy in Vision-Language Models

arXiv cs.CV / 3/16/2026


Key Points

  • The study shows adversarial robustness in Vision-Language Models is concentrated in shallow layers due to a low-frequency spectral bias and input-insensitive attention, challenging the assumption that deeper layers drive robustness.
  • Updates to deep layers tend to undermine both clean accuracy and robust generalization, indicating robustness varies non-uniformly across network depth.
  • They propose Adversarial Robustness Adaptation (R-Adapt), which freezes all pre-trained weights and adapts only the initial layers to balance robustness and clean accuracy.
  • R-Adapt supports training-free, model-guided, and data-driven paradigms, and generalizes to large VLMs such as LLaVA and Qwen-VL, maintaining strong robustness under attack.
  • The approach is validated on 18 datasets, achieving state-of-the-art performance under a range of adversarial attacks.

Abstract

Achieving adversarial robustness in Vision-Language Models (VLMs) inevitably compromises accuracy on clean data, presenting a long-standing and challenging trade-off. In this work, we revisit this trade-off by investigating a fundamental question: What makes VLMs robust? Through a detailed analysis of adversarially fine-tuned models, we examine how robustness mechanisms function internally and how they interact with clean accuracy. Our analysis reveals that adversarial robustness is not uniformly distributed across network depth. Instead, unexpectedly, it is primarily localized within the shallow layers, driven by a low-frequency spectral bias and input-insensitive attention patterns. Meanwhile, updates to the deep layers tend to undermine both clean accuracy and robust generalization. Motivated by these insights, we propose Adversarial Robustness Adaptation (R-Adapt), a simple yet effective framework that freezes all pre-trained weights and introduces minimal, insight-driven adaptations only in the initial layers. This design achieves an exceptional balance between adversarial robustness and clean accuracy. R-Adapt further supports training-free, model-guided, and data-driven paradigms, offering flexible pathways to seamlessly equip standard models with robustness. Extensive evaluations on 18 datasets and diverse tasks demonstrate our state-of-the-art performance under various attacks. Notably, R-Adapt generalizes efficiently to large vision-language models (e.g., LLaVA and Qwen-VL) to enhance their robustness. Our project page is available at https://summu77.github.io/R-Adapt.
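The core recipe described above is a depth-wise partition: all pre-trained weights stay frozen, and only the shallow blocks receive lightweight adaptations. A minimal sketch of that partitioning logic, assuming a generic transformer encoder; the layer names and the `num_shallow` cutoff are illustrative assumptions, not values from the paper:

```python
# Hypothetical sketch of R-Adapt's depth-wise split: only the first
# `num_shallow` (shallow) blocks are marked for adaptation, while all
# deeper blocks remain frozen with their pre-trained weights.

def partition_layers(layer_names, num_shallow):
    """Split an ordered list of layer names into (adaptable, frozen).

    Shallow layers (the first `num_shallow`) get adapters; the rest
    keep their pre-trained weights untouched.
    """
    adaptable = layer_names[:num_shallow]
    frozen = layer_names[num_shallow:]
    return adaptable, frozen

# Illustrative example: a 12-block vision encoder, adapting 3 shallow blocks.
layers = [f"vision.block_{i}" for i in range(12)]
adaptable, frozen = partition_layers(layers, num_shallow=3)
# adaptable -> ['vision.block_0', 'vision.block_1', 'vision.block_2']
```

In a real training loop this split would translate into setting `requires_grad = False` on every frozen parameter and attaching small adapter modules only to the shallow blocks; the exact adapter form used by R-Adapt is specified in the paper, not here.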