Hierarchically Robust Zero-shot Vision-language Models

arXiv cs.AI / 4/22/2026


Key Points

  • The paper addresses a weakness of vision-language models (VLMs) in zero-shot classification: they can be vulnerable to adversarial attacks.
  • It argues that prior robust fine-tuning methods that align fixed text embeddings with image embeddings can hurt both natural performance and robustness.
  • The authors propose a hierarchical adversarial fine-tuning framework that uses hierarchical embeddings and performs multi-level robust alignment between image and text modalities.
  • They introduce additional mechanisms to place visual embeddings at the appropriate depth in the class hierarchy and provide a theoretical link between hierarchy depth and the maximum feasible margin size.
  • Experiments on multiple datasets show that the method improves adversarial robustness; aligning across multiple hierarchy trees that share the same leaf labels further increases semantic variety.
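The multi-level alignment described above can be sketched as a weighted sum of per-level contrastive losses, where an image embedding is matched against text embeddings at each depth of the class hierarchy (leaf classes, then parent superclasses, and so on). The sketch below is illustrative only: the function name, the per-level weights, and the use of a plain softmax cross-entropy are assumptions, not the paper's actual formulation, and the adversarial perturbation step of the fine-tuning loop is omitted.

```python
import numpy as np

def multilevel_alignment_loss(image_emb, level_text_embs, level_labels, level_weights):
    """Hedged sketch of hierarchical image-text alignment.

    image_emb       : (d,) image embedding.
    level_text_embs : list of (C_l, d) arrays, one per hierarchy level
                      (e.g., leaf classes, then superclasses).
    level_labels    : correct class index at each level.
    level_weights   : illustrative per-level weights (assumed, not from the paper).
    """
    total = 0.0
    img = image_emb / np.linalg.norm(image_emb)
    for texts, label, w in zip(level_text_embs, level_labels, level_weights):
        # Cosine similarity between the image and every class prompt at this level.
        txt = texts / np.linalg.norm(texts, axis=1, keepdims=True)
        logits = txt @ img
        # Softmax cross-entropy against the correct class at this depth.
        logits = logits - logits.max()
        probs = np.exp(logits) / np.exp(logits).sum()
        total += w * (-np.log(probs[label]))
    return total

# Toy usage: two leaf classes ("cat", "car") rolling up to two
# superclasses ("mammal", "vehicle"); embeddings are made up.
leaf_texts = np.array([[1.0, 0.0], [0.0, 1.0]])
parent_texts = np.array([[0.9, 0.1], [0.1, 0.9]])
cat_image = np.array([1.0, 0.2])
loss = multilevel_alignment_loss(
    cat_image, [leaf_texts, parent_texts], [0, 0], [1.0, 0.5]
)
```

In an adversarial fine-tuning loop, a perturbed image would replace `cat_image`, so the loss penalizes misalignment at every hierarchy depth rather than at the leaf level alone.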

Abstract

Vision-Language Models (VLMs) can perform zero-shot classification but are susceptible to adversarial attacks. While robust fine-tuning improves their robustness, existing approaches align fixed text embeddings with an image embedding, sacrificing both natural performance and robustness. Robustness also degrades when a model faces adversarial attacks targeting superclasses (parent classes, e.g., mammal) in addition to their base (leaf) classes (e.g., cat). Thus, to enhance adversarial robustness and leverage the inherent hierarchical structure of the class space, we propose a novel adversarial fine-tuning framework based on hierarchical embeddings and multiple levels of adversarially robust alignment between the image and text modalities. Additional mechanisms place visual embeddings at the desired depth of the hierarchy, and we provide a theoretical connection between embedding depth in the hierarchy and the maximum viable margin size. Our model naturally realizes several margin sizes, boosting the generalization of the adversarial examples used for robustification. As different trees with different parent labels can share the same leaf labels, we also consider aligning over multiple trees to boost semantic variety. Experiments across several datasets are performed.