When Negation Is a Geometry Problem in Vision-Language Models

arXiv cs.CV / 3/24/2026


Key Points

  • Vision-language embedding models like CLIP are shown to struggle with interpreting negation in text queries (e.g., failing to properly handle “no” in “a blue shirt with no logos”).
  • Prior data-centric fixes using synthetic negation datasets are criticized for relying on retrieval metrics that may not actually measure whether negation is truly understood.
  • The paper proposes an alternative evaluation approach using multimodal LLMs as judges to answer yes/no content questions, aiming to more reliably assess negation understanding.
  • It presents evidence that a “negation direction” exists in CLIP’s embedding space and demonstrates test-time steering via representation engineering to improve negation-aware behavior without fine-tuning.
  • The study evaluates negation performance on out-of-distribution image-text samples to examine generalization under distribution shifts.
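The judge-based evaluation described above can be sketched as a small scoring loop: for each negated query, take the retrieved image and ask a multimodal LLM a yes/no question about the negated object, counting "no" answers as successes. Everything here is an illustrative assumption, not the paper's protocol: `judge_negation`, the question template, and `toy_judge` (a stand-in that reads ground-truth object labels instead of calling a real MLLM) are all hypothetical.

```python
def judge_negation(retrieved, negated_objects, ask_mllm):
    """Fraction of retrievals where the judge confirms the negated
    object is absent (answer 'no' counts as a success)."""
    correct = 0
    for image, obj in zip(retrieved, negated_objects):
        answer = ask_mllm(image, f"Does the image contain {obj}?")
        correct += (answer.strip().lower() == "no")
    return correct / len(retrieved)

def toy_judge(image, question):
    """Hypothetical stand-in for an MLLM judge: toy images are dicts
    listing their objects, so the answer is a simple membership check."""
    obj = question.removeprefix("Does the image contain ").rstrip("?")
    return "yes" if obj in image["objects"] else "no"

# One correct retrieval (no logo) and one failure (logo present).
retrieved = [{"objects": {"shirt"}}, {"objects": {"shirt", "logo"}}]
acc = judge_negation(retrieved, ["logo", "logo"], toy_judge)
print(acc)  # 0.5: one retrieved image still contains the negated object
```

The point of this framing is that success is defined by image content (does the negated object actually appear?), not by rank-based retrieval scores, which the paper argues can be satisfied without genuine negation understanding.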

Abstract

Joint Vision-Language Embedding models such as CLIP typically fail to understand negation in text queries, for example overlooking the "no" in the query "a plain blue shirt with no logos". Prior work has largely addressed this limitation through data-centric approaches, fine-tuning CLIP on large-scale synthetic negation datasets. However, these efforts are commonly evaluated with retrieval-based metrics that cannot reliably reflect whether negation is actually understood. In this paper, we identify two key limitations of such metrics and investigate an alternative evaluation framework based on Multimodal LLMs-as-a-judge, which typically excel at simple yes/no questions about image content and therefore provide a fair assessment of negation understanding in CLIP models. We then ask whether a direction associated with negation already exists in the CLIP embedding space. We find evidence that such a direction exists, and show that it can be manipulated through test-time intervention via representation engineering to steer CLIP toward negation-aware behavior without any fine-tuning. Finally, we test negation understanding on uncommon image-text samples to evaluate generalization under distribution shift.
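The steering idea in the abstract can be sketched as follows. This is a minimal toy, not the paper's implementation: `embed_text` is a stand-in for CLIP's text encoder (deterministic random unit vectors keyed by the string), the caption pairs are invented examples, and the steering weight `alpha` is an illustrative choice.

```python
import hashlib

import numpy as np

def embed_text(texts, dim=64):
    """Stand-in for CLIP's text encoder: deterministic unit vectors per string."""
    out = []
    for t in texts:
        seed = int.from_bytes(hashlib.sha256(t.encode()).digest()[:4], "big")
        v = np.random.default_rng(seed).standard_normal(dim)
        out.append(v / np.linalg.norm(v))
    return np.stack(out)

def negation_direction(pairs):
    """Estimate a 'negation direction' as the mean difference between
    negated and affirmative caption embeddings."""
    neg = embed_text([negated for negated, _ in pairs])
    aff = embed_text([affirm for _, affirm in pairs])
    d = (neg - aff).mean(axis=0)
    return d / np.linalg.norm(d)

def steer(query_emb, direction, alpha=0.5):
    """Test-time intervention: shift the query embedding along the
    negation direction, then renormalize to the unit sphere."""
    v = query_emb + alpha * direction
    return v / np.linalg.norm(v)

pairs = [
    ("a shirt with no logos", "a shirt with logos"),
    ("a street with no cars", "a street with cars"),
]
d = negation_direction(pairs)
q = embed_text(["a plain blue shirt with no logos"])[0]
q_steered = steer(q, d)
print(q_steered @ d > q @ d)  # steering increases alignment with the direction
```

The design choice worth noting is that steering happens purely at inference: the direction is estimated once from contrastive caption pairs and added to query embeddings, so the underlying CLIP weights are never updated, which is what distinguishes this from the data-centric fine-tuning approaches the paper critiques.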