AI Navigate

Mastering Negation: Boosting Grounding Models via Grouped Opposition-Based Learning

arXiv cs.AI / 3/16/2026

💬 Opinion · Models & Research

Key Points

  • Introduces the D-Negation dataset, providing objects annotated with both positive and negative semantic descriptions to better capture negation in vision-language grounding.
  • Proposes a grouped opposition-based learning framework that organizes opposing semantic descriptions into groups and uses two complementary loss functions to learn negation-aware representations from limited samples.
  • Demonstrates integration of the dataset and learning strategy into a state-of-the-art language-based grounding model while fine-tuning fewer than 10% of the model's parameters.
  • Reports gains of up to 4.4 mAP on positive semantics and 5.7 mAP on negative semantics, indicating improved robustness and localization accuracy.
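The parameter-efficiency claim in the third point can be made concrete with a small sketch. The paper does not say which modules it tunes, so the module names and sizes below are hypothetical; the sketch only illustrates selecting a named subset of parameters whose combined size stays under 10% of the model, with everything else frozen.

```python
def select_trainable(param_sizes, trainable_patterns):
    """Pick parameters to fine-tune by name pattern; the rest stay frozen.

    param_sizes: dict mapping parameter-group name -> parameter count.
    trainable_patterns: substrings marking groups that should be updated.
    Returns the selected names and their share of the total parameter count.
    """
    trainable = {name for name in param_sizes
                 if any(pat in name for pat in trainable_patterns)}
    n_trainable = sum(param_sizes[name] for name in trainable)
    n_total = sum(param_sizes.values())
    return trainable, n_trainable / n_total

# Toy model: a large frozen backbone plus small tunable heads/adapters.
# These names and sizes are invented for illustration.
sizes = {
    "backbone.vision": 150_000_000,
    "backbone.text": 60_000_000,
    "grounding_head": 12_000_000,
    "negation_adapter": 8_000_000,
}
names, ratio = select_trainable(sizes, ["head", "adapter"])
# 20M of 230M parameters selected -- under the 10% budget
```

In a real training loop, the same selection would be applied by setting `requires_grad` only on the chosen modules before constructing the optimizer.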

Abstract

Current vision-language detection and grounding models predominantly focus on prompts with positive semantics and often struggle to accurately interpret and ground complex expressions containing negative semantics. A key reason for this limitation is the lack of high-quality training data that explicitly captures discriminative negative samples and negation-aware language descriptions. To address this challenge, we introduce D-Negation, a new dataset that provides objects annotated with both positive and negative semantic descriptions. Building upon the observation that negation reasoning frequently appears in natural language, we further propose a grouped opposition-based learning framework that learns negation-aware representations from limited samples. Specifically, our method organizes opposing semantic descriptions from D-Negation into structured groups and formulates two complementary loss functions that encourage the model to reason about negation and semantic qualifiers. We integrate the proposed dataset and learning strategy into a state-of-the-art language-based grounding model. By fine-tuning fewer than 10 percent of the model parameters, our approach achieves improvements of up to 4.4 mAP and 5.7 mAP on positive and negative semantic evaluations, respectively. These results demonstrate that explicitly modeling negation semantics can substantially enhance the robustness and localization accuracy of vision-language grounding models.
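The abstract describes grouping opposing descriptions and applying two complementary losses, but does not give their exact form. As a minimal sketch under assumed definitions: suppose each group pairs an object's similarity score for its matching positive description (e.g. "the cup with a handle") with its score for the opposed negative description ("the cup without a handle"). One loss could align the object with its positive description, and a second could enforce a margin between the opposed scores within each group. This is an illustrative stand-in, not the paper's formulation.

```python
import math

def grouped_opposition_losses(groups, margin=0.2):
    """Sketch of two complementary losses over opposition groups.

    groups: list of dicts, one per object, with keys:
        'pos' -- similarity to the matching positive description
        'neg' -- similarity to the opposed negative description
    Returns (alignment_loss, opposition_loss), averaged over groups.
    """
    align = 0.0
    oppose = 0.0
    for g in groups:
        # Loss 1 (alignment): push the matching positive description's
        # score up, via a softplus on the negated score.
        align += math.log(1.0 + math.exp(-g["pos"]))
        # Loss 2 (opposition): within the group, the opposed negative
        # description must score below the positive one by a margin.
        oppose += max(0.0, margin - (g["pos"] - g["neg"]))
    n = len(groups)
    return align / n, oppose / n

# Toy scores: the first object separates its opposed descriptions well,
# the second barely distinguishes them, so it incurs an opposition penalty.
groups = [{"pos": 2.0, "neg": -1.0}, {"pos": 0.5, "neg": 0.4}]
alignment_loss, opposition_loss = grouped_opposition_losses(groups)
```

The grouping matters because the margin is enforced between an object's *own* opposed descriptions rather than against arbitrary negatives, which is what forces the model to attend to the negation cue itself.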