Concrete Jungle: Towards Concreteness Paved Contrastive Negative Mining for Compositional Understanding

arXiv cs.LG / 4/16/2026


Key Points

  • The paper argues that vision-language models struggle with compositional reasoning because contrastive pretraining lacks enough informative negative samples to distinguish subtle semantic differences like word order and attribute binding.
  • It proposes that negative mining should be driven by lexical concreteness: replacing highly concrete terms creates stronger perceptual and structural mismatches, yielding a more effective learning signal.
  • The proposed ConcretePlant pipeline realizes this principle by systematically isolating and manipulating perceptually grounded concepts to generate hard negatives (see the sketch after this list).
  • An accompanying analysis of the InfoNCE objective reveals a severe gradient imbalance in which overly easy pairs dominate training; to counter this degradation, the paper formulates a margin-based “Cement loss” that dynamically calibrates per-pair penalties using psycholinguistic concreteness scores as a proxy for sample difficulty.
  • The integrated framework, named Slipform, reports state-of-the-art results on compositional understanding benchmarks, along with gains on general cross-modal retrieval and on single- and multi-label linear probing.
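
A minimal sketch of the concreteness-guided mining idea from the bullets above, assuming a psycholinguistic concreteness lexicon (word ratings on a 1-5 scale, as in common concreteness norms) and simple word substitution. The `CONCRETENESS` dictionary, the `DISTRACTORS` pool, and the `mine_concrete_negative` helper are illustrative stand-ins, not the paper's actual resources or algorithm.

```python
# Illustrative sketch: pick the most concrete swappable word in a caption
# and replace it to build a hard negative. The lexicon and distractor pool
# are toy stand-ins for real psycholinguistic concreteness norms.

# Toy concreteness ratings on a 1-5 scale (higher = more perceptually grounded).
CONCRETENESS = {
    "dog": 4.9, "ball": 4.8, "grass": 4.7, "red": 4.2,
    "chases": 3.5, "idea": 1.6, "the": 1.0, "a": 1.0,
}

# Toy distractor pool keyed by the word being replaced.
DISTRACTORS = {
    "dog": "cat", "ball": "frisbee", "grass": "sand", "red": "blue",
}

def mine_concrete_negative(caption: str) -> tuple[str, float]:
    """Swap the most concrete replaceable word to form a hard negative.

    Returns the perturbed caption and the concreteness score of the
    swapped word, which can later serve as a proxy for sample difficulty.
    """
    tokens = caption.lower().split()
    # Rank candidate positions by concreteness, keeping only swappable words.
    candidates = [
        (CONCRETENESS.get(tok, 0.0), i)
        for i, tok in enumerate(tokens)
        if tok in DISTRACTORS
    ]
    if not candidates:
        return caption, 0.0
    score, idx = max(candidates)          # most concrete replaceable token
    tokens[idx] = DISTRACTORS[tokens[idx]]
    return " ".join(tokens), score

negative, difficulty = mine_concrete_negative("the dog chases a red ball")
print(negative, difficulty)  # -> "the cat chases a red ball" 4.9
```

The returned score doubles as a difficulty estimate, which is how a concreteness rating could feed a difficulty-aware objective like the Cement loss described below.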

Abstract

Vision-Language Models demonstrate remarkable capabilities but often struggle with compositional reasoning, exhibiting vulnerabilities regarding word order and attribute binding. This limitation arises from a scarcity of informative samples needed to differentiate subtle semantic variations during contrastive pretraining. Although hard negative mining offers a promising remedy, existing methods lack explicit mechanisms to dictate which linguistic elements undergo modification. Instead of engineering generative architectures, this study establishes lexical concreteness as a fundamental determinant of negative sample efficacy. Modifying highly concrete terms generates more pronounced structural and visual discrepancies, providing a substantially stronger learning signal. Leveraging this principle, ConcretePlant is proposed to systematically isolate and manipulate perceptually grounded concepts. Analysis of the InfoNCE objective further reveals a severe gradient imbalance, where easily distinguishable pairs disproportionately overwhelm the optimization process and restrict the bandwidth available for nuanced learning. To resolve this degradation, the Cement loss is formulated using a margin-based approach. By correlating psycholinguistic scores with sample difficulty, this objective dynamically calibrates the penalization applied to individual training pairs. Comprehensive evaluations substantiate these theoretical claims. The integrated framework, designated Slipform, achieves state-of-the-art accuracy across diverse compositional evaluation benchmarks, general cross-modal retrieval, and single- and multi-label linear probing.
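
One way to make the InfoNCE gradient claim concrete: with positive similarity $s^{+}$, negative similarities $s_j$, and temperature $\tau$, the gradient each negative receives is proportional to its softmax weight,

$$\frac{\partial \mathcal{L}_{\text{InfoNCE}}}{\partial s_j} \;\propto\; \frac{e^{s_j/\tau}}{e^{s^{+}/\tau} + \sum_{k} e^{s_k/\tau}},$$

so the distribution of these weights across a batch determines which pairs drive learning; the abstract reports that this distribution is severely imbalanced. Below is a minimal PyTorch-style sketch of a margin loss in the spirit of the described Cement loss, where the margin widens with the concreteness of the swapped word. The score normalization, the margin schedule, and the name `cement_loss` are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def cement_loss(
    img: torch.Tensor,           # (B, D) image embeddings
    pos: torch.Tensor,           # (B, D) matching caption embeddings
    neg: torch.Tensor,           # (B, D) mined hard-negative caption embeddings
    concreteness: torch.Tensor,  # (B,) score of the swapped word, assumed 1-5
    base_margin: float = 0.2,
) -> torch.Tensor:
    """Margin ranking loss whose margin grows with concreteness.

    Negatives built by swapping highly concrete words are presumed harder,
    so they are held to a wider margin; near-abstract swaps contribute a
    smaller penalty, counteracting easy pairs dominating the gradient.
    """
    img, pos, neg = (F.normalize(t, dim=-1) for t in (img, pos, neg))
    sim_pos = (img * pos).sum(-1)          # cosine sim with the true caption
    sim_neg = (img * neg).sum(-1)          # cosine sim with the hard negative
    # Map concreteness (assumed 1-5) to a margin scale in [0, 1].
    scale = (concreteness - 1.0) / 4.0
    margin = base_margin * (1.0 + scale)   # more concrete swap -> wider margin
    return F.relu(sim_neg - sim_pos + margin).mean()
```

Such a term would typically be added alongside the standard contrastive objective, letting concreteness-rated hard negatives receive a proportionally larger penalty while trivially easy pairs contribute less.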