CGC: Compositional Grounded Contrast for Fine-Grained Multi-Image Understanding

arXiv cs.AI / 4/27/2026


Key Points

  • The paper introduces Compositional Grounded Contrast (CGC) to improve multimodal LLMs’ fine-grained multi-image understanding, targeting spatial hallucination, attention leakage, and failures of object constancy.
  • CGC is designed as a low-cost framework that builds compositional multi-image training instances from existing single-image grounding annotations via Inter-Image Contrast and Intra-Image Contrast (see the sketch after this list).
  • It adds a rule-based spatial reward integrated into the GRPO (Group Relative Policy Optimization) framework to strengthen source-image attribution, spatial alignment, and the validity of structured outputs under a Think-before-Grounding strategy.
  • Experiments report state-of-the-art performance on fine-grained multi-image benchmarks (MIG-Bench, VLM2-Bench) and transferable gains on broader multimodal reasoning tasks, improving over the Qwen3-VL-8B base across several benchmarks.
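
To make the data-construction idea concrete, here is a minimal sketch of how compositional multi-image instances might be assembled from single-image grounding annotations. Everything here is an illustrative assumption, not the paper's released pipeline: the `GroundingSample` schema, the instance format, the prompt wording, the distractor count, and the helper names `build_inter_image_instance` and `build_intra_image_instance` are all hypothetical.

```python
from dataclasses import dataclass
import random

@dataclass
class GroundingSample:
    """One single-image grounding annotation: image path, referring
    expression, and target bounding box as (x1, y1, x2, y2)."""
    image: str
    expression: str
    box: tuple

def build_inter_image_instance(target, pool, num_distractors=3):
    """Inter-Image Contrast (sketch): surround the target image with
    semantically unrelated distractors, so the model must attribute the
    expression to the correct source image before grounding the box."""
    candidates = [s for s in pool if s.image != target.image]
    distractors = random.sample(candidates, num_distractors)
    images = [d.image for d in distractors]
    pos = random.randrange(len(images) + 1)
    images.insert(pos, target.image)  # place the target at a random slot
    return {
        "images": images,
        "question": f"Which image contains '{target.expression}', and where is it?",
        "answer": {"image_index": pos, "box": target.box},
    }

def build_intra_image_instance(view_a, view_b):
    """Intra-Image Contrast (sketch): pair correlated views of the same
    object (e.g., crops or augmented copies of one annotation) so the
    model must keep the object's identity consistent across views."""
    return {
        "images": [view_a.image, view_b.image],
        "question": f"Locate '{view_a.expression}' in both views.",
        "answer": [view_a.box, view_b.box],
    }
```

Both constructors reuse only the boxes and expressions already present in single-image grounding data, which is what makes this style of composition low-cost: no new human annotation or CoT generation is required.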

Abstract

Although Multimodal Large Language Models (MLLMs) have advanced rapidly, they still face notable challenges in fine-grained multi-image understanding, often exhibiting spatial hallucination, attention leakage, and failures in object constancy. In addition, existing approaches typically rely on expensive human annotations or large-scale chain-of-thought (CoT) data generation. We propose Compositional Grounded Contrast (CGC), a low-cost framework for improving the fine-grained multi-image understanding of MLLMs. Built on existing single-image grounding annotations, CGC constructs compositional multi-image training instances through Inter-Image Contrast and Intra-Image Contrast, which introduce semantically decoupled distractor contexts for cross-image discrimination and correlated cross-view samples for object constancy, respectively. CGC further introduces a Rule-Based Spatial Reward within the GRPO framework to improve source-image attribution, spatial alignment, and structured-output validity under a Think-before-Grounding paradigm. Experiments show that CGC achieves state-of-the-art results on fine-grained multi-image benchmarks, including MIG-Bench and VLM2-Bench. The learned multi-image understanding capability also transfers to broader multimodal understanding and reasoning tasks, yielding consistent gains over the Qwen3-VL-8B base model on MathVista (+2.90), MuirBench (+2.88), MMStar (+1.93), MMMU (+1.77), and BLINK (+1.69).
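
The abstract's Rule-Based Spatial Reward can also be sketched in a few lines. The version below is a minimal illustration of how a rule-based reward for GRPO rollouts might score structured-output validity, source-image attribution, and spatial alignment via IoU; the `<answer>` JSON format, the weights (0.2/0.3/0.5), and the function names `iou` and `spatial_reward` are assumptions, and the paper's exact reward rules may differ.

```python
import json
import re

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: max(0, r[2] - r[0]) * max(0, r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def spatial_reward(response: str, gt_index: int, gt_box: tuple) -> float:
    """Rule-based reward (sketch): score (1) structured-output validity,
    (2) source-image attribution, and (3) spatial alignment via IoU.
    The tags, JSON keys, and weights are illustrative assumptions."""
    m = re.search(r"<answer>(.*?)</answer>", response, re.S)
    if m is None:
        return 0.0  # no parseable answer block: no reward
    try:
        pred = json.loads(m.group(1))
        idx, box = pred["image_index"], pred["box"]
    except (ValueError, KeyError, TypeError):
        return 0.0  # malformed JSON or missing fields
    r_format = 0.2                                # valid structured output
    r_attrib = 0.3 if idx == gt_index else 0.0    # correct source image
    r_spatial = 0.5 * iou(tuple(box), gt_box)     # localization quality
    return r_format + r_attrib + r_spatial
```

Under GRPO, a per-rollout score like this would be normalized within each sampled group to form advantages, so only the relative ranking of rollouts matters rather than the absolute reward scale.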