MM-StanceDet: Retrieval-Augmented Multi-modal Multi-agent Stance Detection

arXiv cs.AI / 5/1/2026

📰 News · Models & Research

Key Points

  • The paper addresses the challenges of multimodal stance detection, particularly how to reliably fuse text and images when signals conflict.
  • It introduces MM-StanceDet, a retrieval-augmented, multi-agent framework designed to improve contextual grounding and cross-modal interpretation.
  • The approach combines specialized multimodal analysis agents with a reasoning-enhanced debate stage to explore different viewpoints before deciding.
  • It further adds a self-reflection step to make the final adjudication more robust to the errors of fragile single-pass reasoning.
  • Experiments across five datasets show MM-StanceDet significantly outperforms existing state-of-the-art baselines, supporting the effectiveness of the structured multi-agent design.

Abstract

Multimodal Stance Detection (MSD) is crucial for understanding public discourse, yet effectively fusing text and images, especially when their signals conflict, remains challenging. Existing methods often face difficulties with contextual grounding, cross-modal interpretation ambiguity, and single-pass reasoning fragility. To address these, we propose Retrieval-Augmented Multi-modal Multi-agent Stance Detection (MM-StanceDet), a novel multi-agent framework integrating Retrieval Augmentation for contextual grounding, specialized Multimodal Analysis agents for nuanced interpretation, a Reasoning-Enhanced Debate stage for exploring perspectives, and Self-Reflection for robust adjudication. Extensive experiments on five datasets demonstrate that MM-StanceDet significantly outperforms state-of-the-art baselines, validating the efficacy of its multi-agent architecture and structured reasoning stages in addressing complex multimodal stance challenges.
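The four stages named in the abstract (retrieval augmentation, multimodal analysis agents, debate, and self-reflective adjudication) can be sketched as a simple pipeline. This is a minimal illustrative sketch, not the paper's implementation: every function here is a hypothetical stand-in (the real agents would be LLM-backed, and the heuristics below are toy placeholders), and the images are represented by caption strings for simplicity.

```python
from collections import Counter

STANCES = ("favor", "against", "neutral")

def retrieve_context(text):
    """Stage 1: retrieval augmentation -- fetch background evidence.
    (Stand-in: a real system would query a news/knowledge index.)"""
    return [f"background for: {text}"]

def analyze_text(text, context):
    """Stage 2a: text-analysis agent casts a stance vote (toy heuristic)."""
    return "favor" if "support" in text.lower() else "against"

def analyze_image(image_caption, context):
    """Stage 2b: image-analysis agent votes from visual content,
    represented here by a caption string (toy heuristic)."""
    return "favor" if "cheering" in image_caption.lower() else "neutral"

def debate(votes):
    """Stage 3: reasoning-enhanced debate -- agents weigh conflicting
    votes; reduced here to tallying each side's position."""
    return Counter(votes)

def self_reflect_and_adjudicate(tally):
    """Stage 4: self-reflection and adjudication -- re-examine the tally
    and fall back to 'neutral' when no stance wins a strict majority."""
    stance, count = tally.most_common(1)[0]
    return stance if count * 2 > sum(tally.values()) else "neutral"

def mm_stancedet_sketch(text, image_caption):
    """Run the full staged pipeline on one text-image post."""
    context = retrieve_context(text)
    votes = [analyze_text(text, context),
             analyze_image(image_caption, context)]
    return self_reflect_and_adjudicate(debate(votes))
```

In this sketch, agreement between the text and image agents yields a confident stance, while a conflict triggers the conservative fallback at adjudication, mirroring (in miniature) how the debate and self-reflection stages guard against fragile single-pass decisions.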