
PRISM of Opinions: A Persona-Reasoned Multimodal Framework for User-centric Conversational Stance Detection

arXiv cs.CL / 3/11/2026

Key Points

  • The paper addresses two limitations of current Multimodal Conversational Stance Detection (MCSD) research: pseudo-multimodality, where visual cues appear only in source posts, and user homogeneity, where diversity in how users express stances is ignored.
  • A novel dataset named U-MStance is introduced, featuring over 40,000 annotated comments on six real-world targets, designed to capture user-centric multimodal conversational data.
  • The authors propose PRISM, a Persona-Reasoned Multimodal Stance Model that derives user personas from historical data and integrates textual and visual cues using Chain-of-Thought reasoning.
  • PRISM employs a mutual task reinforcement mechanism to jointly optimize stance detection and stance-aware response generation, facilitating better bidirectional knowledge transfer.
  • Experiments demonstrate that PRISM significantly outperforms existing baselines, emphasizing the importance of user-centered and contextually grounded multimodal reasoning in stance understanding.
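The persona-derivation step in the third key point can be illustrated with a toy sketch. The paper presumably uses an LLM to build longitudinal personas from a user's posting history; the keyword-frequency heuristic below is purely an illustrative stand-in, and all function and field names are assumptions, not the authors' API.

```python
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "is", "i", "to", "of", "and", "it", "this", "we"}

def build_persona(history, max_items=5, top_k=3):
    """Toy sketch of longitudinal persona derivation: condense a user's
    most recent posts/comments into a compact profile that could condition
    stance prediction. Real trait extraction would be far richer; here we
    just surface the user's most salient content words."""
    recent = history[-max_items:]
    words = [w for text in recent
             for w in re.findall(r"[a-z']+", text.lower())
             if w not in STOPWORDS]
    salient = [w for w, _ in Counter(words).most_common(top_k)]
    return {"recent_posts": len(recent), "salient_terms": salient}
```

For example, a user whose recent comments repeatedly mention climate policy would yield a persona whose salient terms reflect that focus, which a downstream stance model could attend to alongside the comment itself.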

Computer Science > Computation and Language

arXiv:2511.12130 (cs)
[Submitted on 15 Nov 2025 (v1), last revised 10 Mar 2026 (this version, v2)]

Title:PRISM of Opinions: A Persona-Reasoned Multimodal Framework for User-centric Conversational Stance Detection

By Bingbing Wang and 8 other authors
Abstract:The rapid proliferation of multimodal social media content has driven research in Multimodal Conversational Stance Detection (MCSD), which aims to interpret users' attitudes toward specific targets within complex discussions. However, existing studies remain limited by: **1) pseudo-multimodality**, where visual cues appear only in source posts while comments are treated as text-only, misaligning with real-world multimodal interactions; and **2) user homogeneity**, where diverse users are treated uniformly, neglecting personal traits that shape stance expression. To address these issues, we introduce **U-MStance**, the first user-centric MCSD dataset, containing over 40k annotated comments across six real-world targets. We further propose **PRISM**, a **P**ersona-**R**easoned mult**I**modal **S**tance **M**odel for MCSD. PRISM first derives longitudinal user personas from historical posts and comments to capture individual traits, then aligns textual and visual cues within conversational context via Chain-of-Thought to bridge semantic and pragmatic gaps across modalities. Finally, a mutual task reinforcement mechanism is employed to jointly optimize stance detection and stance-aware response generation for bidirectional knowledge transfer. Experiments on U-MStance demonstrate that PRISM yields significant gains over strong baselines, underscoring the effectiveness of user-centric and context-grounded multimodal reasoning for realistic stance understanding.
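The "mutual task reinforcement mechanism" described in the abstract amounts to jointly optimizing a stance-classification objective and a stance-aware response-generation objective. A minimal sketch of such a joint loss is below; the weighting scheme, function names, and plain cross-entropy formulation are illustrative assumptions, not the paper's actual objective.

```python
import math

def cross_entropy(probs, target_idx):
    # Negative log-likelihood of the target class under a probability vector.
    return -math.log(probs[target_idx])

def joint_loss(stance_probs, stance_label, token_probs, token_labels, lam=0.5):
    """Hypothetical joint objective in the spirit of mutual task
    reinforcement: stance-detection loss plus a weighted response-generation
    (per-token language-modeling) loss, so gradients from each task
    shape the shared representation."""
    l_stance = cross_entropy(stance_probs, stance_label)
    l_gen = sum(cross_entropy(p, t)
                for p, t in zip(token_probs, token_labels)) / len(token_labels)
    return l_stance + lam * l_gen
```

Setting `lam=0` recovers pure stance detection; increasing it trades off classification accuracy against generating responses consistent with the predicted stance, which is the intended bidirectional knowledge transfer.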
Subjects: Computation and Language (cs.CL)
Cite as: arXiv:2511.12130 [cs.CL]
  (or arXiv:2511.12130v2 [cs.CL] for this version)
  https://doi.org/10.48550/arXiv.2511.12130

Submission history

From: Bingbing Wang [view email]
[v1] Sat, 15 Nov 2025 09:35:58 UTC (2,549 KB)
[v2] Tue, 10 Mar 2026 10:16:47 UTC (2,552 KB)