URMF: Uncertainty-aware Robust Multimodal Fusion for Multimodal Sarcasm Detection
arXiv cs.CV / 4/9/2026
Key Points
- The paper introduces URMF (Uncertainty-aware Robust Multimodal Fusion) to improve multimodal sarcasm detection by explicitly modeling which modality (text, image, or their interaction) is reliable rather than assuming all inputs are equally trustworthy.
- URMF first injects visual evidence into the text representation with multi-head cross-attention, then refines incongruity-aware reasoning by applying multi-head self-attention over the fused semantic space (a fusion sketch follows this list).
- It models aleatoric uncertainty by representing each modality (and the interaction-aware latent state) as a learnable Gaussian posterior, and dynamically down-weights unreliable modalities during fusion (see the uncertainty-weighted fusion sketch below).
- The training objective combines task supervision with modality-prior regularization, cross-modal distribution alignment, and uncertainty-driven self-sampling contrastive learning (a loss-composition sketch appears below).
- Experiments on public multimodal sarcasm detection benchmarks report that URMF outperforms strong unimodal, multimodal, and MLLM-based baselines in both accuracy and robustness.
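The following is a minimal PyTorch sketch of the cross-attention-then-self-attention fusion step described above. It assumes same-dimensional text token and image patch features; the module, hidden size, and head count are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Sketch: inject visual evidence into text via cross-attention,
    then refine the fused sequence with self-attention."""

    def __init__(self, dim: int = 768, num_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, text_tokens: torch.Tensor, image_patches: torch.Tensor) -> torch.Tensor:
        # Text tokens query the image patches (queries = text, keys/values = image).
        vis_evidence, _ = self.cross_attn(text_tokens, image_patches, image_patches)
        fused = self.norm1(text_tokens + vis_evidence)
        # Self-attention over the fused sequence to reason about text-image incongruity.
        refined, _ = self.self_attn(fused, fused, fused)
        return self.norm2(fused + refined)

# Usage with dummy features: 32 text tokens and 49 image patches, hidden size 768.
fusion = CrossModalFusion()
out = fusion(torch.randn(2, 32, 768), torch.randn(2, 49, 768))  # (2, 32, 768)
```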
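The next sketch illustrates the uncertainty-aware fusion idea: each modality is mapped to a diagonal Gaussian posterior, sampled with the reparameterization trick, and fused with weights that shrink as the predicted variance grows. The confidence-to-weight rule and all layer names are assumptions standing in for the paper's exact formulation.

```python
import torch
import torch.nn as nn

class UncertaintyWeightedFusion(nn.Module):
    """Sketch: per-modality Gaussian posteriors (mu, log sigma^2) plus
    variance-based down-weighting of unreliable modalities."""

    def __init__(self, dim: int = 768, num_modalities: int = 3):
        super().__init__()
        self.mu_heads = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_modalities))
        self.logvar_heads = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_modalities))

    def forward(self, feats: list[torch.Tensor]) -> torch.Tensor:
        samples, confidences = [], []
        for x, mu_head, logvar_head in zip(feats, self.mu_heads, self.logvar_heads):
            mu, logvar = mu_head(x), logvar_head(x)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
            samples.append(z)
            # Scalar confidence per example: higher predicted variance -> lower weight.
            confidences.append((-logvar).mean(dim=-1, keepdim=True))
        weights = torch.softmax(torch.cat(confidences, dim=-1), dim=-1)  # (B, M)
        stacked = torch.stack(samples, dim=1)                            # (B, M, D)
        return (weights.unsqueeze(-1) * stacked).sum(dim=1)              # (B, D)

# Usage: pooled text, image, and interaction features for a batch of 2.
fuser = UncertaintyWeightedFusion()
fused = fuser([torch.randn(2, 768) for _ in range(3)])  # (2, 768)
```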
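Finally, a sketch of how the four training terms listed above could be composed into one objective. The summary does not give the exact losses, so the prior regularizer, alignment term, contrastive term, and weights below are all stand-in assumptions.

```python
import torch
import torch.nn.functional as F

def kl_to_standard_normal(mu, logvar):
    """KL(q || N(0, I)) for a diagonal Gaussian posterior, averaged over the batch."""
    return 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=-1).mean()

def urmf_style_loss(logits, labels, text_post, image_post, z_anchor, z_pos, z_neg,
                    lambdas=(1.0, 0.1, 0.1, 0.1)):
    """Sketch of a combined objective in the spirit of the summary:
    task supervision + modality-prior regularization + cross-modal alignment
    + a contrastive term on sampled latents. Not the paper's exact losses."""
    l_task, l_prior, l_align, l_con = lambdas
    task = F.cross_entropy(logits, labels)
    # Regularize each modality posterior toward a standard-normal prior.
    prior = kl_to_standard_normal(*text_post) + kl_to_standard_normal(*image_post)
    # Align text and image posterior means (simple stand-in for distribution alignment).
    align = F.mse_loss(text_post[0], image_post[0])
    # Contrastive term on uncertainty-driven self-samples (triplet margin as a stand-in).
    contrast = F.triplet_margin_loss(z_anchor, z_pos, z_neg, margin=1.0)
    return l_task * task + l_prior * prior + l_align * align + l_con * contrast
```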