Revealing Multi-View Hallucination in Large Vision-Language Models
arXiv cs.CV / 3/26/2026
Key Points
- The paper identifies a failure mode of large vision-language models (LVLMs) on multi-view inputs, termed "multi-view hallucination," in which the model attributes visual evidence to the wrong instance or viewpoint.
- It introduces MVH-Bench, a benchmark of 4.8k question-answer pairs, to systematically measure two hallucination types: cross-instance hallucination and cross-view hallucination.
- Experiments show that recent LVLMs struggle to link visual evidence to the correct instance or viewpoint.
- The authors propose Reference Shift Contrastive Decoding (RSCD), a training-free decoding method that reduces visual interference by generating negative logits via attention masking (a minimal sketch follows this list).
- RSCD improves MVH-Bench results for Qwen2.5-VL and LLaVA-OneVision, with gains of up to 21.1 and 34.6 points, respectively, over existing mitigation approaches.
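Conceptually, RSCD follows the contrastive-decoding recipe: a second forward pass, with attention to the referenced visual tokens masked out, supplies "negative" logits that are subtracted from the normal logits. The PyTorch sketch below illustrates that idea under stated assumptions; the model interface, the `visual_attention_mask` argument, the function name `rscd_step`, and the exact combination rule are illustrative assumptions, not the paper's implementation.

```python
import torch

def rscd_step(model, input_ids, visual_features, reference_token_ids, alpha=1.0):
    """One decoding step of a contrastive scheme in the spirit of RSCD.

    Assumption: `model` accepts an optional boolean `visual_attention_mask`
    that hides selected visual tokens; this interface is hypothetical.
    """
    # Positive pass: the model attends to the full multi-view visual context.
    pos_logits = model(input_ids=input_ids,
                       visual_features=visual_features).logits[:, -1, :]

    # Negative pass: mask attention to the visual tokens of the referenced
    # instance/viewpoint, so the prediction is driven by interfering evidence.
    visual_mask = torch.ones(visual_features.shape[:2], dtype=torch.bool)
    visual_mask[:, reference_token_ids] = False
    neg_logits = model(input_ids=input_ids,
                       visual_features=visual_features,
                       visual_attention_mask=visual_mask).logits[:, -1, :]

    # Contrastive combination: boost tokens supported by the correct reference
    # and penalize tokens favored only when that reference is hidden.
    return (1 + alpha) * pos_logits - alpha * neg_logits
```

At each generation step the combined logits would replace the model's own before sampling, e.g. `next_token = rscd_step(...).argmax(dim=-1)` for greedy decoding.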