Mitigating Hallucinations in Large Vision-Language Models without Performance Degradation
arXiv cs.CV / 4/23/2026
Key Points
- Large Vision-Language Models (LVLMs) can generate hallucinations that reduce the reliability of their outputs, and while hallucination-free fine-tuning is effective, it is often computationally expensive.
- Prior representation-based mitigation approaches are efficient but can still weaken general generation because they incompletely isolate hallucination-related components and update parameters in an overly broad way.
- The paper introduces MPD, a dual-stage framework that mitigates hallucinations without degrading overall generation by (1) disentangling hallucination-related components in a semantic-aware manner and (2) applying interpretable, selective parameter updates (a hedged sketch of this two-stage idea follows these key points).
- Experiments show MPD achieves state-of-the-art results, cutting hallucinations by 23.4% while preserving 97.4% of general generative capability on benchmarks like LLaVA-Bench and MME, with no added computational cost.
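
To make the two-stage idea above concrete, here is a minimal, hypothetical sketch in PyTorch. It is not the paper's MPD implementation: the single pooled "hallucination direction", the projection-penalty objective, and the top-k gradient mask are illustrative assumptions about what semantic-aware disentanglement and selective parameter updates could look like in practice.

```python
# Hypothetical sketch only -- not the paper's MPD code.
# Assumed: a hallucination direction estimated from contrastive hidden states,
# a projection penalty as the mitigation loss, and a top-k gradient mask
# so that only a small fraction of parameters is updated.
import torch


def hallucination_direction(h_hallucinated: torch.Tensor,
                            h_faithful: torch.Tensor) -> torch.Tensor:
    """Stage 1 (assumed): contrast pooled hidden states of hallucinated vs.
    faithful generations to isolate one hallucination-related direction."""
    diff = h_hallucinated.mean(dim=0) - h_faithful.mean(dim=0)
    return diff / diff.norm()


def projection_penalty(hidden: torch.Tensor,
                       direction: torch.Tensor) -> torch.Tensor:
    """Penalize the component of the hidden states lying along the
    hallucination direction (one possible mitigation objective)."""
    return (hidden @ direction).pow(2).mean()


def selective_step(model: torch.nn.Module, loss: torch.Tensor,
                   top_frac: float = 0.01, lr: float = 1e-5) -> None:
    """Stage 2 (assumed): update only the small fraction of parameters whose
    gradients for the mitigation loss are largest, leaving the rest of the
    model -- and hence its general generation ability -- untouched."""
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            flat = p.grad.abs().flatten()
            k = max(1, int(top_frac * flat.numel()))
            threshold = torch.topk(flat, k).values.min()
            mask = (p.grad.abs() >= threshold).float()
            p -= lr * mask * p.grad
```

Restricting the update to the highest-gradient parameters is one plausible way to keep the intervention narrow, in the spirit of the reported 97.4% retention of general generative capability; the actual selection criterion used by MPD may differ.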