Aligning with Your Own Voice: Self-Corrected Preference Learning for Hallucination Mitigation in LVLMs

arXiv cs.AI / April 28, 2026

Key Points

  • The paper addresses frequent hallucinations in large vision-language models (LVLMs) and argues that preference-learning methods which rely on proprietary models to build preference datasets suffer from a distributional mismatch with the target model, hindering efficient alignment.
  • It proposes AVES-DPO (Alignment via VErified Self-correction DPO), which aligns LVLMs using in-distribution data derived from the model’s own intrinsic knowledge rather than relying on external proprietary systems.
  • AVES-DPO uses a consensus-based verification mechanism to identify a variety of hallucination types and then trains the model to self-correct.
  • Because the preference pairs are generated from the model's own outputs, they remain strictly compatible with its internal distribution, which the authors argue makes alignment more efficient (a minimal sketch of this pipeline follows the list).
  • Experiments reportedly show AVES-DPO outperforms existing baselines while needing only 5.2k samples, indicating strong sample efficiency.
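The bullets above describe a three-step loop: sample several responses from the target model itself, use cross-response consensus to flag likely hallucinated claims, and pair each flagged response with its self-corrected rewrite as a DPO preference pair. The sketch below illustrates that loop; the helpers `sample_responses`, `extract_claims`, and `self_correct` are hypothetical stand-ins (stubbed with toy logic so the script runs), not the paper's actual implementation.

```python
from collections import Counter

def sample_responses(model, image, prompt, k=5):
    """Hypothetical: draw k stochastic responses from the target LVLM."""
    return [model(image, prompt) for _ in range(k)]

def extract_claims(response):
    """Hypothetical: split a response into atomic factual claims."""
    return [c.strip() for c in response.split(".") if c.strip()]

def consensus_verify(responses, threshold=0.5):
    """Toy proxy for consensus-based verification: a claim supported by
    fewer than `threshold` of the sampled responses is flagged as a
    likely hallucination."""
    counts = Counter(c for r in responses for c in set(extract_claims(r)))
    k = len(responses)
    return {c for c, n in counts.items() if n / k < threshold}

def self_correct(model, image, prompt, response, bad_claims):
    """Hypothetical: the model rewrites its own response without the
    flagged claims, so the corrected text stays in-distribution."""
    feedback = "Remove or fix these unsupported claims: " + "; ".join(bad_claims)
    return model(image, prompt + "\n" + response + "\n" + feedback)

def build_preference_pair(model, image, prompt):
    """One (chosen, rejected) pair for DPO, both written by the model."""
    responses = sample_responses(model, image, prompt)
    rejected = responses[0]
    bad = consensus_verify(responses) & set(extract_claims(rejected))
    if not bad:
        return None  # no verified hallucination; skip this example
    chosen = self_correct(model, image, prompt, rejected, bad)
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}

if __name__ == "__main__":
    # Toy stand-in model: returns canned captions so the sketch runs end to end.
    canned = iter([
        "A dog sits on grass. A red ball lies nearby.",  # rejected (hallucinated ball)
        "A dog sits on grass.",
        "A dog sits on grass.",
        "A dog sits on grass.",
        "A dog sits on grass.",
        "A dog sits on grass.",  # self-corrected response
    ])
    model = lambda image, prompt: next(canned)
    print(build_preference_pair(model, image=None, prompt="Describe the image."))
```

In the toy run, "A red ball lies nearby" appears in only one of five samples, so consensus flags it, and the self-corrected caption becomes the chosen response while the original becomes the rejected one.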

Abstract

Large Vision-Language Models (LVLMs) frequently suffer from hallucinations. Existing preference learning-based approaches largely rely on proprietary models to construct preference datasets. We identify that this reliance introduces a distributional mismatch between the proprietary and target models that hinders efficient alignment. To address this, we propose Alignment via VErified Self-correction DPO (AVES-DPO), a framework that aligns LVLMs using in-distribution data derived from the model's intrinsic knowledge. Our approach employs a consensus-based verification mechanism to diagnose diverse hallucinations and guides the model to self-correct, thereby generating preference pairs strictly compatible with its internal distribution. Extensive experiments demonstrate that AVES-DPO surpasses existing baselines in hallucination mitigation while requiring only 5.2k samples.
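For context, the preference pairs described above would plug into the standard DPO objective (assuming the usual formulation of Direct Preference Optimization by Rafailov et al., 2023):

$$
\mathcal{L}_{\text{DPO}}(\theta) = -\,\mathbb{E}_{(x,\, y_w,\, y_l)\sim \mathcal{D}}\!\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\text{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\text{ref}}(y_l \mid x)}\right)\right]
$$

Here $x$ is the image-text input, $y_w$ the self-corrected (chosen) response, $y_l$ the original response containing verified hallucinations, $\pi_{\text{ref}}$ a frozen reference copy of the model, and $\beta$ a temperature. Per the abstract, AVES-DPO's contribution lies in how the $(y_w, y_l)$ pairs are sourced from the model's own distribution rather than in a new loss.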