First Logit Boosting: Visual Grounding Method to Mitigate Object Hallucination in Large Vision-Language Models

arXiv cs.CV / 4/2/2026


Key Points

  • The paper addresses persistent object hallucination in large vision-language models (LVLMs) and notes that existing fixes often require costly retraining or complex grounding structures.
  • It proposes First Logit Boosting (FLB), a training-free method that saves the logit of the first generated token and adds it to later token predictions to prevent long-term decay of visual grounding.
  • FLB is designed to keep visual information active throughout generation and reduce hallucinated words, leveraging the stabilizing effect associated with the “The” token.
  • Experiments report significant reductions in object hallucination across multiple tasks, benchmarks, and LVLM backbone models, with negligible inference overhead.
  • The authors provide an implementation at a public GitHub repository, suggesting straightforward adoption for real-time multimodal systems.

Abstract

Recent Large Vision-Language Models (LVLMs) have demonstrated remarkable performance across various multimodal tasks that require understanding both visual and linguistic inputs. However, object hallucination -- the generation of nonexistent objects in answers -- remains a persistent challenge. Although several approaches such as retraining and external grounding methods have been proposed to mitigate this issue, they still suffer from high data costs or structural complexity. Training-free methods such as Contrastive Decoding (CD) are more cost-effective, avoiding additional training or external models, but still suffer from long-term decay, where visual grounding weakens and language priors dominate as the generation progresses. In this paper, we propose First Logit Boosting (FLB), a simple yet effective training-free technique designed to alleviate long-term decay in LVLMs. FLB stores the logit of the first generated token and adds it to subsequent token predictions, effectively mitigating long-term decay of visual information. We observe that FLB (1) sustains the visual information embedded in the first token throughout generation, and (2) suppresses hallucinated words through the stabilizing effect of the "The" token. Experimental results show that FLB significantly reduces object hallucination across various tasks, benchmarks, and backbone models. Notably, it causes negligible inference overhead, making it highly applicable to real-time multimodal systems. Code is available at https://github.com/jiwooha20/FLB.
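To make the mechanism concrete, here is a minimal sketch of FLB-style greedy decoding based only on the abstract's description: the logits of the first generated step are cached and added to every later step's logits. The scaling factor `alpha`, the `logits_fn` stub, and the toy vocabulary are illustrative assumptions, not details from the paper; the authors' actual combination rule may differ.

```python
import numpy as np

def flb_greedy_decode(logits_fn, max_steps=5, alpha=1.0):
    """Toy greedy decoding with First Logit Boosting (FLB).

    Per the paper's summary: cache the logits of the first generated
    token and add them to each subsequent step's logits, so visual
    evidence carried by the first step keeps influencing generation.
    `alpha` is a hypothetical scaling knob (not from the paper).
    """
    tokens = []
    first_logits = None
    for t in range(max_steps):
        logits = logits_fn(t, tokens)               # model forward pass (stub)
        if first_logits is None:
            first_logits = logits.copy()            # cache first-step logits
        else:
            logits = logits + alpha * first_logits  # boost later steps
        tokens.append(int(np.argmax(logits)))       # greedy token choice
    return tokens

def toy_logits(t, tokens):
    """Stand-in for an LVLM: a 3-token vocab [grounded, hallucinated, filler].

    At step 0 the visually grounded token dominates; afterwards the
    language prior drifts toward the hallucinated token, mimicking the
    long-term decay the paper describes.
    """
    if t == 0:
        return np.array([1.0, 0.0, -1.0])
    return np.array([0.2, 0.5, -1.0])
```

With `alpha=0.0` (plain greedy decoding) the toy model drifts to the hallucinated token after the first step, while `alpha=1.0` keeps the grounded token selected throughout, illustrating how the cached first logits counteract decay in this toy setting.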