Steering the Verifiability of Multimodal AI Hallucinations

arXiv cs.AI / 4/10/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper argues that multimodal LLM hallucinations differ in how easily humans can detect them, dividing them into "obvious" and "elusive" types by verifiability.
  • It builds a dataset using 4,470 human responses to AI-generated hallucinations and labels hallucinations according to whether users can reliably verify them.
  • The authors propose an activation-space intervention method that trains separate probes targeting obvious versus elusive hallucinations (a rough code sketch of this idea follows the list).
  • Experimental results show that the interventions can be tuned for fine-grained regulation of verifiability, and that mixing interventions enables scenario-dependent control.
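
Neither the key points nor the abstract include implementation details, so the following is only a minimal sketch of how this kind of probe-based steering is commonly set up: fit a separate linear probe on hidden-state activations for each hallucination type, and reuse each probe's weight vector as a steering direction. The variable names, the logistic-regression probe, the toy dimensions, and the random placeholder data are all assumptions for illustration, not details from the paper.

```python
# Minimal sketch (not the authors' code): train separate linear probes on
# hidden-state activations for "obvious" vs. "elusive" hallucinations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 64  # toy dimension; real MLLM hidden states are much larger (e.g., 4096)

# Placeholder activations: in practice these would be hidden states collected
# from hallucinated vs. faithful model outputs of each type.
obvious_pos, obvious_neg = rng.normal(size=(200, d)), rng.normal(size=(200, d))
elusive_pos, elusive_neg = rng.normal(size=(200, d)), rng.normal(size=(200, d))

def train_probe(pos, neg):
    """Fit a linear probe; its unit-norm weight vector doubles as a steering direction."""
    X = np.vstack([pos, neg])
    y = np.array([1] * len(pos) + [0] * len(neg))
    w = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]
    return w / np.linalg.norm(w)

obvious_dir = train_probe(obvious_pos, obvious_neg)
elusive_dir = train_probe(elusive_pos, elusive_neg)
```

The paper's central observation, per the abstract, is that the two hallucination types elicit different probes, which is what makes type-specific control of verifiability possible.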

Abstract

AI applications driven by multimodal large language models (MLLMs) are prone to hallucinations and pose considerable risks to human users. Crucially, such hallucinations are not equally problematic: some hallucinated content can be detected by human users (i.e., obvious hallucinations), while other content is often missed or requires more verification effort (i.e., elusive hallucinations). This indicates that multimodal AI hallucinations vary significantly in their verifiability. Yet, little research has explored how to control this property for AI applications with diverse security and usability demands. To address this gap, we construct a dataset from 4,470 human responses to AI-generated hallucinations and categorize these hallucinations into obvious and elusive types based on their verifiability by human users. Further, we propose an activation-space intervention method that learns separate probes for obvious and elusive hallucinations. We reveal that obvious and elusive hallucinations elicit different intervention probes, allowing fine-grained control over the verifiability of the model's hallucinations. Empirical results demonstrate the efficacy of this approach and show that targeted interventions yield superior performance in regulating the corresponding type of verifiability. Moreover, simply mixing these interventions enables flexible control over the level of verifiability required in different scenarios.
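
On the mixing result mentioned at the end of the abstract, one plausible reading is that the two learned directions are blended with a coefficient and added to the model's activations at inference time, so a single knob trades off which hallucination type is steered more strongly. The sketch below only illustrates that reading; the function name, the linear mixing scheme, and the strength parameter are assumptions rather than the paper's actual procedure.

```python
# Minimal sketch (assumed, not the paper's implementation): apply a mixed
# activation-space intervention at a chosen layer during inference.
import numpy as np

rng = np.random.default_rng(0)
d = 64  # toy dimension, matching the probe-training sketch above
obvious_dir = rng.normal(size=d); obvious_dir /= np.linalg.norm(obvious_dir)  # placeholder probe direction
elusive_dir = rng.normal(size=d); elusive_dir /= np.linalg.norm(elusive_dir)  # placeholder probe direction

def intervene(hidden_state, alpha=1.0, mix=0.5):
    """Add a blended steering direction to one activation vector.

    alpha: overall intervention strength (sign and scale set how hard to steer).
    mix:   1.0 uses only the obvious-hallucination probe, 0.0 only the elusive one.
    """
    direction = mix * obvious_dir + (1.0 - mix) * elusive_dir
    return hidden_state + alpha * direction

# Example: steer a placeholder activation mostly along the obvious-probe direction.
h_steered = intervene(rng.normal(size=d), alpha=2.0, mix=0.8)
```

In a real setup the intervention would be hooked into one or more transformer layers, with alpha and mix tuned per deployment, which is consistent with the abstract's claim that mixing enables scenario-dependent control.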