FREAK: A Fine-grained Hallucination Evaluation Benchmark for Advanced MLLMs

arXiv cs.CV / 3/23/2026


Key Points

  • FREAK is a comprehensive multimodal benchmark for fine-grained hallucination assessment in MLLMs, addressing the over-simplified tasks and limited diversity of existing benchmarks.
  • It evaluates hallucinations in precise visual perception using high-quality photorealistic images with fine-grained counter-commonsense edits.
  • Extensive experiments on FREAK reveal severe hallucination issues in state-of-the-art models' detailed visual perception.
  • A controlled subset indirectly evaluates models' ability to perceive targeted details, and a systematic analysis of Chain-of-Thought (CoT) prompting reveals patterns in hallucinations and model reasoning.

Abstract

Multimodal Large Language Models (MLLMs) suffer from hallucinations. Existing hallucination evaluation benchmarks are often limited either by over-simplified tasks that lead to saturated metrics or by insufficient diversity that fails to adequately assess the extent of hallucination in state-of-the-art multimodal models. To address this gap, we propose FREAK, a comprehensive multimodal benchmark designed for fine-grained hallucination assessment in MLLMs. Through high-quality photorealistic images featuring fine-grained counter-commonsense edits, FREAK evaluates hallucination phenomena in the detailed visual perception of MLLMs. Extensive experiments on FREAK reveal severe hallucination issues in SOTA models' detailed visual perception. To enable deeper investigation, we curate a controlled subset that indirectly evaluates the model's ability to perceive target detailed information. Through systematic evaluation of prevailing Chain-of-Thought (CoT) prompting techniques on this task, we reveal critical insights into hallucination patterns and model reasoning processes.
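The evaluation protocol the abstract describes (ask an MLLM about a counter-commonsense detail in an edited image, with and without a CoT prompt, and score answers against the edited ground truth) can be sketched as a toy Python loop. This is a hypothetical illustration, not FREAK's actual code: `query_mllm`, the prompt prefix, and the example items are invented stand-ins, with the model simulated so the example runs on its own.

```python
# Hypothetical sketch of a fine-grained hallucination evaluation loop.
# `query_mllm` is a stand-in for a real multimodal model API; it is
# simulated here so the example is self-contained and runnable.

COT_PREFIX = "Describe the image step by step, then answer: "

def query_mllm(image_id: str, prompt: str) -> str:
    """Simulated MLLM: answers from commonsense priors (hallucinating the
    pre-edit content) unless the prompt asks for step-by-step inspection."""
    edited_answers = {"img_001": "five", "img_002": "square"}       # what the image shows
    commonsense_answers = {"img_001": "four", "img_002": "round"}   # what priors expect
    if prompt.startswith(COT_PREFIX):
        return edited_answers[image_id]   # careful inspection finds the edit
    return commonsense_answers[image_id]  # prior-driven hallucination

def hallucination_rate(items, use_cot: bool) -> float:
    """Fraction of items where the model's answer contradicts the
    counter-commonsense ground truth in the edited image."""
    errors = 0
    for image_id, question, ground_truth in items:
        prompt = (COT_PREFIX + question) if use_cot else question
        if query_mllm(image_id, prompt) != ground_truth:
            errors += 1
    return errors / len(items)

# Two toy items: (image_id, question, counter-commonsense ground truth).
ITEMS = [
    ("img_001", "How many legs does the dog have?", "five"),
    ("img_002", "What shape is the clock face?", "square"),
]

print(f"direct: {hallucination_rate(ITEMS, use_cot=False):.2f}, "
      f"CoT: {hallucination_rate(ITEMS, use_cot=True):.2f}")
# With this simulated model: direct: 1.00, CoT: 0.00
```

In the simulated model, CoT prompting eliminates the hallucinations by construction; the paper's point is precisely that real SOTA models do not behave this cleanly, which is what comparing the two rates on the controlled subset would measure.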