Chain of Evidence: Pixel-Level Visual Attribution for Iterative Retrieval-Augmented Generation

arXiv cs.CV / 5/5/2026

Key Points

  • The paper introduces Chain of Evidence (CoE), a retriever-agnostic framework for pixel-level visual attribution in Iterative Retrieval-Augmented Generation (iRAG).
  • CoE addresses two limitations of current iRAG systems: coarse-grained text-level citation that forces users to manually search documents, and visual semantic loss caused by converting layouts (slides/PDFs) into plain text.
  • Instead of relying on parsed text, CoE uses Vision-Language Models to reason directly over screenshots of retrieved document candidates and outputs precise bounding boxes (see the prompting sketch after this list).
  • The authors evaluate CoE on two benchmarks, Wiki-CoE (web pages derived from 2WikiMultiHopQA) and SlideVQA (presentation slides with complex diagrams and layouts).
  • A fine-tuned Qwen3-VL-8B-Instruct model delivers strong results, significantly outperforming text-based baselines on tasks requiring visual layout understanding; the code is released on GitHub.

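For intuition, here is a minimal, hypothetical sketch of the core idea: a VLM is shown a screenshot of a retrieved page and asked to return its answer together with pixel-coordinate bounding boxes over the supporting evidence. The server URL, model name, prompt wording, and JSON schema below are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of pixel-level evidence attribution with a VLM, in the spirit of CoE.
# Assumptions (not from the paper): the model is served behind an OpenAI-compatible
# endpoint (e.g., via vLLM) and is asked to return evidence boxes as JSON.
import base64
import json

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # hypothetical local server


def attribute_evidence(screenshot_path: str, question: str,
                       model: str = "Qwen3-VL-8B-Instruct") -> dict:
    """Ask the VLM to ground its answer in bounding boxes on the page screenshot."""
    with open(screenshot_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()

    prompt = (
        "You are given a screenshot of a retrieved document. Answer the question and "
        "return a JSON object of the form "
        '{"answer": str, "evidence": [{"box": [x1, y1, x2, y2], "text": str}]}, '
        "where each box is in pixel coordinates of the screenshot.\n"
        f"Question: {question}"
    )

    resp = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": [
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
                {"type": "text", "text": prompt},
            ],
        }],
        temperature=0.0,
    )
    # The response schema is an illustrative assumption; a real system would validate it.
    return json.loads(resp.choices[0].message.content)
```
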
Abstract

Iterative Retrieval-Augmented Generation (iRAG) has emerged as a powerful paradigm for answering complex multi-hop questions by progressively retrieving and reasoning over external documents. However, current systems predominantly operate on parsed text, which creates two critical bottlenecks: (1) coarse-grained attribution, where users are burdened with manually locating evidence within lengthy documents based on vague text-level citations; and (2) visual semantic loss, where the conversion of visually rich documents (e.g., slides, PDFs with charts) into text discards spatial logic and layout cues essential for reasoning. To bridge this gap, we present Chain of Evidence (CoE), a retriever-agnostic visual attribution framework that leverages Vision-Language Models to reason directly over screenshots of retrieved document candidates. CoE eliminates format-specific parsing and outputs precise bounding boxes, visualizing the complete reasoning chain within the retrieved candidate set. We evaluate CoE on two distinct benchmarks: Wiki-CoE, a large-scale dataset of structured web pages derived from 2WikiMultiHopQA, and SlideVQA, a challenging dataset of presentation slides featuring complex diagrams and free-form layouts. Experiments demonstrate that fine-tuned Qwen3-VL-8B-Instruct achieves robust performance, significantly outperforming text-based baselines in scenarios requiring visual layout understanding, while establishing a retriever-agnostic solution for pixel-level interpretable iRAG. Our code is available at https://github.com/PeiYangLiu/CoE.git.
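
To make the "visualizing the complete reasoning chain" claim tangible, the sketch below (continuing the hypothetical schema from the earlier example) overlays the returned boxes on the screenshot so the evidence region for each retrieval hop is visible. The box format and the `attribute_evidence` helper are illustrative assumptions.

```python
# Sketch: overlay predicted evidence boxes on the screenshot so the reasoning
# chain is visible at the pixel level. Box format matches the illustrative
# schema used above ([x1, y1, x2, y2] in pixel coordinates).
from PIL import Image, ImageDraw


def draw_evidence(screenshot_path: str, evidence: list[dict], out_path: str) -> None:
    """Draw one rectangle per evidence region returned by the VLM."""
    image = Image.open(screenshot_path).convert("RGB")
    draw = ImageDraw.Draw(image)
    for i, item in enumerate(evidence):
        x1, y1, x2, y2 = item["box"]
        draw.rectangle([x1, y1, x2, y2], outline="red", width=3)
        draw.text((x1, max(0, y1 - 14)), f"hop {i + 1}", fill="red")
    image.save(out_path)


# Example usage (hypothetical paths and question):
# result = attribute_evidence("page.png", "Who directed the film ...?")
# draw_evidence("page.png", result["evidence"], "page_attributed.png")
```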