WikiSeeker: Rethinking the Role of Vision-Language Models in Knowledge-Based Visual Question Answering

arXiv cs.CV / 4/8/2026


Key Points

  • The paper proposes WikiSeeker, a multi-modal Retrieval-Augmented Generation (RAG) framework for Knowledge-Based Visual Question Answering (KB-VQA) that addresses a limitation of existing approaches, which rely mainly on the image as the retrieval key.
  • WikiSeeker redefines Vision-Language Models (VLMs) as two specialized agents: a Refiner that rewrites the textual query based on the input image to improve multimodal retrieval, and an Inspector that decides when to route reliable retrieved context to an LLM for answer generation.
  • When retrieval is unreliable, the Inspector allows the system to fall back on the VLM’s internal knowledge, enabling a decoupled generation strategy that better handles retrieval failures.
  • Experiments on EVQA, InfoSeek, and M2KR report state-of-the-art results with significant gains in both retrieval accuracy and answer quality.
  • The authors state that code will be released on GitHub, supporting reproducibility and potential adoption of the framework.
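Taken together, the key points describe a three-stage pipeline: the Refiner rewrites the query using the image, a multimodal retriever fetches context, and the Inspector decides whether that context is reliable enough to hand to an LLM or whether to fall back on the VLM's internal knowledge. The sketch below illustrates this control flow only; every function name (`refine_query`, `retrieve`, `answer`), the word-overlap scoring, and the reliability threshold are illustrative stand-ins, not the paper's actual components, which would be VLM/LLM calls and a learned retriever.

```python
# Hypothetical sketch of a WikiSeeker-style Refiner/Inspector pipeline.
# All names and the toy retrieval score are illustrative assumptions.

def refine_query(image: str, query: str) -> str:
    """Refiner stub: rewrite the textual query using the image.
    A real system would prompt a VLM with (image, query)."""
    return f"{query} [grounded in {image}]"

def retrieve(refined_query: str, corpus: dict[str, str]) -> tuple[str, float]:
    """Retriever stub: return the best passage and a confidence score.
    Here the score is naive word overlap with the refined query."""
    def overlap(passage: str) -> float:
        q = set(refined_query.lower().split())
        p = set(passage.lower().split())
        return len(q & p) / max(len(q), 1)
    best_passage = max(corpus.values(), key=overlap)
    return best_passage, overlap(best_passage)

def answer(image: str, query: str, corpus: dict[str, str],
           threshold: float = 0.2) -> str:
    """Inspector: route reliable retrieved context to an LLM;
    otherwise fall back on the VLM's internal knowledge."""
    refined = refine_query(image, query)
    context, score = retrieve(refined, corpus)
    if score >= threshold:                       # retrieval deemed reliable
        return f"LLM answer using context: {context}"
    return f"VLM internal-knowledge answer to: {query}"  # decoupled fallback

corpus = {
    "doc1": "The Eiffel Tower is 330 metres tall and located in Paris.",
    "doc2": "Mount Fuji is the highest mountain in Japan.",
}
print(answer("photo.jpg", "how tall is the eiffel tower", corpus))
```

The point of the sketch is the decoupled generation strategy: answer generation is not forced to consume retrieved context; the Inspector's decision gates which knowledge source is used.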

Abstract

Multi-modal Retrieval-Augmented Generation (RAG) has emerged as a highly effective paradigm for Knowledge-Based Visual Question Answering (KB-VQA). Despite recent advancements, prevailing methods still primarily depend on images as the retrieval key, and often overlook or misplace the role of Vision-Language Models (VLMs), thereby failing to leverage their potential fully. In this paper, we introduce WikiSeeker, a novel multi-modal RAG framework that bridges these gaps by proposing a multi-modal retriever and redefining the role of VLMs. Rather than serving merely as answer generators, we assign VLMs two specialized agents: a Refiner and an Inspector. The Refiner utilizes the capability of VLMs to rewrite the textual query according to the input image, significantly improving the performance of the multimodal retriever. The Inspector facilitates a decoupled generation strategy by selectively routing reliable retrieved context to another LLM for answer generation, while relying on the VLM's internal knowledge when retrieval is unreliable. Extensive experiments on EVQA, InfoSeek, and M2KR demonstrate that WikiSeeker achieves state-of-the-art performance, with substantial improvements in both retrieval accuracy and answer quality. Our code will be released on https://github.com/zhuyjan/WikiSeeker.