MAB-DQA: Addressing Query Aspect Importance in Document Question Answering with Multi-Armed Bandits

arXiv cs.CL / 4/13/2026


Key Points

  • The paper targets multimodal Document Question Answering where multimodal RAG over page images often retrieves only a small top-K set, missing useful but less visually salient pages.
  • It introduces MAB-DQA, which decomposes a query into aspect-aware subqueries and retrieves an aspect-specific candidate set for each subquery.
  • MAB-DQA uses a multi-armed bandit strategy, treating each aspect subquery as an “arm,” to estimate aspect utility from rewards derived from reasoning on a few representative pages.
  • An exploration–exploitation policy dynamically reallocates the retrieval budget toward higher-value aspects; the final answer is then generated from the most informative pages together with their cross-page correlations.
  • Experiments on four benchmarks show 5%–18% average improvements over state-of-the-art baselines, and the authors release their code on GitHub.
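The bandit loop sketched in the points above can be illustrated with a UCB1-style budget allocator. The paper's exact policy and reward function are not specified here, so this is a minimal sketch under assumptions: UCB1 stands in for the exploration–exploitation policy, and the per-aspect reward callables (which in MAB-DQA would come from preliminary reasoning over a few representative retrieved pages) are simulated with noisy draws. Aspect names are hypothetical.

```python
import math
import random

def ucb1_allocate(aspect_rewards, budget, c=0.5):
    """Allocate a retrieval budget across aspect subqueries with UCB1.

    aspect_rewards: dict mapping aspect name -> callable returning a
    scalar reward (a stand-in for reasoning-derived aspect utility).
    Returns the number of retrieval rounds spent on each aspect.
    """
    arms = list(aspect_rewards)
    counts = {a: 0 for a in arms}
    totals = {a: 0.0 for a in arms}

    # Pull each arm once so every aspect gets an initial utility estimate.
    for a in arms:
        totals[a] += aspect_rewards[a]()
        counts[a] += 1

    for t in range(len(arms), budget):
        # UCB1 score: empirical mean reward + exploration bonus that
        # shrinks as an aspect accumulates pulls.
        def score(a):
            return totals[a] / counts[a] + c * math.sqrt(math.log(t + 1) / counts[a])
        best = max(arms, key=score)
        totals[best] += aspect_rewards[best]()
        counts[best] += 1
    return counts

random.seed(0)
# Hypothetical aspect subqueries of one user query, with simulated utilities.
rewards = {
    "revenue-by-region": lambda: random.gauss(0.8, 0.1),
    "fiscal-year-definition": lambda: random.gauss(0.4, 0.1),
    "currency-notes": lambda: random.gauss(0.2, 0.1),
}
allocation = ucb1_allocate(rewards, budget=30)
```

Every aspect is retrieved at least once (exploration), while most of the 30-round budget drifts toward the aspect whose sampled utility is highest (exploitation), mirroring how the framework is described as favoring informative but less visually salient content over common low-information pages.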

Abstract

Document Question Answering (DQA) involves generating answers from a document based on a user's query, representing a key task in document understanding. This task requires interpreting visual layouts, which has prompted recent studies to adopt multimodal Retrieval-Augmented Generation (RAG) that processes page images for answer generation. However, in multimodal RAG, visual DQA struggles to utilize a large number of images effectively, as the retrieval stage often retains only a few candidate pages (e.g., Top-4), causing informative but less visually salient content to be overlooked in favor of common yet low-information pages. To address this issue, we propose a Multi-Armed Bandit-based DQA framework (MAB-DQA) to explicitly model the varying importance of multiple implicit aspects in a query. Specifically, MAB-DQA decomposes a query into aspect-aware subqueries and retrieves an aspect-specific candidate set for each. It treats each subquery as an arm and uses preliminary reasoning results from a small number of representative pages as reward signals to estimate aspect utility. Guided by an exploration-exploitation policy, MAB-DQA dynamically reallocates retrieval budgets toward high-value aspects. With the most informative pages and their correlations, MAB-DQA generates the expected results. On four benchmarks, MAB-DQA shows an average improvement of 5%-18% over the state-of-the-art method, consistently enhancing document understanding. Code at https://github.com/ElephantOH/MAB-DQA.
