UniDoc-RL: Coarse-to-Fine Visual RAG with Hierarchical Actions and Dense Rewards

arXiv cs.CV · April 17, 2026

📰 News · Models & Research

Key Points

  • UniDoc-RL is a new reinforcement-learning framework for visual RAG that enables an LVLM agent to jointly handle retrieval, reranking, active visual perception, and reasoning.
  • The method uses a hierarchical action space that progressively refines visual evidence, moving from coarse document retrieval through fine-grained image selection to region-level cropping, suppressing irrelevant content along the way.
  • It introduces a dense, multi-reward training scheme that gives task-aware supervision for each action, improving end-to-end learning for sequential visual acquisition.
  • UniDoc-RL is trained with Group Relative Policy Optimization (GRPO), avoiding the need for a separate value network while aligning behavior with multiple objectives.
  • Experiments across three benchmarks show consistent state-of-the-art improvements, reaching up to 17.7% gains over prior RL-based visual RAG approaches.
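The coarse-to-fine pipeline with per-action rewards can be sketched as a short rollout loop. Everything below is an illustrative assumption: the stage names, the `policy` interface, and the reward hooks are hypothetical stand-ins for the paper's hierarchical actions and dense multi-reward scheme, not its actual implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch of the coarse-to-fine action hierarchy: each stage
# narrows the visual evidence, and each emits its own task-aware reward
# (the "dense multi-reward" idea). All names are illustrative assumptions.

@dataclass
class Step:
    action: str      # "retrieve" | "select_image" | "crop_region" | "answer"
    output: object   # evidence produced at this stage
    reward: float    # stage-specific dense reward

def run_episode(query, corpus, policy):
    """Roll out one coarse-to-fine trajectory, collecting per-action rewards."""
    trajectory = []

    docs = policy.retrieve(query, corpus)        # coarse: document retrieval
    trajectory.append(Step("retrieve", docs, policy.reward_retrieve(docs)))

    image = policy.select_image(query, docs)     # finer: pick one page/image
    trajectory.append(Step("select_image", image, policy.reward_select(image)))

    region = policy.crop_region(query, image)    # finest: crop the key region
    trajectory.append(Step("crop_region", region, policy.reward_crop(region)))

    answer = policy.reason(query, region)        # reason over dense evidence
    trajectory.append(Step("answer", answer, policy.reward_answer(answer)))

    return trajectory, sum(s.reward for s in trajectory)
```

Because every stage returns its own reward rather than a single terminal score, the agent receives supervision for each decision in the sequence, which is what makes end-to-end training of the full acquisition chain tractable.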

Abstract

Retrieval-Augmented Generation (RAG) extends Large Vision-Language Models (LVLMs) with external visual knowledge. However, existing visual RAG systems typically rely on generic retrieval signals that overlook the fine-grained visual semantics essential for complex reasoning. To address this limitation, we propose UniDoc-RL, a unified reinforcement learning framework in which an LVLM agent jointly performs retrieval, reranking, active visual perception, and reasoning. UniDoc-RL formulates visual information acquisition as a sequential decision-making problem with a hierarchical action space. Specifically, it progressively refines visual evidence from coarse-grained document retrieval to fine-grained image selection and active region cropping, allowing the model to suppress irrelevant content and attend to information-dense regions. For effective end-to-end training, we introduce a dense multi-reward scheme that provides task-aware supervision for each action. Based on Group Relative Policy Optimization (GRPO), UniDoc-RL aligns agent behavior with multiple objectives without relying on a separate value network. To support this training paradigm, we curate a comprehensive dataset of high-quality reasoning trajectories with fine-grained action annotations. Experiments on three benchmarks demonstrate that UniDoc-RL consistently surpasses state-of-the-art baselines, yielding up to 17.7% gains over prior RL-based methods.
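The abstract's claim that GRPO needs no separate value network follows from how it estimates advantages: a group of G responses is sampled per prompt, and each response is scored against the group's own statistics. Below is a minimal sketch of that standard group-relative normalization; how UniDoc-RL aggregates its multiple per-action rewards into each scalar score is an assumption left outside this snippet.

```python
import statistics

def grpo_advantages(group_rewards, eps=1e-8):
    """Group-relative advantage estimate used by GRPO.

    For G sampled responses to the same prompt with scalar rewards r_i,
    the advantage is A_i = (r_i - mean(r)) / (std(r) + eps). The group
    baseline replaces a learned value network.
    """
    mean = statistics.fmean(group_rewards)
    std = statistics.pstdev(group_rewards)
    return [(r - mean) / (std + eps) for r in group_rewards]
```

For example, a group scored [1.0, 2.0, 3.0] yields advantages that are negative, zero, and positive respectively, so the policy update pushes probability toward the above-average responses without any critic model.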