
MURE: Hierarchical Multi-Resolution Encoding via Vision-Language Models for Visual Document Retrieval

arXiv cs.CV / 3/17/2026


Key Points

  • Visual Document Retrieval requires representations that capture both fine-grained visual details and global document structure, but existing models either lose fine details or incur high indexing costs and retrieval latency.
  • The authors introduce the X-VisEmb paradigm featuring multi-resolution sampling and encoding, cross-granularity feature fusion, and adaptive representation distillation to fuse cues across scales.
  • Building on X-VisEmb, MURE uses vision-language models as a hierarchical multi-resolution encoder, introduces resolution-level Matryoshka representation learning (RMRL) for effective feature fusion, and applies semantic-aware hierarchical clustering to compress visual tokens.
  • Experiments on two VDR benchmarks show MURE consistently beats strong baselines and outperforms ColPali with only 50% of its visual token budget, reducing indexing overhead and retrieval latency.
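The summary above only names RMRL without detailing it. As a rough illustration of the general Matryoshka idea it builds on — scoring query/document pairs at several nested prefix dimensions of one embedding and averaging a contrastive loss across them — here is a minimal numpy sketch. The dimensions, temperature, and function names are illustrative assumptions, not the paper's actual training setup:

```python
import numpy as np

def matryoshka_loss(queries, docs, dims=(64, 128, 256), temp=0.05):
    """Average in-batch contrastive loss over nested embedding prefixes.

    queries, docs: (batch, full_dim) paired embeddings; docs[i] is the
    positive for queries[i], all other rows act as in-batch negatives.
    dims and temp are illustrative choices, not values from the paper.
    """
    losses = []
    for d in dims:
        # take the first d coordinates ("Matryoshka" nested prefix)
        q = queries[:, :d]
        v = docs[:, :d]
        qn = q / (np.linalg.norm(q, axis=1, keepdims=True) + 1e-9)
        vn = v / (np.linalg.norm(v, axis=1, keepdims=True) + 1e-9)
        sim = qn @ vn.T / temp  # temperature-scaled cosine logits
        # stable softmax cross-entropy with the diagonal as the target
        m = sim.max(axis=1, keepdims=True)
        log_z = np.log(np.exp(sim - m).sum(axis=1)) + m[:, 0]
        losses.append(float(np.mean(log_z - np.diag(sim))))
    # one loss per granularity, averaged so every prefix stays usable
    return float(np.mean(losses))
```

Because each prefix is trained to retrieve on its own, an index can store only the first 64 or 128 dimensions and trade accuracy for memory without re-encoding documents.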

Abstract

Visual Document Retrieval (VDR) requires representations that capture both fine-grained visual details and global document structure to ensure retrieval efficacy while maintaining computational efficiency. Existing VDR models struggle to balance effectiveness and efficiency when processing high-resolution documents: they often either lose fine-grained information or generate an excessive number of visual tokens, resulting in significant indexing overhead and high retrieval latency. In this work, we rethink the visual encoding mechanism and propose a new X-VisEmb paradigm that progresses from multi-resolution sampling and encoding, through cross-granularity feature fusion, to adaptive representation distillation. A preliminary study validates its feasibility and effectiveness in capturing complementary visual cues at varying scales. Building on these insights, we develop MURE, a novel framework that employs VLMs as a hierarchical multi-resolution encoder, integrates resolution-level Matryoshka representation learning (RMRL) for effective feature fusion, and applies a semantic-aware hierarchical clustering mechanism for visual token compression. Experiments on two widely used VDR benchmarks show that our MURE framework consistently beats strong baselines. Furthermore, it significantly outperforms ColPali with only 50% of its visual token budget.
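The abstract names a semantic-aware hierarchical clustering mechanism for visual token compression but does not specify it. A minimal sketch of the underlying idea — greedily merging the most similar patch-token centroids until a target budget is reached, so redundant patches collapse into one representative vector — is below. This is plain size-weighted agglomerative merging by cosine similarity, an assumption standing in for the authors' mechanism; all names are illustrative:

```python
import numpy as np

def compress_tokens(tokens, target_n):
    """Greedy agglomerative compression of visual patch tokens.

    Repeatedly merges the two most cosine-similar centroids (weighted
    by cluster size) until only target_n token vectors remain.
    tokens: (n, dim) array of patch embeddings, target_n <= n.
    """
    cents = [t.astype(float) for t in tokens]
    sizes = [1] * len(cents)
    while len(cents) > target_n:
        best, bi, bj = -2.0, 0, 1
        # O(n^2) scan for the closest pair of current centroids
        for i in range(len(cents)):
            for j in range(i + 1, len(cents)):
                a, b = cents[i], cents[j]
                s = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
                if s > best:
                    best, bi, bj = s, i, j
        # size-weighted mean keeps the merged centroid unbiased
        merged = (sizes[bi] * cents[bi] + sizes[bj] * cents[bj]) / (
            sizes[bi] + sizes[bj])
        cents[bi], sizes[bi] = merged, sizes[bi] + sizes[bj]
        del cents[bj], sizes[bj]
    return np.stack(cents)
```

Halving the token count this way shrinks the multi-vector index and the number of late-interaction comparisons at query time, which is the kind of budget reduction the 50%-of-ColPali result refers to.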