LITTA: Late-Interaction and Test-Time Alignment for Visually-Grounded Multimodal Retrieval

arXiv cs.AI, March 31, 2026


Key Points

  • LITTA is a test-time, query-expansion-centric framework for multimodal evidence page retrieval from visually complex documents like textbooks and manuals, where long context and weak lexical overlap make retrieval difficult.
  • It uses a large language model to generate complementary query variants, then retrieves candidate pages with a frozen vision retriever using late-interaction scoring.
  • Candidate lists from expanded queries are combined via reciprocal rank fusion to improve coverage and reduce dependence on any single query phrasing.
  • Experiments on three domains (computer science, pharmaceuticals, industrial manuals) show that multi-query retrieval improves top-k accuracy, recall, and MRR versus single-query retrieval, especially where visual and semantic variability is high.
  • LITTA also offers a controllable accuracy–latency trade-off by adjusting the number of query variants, and it remains compatible with existing multimodal embedding indices without retriever retraining.
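The late-interaction scoring mentioned above (as in retrievers like ColBERT or ColPali) can be sketched as a MaxSim operation: each query-token embedding is matched to its best document-side embedding, and the per-token maxima are summed. The toy vectors and page IDs below are illustrative assumptions; the post does not specify the retriever's embedding dimensionality or model.

```python
# Minimal sketch of late-interaction (MaxSim) scoring over toy embeddings.
# In LITTA, query_embs would come from the query encoder and page_embs from
# a frozen vision retriever's per-patch page embeddings (assumed here).

def dot(u, v):
    """Dot product of two equal-length vectors given as lists of floats."""
    return sum(a * b for a, b in zip(u, v))

def maxsim_score(query_embs, page_embs):
    """Sum, over query tokens, of each token's best match on the page."""
    return sum(max(dot(q, p) for p in page_embs) for q in query_embs)

def rank_pages(query_embs, pages):
    """Rank candidate pages by descending late-interaction score."""
    scored = [(pid, maxsim_score(query_embs, embs)) for pid, embs in pages.items()]
    return [pid for pid, _ in sorted(scored, key=lambda x: -x[1])]

# Hypothetical example: two query-token embeddings, two candidate pages.
query = [[1.0, 0.0], [0.0, 1.0]]
pages = {
    "page_1": [[1.0, 0.0], [0.0, 0.5]],
    "page_2": [[0.2, 0.2]],
}
print(rank_pages(query, pages))  # → ['page_1', 'page_2']
```

Because scoring is per-query, the same frozen index can be queried once per LLM-generated variant, which is what makes the number of variants a direct knob on the accuracy–latency trade-off.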

Abstract

Retrieving relevant evidence from visually rich documents such as textbooks, technical reports, and manuals is challenging due to long context, complex layouts, and weak lexical overlap between user questions and supporting pages. We propose LITTA, a query-expansion-centric retrieval framework for evidence page retrieval that improves multimodal document retrieval without retriever retraining. Given a user query, LITTA generates complementary query variants using a large language model and retrieves candidate pages for each variant using a frozen vision retriever with late-interaction scoring. Candidates from expanded queries are then aggregated through reciprocal rank fusion to improve evidence coverage and reduce sensitivity to any single phrasing. This simple test-time strategy significantly improves retrieval robustness while remaining compatible with existing multimodal embedding indices. We evaluate LITTA on visually grounded document retrieval tasks across three domains: computer science, pharmaceuticals, and industrial manuals. Multi-query retrieval consistently improves top-k accuracy, recall, and MRR compared to single-query retrieval, with particularly large gains in domains with high visual and semantic variability. Moreover, the accuracy-efficiency trade-off is directly controllable by the number of query variants, making LITTA practical for deployment under latency constraints. These results demonstrate that query expansion provides a simple yet effective mechanism for improving visually grounded multimodal retrieval.
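The aggregation step described in the abstract, reciprocal rank fusion, can be sketched in a few lines. This is the standard RRF formula, score(d) = Σ 1 / (k + rank(d)) over the variant ranked lists, with the conventional smoothing constant k = 60; the exact constant used by LITTA is an assumption here.

```python
# Minimal sketch of reciprocal rank fusion (RRF) over the candidate page
# lists returned for each expanded query variant. A page that appears near
# the top of several variant rankings accumulates a high fused score, which
# reduces sensitivity to any single query phrasing.

def rrf_fuse(rankings, k=60):
    """Fuse ranked lists of page IDs; k is the usual RRF smoothing constant."""
    scores = {}
    for ranking in rankings:
        for rank, page in enumerate(ranking, start=1):  # ranks are 1-based
            scores[page] = scores.get(page, 0.0) + 1.0 / (k + rank)
    # Sort pages by descending fused score.
    return sorted(scores, key=lambda page: -scores[page])

# Hypothetical example: three query variants, three candidate lists.
variant_rankings = [
    ["a", "b", "c"],
    ["b", "a", "d"],
    ["b", "c", "a"],
]
print(rrf_fuse(variant_rankings))  # → ['b', 'a', 'c', 'd']
```

Note that RRF needs only the ranks, not the raw late-interaction scores, so the variant retrievals can run against the existing multimodal embedding index unchanged.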