KIRA: Knowledge-Intensive Image Retrieval and Reasoning Architecture for Specialized Visual Domains

arXiv cs.CV / April 21, 2026


Key Points

  • The paper introduces KIRA, a five-stage framework aimed at improving retrieval-augmented generation (RAG) for specialized visual domains by addressing key visual-RAG challenges like modality bridging, visual knowledge base construction, multi-hop reasoning, and evidence grounding.
  • KIRA’s core components include hierarchical semantic chunking with DINO-based region detection, domain-adaptive contrastive encoders for rare concepts, dual-path cross-modal retrieval with chain-of-thought query expansion, and chain-of-retrieval for multi-hop reasoning with temporal/multiview support.
  • For answer quality, KIRA uses evidence-conditioned grounded generation plus post-hoc hallucination verification to ensure responses are faithful to retrieved visual evidence.
  • The authors propose DOMAINVQAR, a benchmark that evaluates visual RAG using retrieval precision, reasoning faithfulness, and domain correctness (not just recall), and report strong results across four specialized domains.
  • Experiments on medical X-ray, circuit diagrams, satellite imagery, and histopathology show high retrieval precision (0.97) and perfect grounding scores (1.0), with an average domain correctness of 0.707; ablation studies reveal precision-diversity tradeoffs introduced by some components. Code is planned for release after acceptance.
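The retrieval and generation stages described above can be sketched as a toy pipeline. This is a minimal, runnable illustration with hypothetical function names and a trivial keyword-matching "retrieval" stand-in; the paper's actual implementation has not been released, and stages 1, 2, and 4 (knowledge base construction, domain-adaptive encoders, chain-of-retrieval) are omitted here:

```python
# Minimal runnable sketch of KIRA-style retrieval-then-grounded-generation.
# All function names and the keyword-overlap "retrieval" are illustrative
# stand-ins, not the paper's (unreleased) code.

def expand_query_cot(question):
    # Stage 3 (part): chain-of-thought query expansion (stub: keyword split)
    return question.lower().split()

def retrieve_dual_path(keywords, kb):
    # Stage 3: dual-path cross-modal retrieval (stub: keyword-overlap score)
    scored = [(doc, sum(k in doc for k in keywords)) for doc in kb]
    return [(doc, s) for doc, s in scored if s > 0]

def generate_grounded(question, evidence):
    # Stage 5: evidence-conditioned generation (stub: cite top-scored item)
    top_doc, _ = max(evidence, key=lambda e: e[1])
    return f"Answer grounded in: {top_doc}"

def verify_grounding(answer, evidence):
    # Stage 5: post-hoc hallucination verification (stub: is evidence cited?)
    return any(doc in answer for doc, _ in evidence)

def kira_answer(question, kb):
    keywords = expand_query_cot(question)
    evidence = retrieve_dual_path(keywords, kb)
    if not evidence:
        return "No supporting evidence retrieved."
    answer = generate_grounded(question, evidence)
    if not verify_grounding(answer, evidence):
        return "Answer failed grounding verification."
    return answer

kb = ["chest x-ray showing consolidation", "circuit diagram with op-amp"]
print(kira_answer("consolidation on x-ray?", kb))
# prints: Answer grounded in: chest x-ray showing consolidation
```

The key structural point the sketch captures is that generation is conditioned only on retrieved evidence, and a separate post-hoc check rejects answers that cite nothing in the evidence pool.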

Abstract

Retrieval-augmented generation (RAG) has transformed text-based question answering, yet its extension to visual domains remains hindered by fundamental challenges: bridging the modality gap between image queries and text-heavy knowledge bases, constructing semantically meaningful visual knowledge bases, performing multi-hop reasoning over retrieved images, and verifying that generated answers are faithfully grounded in visual evidence. We present KIRA (Knowledge-Intensive Image Retrieval and Reasoning Architecture), a unified five-stage framework that addresses ten core problems in visual RAG for specialized domains. KIRA introduces: (1) hierarchical semantic chunking with DINO-based region detection for multi-granularity knowledge base construction, (2) domain-adaptive contrastive encoders with few-shot adaptation for rare visual concepts, (3) dual-path cross-modal retrieval with chain-of-thought query expansion, (4) chain-of-retrieval for multi-hop visual reasoning with temporal and multi-view support, and (5) evidence-conditioned grounded generation with post-hoc hallucination verification. We also propose DOMAINVQAR, a benchmark suite that evaluates visual RAG along three axes (retrieval precision, reasoning faithfulness, and domain correctness), going beyond standard recall metrics. Experiments across four specialized domains (medical X-ray, circuit diagrams, satellite imagery, and histopathology) with a progressive six-variant ablation demonstrate that KIRA achieves 0.97 retrieval precision, 1.0 grounding scores, and 0.707 domain correctness averaged across domains, while the ablation reveals actionable insights about when each component helps and when components introduce precision-diversity tradeoffs that must be managed. Code will be released upon acceptance.
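DOMAINVQAR's three evaluation axes can be made concrete with toy scoring functions. The metric definitions below are assumptions for illustration only; the benchmark's actual formulas are not specified in this summary:

```python
# Toy sketch of DOMAINVQAR's three evaluation axes. These definitions are
# illustrative assumptions; the benchmark's actual metrics may differ.

def retrieval_precision(retrieved, relevant):
    # Axis 1: fraction of retrieved items that are actually relevant
    if not retrieved:
        return 0.0
    return len(set(retrieved) & set(relevant)) / len(retrieved)

def grounding_score(answer_claims, evidence):
    # Axis 2 (faithfulness proxy): fraction of claims supported by evidence
    if not answer_claims:
        return 0.0
    return sum(c in evidence for c in answer_claims) / len(answer_claims)

def domain_correctness(predictions, gold):
    # Axis 3: fraction of answers matching domain-expert gold labels
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)

print(retrieval_precision(["img1", "img2", "img3"], {"img1", "img2"}))
# prints: 0.6666666666666666
print(grounding_score(["claim_a"], {"claim_a", "claim_b"}))
# prints: 1.0
```

The point of reporting all three, per the paper, is that high recall alone can hide imprecise retrieval, unfaithful reasoning, or domain-level errors; the reported 0.97 precision and 1.0 grounding alongside 0.707 domain correctness illustrate how the axes can diverge.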